Convex Optimization Algorithms (Tsinghua Bilingual Edition)
Price: 89 CNY
- Author: Dimitri P. Bertsekas
- Publication date: 2016/4/22
- ISBN: 9787302430704
- Publisher: Tsinghua University Press
- Chinese Library Classification: O174.13
- Pages: 564
- Paper: offset paper
- Edition: 1
- Format: 16K
Convex Optimization Algorithms covers nearly all of the mainstream convex optimization algorithms, including gradient methods, subgradient methods, polyhedral approximation methods, proximal methods, and interior point methods.
These methods typically rely on the convexity of the cost function and constraints (though not necessarily on their differentiability), and are directly or indirectly connected to duality. The author exploits the specific structure of concrete problems in a large number of worked examples that fully illustrate the application of the algorithms. The chapters are organized as follows: Chapter 1, an overview of convex optimization models; Chapter 2, an overview of optimization algorithms; Chapter 3, subgradient methods; Chapter 4, polyhedral approximation methods; Chapter 5, proximal algorithms; Chapter 6, additional algorithmic topics. A distinctive feature of the book is that, alongside the duality between problems, it also emphasizes the duality between algorithms, built on the concept of conjugacy; this often provides fresh insight and computational convenience in choosing a suitable algorithmic implementation.
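As a flavor of the simplest algorithm family mentioned above, here is a minimal sketch (not taken from the book) of the subgradient method on a nondifferentiable convex function. It minimizes f(x) = |x - 3| with the iteration x_{k+1} = x_k - α_k g_k, where g_k is any subgradient of f at x_k and α_k = 1/k is a diminishing step size; the function, starting point, and step-size rule are illustrative choices.

```python
def subgradient(x):
    """A subgradient of f(x) = |x - 3|: the sign of (x - 3).

    f is not differentiable at x = 3, but 0 is a valid subgradient there,
    which is exactly why subgradient methods apply where gradient methods fail.
    """
    if x > 3:
        return 1.0
    if x < 3:
        return -1.0
    return 0.0

def subgradient_method(x0, iters=1000):
    """Run x_{k+1} = x_k - (1/k) * g_k for `iters` iterations."""
    x = x0
    for k in range(1, iters + 1):
        x = x - (1.0 / k) * subgradient(x)  # diminishing step size alpha_k = 1/k
    return x

x_star = subgradient_method(0.0)
print(x_star)  # hovers near the minimizer x = 3
```

Note that the iterates are not monotonically improving: once near the kink at x = 3 they oscillate around it with amplitude bounded by the current step size, which is why a diminishing (rather than constant) step size is needed for convergence.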
The material is drawn from the author's convex optimization courses taught at MIT over the past 15 years. Together with Convex Optimization Theory, the two books can serve as the textbook for a one-semester course on convex optimization; this book can also be used as supplementary material for a course on nonlinear programming.
Dimitri P. Bertsekas is an internationally recognized scholar in optimization theory and a member of the US National Academy of Engineering. He is a professor in the Department of Electrical Engineering and Computer Science at MIT, and previously taught in the Department of Engineering-Economic Systems at Stanford University and the Department of Electrical Engineering at the University of Illinois. He has extensive research and teaching experience in optimization theory, control engineering, communication engineering, and computer science, and is a prolific author, with 14 monographs and textbooks to his name.
1. Convex Optimization Models: An Overview
1.1. Lagrange Duality
1.1.1. Separable Problems - Decomposition
1.1.2. Partitioning
1.2. Fenchel Duality and Conic Programming
1.2.1. Linear Conic Problems
1.2.2. Second Order Cone Programming
1.2.3. Semidefinite Programming
1.3. Additive Cost Problems
1.4. Large Number of Constraints
1.5. Exact Penalty Functions
1.6. Notes, Sources, and Exercises
2. Optimization Algorithms: An Overview
2.1. Iterative Descent Algorithms
2.1.1. Differentiable Cost Function Descent - Unconstrained Problems
2.1.2. Constrained Problems - Feasible Direction Methods
2.1.3. Nondifferentiable Problems - Subgradient Methods
2.1.4. Alternative Descent Methods
2.1.5. Incremental Algorithms
2.1.6. Distributed Asynchronous Iterative Algorithms
2.2. Approximation Methods
2.2.1. Polyhedral Approximation
2.2.2. Penalty, Augmented Lagrangian, and Interior Point Methods
2.2.3. Proximal Algorithm, Bundle Methods, and Tikhonov Regularization
2.2.4. Alternating Direction Method of Multipliers
2.2.5. Smoothing of Nondifferentiable Problems
2.3. Notes, Sources, and Exercises
3. Subgradient Methods
3.1. Subgradients of Convex Real-Valued Functions
3.1.1. Characterization of the Subdifferential
3.2. Convergence Analysis of Subgradient Methods
3.3. ε-Subgradient Methods
3.3.1. Connection with Incremental Subgradient Methods
3.4. Notes, Sources, and Exercises
4. Polyhedral Approximation Methods
4.1. Outer Linearization Cutting Plane Methods
4.2. Inner Linearization - Simplicial Decomposition
4.3. Duality of Outer and Inner Linearization
4.4. Generalized Polyhedral Approximation
4.5. Generalized Simplicial Decomposition
4.5.1. Differentiable Cost Case
4.5.2. Nondifferentiable Cost and Side Constraints
4.6. Polyhedral Approximation for Conic Programming
4.7. Notes, Sources, and Exercises
5. Proximal Algorithms
5.1. Basic Theory of Proximal Algorithms
5.1.1. Convergence
5.1.2. Rate of Convergence
5.1.3. Gradient Interpretation
5.1.4. Fixed Point Interpretation, Overrelaxation and Generalization
5.2. Dual Proximal Algorithms
5.2.1. Augmented Lagrangian Methods
5.3. Proximal Algorithms with Linearization
5.3.1. Proximal Cutting Plane Methods
5.3.2. Bundle Methods
5.3.3. Proximal Inner Linearization Methods
5.4. Alternating Direction Methods of Multipliers
5.4.1. Applications in Machine Learning
5.4.2. ADMM Applied to Separable Problems
5.5. Notes, Sources, and Exercises
6. Additional Algorithmic Topics
6.1. Gradient Projection Methods
6.2. Gradient Projection with Extrapolation
6.2.1. An Algorithm with Optimal Iteration Complexity
6.2.2. Nondifferentiable Cost Smoothing
6.3. Proximal Gradient Methods
6.4. Incremental Subgradient Proximal Methods
6.4.1. Convergence for Methods with Cyclic Order
6.4.2. Convergence for Methods with Randomized Order
6.4.3. Application in Specially Structured Problems
6.4.4. Incremental Constraint Projection Methods
6.5. Coordinate Descent Methods
6.5.1. Variants of Coordinate Descent
6.5.2. Distributed Asynchronous Coordinate Descent
6.6. Generalized Proximal Methods
6.7. ε-Descent and Extended Monotropic Programming
6.7.1. ε-Subgradients
6.7.2. ε-Descent Method
6.7.3. Extended Monotropic Programming Duality
6.7.4. Special Cases of Strong Duality
6.8. Interior Point Methods
6.8.1. Primal-Dual Methods for Linear Programming
6.8.2. Interior Point Methods for Conic Programming
6.8.3. Central Cutting Plane Methods
6.9. Notes, Sources, and Exercises
Appendix A: Mathematical Background
A.1. Linear Algebra
A.2. Topological Properties
A.3. Derivatives
A.4. Convergence Theorems
Appendix B: Convex Optimization Theory: A Summary
B.1. Basic Concepts of Convex Analysis
B.2. Basic Concepts of Polyhedral Convexity
B.3. Basic Concepts of Convex Optimization
B.4. Geometric Duality Framework
B.5. Duality and Optimization
References
Index