Convex Optimization with Computational Errors
Original price: ₹ 8,558.00. Current price: ₹ 6,846.00 (-21%).
The book is devoted to the study of approximate solutions of optimization problems in the presence of computational errors. It contains a number of results on the convergence behavior of algorithms in a Hilbert space, which are important tools for solving optimization problems. The research presented here continues and further develops the author's book Numerical Optimization with Computational Errors (Springer, 2016). Both books study algorithms while taking into account the computational errors that are always present in practice. The main goal is, for a known computational error level, to determine what approximate solution can be obtained and how many iterations are needed to obtain it.
The main difference between this new book and the 2016 book is that the present discussion takes into consideration the fact that, for every algorithm, an iteration consists of several steps and that the computational errors of different steps are, in general, different. This fact, which was not taken into account in the previous book, is important in practice. For example, the subgradient projection algorithm consists of two steps: the first is the calculation of a subgradient of the objective function, while in the second we calculate a projection onto the feasible set. Each of these steps introduces its own computational error, and the two errors are, in general, different. It may happen that the feasible set is simple and the objective function is complicated; as a result, the error made in calculating the projection is essentially smaller than the error made in calculating the subgradient. Clearly, the opposite case is possible too. Another feature of this book is the study of a number of important algorithms that have appeared recently in the literature and are not discussed in the previous book.
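To make the two-step error structure concrete, here is a minimal Python sketch (illustrative only, not code from the book): each iteration first computes an inexact subgradient with error level delta_g and then an inexact projection with error level delta_p, so the two steps carry different error bounds. The function names and the toy problem are our own assumptions.

```python
import numpy as np

def noisy_subgradient_projection(x0, subgrad, project, step, n_iters,
                                 delta_g=1e-3, delta_p=1e-6, rng=None):
    """Subgradient projection with a separate error level for each step.

    delta_g bounds the error made when calculating a subgradient;
    delta_p bounds the error made when calculating the projection.
    As in the discussion above, the two levels may differ widely.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        # Step 1: inexact subgradient of the objective function at x.
        g = subgrad(x) + delta_g * rng.standard_normal(x.shape)
        # Step 2: inexact projection of the subgradient step onto the feasible set.
        x = project(x - step(k) * g) + delta_p * rng.standard_normal(x.shape)
    return x

# Toy problem with a "simple" feasible set: minimize f(x) = ||x - b||_1
# over the Euclidean unit ball, where the projection is cheap.
b = np.array([2.0, -1.0])
subgrad = lambda x: np.sign(x - b)                   # a subgradient of f at x
project = lambda y: y / max(1.0, np.linalg.norm(y))  # projection onto the unit ball
x_approx = noisy_subgradient_projection(
    np.zeros(2), subgrad, project,
    step=lambda k: 1.0 / np.sqrt(k + 1), n_iters=500)
```

With delta_p much smaller than delta_g, the sketch mimics the "simple feasible set, complicated objective" case described above; swapping the two levels mimics the opposite case.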
This monograph contains 12 chapters. Chapter 1 is an introduction. In Chapter 2 we study the subgradient projection algorithm for the minimization of convex nonsmooth functions; we generalize the results of [NOCE] and establish results that have no prototype in [NOCE]. In Chapter 3 we analyze the mirror descent algorithm for the minimization of convex nonsmooth functions in the presence of computational errors. For this algorithm each iteration consists of two steps: the first is the calculation of a subgradient of the objective function, while in the second we solve an auxiliary minimization problem on the set of feasible points, and each of these steps carries a computational error. Here too we generalize the results of [NOCE] and establish results that have no prototype there. In Chapter 4 we analyze the projected gradient algorithm with a smooth objective function in the presence of computational errors. In Chapter 5 we consider an algorithm that is an extension of the projected gradient algorithm used for solving linear inverse problems arising in signal/image processing. In Chapter 6 we study the continuous subgradient method and the continuous subgradient projection algorithm for the minimization of convex nonsmooth functions and for computing the saddle points of convex-concave functions, in the presence of computational errors; none of the results of this chapter has a prototype in [NOCE]. In Chapters 7–12 we analyze several algorithms, not considered in [NOCE], in the presence of computational errors. Again, each step of an iteration carries a computational error, and we take into account that these errors are, in general, different. An optimization problem with a composite objective function is studied in Chapter 7. A two-player zero-sum game is considered in Chapter 8. A predicted decrease approximation…
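For Chapter 3's setting, the following sketch (again ours, not the book's) replaces the projection by the auxiliary minimization step of mirror descent. With the entropy mirror map on the probability simplex, the auxiliary problem min over the simplex of <g, x> + D(x, x_k)/alpha has the closed-form multiplicative update used below, and the second error level delta_m models solving it only approximately; all names here are illustrative assumptions.

```python
import numpy as np

def noisy_mirror_descent(x0, subgrad, step, n_iters,
                         delta_g=1e-3, delta_m=1e-6, rng=None):
    """Entropic mirror descent on the simplex, with two error sources:
    delta_g for the subgradient step, delta_m for the auxiliary
    minimization step (here solved in closed form, then perturbed)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        # Step 1: inexact subgradient.
        g = subgrad(x) + delta_g * rng.standard_normal(x.shape)
        # Step 2: exact solution of the auxiliary problem (multiplicative
        # weights update), perturbed to model an inexact inner solve.
        y = x * np.exp(-step(k) * g)
        y = np.maximum(y + delta_m * rng.standard_normal(x.shape), 1e-12)
        x = y / y.sum()  # renormalize back onto the simplex
    return x

# Toy problem: minimize the linear function <c, x> over the simplex.
c = np.array([0.3, 0.1, 0.6])
x_approx = noisy_mirror_descent(
    np.full(3, 1.0 / 3.0), lambda x: c,
    step=lambda k: 1.0 / np.sqrt(k + 1), n_iters=500)
# x_approx concentrates near the vertex with the smallest c_i.
```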
Genericity in Nonlinear Analysis
Original price: ₹ 9,509.00. Current price: ₹ 7,607.00 (-21%).
This book presents an extensive collection of state-of-the-art results and references in nonlinear functional analysis, demonstrating how the generic approach proves very useful in solving many interesting and important problems. Nonlinear analysis plays an ever-increasing role in theoretical and applied mathematics, as well as in many other areas of science such as engineering, statistics, computer science, economics, finance, and medicine. The text may be used as supplementary material for graduate courses in nonlinear functional analysis, optimization theory, and approximation theory, and is a treasure trove for instructors, researchers, and practitioners in mathematics and in the mathematical sciences. Each chapter is self-contained; proofs are solid and carefully communicated. Genericity in Nonlinear Analysis is the first book to systematically present the generic approach to nonlinear analysis. Topics presented include convergence analysis of powers and infinite products via the Baire Category Theorem, fixed point theory of both single- and set-valued mappings, best approximation problems, discrete and continuous descent methods for minimization in a general Banach space, and the structure of minimal energy configurations with rational numbers in the Aubry–Mather theory.
Optimal Control Problems Related to the Robinson–Solow–Srinivasan Model
Original price: ₹ 11,411.00. Current price: ₹ 9,129.00 (-20%).
This book is devoted to the study of classes of optimal control problems arising in economic growth theory, related to the Robinson–Solow–Srinivasan (RSS) model. The model was introduced in the 1960s by economists Joan Robinson, Robert Solow, and Thirukodikaval Nilakanta Srinivasan and was further studied by Robinson, Nobuo Okishio, and Joseph Stiglitz. Since then, the study of the RSS model has become an important element of economic dynamics. In this book, two large general classes of optimal control problems, both of them containing the RSS model as a particular case, are presented for study. For these two classes, a turnpike theory is developed and the existence of solutions to the corresponding infinite horizon optimal control problems is established.
The book contains 9 chapters. Chapter 1 discusses turnpike properties for some optimal control problems known in the literature, including problems corresponding to the RSS model. The first class of optimal control problems is studied in Chaps. 2–6. In Chap. 2, infinite horizon optimal control problems with nonautonomous optimality criteria are considered; the utility functions that determine the optimality criterion are nonconcave, and this class of models contains the RSS model as a particular case. The stability of the turnpike phenomenon of the one-dimensional nonautonomous concave RSS model is analyzed in Chap. 3. The following chapter takes up the study of a class of autonomous nonconcave optimal control problems, a subclass of the problems considered in Chap. 2; the equivalence of the turnpike property and the asymptotic turnpike property, as well as the stability of the turnpike phenomenon, is established. Turnpike conditions and the stability of the turnpike phenomenon for nonautonomous problems are examined in Chap. 5, with Chap. 6 devoted to the study of the turnpike properties of the one-dimensional nonautonomous nonconcave RSS model. There, the class of RSS models is identified with a complete metric space of utility functions, and, using the Baire category approach, the turnpike phenomenon is shown to hold for most of the models. Chapter 7 begins the study of the second large class of autonomous optimal control problems, and turnpike conditions are established. The stability of the turnpike phenomenon for this class of problems is investigated further in Chaps. 8 and 9.
Optimal Control Problems Related to the Robinson–Solow–Srinivasan Model
Original price: ₹ 8,083.00. Current price: ₹ 6,466.00 (-21%).
Optimization in Banach Spaces
Original price: ₹ 4,279.00. Current price: ₹ 3,423.00 (-21%).
The book is devoted to the study of constrained minimization problems on closed convex sets in Banach spaces with a Fréchet differentiable objective function. Such problems are well studied in a finite-dimensional space and in an infinite-dimensional Hilbert space. When the space is a Hilbert space, there are many algorithms for solving optimization problems, including the gradient projection algorithm, which is one of the most important tools in optimization theory, nonlinear analysis, and their applications. An optimization problem is described by an objective function and a set of feasible points. For the gradient projection algorithm each iteration consists of two steps: the first is the calculation of a gradient of the objective function, while in the second we calculate a projection onto the feasible set. In each of these two steps there is a computational error. In our recent research we show that the gradient projection algorithm generates a good approximate solution if all the computational errors are bounded from above by a small positive constant. It should be mentioned that the properties of a Hilbert space play an important role here. When we consider an optimization problem in a general Banach space, the situation becomes more difficult and less understood; on the other hand, such problems arise in approximation theory. The book is of interest to mathematicians working in optimization and can also be useful in preparatory courses for graduate students. The main feature of the book, which appeals specifically to this audience, is the study of algorithms for convex and nonconvex minimization problems in a general Banach space. The book is also of interest to experts in applications of optimization to approximation theory.
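As a rough illustration of the two-step iteration described above (our own sketch, not the book's code), here is a gradient projection iteration for a smooth objective with the classical step size 1/L for an L-smooth function; delta_g and delta_p are assumed bounds on the gradient and projection errors, and the toy problem is illustrative.

```python
import numpy as np

def noisy_gradient_projection(x0, grad, project, lipschitz, n_iters,
                              delta_g=1e-4, delta_p=1e-6, rng=None):
    """Gradient projection for a smooth (Frechet differentiable) objective;
    each of the two steps carries its own bounded computational error."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad(x) + delta_g * rng.standard_normal(x.shape)      # inexact gradient
        y = x - g / lipschitz                                     # step size 1/L
        x = project(y) + delta_p * rng.standard_normal(x.shape)   # inexact projection
    return x

# Toy problem: minimize (1/2)||Ax - b||^2 over the box [0, 1]^2.
A = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A.T @ (A @ x - b)
project = lambda y: np.clip(y, 0.0, 1.0)   # projection onto the box
L = np.linalg.norm(A.T @ A, 2)             # Lipschitz constant of the gradient
x_approx = noisy_gradient_projection(np.zeros(2), grad, project, L, n_iters=300)
```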
In this book the goal is to obtain a good approximate solution of the constrained optimization problem in a general Banach space in the presence of computational errors. It is shown that the algorithm generates a good approximate solution if the sequence of computational errors is bounded from above by a small constant. The book consists of four chapters. In the first we discuss several algorithms that are studied in the book and prove a convergence result for an unconstrained problem, which is a prototype of our results for the constrained problem. In Chapter 2 we analyze convex optimization problems. Nonconvex optimization problems are studied in Chapter 3. In Chapter 4 we study continuous algorithms for minimization problems in the presence of computational errors.