Neatra Groups: On Women Empowerment Mission

Convex Optimization in Normed Spaces

Original price: ₹6,181.00. Current price: ₹4,944.00 (20% off).
This work is intended to serve as a guide for graduate students and researchers who wish to become acquainted with the main theoretical and practical tools for the numerical minimization of convex functions on Hilbert spaces. It therefore contains the main tools necessary to conduct independent research on the topic. It is also a concise, easy-to-follow and self-contained textbook, which may be useful for any researcher working in related fields, as well as for teachers giving graduate-level courses on the topic. It includes a thorough review of the existing literature, covering both classical and state-of-the-art references.

Convex Optimization with Computational Errors

Original price: ₹8,558.00. Current price: ₹6,846.00 (20% off).
The book is devoted to the study of approximate solutions of optimization problems in the presence of computational errors. It contains a number of results on the convergence behavior of algorithms in a Hilbert space, which are known to be important tools for solving optimization problems. The research presented here continues and further develops the author's 2016 book Numerical Optimization with Computational Errors (Springer, 2016). Both books study algorithms while taking into account computational errors, which are always present in practice. The main goal is, for a known computational error, to determine what approximate solution can be obtained and how many iterates are needed to obtain it. The main difference between this book and the 2016 book is that here the discussion takes into account the fact that, for every algorithm, each iteration consists of several steps and that the computational errors of different steps are, in general, different. This fact, which was not taken into account in the previous book, is indeed important in practice. For example, the subgradient projection algorithm consists of two steps: the first is the calculation of a subgradient of the objective function, while in the second one calculates a projection onto the feasible set. Each of these two steps carries a computational error, and the two errors are different in general. It may happen that the feasible set is simple and the objective function is complicated; as a result, the computational error made when calculating the projection is essentially smaller than the error of the subgradient calculation. Clearly, the opposite case is possible too. Another feature of this book is the study of a number of important algorithms that appeared recently in the literature and are not discussed in the previous book. The monograph contains 12 chapters. Chapter 1 is an introduction.
In Chapter 2 we study the subgradient projection algorithm for the minimization of convex nonsmooth functions. We generalize the results of [NOCE] and establish results that have no prototype in [NOCE]. In Chapter 3 we analyze the mirror descent algorithm for the minimization of convex nonsmooth functions in the presence of computational errors. For this algorithm, each iteration consists of two steps: the first is the calculation of a subgradient of the objective function, while in the second one solves an auxiliary minimization problem on the set of feasible points. Each of these two steps carries a computational error. We again generalize the results of [NOCE] and establish results that have no prototype there. In Chapter 4 we analyze the projected gradient algorithm with a smooth objective function in the presence of computational errors. In Chapter 5 we consider an algorithm that extends the projected gradient algorithm used for solving linear inverse problems arising in signal/image processing. In Chapter 6 we study the continuous subgradient method and the continuous subgradient projection algorithm for the minimization of convex nonsmooth functions and for computing saddle points of convex-concave functions, in the presence of computational errors. None of the results of this chapter has a prototype in [NOCE]. In Chapters 7-12 we analyze several algorithms, not considered in [NOCE], in the presence of computational errors. Again, each step of an iteration carries a computational error, and we take into account that these errors are, in general, different. An optimization problem with a composite objective function is studied in Chapter 7. A two-player zero-sum game is considered in Chapter 8. A predicted decrease approximation.
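The two-step iteration described in the blurb above, a subgradient calculation followed by a projection, each carrying its own computational error, can be sketched in a few lines. The concrete objective |x - 3|, the feasible interval [-2, 2], the diminishing step size 1/k, and the uniform error model are illustrative assumptions for this toy sketch, not taken from the book:

```python
import random

def subgradient(x):
    # A subgradient of f(x) = |x - 3|.
    return 1.0 if x > 3 else -1.0

def project(x):
    # Euclidean projection onto the feasible interval [-2, 2].
    return max(-2.0, min(2.0, x))

def subgradient_projection(x0, steps=200, delta1=1e-3, delta2=1e-6):
    # delta1, delta2: bounds on the (generally different) computational
    # errors of the subgradient step and the projection step.
    x = x0
    for k in range(1, steps + 1):
        g = subgradient(x) + random.uniform(-delta1, delta1)  # step 1: inexact subgradient
        y = x - (1.0 / k) * g                                 # diminishing step size
        x = project(y) + random.uniform(-delta2, delta2)      # step 2: inexact projection
    return x

random.seed(0)
x_star = subgradient_projection(0.0)
# With errors this small, the iterates approach the constrained minimizer x = 2.
print(x_star)
```

The sketch illustrates the book's central point: the accuracy of the computed solution is limited by both error bounds, and a cheap, accurate projection (small delta2) cannot compensate for a crude subgradient oracle (large delta1), or vice versa.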

Dynamics and Control of Trajectory Tubes

Original price: ₹4,754.00. Current price: ₹3,803.00 (20% off).
This monograph presents theoretical methods involving the Hamilton–Jacobi–Bellman formalism in conjunction with set-valued techniques of nonlinear analysis to solve significant problems in dynamics and control. The emphasis is on issues of reachability, feedback control synthesis under complex state constraints, hard or double bounds on controls, and performance in finite time. Guaranteed state estimation, output feedback control, and hybrid dynamics are also discussed. Although the focus is on systems with linear structure, the authors indicate how to apply each approach to nonlinear and nonconvex systems. The main theoretical results lead to computational schemes based on extensions of ellipsoidal calculus that provide complete solutions to the problems. These computational schemes in turn yield software tools that can be applied effectively to high-dimensional systems. Dynamics and Control of Trajectory Tubes will interest graduate and senior undergraduate students, as well as researchers and practitioners interested in control theory, its applications, and its computational realizations.

Lectures on Variational Analysis

Original price: ₹7,607.00. Current price: ₹6,086.00 (20% off).
This book presents an introduction to variational analysis, a field which unifies theories and techniques developed in calculus of variations, optimization, and control, and covers convex analysis, nonsmooth analysis, and set-valued analysis. It focuses on problems with constraints, the analysis of which involves set-valued mappings and functions that are not differentiable. Applications of variational analysis are interdisciplinary, ranging from financial planning to steering a flying object. The book is addressed to graduate students, researchers, and practitioners in mathematical sciences, engineering, economics, and finance. A typical reader of the book should be familiar with multivariable calculus and linear algebra. Some basic knowledge in optimization, control, and elementary functional analysis is desirable, but all necessary background material is included in the book.

Lectures on Variational Analysis

Original price: ₹10,460.00. Current price: ₹8,368.00 (20% off).

Non-Smooth and Complementarity-Based Distributed Parameter Systems

Original price: ₹7,607.00. Current price: ₹6,086.00 (20% off).
Many of the most challenging problems in the applied sciences involve non-differentiable structures as well as partial differential operators, thus leading to non-smooth distributed parameter systems. This edited volume aims to establish a theoretical and numerical foundation and to develop new algorithmic paradigms for the treatment of non-smooth phenomena and associated parameter influences. Other goals include the realization and further advancement of these concepts in the context of robust and hierarchical optimization, partial differential games, and nonlinear partial differential complementarity problems, as well as their validation in the context of complex applications. Areas for which applications are considered include optimal control of multiphase fluids and of superconductors, image processing, thermoforming, and the formation of rivers and networks. Chapters are written by leading researchers and present results obtained in the first funding phase of the DFG Special Priority Program on Nonsmooth and Complementarity Based Distributed Parameter Systems: Simulation and Hierarchical Optimization, which ran from 2016 to 2019.

Non-Smooth and Complementarity-Based Distributed Parameter Systems

Original price: ₹10,460.00. Current price: ₹8,368.00 (20% off).