Neatra Groups: On Women Empowerment Mission

Convex and Stochastic Optimization

Price: ₹ 4,564.00 (original price: ₹ 5,705.00)
This textbook provides an introduction to convex duality for optimization problems in Banach spaces, to integration theory, and to their application to stochastic programming problems in a static or dynamic setting. It introduces and analyses the main algorithms for stochastic programs while treating the theoretical aspects carefully. The reader is shown how these tools can be applied to various fields, including approximation theory, semidefinite and second-order cone programming, and linear decision rules. The textbook is recommended for students, engineers and researchers who want a rigorous treatment of the mathematics involved in applying duality theory to optimization under uncertainty.
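For orientation, the following is the standard two-stage stochastic linear program that the phrase "stochastic programming in a static setting" usually refers to; the formulation is generic and not quoted from the book:

    \min_{x \ge 0,\; Ax = b}\; c^{\top} x + \mathbb{E}_{\xi}\!\left[ Q(x, \xi) \right],
    \qquad
    Q(x, \xi) = \min_{y \ge 0} \left\{\, q(\xi)^{\top} y \;:\; W(\xi)\, y = h(\xi) - T(\xi)\, x \,\right\}.

Convex duality enters through the recourse function Q(·, ξ): its subgradients with respect to x are obtained from optimal dual multipliers of the second-stage problem, which is what cutting-plane algorithms for stochastic programs exploit.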

Convex Optimization in Normed Spaces

Price: ₹ 4,944.00 (original price: ₹ 6,181.00)
This work is intended to serve as a guide for graduate students and researchers who wish to become acquainted with the main theoretical and practical tools for the numerical minimization of convex functions on Hilbert spaces. It therefore contains the main tools needed to conduct independent research on the topic. It is also a concise, easy-to-follow and self-contained textbook that may be useful for researchers working in related fields and for teachers giving graduate-level courses on the topic. It contains a thorough review of the existing literature, including both classical and state-of-the-art references.

Convex Optimization with Computational Errors

Price: ₹ 6,846.00 (original price: ₹ 8,558.00)
The book is devoted to the study of approximate solutions of optimization problems in the presence of computational errors. It contains a number of results on the convergence behavior of algorithms in a Hilbert space, which are important tools for solving optimization problems. The research presented here continues and further develops the author's book Numerical Optimization with Computational Errors (Springer, 2016). Both books study algorithms while taking into account the computational errors that are always present in practice. The main goal is, for a known computational error, to find out what approximate solution can be obtained and how many iterates are needed for it.

The main difference between this book and the 2016 book is that the present discussion takes into account the fact that each iteration of an algorithm consists of several steps and that the computational errors of different steps are, in general, different. This fact, not taken into account in the previous book, is important in practice. For example, the subgradient projection algorithm consists of two steps: the first is the calculation of a subgradient of the objective function, and the second is the calculation of a projection onto the feasible set. Each of these two steps carries a computational error, and the two errors are in general different. It may happen that the feasible set is simple while the objective function is complicated, in which case the error made in computing the projection is essentially smaller than the error in computing the subgradient; the opposite case is clearly possible too. Another feature of this book is the study of a number of important algorithms which appeared recently in the literature and are not discussed in the previous book.

The monograph contains 12 chapters. Chapter 1 is an introduction. Chapter 2 studies the subgradient projection algorithm for the minimization of convex nonsmooth functions; the results of [NOCE] are generalized and results with no prototype in [NOCE] are established. Chapter 3 analyzes the mirror descent algorithm for the minimization of convex nonsmooth functions in the presence of computational errors; each iteration of this algorithm consists of two steps, the calculation of a subgradient of the objective function and the solution of an auxiliary minimization problem on the set of feasible points, and each step carries its own computational error. Again, the results of [NOCE] are generalized and results with no prototype in [NOCE] are established. Chapter 4 analyzes the projected gradient algorithm with a smooth objective function in the presence of computational errors. Chapter 5 considers an extension of the projected gradient algorithm used for solving linear inverse problems arising in signal/image processing. Chapter 6 studies the continuous subgradient method and the continuous subgradient projection algorithm for minimizing convex nonsmooth functions and for computing saddle points of convex–concave functions in the presence of computational errors; none of the results of this chapter has a prototype in [NOCE]. Chapters 7–12 analyze, in the presence of computational errors, several algorithms that were not considered in [NOCE]. Again, each step of an iteration carries a computational error, and these errors are, in general, different. An optimization problem with a composite objective function is studied in Chapter 7, and a two-player zero-sum game is considered in Chapter 8. A predicted decrease approximation…
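To make the two-step structure described above concrete, here is a minimal sketch of a subgradient projection method in which each of the two steps is performed inexactly. It is purely illustrative and not taken from the book; the test problem, the step-size rule, and the additive error model with levels delta_sub and delta_proj are assumptions.

    import numpy as np

    # Illustrative problem (an assumption, not from the book): minimize
    # f(x) = ||x - target||_1 over the closed unit ball in R^3.
    rng = np.random.default_rng(0)
    target = np.array([2.0, -1.5, 0.5])

    def inexact_subgradient(x, delta_sub):
        # Step 1: a subgradient of f at x, perturbed by an error of norm delta_sub.
        g = np.sign(x - target)
        noise = rng.normal(size=x.shape)
        return g + delta_sub * noise / np.linalg.norm(noise)

    def inexact_projection(y, delta_proj):
        # Step 2: projection onto the unit ball, perturbed by an error of norm delta_proj.
        p = y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)
        noise = rng.normal(size=y.shape)
        return p + delta_proj * noise / np.linalg.norm(noise)

    # The two error levels are deliberately different, mirroring the point that
    # each step of an iteration carries its own computational error.
    x = np.zeros(3)
    for k in range(1, 201):
        step = 1.0 / np.sqrt(k)                                 # diminishing step size
        g = inexact_subgradient(x, delta_sub=1e-2)              # inexact step 1
        x = inexact_projection(x - step * g, delta_proj=1e-6)   # inexact step 2

    print("approximate minimizer:", x)

The questions studied in the book are exactly of the kind this sketch raises: how close to a minimizer the iterates can get, and after how many iterations, given the two error levels.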

Dynamics and Control of Trajectory Tubes

Price: ₹ 3,803.00 (original price: ₹ 4,754.00)
This monograph presents theoretical methods involving the Hamilton–Jacobi–Bellman formalism in conjunction with set-valued techniques of nonlinear analysis to solve significant problems in dynamics and control. The emphasis is on issues of reachability, feedback control synthesis under complex state constraints, hard or double bounds on controls, and performance in finite time. Guaranteed state estimation, output feedback control, and hybrid dynamics are also discussed. Although the focus is on systems with linear structure, the authors indicate how to apply each approach to nonlinear and nonconvex systems. The main theoretical results lead to computational schemes based on extensions of ellipsoidal calculus that provide complete solutions to the problems. These computational schemes in turn yield software tools that can be applied effectively to high-dimensional systems. Ellipsoidal Techniques for Problems of Dynamics and Control: Theory and Computation will interest graduate and senior undergraduate students, as well as researchers and practitioners interested in control theory, its applications, and its computational realizations.
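As a point of reference for the Hamilton–Jacobi–Bellman formalism mentioned above, reach sets of a controlled system are commonly characterized as sublevel sets of a value function; the generic formulation below is standard and is not quoted from the book.

    V(t, x) \;=\; \min_{u(\cdot)}\; \varphi\bigl(x(T)\bigr)
    \quad \text{subject to } \dot{x}(s) = f\bigl(x(s), u(s)\bigr),\; u(s) \in U,\; x(t) = x,

    \frac{\partial V}{\partial t}(t, x) + \min_{u \in U}\,\bigl\langle \nabla_x V(t, x),\, f(x, u) \bigr\rangle = 0,
    \qquad V(T, x) = \varphi(x),

where the second line is the Hamilton–Jacobi–Bellman equation satisfied (in the viscosity sense) by V. The set of states from which the target set {x : φ(x) ≤ 0} can be reached at time T is then the sublevel set {x : V(t, x) ≤ 0}; the ellipsoidal calculus developed in the monograph provides computable inner and outer approximations of such sets for linear systems.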

First-order and Stochastic Optimization Methods for Machine Learning

Price: ₹ 9,890.00 (original price: ₹ 12,362.00)
This book covers not only foundational material but also the most recent progress made during the past few years in the area of machine learning algorithms. In spite of the intensive research and development in this area, there has been no systematic treatment introducing the fundamental concepts and recent progress on machine learning algorithms, especially those based on stochastic optimization methods, randomized algorithms, nonconvex optimization, distributed and online learning, and projection-free methods. The book will benefit a broad audience in the machine learning, artificial intelligence and mathematical programming communities by presenting these recent developments in a tutorial style, starting from the basic building blocks and moving to the most carefully designed and complicated algorithms for machine learning.
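As a minimal illustration of the first-order stochastic methods named in the title, here is a plain stochastic gradient descent loop for least-squares regression; the data, mini-batch size, and step-size schedule are assumptions chosen only for the example and are not taken from the book.

    import numpy as np

    # Synthetic least-squares problem: minimize F(w) = (1/2n) * ||A w - b||^2
    # using unbiased mini-batch gradient estimates.
    rng = np.random.default_rng(1)
    n, d = 1000, 5
    A = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    b = A @ w_true + 0.01 * rng.normal(size=n)

    w = np.zeros(d)
    batch = 32
    for t in range(1, 2001):
        idx = rng.integers(0, n, size=batch)                 # sample a mini-batch
        grad = A[idx].T @ (A[idx] @ w - b[idx]) / batch      # stochastic gradient estimate
        w -= (0.1 / np.sqrt(t)) * grad                       # diminishing step size

    print("distance to the true parameters:", np.linalg.norm(w - w_true))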

From Approximate Variation to Pointwise Selection Principles

Price: ₹ 3,803.00 (original price: ₹ 4,754.00)
The book addresses the minimization of special lower semicontinuous functionals over closed balls in metric spaces, called the approximate variation. The new notion of approximate variation contains more information about the bounded variation functional and has the following features: the infimum in the definition of the approximate variation is not attained in general, and the total Jordan variation of a function is obtained by a limiting procedure as a parameter tends to zero. By means of the approximate variation, regulated functions can be characterized in a generalized sense, and powerful compactness tools in the topology of pointwise convergence, conventionally called pointwise selection principles, can be established.

The book presents a thorough, self-contained study of the approximate variation, including results not previously published in book form, and the approximate variation is illustrated by a large number of examples designed specifically for this study. The discussion elaborates on state-of-the-art pointwise selection principles applied to functions with values in metric spaces, normed spaces, reflexive Banach spaces, and Hilbert spaces. A highlighted feature is a deep study of a special type of lower semicontinuous functional, though the methods applied are of a general nature. The content is accessible to students with some background in real analysis, general topology, and measure theory. Among the new results presented are properties of the approximate variation: semi-additivity, a change of variable formula, subtle behavior with respect to uniformly and pointwise convergent sequences of functions, and the behavior on improper metric spaces. These properties are crucial for pointwise selection principles, in which the key role is played by the limit superior of the approximate variation. Interestingly, pointwise selection principles may be regular, treating regulated limit functions, or irregular, treating highly irregular functions (e.g., Dirichlet-type functions), in which case a significant role is played by Ramsey's Theorem from formal logic.
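For readers who want the limiting procedure described above in symbols, one common formalization for functions f from a set T into a metric space (M, d) is the following; it is consistent with the blurb but not quoted from the book, so the exact form should be treated as an assumption:

    V_{\varepsilon}(f) \;=\; \inf\Bigl\{\, V(g) \;:\; \sup_{t \in T} d\bigl(f(t), g(t)\bigr) \le \varepsilon \,\Bigr\},
    \qquad \varepsilon > 0,
    \qquad\text{and}\qquad
    V(f) \;=\; \lim_{\varepsilon \to 0^{+}} V_{\varepsilon}(f),

where V(g) denotes the Jordan total variation of g. The infimum is taken over the closed ball of radius ε around f in the uniform metric, it need not be attained, and the total variation is recovered from the approximate variations as ε tends to zero.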