Sciweavers

311 search results (page 17 of 63) for "Gradient Convergence in Gradient methods with Errors"
MOC 2002
Directional Newton methods in n variables
Directional Newton methods for functions f of n variables are shown to converge, under standard assumptions, to a solution of f(x) = 0. The rate of convergence is quadratic, for ne...
Yuri Levin, Adi Ben-Israel
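
To make the scheme concrete, here is a minimal NumPy sketch of a directional Newton iteration for a single equation f(x) = 0 in n variables, with the search direction taken along the gradient (one standard choice). The function name and test problem are illustrative assumptions; this shows the general idea, not necessarily the authors' exact method.

```python
import numpy as np

def directional_newton(f, grad, x0, tol=1e-10, max_iter=50):
    """Directional Newton iteration for a single equation f(x) = 0, x in R^n.

    With direction d = grad(x), the 1-D Newton step along d is
    x+ = x - f(x) * d / (grad(x) . d), which here reduces to
    x+ = x - f(x) * g / ||g||^2.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = grad(x)
        x = x - fx * g / np.dot(g, g)  # with d = g, grad(x).d = ||g||^2
    return x

# Illustrative example: f(x, y) = x^2 + y^2 - 4, zero set is a circle of radius 2.
f = lambda x: x[0]**2 + x[1]**2 - 4.0
grad = lambda x: np.array([2 * x[0], 2 * x[1]])
root = directional_newton(f, grad, [3.0, 1.0])  # converges to a point on the circle
```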
OL 2011
A robust implementation of a sequential quadratic programming algorithm with successive error restoration
We consider sequential quadratic programming (SQP) methods for solving constrained nonlinear programming problems. It is generally believed that SQP methods are sensitive to the a...
Klaus Schittkowski
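
As a hedged illustration of an SQP-type solver in practice (SciPy's SLSQP, not Schittkowski's own implementation), the snippet below solves a small equality-constrained problem of the kind such methods target; the objective and constraint are made up for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a quadratic objective subject to one nonlinear equality constraint.
objective = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.5)**2
constraints = [{'type': 'eq', 'fun': lambda x: x[0]**2 + x[1]**2 - 4.0}]

res = minimize(objective, x0=np.array([1.0, 1.0]),
               method='SLSQP', constraints=constraints)
print(res.x, res.fun)  # solution constrained to the circle x^2 + y^2 = 4
```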
CVPR 2011 (IEEE)
Aggregating Gradient Distributions into Intensity Orders: A Novel Local Image Descriptor
A novel local image descriptor is proposed in this paper, which combines intensity orders and gradient distributions in multiple support regions. The novelty lies in three aspects...
Bin Fan, Fuchao Wu, Zhanyi Hu
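
A rough sketch of the underlying idea, under simplifying assumptions: pool gradient-orientation histograms by intensity order rather than by a fixed spatial grid, so the pooling is invariant to monotonic intensity changes. This is schematic only, not the paper's actual descriptor; the function name and parameters are assumptions.

```python
import numpy as np

def order_pooled_gradient_descriptor(patch, n_groups=4, n_bins=8):
    """Schematic descriptor: accumulate a gradient-orientation histogram
    per intensity-order group, then concatenate and normalize."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)  # orientation in (-pi, pi]
    bins = ((ori + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

    # Rank pixels by intensity; split the ranking into equal-size groups.
    order = np.argsort(patch, axis=None)
    group_of = np.empty(patch.size, dtype=int)
    group_of[order] = np.arange(patch.size) * n_groups // patch.size

    desc = np.zeros((n_groups, n_bins))
    np.add.at(desc, (group_of, bins.ravel()), mag.ravel())
    return (desc / (np.linalg.norm(desc) + 1e-12)).ravel()

patch = np.random.rand(32, 32)  # stand-in for a normalized support region
d = order_pooled_gradient_descriptor(patch)  # length n_groups * n_bins
```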
SIAMCO 2002
Consistent Approximations and Approximate Functions and Gradients in Optimal Control
As shown in [7], optimal control problems with either ODE or PDE dynamics can be solved efficiently using a setting of consistent approximations obtained by numerical discretizati...
Olivier Pironneau, Elijah Polak
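
To make the discretization idea concrete, here is a minimal sketch with assumed toy dynamics x' = u and a quadratic cost (not an example from the paper): forward Euler turns the ODE-constrained problem into a finite-dimensional one whose approximate cost and gradients a standard optimizer can consume, and refining the grid yields successively better approximations.

```python
import numpy as np
from scipy.optimize import minimize

# Toy LQ problem: minimize  int_0^1 (x(t)^2 + u(t)^2) dt
# subject to x'(t) = u(t), x(0) = 1, discretized by forward Euler.
N, T = 50, 1.0
h = T / N

def discrete_cost(u):
    x, J = 1.0, 0.0
    for k in range(N):
        J += h * (x**2 + u[k]**2)  # rectangle-rule quadrature of the cost
        x += h * u[k]              # Euler step for x' = u
    return J

res = minimize(discrete_cost, np.zeros(N), method='BFGS')  # finite-diff gradients
u_opt = res.x  # increasing N gives a consistent approximation of the true control
```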
ICASSP 2011 (IEEE)
Application specific loss minimization using gradient boosting
Gradient boosting is a flexible machine learning technique that produces accurate predictions by combining many weak learners. In this work, we investigate its use in two applica...
Bin Zhang, Abhinav Sethy, Tara N. Sainath, Bhuvana...
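
A minimal sketch of the generic gradient boosting loop, with the loss supplied as a gradient function so it can be swapped for an application-specific one; the helper `gradient_boost` and the synthetic data are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, loss_grad, n_rounds=100, lr=0.1):
    """Minimal gradient boosting: each round fits a small tree to the
    negative gradient of the chosen loss at the current predictions."""
    pred = np.zeros(len(y))
    trees = []
    for _ in range(n_rounds):
        residual = -loss_grad(y, pred)  # pseudo-residuals
        tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
        pred += lr * tree.predict(X)
        trees.append(tree)
    return trees

# Squared-error loss: gradient is (pred - y), so pseudo-residuals are y - pred.
sq_grad = lambda y, p: p - y
X = np.random.rand(200, 4)
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * np.random.randn(200)
model = gradient_boost(X, y, sq_grad)
```

Swapping `sq_grad` for the gradient of a task-specific loss is what changes the weak learners' targets from round to round; the loop itself stays the same.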