Limited-memory quasi-Newton optimization algorithm for...

Data processing: artificial intelligence – Machine learning

Reexamination Certificate


Details

Patent number: 07933847

Type: Reexamination Certificate

Status: active

U.S. classification: C706S025000, C704S009000, C704S256000, C704S257000

ABSTRACT:
An algorithm that adapts methods developed for optimizing differentiable functions so that they can also handle the special non-differentiabilities that arise with L1 regularization. The algorithm modifies the L-BFGS (limited-memory Broyden-Fletcher-Goldfarb-Shanno) quasi-Newton algorithm to handle the discontinuity of the gradient by choosing a search direction at each iteration and modifying the line search procedure. The algorithm includes an iterative optimization procedure in which each iteration approximately minimizes the objective over a constrained region of the space on which the objective is differentiable (in the case of L1 regularization, a given orthant), models the second-order behavior of the objective by considering the loss component alone, uses a “line search” at each iteration that projects search points back onto the chosen orthant, and determines when to stop the line search.
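
The procedure described in the abstract can be sketched in code. The following Python fragment is an illustration only and is not taken from the patent text: the function names (pseudo_gradient, project, owlqn_like_step), the use of a plain steepest-descent direction in place of the limited-memory quasi-Newton direction that models curvature from the loss component alone, and the simple "first improving point" stopping rule for the line search are all simplifying assumptions made to keep the example short.

import numpy as np

def pseudo_gradient(x, grad_loss, l1_weight):
    # Pseudo-gradient of f(x) = loss(x) + l1_weight * ||x||_1.
    # Where x_i != 0 the L1 term is differentiable; where x_i == 0 take the
    # one-sided derivative that decreases the objective, or 0 if neither does.
    pg = grad_loss + l1_weight * np.sign(x)
    at_zero = (x == 0)
    right = grad_loss[at_zero] + l1_weight   # derivative moving x_i positive
    left = grad_loss[at_zero] - l1_weight    # derivative moving x_i negative
    pg[at_zero] = np.where(right < 0, right, np.where(left > 0, left, 0.0))
    return pg

def project(x, orthant):
    # Keep the iterate in the chosen orthant: coordinates whose sign has
    # flipped out of the orthant are set to zero.
    return np.where(x * orthant > 0, x, 0.0)

def owlqn_like_step(x, grad_loss_fn, loss_fn, l1_weight,
                    step0=1.0, shrink=0.5, max_backtrack=20):
    # One iteration: choose a search direction from the pseudo-gradient,
    # fix the orthant, then backtrack, projecting each trial point back
    # onto that orthant and stopping at the first point that improves
    # the full objective.
    g = pseudo_gradient(x, grad_loss_fn(x), l1_weight)
    d = -g
    orthant = np.where(x != 0, np.sign(x), np.sign(d))
    f_old = loss_fn(x) + l1_weight * np.abs(x).sum()
    step = step0
    for _ in range(max_backtrack):
        x_new = project(x + step * d, orthant)
        f_new = loss_fn(x_new) + l1_weight * np.abs(x_new).sum()
        if f_new < f_old:
            return x_new
        step *= shrink
    return x  # line search found no improvement

# Toy usage: loss(x) = 0.5 * ||x - b||^2 with an L1 penalty weight of 0.5.
b = np.array([0.3, -2.0, 0.0])
loss = lambda x: 0.5 * np.sum((x - b) ** 2)
grad = lambda x: x - b
x = np.zeros(3)
for _ in range(50):
    x = owlqn_like_step(x, grad, loss, l1_weight=0.5)
print(x)  # soft-thresholded solution, approximately [0.0, -1.5, 0.0]

On this toy problem the sketch settles on the soft-thresholded minimizer, which is the behavior the projected line search is intended to produce; the patented algorithm differs mainly in building the search direction from limited-memory curvature estimates of the loss component rather than from the raw pseudo-gradient.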

REFERENCES:
patent: 5210691 (1993-05-01), Freedman et al.
patent: 7010486 (2006-03-01), Peters
patent: 2005/0119885 (2005-06-01), Axelrod et al.
patent: 2006/0074908 (2006-04-01), Selvaraj et al.
patent: 2007/0011110 (2007-01-01), Selvaraj et al.
patent: 2007/0094180 (2007-04-01), Rifkin et al.
patent: 2007/0110292 (2007-05-01), Bi et al.
Perkins, Simon et al.; “Grafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space”; 2003; Journal of Machine Learning Research 3; pp. 1333-1356.
Collins, Michael et al.; “Discriminative Reranking for Natural Language Parsing”; 2005; MIT Press; Computational Linguistics vol. 31, Issue 1; pp. 25-70.
Bertsekas, D.; “On the Goldstein-Levitin-Polyak gradient projection method”; Apr. 1976; IEEE Transactions on Automatic Control, vol. AC-21, No. 2; pp. 174-184.
Gao, Jianfeng et al.; “A Comparative Study of Parameter Estimation Methods for Statistical Natural Language Processing”; Jun. 2007; Association for Computational Linguistics; pp. 824-831.
Koh, Kwangmoo et al.; “An Efficient Method for Large-Scale L1-Regularized Convex Loss Minimization”; Jan. 2007; Proceedings IEEE Information Theory and Applications Workshop; pp. 1-8.
Figueiredo, Mario A. T.; “Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems”; 2007; pp. 586-597.
Schmidt et al.; “Fast Optimization Methods for L1 Regularization: A Comparative Study and Two New Approaches”; 12 pages.
Lalee et al.; “On the Implementation of an Algorithm for Large-Scale Equality Constrained Optimization”; 1998; Society for Industrial and Applied Mathematics; vol. 8, No. 3; pp. 682-706.
Byrd et al.; “A Limited Memory Algorithm for Bound Constrained Optimization”; 1994; 25 pages.
Vail et al.; “Feature Selection in Conditional Random Fields for Activity Recognition”; 7 pages.
Koh et al.; “An Interior-Point Method for Large-Scale L1-Regularized Logistic Regression”; 2007; Journal of Machine Learning Research 8; pp. 1519-1555.


