Training a DL model to find a good local minimum in n dimensions can be a challenge. Often, data scientists and ML engineers use gradient descent to search for it.
The starting step size (learning rate) is typically somewhere between 1e-4 and 1e-3. With a constant learning rate, the optimizer will not approach a local minimum quickly.
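As a minimal sketch of this baseline, the loop below runs plain gradient descent with a fixed learning rate on an illustrative quadratic loss (the loss, starting point, and rate here are placeholder assumptions, not values from any particular model):

```python
import numpy as np

def gradient_descent(grad_fn, w0, lr=1e-3, steps=5000):
    """Plain gradient descent: the step size (learning rate) never changes."""
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - lr * grad_fn(w)  # constant learning rate on every update
    return w

# Illustrative quadratic bowl: loss(w) = ||w||^2, so grad(w) = 2w.
grad = lambda w: 2.0 * w
print(gradient_descent(grad, w0=[3.0, -2.0]))  # creeps slowly toward the minimum at 0
```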
There are a few issues with this approach.
1) The first local minimum found may not be the best one. The optimizer can get stuck in a sharp valley, where even a small change in the weights can push the error rate up to 50% or more.
2) The first local minimum found may actually be a saddle point, which behaves like a minimum along some dimensions and a maximum along others, as shown in the saddle point graph below.
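To make the saddle-point issue concrete, here is a tiny check (my own example, not taken from the graph) on f(x, y) = x² − y²: the gradient vanishes at the origin, so gradient descent looks converged, yet the Hessian has eigenvalues of mixed sign, so the point is a saddle rather than a minimum:

```python
import numpy as np

# f(x, y) = x^2 - y^2: zero gradient at (0, 0), but curving up in x and down in y.
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

print(grad(np.array([0.0, 0.0])))       # [0. 0.] -- looks "converged"

hessian = np.array([[2.0, 0.0],
                    [0.0, -2.0]])
print(np.linalg.eigvalsh(hessian))      # [-2.  2.] -- mixed signs => saddle point
```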
When optimizing in the n-dimensional parameter space of a DL model, the best approach is to find a flat valley, where SGD can settle on stable ground and the error rate stays low, or at least close to the best value reached during optimization.
However, there is a better way.
Instead of manually choosing an initial learning rate and adjusting it every epoch or mini-batch, why don't we use a cyclical learning rate?
Here, the learning rate follows half of a cosine curve over the mini-batches of the initial cycle, and the schedule moves to a new cycle only when the validation error has stopped changing much.
The benefit of this cyclical learning-rate path is that it can kick an optimizer that is stuck in a sharp valley back out, so the error rate stays stable as the learning rate settles.
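A rough sketch of that rule might look like the following: the half-cosine runs from a maximum to a minimum learning rate over one cycle, and a new cycle is triggered only when the validation error has plateaued. The rate bounds, cycle length, and patience values here are illustrative assumptions, not values from the referenced papers:

```python
import math

def half_cosine_lr(step_in_cycle, cycle_len, lr_max=1e-3, lr_min=1e-4):
    """Half-cosine schedule: lr_max at the start of a cycle, lr_min at the end."""
    t = min(step_in_cycle, cycle_len) / cycle_len
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))

def validation_plateaued(val_errors, patience=3, tol=1e-3):
    """True when the last `patience` validation errors improved by less than `tol`."""
    if len(val_errors) <= patience:
        return False
    return min(val_errors[:-patience]) - min(val_errors[-patience:]) < tol
```

In a training loop, step_in_cycle would be reset to zero whenever validation_plateaued returns True, which sends the learning rate back up to lr_max and gives the optimizer a chance to escape a sharp valley.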
The better approach, then, is to raise the learning rate along a shorter cosine segment and lower it along a longer cosine segment, as shown below. This gives SGD a better chance of landing in a flat valley of the n-dimensional space, since the number of potential local minima to explore grows exponentially with the number of dimensions.
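One way to realize that asymmetric cycle, as a sketch only (the segment lengths and rate bounds below are made-up values):

```python
import math

def asymmetric_cosine_lr(step, rise_steps=500, decay_steps=4500,
                         lr_min=1e-4, lr_max=1e-3):
    """One asymmetric cycle: a short cosine rise from lr_min to lr_max,
    then a longer cosine decay back down to lr_min."""
    cycle_len = rise_steps + decay_steps
    s = step % cycle_len                       # repeat the cycle over training
    if s < rise_steps:                         # short, fast-rising half-cosine
        t = s / rise_steps
        return lr_min + 0.5 * (lr_max - lr_min) * (1.0 - math.cos(math.pi * t))
    t = (s - rise_steps) / decay_steps         # long, slow-falling half-cosine
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))
```

For the decay-only variant with periodic restarts, PyTorch ships torch.optim.lr_scheduler.CosineAnnealingWarmRestarts, which implements the SGDR schedule from reference 3 below.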
References
1. Leslie N. Smith. Cyclical Learning Rates for Training Neural Networks. arXiv preprint arXiv:1506.01186, 2015.
2. Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio. RMSProp and Equilibrated Adaptive Learning Rates for Non-Convex Optimization. arXiv preprint arXiv:1502.04390, 2015.
3. I. Loshchilov and F. Hutter. SGDR: Stochastic Gradient Descent with Warm Restarts. arXiv preprint arXiv:1608.03983, 2016.
4. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. arXiv preprint arXiv:1609.04836, 2016.
5. Gao Huang, Yixuan Li, et al. Snapshot Ensembles: Train 1, Get M for Free. arXiv preprint arXiv:1704.00109, 2017.