This blog post is an addendum to a three-post miniseries.¹
Here we present two notebooks.
- 02a—SVD-with-pytorch-optimizer-SGD.ipynb
A drop-in replacement for the stochastic gradient + momentum method shown earlier,² but using the built-in PyTorch SGD optimizer (a minimal sketch follows this list).
- 02b—SVD-with-pytorch-optimizer-adam.ipynb
Uses the built-in PyTorch Adam optimizer rather than the SGD optimizer (the one-line swap is shown after the sketch below). As is known from the literature, the Adam optimizer often gives better results.³
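To make the idea concrete, here is a minimal sketch of SVD by gradient descent using the built-in optimizer. It assumes we learn factors U, S, V so that M ≈ U diag(S) Vᵀ, with a reconstruction loss plus soft orthogonality penalties; the example matrix, variable names, learning rate and iteration count are illustrative assumptions, not the notebook's actual values.

```python
import torch

torch.manual_seed(0)

# Hypothetical example matrix; the notebook uses its own data.
M = torch.randn(6, 4)
m, n = M.shape
k = min(m, n)

# Factors to be learned, so that M ≈ U @ diag(S) @ V.T
U = torch.randn(m, k, requires_grad=True)
S = torch.randn(k, requires_grad=True)
V = torch.randn(n, k, requires_grad=True)

# The built-in optimizer replaces the hand-rolled SGD + momentum update.
optimizer = torch.optim.SGD([U, S, V], lr=1e-3, momentum=0.9)

for step in range(10_000):  # iteration count chosen for illustration
    optimizer.zero_grad()
    reconstruction = U @ torch.diag(S) @ V.T
    # Reconstruction error plus soft orthogonality penalties on U and V.
    loss = (
        ((M - reconstruction) ** 2).sum()
        + ((U.T @ U - torch.eye(k)) ** 2).sum()
        + ((V.T @ V - torch.eye(k)) ** 2).sum()
    )
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.6f}")
```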
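For the second notebook, the only structural change relative to the sketch above is the choice of optimizer; the learning rate shown here is again an illustrative assumption rather than the notebook's actual setting.

```python
# Same factors and training loop as above; only the optimizer changes.
# lr=1e-2 is an illustrative choice, not necessarily the notebook's value.
optimizer = torch.optim.Adam([U, S, V], lr=1e-2)
```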
So here’s the notebook using the built-in PyTorch SGD optimizer.
[nbconvert url="https://github.com/hfwittmann/matrix-operations-with-pytorch/blob/master/Matrix-operations-with-pytorch-optimizer/02a—SVD-with-pytorch-optimizer-SGD.ipynb"]

And here’s the notebook using the built-in PyTorch Adam optimizer.
[nbconvert url="https://github.com/hfwittmann/matrix-operations-with-pytorch/blob/master/Matrix-operations-with-pytorch-optimizer/02b—SVD-with-pytorch-optimizer-adam.ipynb"]

- 1. Matrix operations with pytorch – optimizer – part 1. Arthought. https://arthought.com/matrix-operations-with-pytorch-optimizer-part-1/. Published February 7, 2020. Accessed February 7, 2020.
- 2. Matrix operations with pytorch – optimizer – part 3. Arthought. https://arthought.com/matrix-operations-with-pytorch-optimizer-part-3/. Published February 7, 2020. Accessed February 7, 2020.
- 3. Stochastic gradient descent. Wikipedia. https://en.wikipedia.org/wiki/Stochastic_gradient_descent. Published February 6, 2020. Accessed February 6, 2020.