Matrix operations with pytorch – optimizer – addendum

This blog post is an addendum to a three-post miniseries​1​.

Here we present two notebooks.

  • 02a—SVD-with-pytorch-optimizer-SGD.ipynb

This notebook is a drop-in replacement for the stochastic gradient + momentum method shown earlier​2​, but uses the built-in pytorch SGD optimizer.
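
The switch can be sketched as follows. This is a minimal, hypothetical least-squares toy problem (not the notebook's actual SVD setup); the point is the typical `torch.optim.SGD` training loop with momentum that replaces a hand-written update rule:

```python
import torch

torch.manual_seed(0)

# Hypothetical toy problem: recover x from y = A x by least squares.
A = torch.eye(5) + 0.1 * torch.randn(5, 5)   # well-conditioned matrix
x_true = torch.randn(5, 1)
y = A @ x_true

x = torch.zeros(5, 1, requires_grad=True)

# The built-in optimizer replaces the hand-written SGD + momentum update.
opt = torch.optim.SGD([x], lr=0.01, momentum=0.9)

for _ in range(2000):
    opt.zero_grad()                    # reset accumulated gradients
    loss = ((A @ x - y) ** 2).mean()   # squared reconstruction error
    loss.backward()                    # autograd fills x.grad
    opt.step()                         # one SGD + momentum update
```

The loop body is identical to a manual implementation except that `opt.step()` applies the update, so swapping in a different optimizer touches only the constructor line.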

  • 02b—SVD-with-pytorch-optimizer-adam.ipynb

This notebook uses the built-in pytorch Adam optimizer rather than the SGD optimizer. As reported in the literature, the Adam optimizer often achieves better results​3​.
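
With `torch.optim`, swapping SGD for Adam is a one-line change. A minimal sketch on the same kind of hypothetical toy problem (the matrix, learning rate, and step count here are illustrative, not the notebook's):

```python
import torch

torch.manual_seed(0)

A = torch.eye(4) + 0.1 * torch.randn(4, 4)
y = torch.randn(4, 1)

x = torch.zeros(4, 1, requires_grad=True)

# The only change from the SGD version is the optimizer constructor.
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(2000):
    opt.zero_grad()
    loss = ((A @ x - y) ** 2).mean()
    loss.backward()
    opt.step()
```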

Continue reading “Matrix operations with pytorch – optimizer – addendum”

Matrix operations with pytorch – optimizer – part 3

SVD with pytorch optimizer

This blog post is part of a three-post miniseries.

Today’s post covers SVD with the pytorch optimizer.

The point of the entire miniseries is to reproduce matrix operations such as the matrix inverse and SVD using pytorch’s automatic differentiation capability.

These algorithms are already implemented in pytorch itself and other libraries such as scikit-learn. However, we will solve this problem in a general way using gradient descent. We hope that this will provide an understanding of the power of the gradient method in general and the capabilities of pytorch in particular.
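
One possible formulation is sketched below. This is an illustrative assumption, not necessarily the loss used in the notebook: treat the factors U, s, V of M ≈ U diag(s) Vᵀ as free parameters and minimize the reconstruction error plus penalties that push U and V towards orthogonality:

```python
import torch

torch.manual_seed(0)

M = torch.randn(4, 4)   # the matrix to decompose

# Free parameters for the three SVD factors: M ≈ U @ diag(s) @ V.T
U = torch.randn(4, 4, requires_grad=True)
s = torch.rand(4, requires_grad=True)
V = torch.randn(4, 4, requires_grad=True)

opt = torch.optim.Adam([U, s, V], lr=0.01)
I = torch.eye(4)

for _ in range(5000):
    opt.zero_grad()
    # Reconstruction error: zero exactly when U diag(s) V.T equals M.
    recon = ((U @ torch.diag(s) @ V.T - M) ** 2).sum()
    # Penalties pushing U and V towards orthogonal matrices.
    ortho = ((U.T @ U - I) ** 2).sum() + ((V.T @ V - I) ** 2).sum()
    loss = recon + ortho
    loss.backward()
    opt.step()
```

Gradient descent drives both terms towards zero, at which point U and V are (numerically) orthogonal and U diag(s) Vᵀ reconstructs M.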

Continue reading “Matrix operations with pytorch – optimizer – part 3”

Matrix operations with pytorch – optimizer – part 2

pytorch – matrix inverse with pytorch optimizer

This blog post is part of a three-post miniseries.

Today’s post covers the matrix inverse with the pytorch optimizer.
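
The idea can be sketched as follows. This is a minimal, hypothetical example (the notebook's actual code may differ): treat a candidate inverse X as a parameter and minimize ‖AX − I‖², which is zero exactly when X = A⁻¹:

```python
import torch

torch.manual_seed(0)

A = torch.eye(3) + 0.1 * torch.randn(3, 3)   # well-conditioned test matrix
I = torch.eye(3)

X = torch.zeros(3, 3, requires_grad=True)    # candidate inverse of A

opt = torch.optim.SGD([X], lr=0.1, momentum=0.9)

for _ in range(2000):
    opt.zero_grad()
    loss = ((A @ X - I) ** 2).sum()   # zero exactly when X = A^{-1}
    loss.backward()
    opt.step()

# X should now be close to the true inverse of A.
```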

Continue reading “Matrix operations with pytorch – optimizer – part 2”

Matrix operations with pytorch – optimizer – part 1

pytorch – playing with tensors

This blog post is part of a three-post miniseries.

To avoid reader fatigue, we present the material in three posts:

  • An introductory section: pytorch – playing with tensors demonstrates some basic tensor usage and shows how to calculate various derivatives.
  • A main section: pytorch – matrix inverse with pytorch optimizer shows how to calculate the matrix inverse​1​ using gradient descent.
  • An advanced section: SVD with pytorch optimizer shows how to do singular value decomposition​1,2​ with gradient descent.
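
The derivative calculation mentioned in the introductory section can be sketched like this (a minimal illustration of pytorch autograd, not code from the notebook):

```python
import torch

# d/dx (x**3 + 2*x) = 3*x**2 + 2, which at x = 2 equals 14.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3 + 2 * x
y.backward()          # autograd computes dy/dx and stores it in x.grad

print(x.grad)         # -> tensor(14.)
```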

The code of this post is provided in a jupyter notebook on github:

https://github.com/hfwittmann/matrix-operations-with-pytorch/blob/master/Matrix-operations-with-pytorch-optimizer/00—pytorch-playing-with-tensors.ipynb

Remark: the following part of the post is written directly in a Jupyter notebook. It is displayed via the very nice WordPress plugin nbconvert​3​.

Continue reading “Matrix operations with pytorch – optimizer – part 1”