Matrix operations with pytorch – optimizer – addendum

This blog post is an addendum to a three-post miniseries [1]. Here we present two notebooks. 02a—SVD-with-pytorch-optimizer-SGD.ipynb is a drop-in replacement for the stochastic gradient + momentum method shown earlier [2], but uses the built-in pytorch SGD optimizer. 02b—SVD-with-pytorch-optimizer-adam.ipynb uses the built-in pytorch Adam optimizer rather than the SGD optimizer. As known in the […]
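As a rough illustration of how little changes between the two notebooks, here is a sketch of swapping pytorch's built-in optimizers on a small factorisation problem. The problem and all hyperparameters are invented for this example, not taken from the notebooks:

```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 4)                      # matrix to factorise (illustrative)
U = torch.randn(4, 2, requires_grad=True)  # left factor
V = torch.randn(2, 4, requires_grad=True)  # right factor

# Swapping optimizers is a one-line change:
optimizer = torch.optim.SGD([U, V], lr=0.01, momentum=0.9)
# optimizer = torch.optim.Adam([U, V], lr=0.01)

losses = []
for _ in range(2000):
    optimizer.zero_grad()
    loss = ((U @ V - A) ** 2).sum()  # Frobenius reconstruction error
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

The training loop itself is untouched; only the optimizer constructor changes.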

Matrix operations with pytorch – optimizer – part 3

SVD with pytorch optimizer. This blog post is part of a three-post miniseries. Today’s post covers SVD with the pytorch optimizer. The point of the entire miniseries is to reproduce matrix operations such as matrix inverse and SVD using pytorch’s automatic differentiation capability. These algorithms are already implemented in pytorch itself […]
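The general idea, recovering an SVD-like factorisation purely by minimising a loss with a pytorch optimizer, can be sketched as follows. The penalty terms, learning rate, and iteration count are assumptions for this illustration, not necessarily the post's actual settings:

```python
import torch

torch.manual_seed(0)
A = torch.randn(5, 5)

# Free parameters of the factorisation A ≈ U @ diag(s) @ V.T
U = torch.randn(5, 5, requires_grad=True)
s = torch.randn(5, requires_grad=True)
V = torch.randn(5, 5, requires_grad=True)

optimizer = torch.optim.Adam([U, s, V], lr=0.01)
I = torch.eye(5)
losses = []
for _ in range(5000):
    optimizer.zero_grad()
    recon = U @ torch.diag(s) @ V.T
    # Reconstruction error plus penalties pushing U and V towards orthogonality
    loss = (((recon - A) ** 2).sum()
            + ((U.T @ U - I) ** 2).sum()
            + ((V.T @ V - I) ** 2).sum())
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

Since an exact SVD attains zero loss, the optimizer can in principle drive this objective all the way down; in practice it converges to a good approximation.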

Matrix operations with pytorch – optimizer – part 2

pytorch – matrix inverse with pytorch optimizer. This blog post is part of a three-post miniseries. Today’s post covers computing the matrix inverse with the pytorch optimizer. The point of the entire miniseries is to reproduce matrix operations such as matrix inverse and SVD using pytorch’s automatic differentiation capability. These algorithms […]
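One way to set this up, sketched here under assumed hyperparameters rather than the post's exact code: treat the candidate inverse X as a tensor of free parameters and minimise the Frobenius norm of AX − I with an optimizer.

```python
import torch

torch.manual_seed(0)
A = torch.randn(3, 3) + 3 * torch.eye(3)   # shifted to be well conditioned
X = torch.zeros(3, 3, requires_grad=True)  # candidate inverse
I = torch.eye(3)

optimizer = torch.optim.SGD([X], lr=0.01)
losses = []
for _ in range(10000):
    optimizer.zero_grad()
    loss = ((A @ X - I) ** 2).sum()  # squared Frobenius norm of the residual
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

# X now approximates torch.linalg.inv(A)
```

Unlike the SVD case, this objective is convex in X, so plain gradient descent reliably converges for a well-conditioned A.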

Matrix operations with pytorch – optimizer – part 1

pytorch – playing with tensors. This blog post is part of a three-post miniseries. The point of the entire miniseries is to reproduce matrix operations such as matrix inverse and SVD using pytorch’s automatic differentiation capability. These algorithms are already implemented in pytorch itself and in other libraries such as scikit-learn. However, we will solve […]
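For readers who have not used pytorch tensors before, here is a minimal taste of the tensor API and of the automatic differentiation that the whole miniseries builds on (generic examples, not the notebook's contents):

```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.ones(2, 2)
c = a @ b   # matrix product
d = a * b   # elementwise product

# Automatic differentiation: pytorch tracks operations on tensors
# with requires_grad=True and computes gradients via .backward()
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3
y.backward()
# x.grad now holds dy/dx = 3 * x**2 = 12
```

This gradient machinery is exactly what lets an optimizer minimise matrix losses in the later parts.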

Loss surface with multiple valleys

This post is a follow-up to [1]. We start off with an eye-catching plot, representing the functioning of an optimizer using the stochastic gradient method. The plot is explained in more detail further below. Visualisation of a loss surface with multiple minima: the surface is in gray, the exemplary path taken by the optimizer is […]
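The effect such a plot illustrates, namely that the valley an optimizer ends up in depends on where it starts, can be reproduced with a toy one-dimensional loss. The function below is my own illustrative choice, not the surface from the post:

```python
import torch

# A loss with two valleys: minima at w = -1 and w = +1
def loss_fn(w):
    return (w ** 2 - 1) ** 2

# Depending on the starting point, SGD settles into a different valley
paths = {}
for start in (-2.0, 2.0):
    w = torch.tensor(start, requires_grad=True)
    optimizer = torch.optim.SGD([w], lr=0.01)
    for _ in range(500):
        optimizer.zero_grad()
        loss = loss_fn(w)
        loss.backward()
        optimizer.step()
    paths[start] = w.item()

# paths[-2.0] is near -1, paths[2.0] is near +1
```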

Comparison of a very simple regression in tensorflow and keras

In this short post we perform a comparative analysis of a very simple regression problem in tensorflow and keras. We start off with an eye-catching plot, representing the functioning of an optimizer using the stochastic gradient method. The plot is explained in more detail further below. A 3D rotatable version of the loss function […]
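To make the comparison concrete, here is roughly what the keras side of such a comparison looks like: a single Dense unit fitted to noisy linear data with stochastic gradient descent. The data and hyperparameters are assumptions for this sketch, not the post's:

```python
import numpy as np
import tensorflow as tf

# Synthetic data: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(256, 1)).astype("float32")
y = (2 * x + 1 + 0.05 * rng.standard_normal((256, 1))).astype("float32")

# In keras, one Dense unit is exactly a linear regression
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=100, verbose=0)

w, b = model.layers[0].get_weights()
# w and b should approximate the true slope 2 and intercept 1
```

The raw-tensorflow version would express the same model with explicit variables and a `tf.GradientTape` training loop, which is precisely what makes the side-by-side comparison instructive.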

Dash for timeseries

Dash is an amazing dashboarding framework. If you’re looking for an easy-to-set-up dashboarding framework that produces amazing plots that wow your audience, chances are this is your perfect fit. It is also friendly to your CPU: the solution I will show here runs simultaneously on the same 5€/month DigitalOcean instance as the WordPress installation hosting the article you’re reading.