This blog post is an addendum to a 3-post miniseries1. Here we present 2 notebooks. 02a—SVD-with-pytorch-optimizer-SGD.ipynb is a drop-in replacement of the stochastic gradient + momentum method shown earlier2, but using the built-in pytorch SGD optimizer. 02b—SVD-with-pytorch-optimizer-adam.ipynb uses the built-in pytorch Adam optimizer rather than the SGD optimizer. As known in the […]
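The notebooks themselves are not shown in this excerpt; as a minimal sketch of the idea (the variable names here are illustrative, not the notebooks' actual code), swapping SGD with momentum for Adam is a one-line change in pytorch:

```python
import torch

# Hypothetical sketch: the difference between the two notebooks is
# essentially the optimizer line -- SGD with momentum (02a) vs Adam (02b).
params = [torch.randn(3, 3, requires_grad=True)]  # illustrative parameter

opt_sgd = torch.optim.SGD(params, lr=0.01, momentum=0.9)  # as in 02a
opt_adam = torch.optim.Adam(params, lr=0.01)              # as in 02b
```

Both objects expose the same `zero_grad()` / `step()` interface, which is what makes the replacement drop-in.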

# Category archives: Software

## Matrix operations with pytorch – optimizer – part 3

SVD with pytorch optimizer This blog post is part of a 3 post miniseries. Today’s post in particular covers the topic SVD with pytorch optimizer. The point of the entire miniseries is to reproduce matrix operations such as matrix inverse and svd using pytorch’s automatic differentiation capability. These algorithms are already implemented in pytorch itself […]
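The post's own code is truncated here; as a hedged sketch of the general approach (illustrative names and data, not the post's exact method), one can recover the dominant singular value by fitting the best rank-1 factorisation with pytorch's autograd:

```python
import torch

# Hypothetical sketch: fit M ~ u @ v.T by gradient descent; at the optimum
# the best rank-1 approximation gives the largest singular value as
# ||u|| * ||v|| (Eckart-Young). Data and step counts are illustrative.
torch.manual_seed(0)

M = torch.tensor([[2.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 0.0, 2.0]])     # singular values 3, 1, 1

u = torch.randn(3, 1, requires_grad=True)
v = torch.randn(3, 1, requires_grad=True)

optimizer = torch.optim.Adam([u, v], lr=0.005)

for _ in range(10000):
    optimizer.zero_grad()
    loss = ((M - u @ v.T) ** 2).sum()   # Frobenius-norm residual
    loss.backward()
    optimizer.step()

sigma1 = (u.norm() * v.norm()).item()   # estimate of the largest singular value
```

The estimate can be checked against pytorch's built-in decomposition, `torch.linalg.svdvals(M)`.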

## Matrix operations with pytorch – optimizer – part 2

pytorch – matrix inverse with pytorch optimizer This blog post is part of a 3 post miniseries. Today’s post in particular covers the topic pytorch – matrix inverse with pytorch optimizer. The point of the entire miniseries is to reproduce matrix operations such as matrix inverse and svd using pytorch’s automatic differentiation capability. These algorithms […]
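The excerpt cuts off before the code; a minimal sketch of the idea (illustrative matrix and hyperparameters, not the post's exact notebook) is to treat the unknown inverse as a trainable tensor and minimise the residual against the identity:

```python
import torch

# Hypothetical sketch: recover A^{-1} by minimising ||A @ X - I||^2
# with pytorch's autograd and the built-in SGD optimizer.
A = torch.tensor([[3.0, 1.0],
                  [1.0, 2.0]])          # a well-conditioned 2x2 matrix
X = torch.eye(2, requires_grad=True)    # initial guess for the inverse
I = torch.eye(2)

optimizer = torch.optim.SGD([X], lr=0.05)

for _ in range(2000):
    optimizer.zero_grad()
    loss = ((A @ X - I) ** 2).sum()     # Frobenius-norm residual
    loss.backward()
    optimizer.step()
```

After training, `X` agrees with the library routine `torch.linalg.inv(A)` to high precision.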

## Matrix operations with pytorch – optimizer – part 1

pytorch – playing with tensors This blog post is part of a 3 post miniseries. The point of the entire miniseries is to reproduce matrix operations such as matrix inverse and svd using pytorch’s automatic differentiation capability. These algorithms are already implemented in pytorch itself and other libraries such as scikit-learn. However, we will solve […]

## Comparison of a very simple regression in pytorch vs tensorflow and keras

This is a follow-up to https://arthought.com/comparison-of-a-very-simple-regression-in-tensorflow-and-keras/ It covers the same topic, but in pytorch.
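The post's code is not reproduced here; a minimal sketch of a very simple regression in pytorch, under the assumption of a 1-d linear problem (data and names are illustrative), looks like this:

```python
import torch

# Hypothetical sketch: fit y = w*x + b with pytorch's nn.Linear,
# MSE loss and the built-in SGD optimizer.
torch.manual_seed(0)

x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 2.0 * x + 1.0                       # ground truth: slope 2, intercept 1

model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for _ in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

The learned weight and bias converge to the slope and intercept of the generating line.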

## Colab, MLflow and papermill

Machine learning with the maximum free GPU power currently available, plus the ability to keep a neat log of your data science experiments. Interested? My article presents a deep-dive solution. Quick summary Colab, MLflow and papermill are individually great. Together they form a dream team. Colab is great for running notebooks, MLflow keeps records […]

## Loss surface with multiple valleys

This post is a follow-up to 1. We start off with an eye-catching plot, representing the functioning of an optimiser using the stochastic gradient method. The plot is explained in more detail further below. Visualisation of a loss surface with multiple minima. The surface is in gray, the exemplary path taken by the optimiser is […]

## Comparison of a very simple regression in tensorflow and keras

In this short post we perform a comparative analysis of a very simple regression problem in tensorflow and keras. We start off with an eye-catching plot, representing the functioning of an optimizer using the stochastic gradient method. The plot is explained in more detail further below. A 3D rotatable version of the loss function […]

## Dash simple deployment with docker

The previous post was already about dash. So why return to the subject? In some ways I got carried away by the possibilities of dash. I therefore included some concepts that are nice by themselves, but introduce a level of complexity that is not strictly necessary for your first deployment. This post is really […]

## Dash for timeseries

Dash is an amazing dashboarding framework. If you’re looking for an easy-to-set-up dashboarding framework that can produce amazing plots to wow your audience, chances are this is your perfect fit. Furthermore, it is friendly to your CPU. The solution I will show here runs simultaneously on the same 5€/month digital ocean instance as the WordPress installation hosting the article you’re reading.