Dash is an amazing dashboarding framework. If you're looking for an easy-to-set-up dashboarding framework that produces plots that wow your audience, chances are this is your perfect fit. It is also friendly to your CPU: the solution I show here runs simultaneously on the same 5€/month DigitalOcean instance as the WordPress installation hosting the article you're reading.
Dash is fully open source and produced by the makers of Plotly, so you can host it yourself at no cost. Alternatively, you can buy a service subscription from Plotly; the price list can be found under this link.
In this post we focus on doing things ourselves, so there is no cost apart from the time spent. And, of course, you need a server running a standard operating system, and some knowledge.
So let's work on the latter and try to augment our knowledge.
The code can be found in the following GitHub repository: https://github.com/hfwittmann/dash
The dashboard can be found under this link: https://dashdax.arthought.com/
Continue reading “Dash for timeseries”
KNIME is a very powerful machine learning tool, particularly suitable for the management of complicated workflows as well as rapid prototyping.
It has recently become yet more useful with the arrival of easy-to-use Python nodes. Sometimes the set of built-in nodes, large as it is, still does not provide the exact functionality you need; Python, on the other hand, is flexible enough to fill those gaps. The marriage of the two is therefore very powerful.
This post is about transferring the three previous time series prediction posts into a KNIME workflow.
This provides a good overview of the dataflow, making it more easily manageable.
The workflow code is available on github:
Continue reading “Knime – Multivariate time series”
In this follow-up post we apply the same methods we developed previously to a different dataset. In this third post we mix the two previous datasets.
So, on the one hand, we have noise signals, and on the other hand we have innovators and followers.
Continue reading “Multivariate Time Series Forecasting with Neural Networks (3) – multivariate signal noise mixtures”
In this post we present the results of a competition between various forecasting techniques applied to multivariate time series. The forecasting techniques we use are neural networks and, as a benchmark, ARIMA.
In particular, the neural networks we consider are long short-term memory (LSTM) networks and dense networks.
In this setting the winner is LSTM, followed by dense neural networks, followed by ARIMA.
Of course, ARIMA is typically applied to univariate time series, where it works extremely well. For ARIMA we therefore adopt the approach of treating the multivariate time series as a collection of many univariate time series. As stated, ARIMA is not the main focus of this post; it serves only as a benchmark.
To test these forecasting techniques we use random time series. We distinguish between innovator time series and follower time series. Innovator time series are composed of random innovations and can therefore not be forecast. Follower time series are functions of lagged innovator time series and can therefore in principle be forecast.
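As an illustration of these two kinds of series (the variable names and the specific lag function are my own choices, not the ones used in the post's experiments):

```python
# Generate an unforecastable innovator series (pure random innovations)
# and a follower series that is a function of the lagged innovator.
import numpy as np

rng = np.random.default_rng(0)
T = 1000

innovator = rng.standard_normal(T)      # i.i.d. noise: unforecastable
follower = np.empty(T)
follower[0] = 0.0
follower[1:] = np.tanh(innovator[:-1])  # depends only on past innovations

# Knowing the innovator up to t-1 pins down the follower at t exactly,
# which is what makes the follower forecastable in principle.
```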
It turns out that our dense network can only forecast simple functions of innovator time series. For instance, the sum of two lagged innovator series can be forecast by our dense network. However, a product is already more difficult for a dense network.
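The difference can be illustrated with a plain linear least-squares fit, a stand-in for the simplest possible dense network (the data below is made up for the demonstration): a sum of lagged innovators is recovered exactly, while a product of independent zero-mean innovators has no linear structure to exploit:

```python
# Fit a linear model on two lagged innovator series to predict
# (a) their sum and (b) their product. The sum is linear in the inputs
# and fits perfectly; the product is linearly unpredictable.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
X = np.column_stack([x1, x2, np.ones(n)])  # lagged inputs + intercept

def r_squared(y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

print(f"sum:     R^2 = {r_squared(x1 + x2):.3f}")   # ~1.000
print(f"product: R^2 = {r_squared(x1 * x2):.3f}")   # ~0.000
```

A network with nonlinear hidden units can in principle approximate the product too, but it needs to learn the interaction term, which is why the product case is harder in practice.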
In contrast, the LSTM network deals with products easily, and also with more complicated functions.
Continue reading “Multivariate Time Series Forecasting with Neural Networks (1)”