Machine Learning WG updates - May 2021

The Erlang Ecosystem Foundation has recently announced the Machine Learning Working Group, which is working on bringing Numerical Computing and Machine Learning libraries to the Erlang Ecosystem. The working group has grown tremendously over the last few months, and today we want to share what we have accomplished in the last 30 days or so.

Nx updates

Nx is a library for multi-dimensional arrays (tensors) with multi-stage compilation to the CPU/GPU.

  • It is now possible to slice a tensor based on dynamic indices, allowing you to access a tensor position based on the value of another tensor. Nx.put_slice/3 has also been added, which allows you to update part of a tensor with another tensor, statically or dynamically (see pull request)

  • Numerical definitions now support while loops. This is important as it allows the whole training loop to run on the GPU without going back and forth with the host (see pull request)

  • Maps are now supported in numerical definitions. For models with many parameters, the positional aspect of tuples was too cumbersome. Using maps gives developers more flexibility to work with complex models (see pull request)

  • New functions such as Nx.argsort/2 (see pull request) and Nx.stack/2 (pull request) have been added

  • Work has started on built-in support for vmap, also known as auto-vectorization. Some operations, such as the dot product, have already been extended to support batching
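Here is a quick sketch of some of these additions in action. The function names come from the Nx docs, but the exact signatures may have evolved since this release, so treat this as illustrative rather than canonical:

```elixir
t = Nx.iota({4, 4})

# Slicing: start indices may be plain integers or tensors computed at
# runtime, which is what makes dynamic indexing possible.
Nx.slice(t, [1, 1], [2, 2])

# Nx.put_slice/3 writes a smaller tensor into a region of a larger one.
Nx.put_slice(t, [0, 0], Nx.tensor([[100, 101]]))

# while loops inside numerical definitions keep the whole computation
# on the device (this factorial example follows the Nx.Defn docs):
defmodule Example do
  import Nx.Defn

  defn factorial(x) do
    {result, _} =
      while {result = 1, x}, Nx.greater(x, 1) do
        {result * x, x - 1}
      end

    result
  end
end

# New helpers: argsort returns the indices that would sort a tensor,
# and stack joins tensors along a new axis.
Nx.argsort(Nx.tensor([3, 1, 2]))
Nx.stack([Nx.tensor([1, 2]), Nx.tensor([3, 4])])
```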

Axon updates

Axon is a library led by Sean Moriarity that brings deep learning to the Erlang Ecosystem. It is powered by Nx and runs on the CPU/GPU.

  • Axon now supports recurrent layers (Axon.lstm/3, Axon.gru/3, Axon.conv_lstm/3) with the option to dynamically or statically unroll layers over an entire sequence

  • Axon now handles multi-input / multi-output models for accepting data in multiple places and returning multiple predictions from a common model

  • Axon now ships with parameter regularization via Axon.penalty/3 and regularizer options passed to individual layers

  • Axon now provides custom layers via the Axon.layer/5 function. You can specify your own trainable parameters and implement your layer as a numerical definition

  • Axon’s training API supports training from an initial model state, which is useful for transfer learning and some reinforcement learning applications

  • We are in the process of migrating Axon’s examples to Livebooks. See the MNIST Livebook example. We are accepting PRs for additional examples as well as for converting examples to live markdown. Both issues are a great way to get involved with the project and to learn about Axon!
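To give a flavor of the API, here is a minimal feed-forward model sketch. Axon's API is evolving quickly, so the input shape convention and the regularizer option shown here are assumptions based on the version described above; check the current docs for exact signatures:

```elixir
# A small dense network for MNIST-sized inputs ({nil, 784} leaves the
# batch dimension unspecified). The kernel_regularizer option is one of
# the per-layer regularizer options mentioned above.
model =
  Axon.input({nil, 784})
  |> Axon.dense(128, activation: :relu, kernel_regularizer: :l2)
  |> Axon.dense(10, activation: :softmax)
```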

Livebook updates

Livebook is a collaborative and interactive code notebook maintained by Jonatan Kłosko.

  • Livebook was announced last month. Watch the original announcement by José Valim

  • After the announcement, new features have been added, such as autocompletion of Elixir code (watch an example) and an embedded mode that is useful for running on Nerves devices

  • Most recently, Livebook got user collaboration, which gives each user an avatar and allows you to see all users in a notebook and what they are currently editing (watch an example)

  • Official Livebook Docker images have also been published; all you need to get started is to run: docker run -p 8080:8080 livebook/livebook

  • Users can also import notebooks from a URL, so trying out notebooks from GitHub or Gist is extremely straightforward. You can already get some examples from here and here

Scidata updates

Last month we also announced Scidata, a library by Tom Rutten for downloading and normalizing data sets related to science.

  • Scidata currently supports MNIST, FashionMNIST, CIFAR10 and CIFAR100 data sets

  • We welcome PRs for additional vision and text data sets!
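As a sketch of how Scidata fits together with Nx, the snippet below downloads MNIST and loads it into a tensor. The shape of the return value follows the Scidata README at the time of writing, so verify it against the current docs:

```elixir
# Scidata.MNIST.download/0 returns the images and labels as raw
# binaries together with their type and shape, ready for Nx.
{{images_binary, images_type, images_shape}, _labels} =
  Scidata.MNIST.download()

# Build a tensor of shape {60000, 1, 28, 28} from the raw bytes.
images =
  images_binary
  |> Nx.from_binary(images_type)
  |> Nx.reshape(images_shape)
```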

Tweets and other bits

Here is a collection of fun stuff people are doing with the existing libraries and with machine learning on the BEAM:

Are you also interested in Machine Learning and the Erlang Ecosystem? Join us and come chat on our Slack.