June 2, 2020

Use PyTorch Lightning to Decouple Science Code from Engineering Code

Ayush Chaurasia

PyTorch Lightning significantly reduces boilerplate code by providing a well-defined structure for defining and training models.


Introduction

PyTorch Lightning lets you decouple science code from engineering code. Research often involves editing boilerplate code to try new experimental variations, and most errors get introduced into the codebase during this tinkering. PyTorch Lightning significantly reduces that boilerplate by providing a well-defined structure for defining and training models.

Installation

Installing PyTorch Lightning is simple:

pip install pytorch-lightning

To use it in our PyTorch code, we'll import the necessary PyTorch Lightning modules:

import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

We’ll use WandbLogger to track our experiment results and log them directly to wandb.

Creating Our Lightning Class

To create a neural network class in PyTorch we subclass torch.nn.Module. Similarly, when we use PyTorch Lightning, we subclass pl.LightningModule.

Let’s create our class which we’ll use to train a model for classifying the MNIST dataset. We’ll use the same example as the one in the official documentation in order to compare our results.
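Here's a minimal sketch of such a LightningModule, assuming a small fully connected network for 28x28 MNIST images; the class name MNISTClassifier and the layer sizes are illustrative rather than the exact code from the documentation:

import torch
from torch import nn
from torch.nn import functional as F
import pytorch_lightning as pl

class MNISTClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # a small fully connected network for 28x28 MNIST images
        self.layer_1 = nn.Linear(28 * 28, 128)
        self.layer_2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)       # flatten the image into a vector
        x = F.relu(self.layer_1(x))
        return self.layer_2(x)          # raw class scores (logits)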

As you can see above, except for the base class, everything else in the code is pretty much the same as it would be in plain PyTorch. In PyTorch, data loading can be done anywhere in your main training file. In PyTorch Lightning, it is done in three specific methods of the LightningModule: train_dataloader(), val_dataloader(), and test_dataloader().

There is also a fourth method, prepare_data(), meant for data preparation and downloading.
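As a sketch, these hooks could look like the following. The methods continue the MNISTClassifier class above, and the batch size and the 55,000/5,000 train/validation split are illustrative (newer Lightning releases recommend doing the splits in setup() rather than prepare_data()):

from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST

    # these methods live inside the MNISTClassifier class defined above
    def prepare_data(self):
        # download MNIST and build the train/val/test splits
        transform = transforms.ToTensor()
        mnist_full = MNIST('./data', train=True, download=True, transform=transform)
        self.mnist_train, self.mnist_val = random_split(mnist_full, [55000, 5000])
        self.mnist_test = MNIST('./data', train=False, download=True, transform=transform)

    def train_dataloader(self):
        return DataLoader(self.mnist_train, batch_size=64)

    def val_dataloader(self):
        return DataLoader(self.mnist_val, batch_size=64)

    def test_dataloader(self):
        return DataLoader(self.mnist_test, batch_size=64)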

The optimizer code is the same in Lightning, except that it goes in the configure_optimizers() method of the LightningModule.
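For example (the optimizer choice and learning rate here are illustrative):

    def configure_optimizers(self):
        # any torch optimizer works; Lightning calls this once when training starts
        return torch.optim.Adam(self.parameters(), lr=1e-3)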

If we consider a traditional PyTorch training pipeline, we need to loop over epochs, iterate over the mini-batches, perform a forward pass for each mini-batch, compute the loss, run backprop for each batch, and finally update the weights. To do the same in Lightning, we pull the main parts of the training loop and the validation loop out into three methods: training_step(), validation_step(), and validation_epoch_end().

The prototypes of these functions are:
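A sketch of these methods, continuing the class above and using the dictionary-return style of the Lightning API current when this was written (newer releases log via self.log() instead):

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        # entries under 'log' are picked up by the attached logger (wandb here)
        return {'loss': loss, 'log': {'train_loss': loss}}

    def validation_step(self, batch, batch_idx):
        x, y = batch
        return {'val_loss': F.cross_entropy(self(x), y)}

    def validation_epoch_end(self, outputs):
        # aggregate the per-batch validation losses into one number per epoch
        avg_loss = torch.stack([out['val_loss'] for out in outputs]).mean()
        return {'val_loss': avg_loss, 'log': {'val_loss': avg_loss}}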

Using these methods, PyTorch Lightning automates the training part of the pipeline. We'll get to that, but first let's see how PyTorch Lightning integrates with Weights & Biases to track experiments and create visualizations you can monitor from anywhere.
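All it takes is creating a WandbLogger and handing it to the Trainer later on; the run and project names below are placeholders:

wandb_logger = WandbLogger(name='adam-64-0.001', project='pytorchlightning')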

Training Loop

Now, let's jump into the most important part of training any model: the training loop. Since we are using PyTorch Lightning, most of this is taken care of behind the scenes. We just need to specify a few hyper-parameters and the training process is handled automatically. As an added benefit, you'll also get a clean progress bar for each iteration.
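A sketch of that step, assuming the model class and logger defined above (the number of epochs is illustrative):

model = MNISTClassifier()
trainer = pl.Trainer(logger=wandb_logger, max_epochs=5)
trainer.fit(model)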

This is all you need to do in order to train your PyTorch model with Lightning. This single call replaces your bulky vanilla PyTorch training loop, and Lightning also gives you a nice progress bar that keeps track of each iteration.


Let’s have a look at the visualizations generated for this run in the dashboard.

Train loss and validation loss for the particular run are automatically logged in the dashboard in real-time as the model is being trained.


We can repeat the same training step with different hyper-parameters to compare different runs. We’ll change the name of the logger to uniquely identify each run.
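For example, a second run could reuse the same code with a new logger name (the run names here are placeholders describing each run's hyper-parameters):

wandb_logger = WandbLogger(name='adam-32-0.001', project='pytorchlightning')
trainer = pl.Trainer(logger=wandb_logger, max_epochs=5)
trainer.fit(MNISTClassifier())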

Here’s how our models are faring so far.


These visualizations are stored forever in your project, which makes it much easier to compare the performance of variations with different hyperparameters, restore the best-performing model, and share results with your team.

Multi-GPU Training

Lightning provides a simple API for data parallelism and multi-GPU training. You don't need to wrap your model in torch's DataParallel class or set up the sampler yourself; you just specify the parallelism mode and the number of GPUs you wish to use.

There are multiple ways of training, the most common being data parallel (dp, a single machine with several GPUs) and distributed data parallel (ddp, across multiple processes or machines).

We'll use the data parallel backend in this post. Here's how we can incorporate it into the existing code.
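A sketch using the Trainer arguments from the Lightning release current at the time of writing (recent versions expose this through the accelerator, devices, and strategy arguments instead):

trainer = pl.Trainer(gpus=1, distributed_backend='dp', logger=wandb_logger, max_epochs=5)
trainer.fit(model)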

Here I'm using only 1 GPU as I'm working on Google Colab.

As you use more GPUs, you'll be able to monitor the difference in GPU memory usage between configurations in the wandb dashboard.


Early Stopping

PyTorch Lightning provides two ways to incorporate early stopping. Here's how you can use them:

A) Set early_stop_callback to True in the Trainer. It will monitor 'val_loss' by default.

B) Or configure your own EarlyStopping callback, as sketched below.
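A sketch of both options, using the early_stop_callback argument from the Lightning release current at the time of writing (newer versions pass the callback through the callbacks list); the patience value is illustrative:

from pytorch_lightning.callbacks import EarlyStopping

# Option A: the built-in flag, which monitors 'val_loss' by default
trainer = pl.Trainer(early_stop_callback=True)

# Option B: a custom EarlyStopping callback with your own settings
early_stopping = EarlyStopping(monitor='val_loss', patience=3, mode='min')
trainer = pl.Trainer(early_stop_callback=early_stopping)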

16-bit Precision

Depending on the requirements of a project, you might need to increase or decrease the precision of a model's weights. Reducing precision allows you to fit bigger models into GPU memory. Let's see how we can enable 16-bit precision in PyTorch Lightning.

First, we need to install NVIDIA Apex. To do that, we'll create a shell script in Colab and execute it.

Now we can simply pass the required value to the precision parameter of the Trainer.
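For example:

# 16-bit (mixed) precision training on a single GPU
trainer = pl.Trainer(gpus=1, precision=16, max_epochs=5)
trainer.fit(model)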

Comparison With PyTorch

Now that we've seen the simple structure that Lightning provides, let's take a quick look at how it compares with plain PyTorch. In Lightning, we can train the model with automatic callbacks and progress bars just by creating a Trainer and calling fit() on it.
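For contrast, here's a rough sketch of the equivalent vanilla PyTorch loop, assuming the model, optimizer, data loaders, and number of epochs are already defined elsewhere:

for epoch in range(num_epochs):
    # training phase
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()

    # validation phase
    model.eval()
    val_losses = []
    with torch.no_grad():
        for x, y in val_loader:
            val_losses.append(F.cross_entropy(model(x), y))
    print(f'epoch {epoch}: val_loss = {torch.stack(val_losses).mean():.4f}')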

You can see how complicated the training code can get, and we haven't even included the modifications needed for multi-GPU training, early stopping, or tracking performance with wandb.

For distributed training in plain PyTorch, we also need to use DistributedSampler to sample our dataset.
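Roughly like this, assuming the dataset is already defined and the process group has been initialized so the world size and rank are known:

from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

# each process sees a different shard of the dataset
sampler = DistributedSampler(mnist_train, num_replicas=world_size, rank=rank)
train_loader = DataLoader(mnist_train, batch_size=64, sampler=sampler)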

You'll also need to write a custom function to incorporate early stopping. But when using Lightning, all of this can be accomplished with one line of code.
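Something along these lines, with the argument names from the Lightning release used here (the GPU count is illustrative):

trainer = pl.Trainer(gpus=2, distributed_backend='dp', early_stop_callback=True, logger=wandb_logger)
trainer.fit(model)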

That’s all for this story!