Visualization of PyTorch Experiment Hyperparameters

This topic shows you how to set experiment hyperparams and view their effects. It builds on the script that resulted from the steps in Getting Started for PyTorch.

The steps that are covered are:

  • Define a hyperparam.
  • Pass hyperparams to MissingLink's callback.

Note

For each framework that MissingLink supports, there are hyperparameters that will be retrieved automatically. See the lists here and here.

Preparation

Go through Getting Started for PyTorch with steps.

Note

Ensure that you can successfully run the mnist.py training script that resulted from integration with the MissingLink SDK. In the steps that follow, the script is extended to include hyperparams.

Write code

  1. Add a dropout rate hyperparam:

    # Training params
    DROPOUT_RATE = 0.1
    EPOCHS = 8
    
  2. Pass hyperparams to an experiment:

    Modify the call to create_experiment by adding a hyperparams parameter: a dictionary that maps the names of hyperparams to their values:

    with missinglink_project.create_experiment(
        model,
        metrics={'loss': loss},
        display_name='MNIST multilayer perceptron',
        description='Two fully connected hidden layers',
        hyperparams={'dropout_rate': DROPOUT_RATE}) as experiment:
    

You should have added hyperparams to your PyTorch visualization experiment successfully.

  • Inspect the resulting script here.
  • Run the new script and see how MissingLink's dashboard helps with monitoring the experiment's hyperparams. A description follows.
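The two steps above boil down to building a plain dictionary and handing it to create_experiment. A minimal sketch (pure Python, with the training params from step 1 standing in for the real script; the keys are free-form names that appear on the dashboard):

```python
# Training params, as defined at the top of the script
DROPOUT_RATE = 0.1
EPOCHS = 8

# The dictionary passed as the hyperparams argument of
# create_experiment: names map to their current values.
hyperparams = {
    'dropout_rate': DROPOUT_RATE,
    'epochs': EPOCHS,
}

print(hyperparams)
```

Any JSON-serializable value works; adding EPOCHS alongside DROPOUT_RATE, as here, makes both visible when comparing experiments.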

Viewing the new functionality on the dashboard

You can see the hyperparams across different experiments on your MissingLink dashboard.

Hyperparameters that are automatically retrieved from an optimizer

When creating an experiment, you can also provide the optimizer; it is then used to automatically extract more hyperparameters:

with missinglink_project.create_experiment(
    model,
    metrics={'loss': loss},
    display_name='MNIST multilayer perceptron',
    description='Two fully connected hidden layers',
    hyperparams={'dropout_rate': DROPOUT_RATE},
    optimizer=optimizer) as experiment:

The following hyperparameters are retrieved automatically if an optimizer is provided and if they are defined:

  • Algorithm
  • Learning rate
  • Learning rate decay
  • Epsilon
  • Rho
  • Beta 1
  • Beta 2
  • Weight decay
  • Lambd
  • Alpha
  • t0
  • Max eval
  • Tolerance gradient
  • Tolerance change
  • History size
  • Eta minus
  • Eta plus
  • Minimum step size
  • Maximum step size
  • Dampening
  • Batch size
  • Total batches
  • Epoch size
  • Total epochs
  • Max iterations
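The SDK reads these values from the optimizer object itself. The exact extraction logic is internal to MissingLink, but conceptually it resembles the following sketch, which walks a param_groups structure shaped like the one torch.optim optimizers expose (the values shown are illustrative Adam settings, not a real optimizer):

```python
# A stand-in for optimizer.param_groups as torch.optim exposes it.
# Real optimizers attach their settings (lr, betas, eps, weight_decay,
# ...) to each parameter group.
param_groups = [
    {'lr': 0.001, 'betas': (0.9, 0.999), 'eps': 1e-8, 'weight_decay': 0.0},
]

def extract_optimizer_hyperparams(param_groups, algorithm):
    """Conceptual sketch: pull per-group settings into a flat
    hyperparams dict like the ones shown on the dashboard."""
    group = param_groups[0]  # single parameter group, the common case
    hyperparams = {'algorithm': algorithm}
    if 'lr' in group:
        hyperparams['learning_rate'] = group['lr']
    if 'betas' in group:
        hyperparams['beta_1'], hyperparams['beta_2'] = group['betas']
    if 'eps' in group:
        hyperparams['epsilon'] = group['eps']
    if 'weight_decay' in group:
        hyperparams['weight_decay'] = group['weight_decay']
    return hyperparams

print(extract_optimizer_hyperparams(param_groups, 'Adam'))
```

Settings the optimizer does not define (for example, betas on plain SGD) are simply skipped, which is why the list above only applies "if they are defined".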

Hyperparameters that are automatically retrieved from a data object

When creating an experiment, you can also provide the data object used to load the training data; more hyperparameters are then extracted automatically:

with missinglink_project.create_experiment(
    model,
    metrics={'loss': loss},
    display_name='MNIST multilayer perceptron',
    description='Two fully connected hidden layers',
    hyperparams={'dropout_rate': DROPOUT_RATE},
    optimizer=optimizer,
    train_data_object=train_loader) as experiment:

The train_data_object can be a torch.utils.data.DataLoader, or one of the torchtext.data iterators: an Iterator, a BucketIterator, or a BPTTIterator.

The following hyperparameters are retrieved automatically if a data object is provided and if they are defined:

  • train
  • repeat
  • shuffle
  • sort
  • sort within batch
  • device
  • number of workers
  • pin memory
  • drop last
  • collate function
  • sampler
  • batch size function
  • dataset
  • samples count
  • epoch size
  • batch size
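As with the optimizer, these values come from attributes on the data object. A conceptual sketch of the extraction, using a stand-in class whose attributes mirror the ones torch.utils.data.DataLoader exposes (the class and its values are hypothetical, for illustration only):

```python
# A stand-in with the attributes a torch.utils.data.DataLoader exposes;
# in the real script this would be your train_loader.
class FakeLoader:
    batch_size = 64
    num_workers = 2
    pin_memory = True
    drop_last = False
    dataset = list(range(60000))  # MNIST-sized training set

def extract_data_hyperparams(loader):
    """Conceptual sketch: read the loader attributes that back the
    hyperparameters listed above, skipping any that are undefined."""
    names = ['batch_size', 'num_workers', 'pin_memory', 'drop_last']
    hyperparams = {n: getattr(loader, n) for n in names if hasattr(loader, n)}
    if hasattr(loader, 'dataset'):
        hyperparams['samples_count'] = len(loader.dataset)
    return hyperparams

print(extract_data_hyperparams(FakeLoader()))
```

The hasattr guards reflect the "if they are defined" condition: a torchtext iterator defines a different subset of these attributes than a DataLoader does, and only the ones present are reported.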