
Visualization of PyTorch Custom Metrics

This topic shows you how to set custom metrics for an experiment and what effect they have. It builds on the steps in Getting Started for PyTorch.

The following steps are covered:

  • Create a custom metric function.
  • Wrap the function using MissingLink's callback.

Preparation

Compare the basic script with the resulting script to see the changes that this topic introduces.

Write code

  1. Create a custom metric function:

    Wherever you see fit, define a function that will be a custom metric. For example:

    def accuracy(correct_count, total):
        # Here you can put any calculation you would like.
        # The function may have any parameters you need,
        # but it must return a single numeric value.
        # In this example, we demonstrate it by calculating the accuracy of the model.
        return (correct_count / total) * 100.0
    
  2. Create an experiment and pass the metrics:

    Just as you passed regular metrics to the experiment, you also need to pass in the custom metric. Add the accuracy function to the metrics dictionary in the create_experiment call:

    with missinglink_project.create_experiment(
        model,
        metrics={'loss': loss, 'accuracy': accuracy},
        display_name='MNIST multilayer perceptron',
        description='Two fully connected hidden layers') as experiment:
    

    Then, get the wrapped accuracy function from the experiment object:

    loss_function = experiment.metrics['loss']
    accuracy = experiment.metrics['accuracy']
    
  3. Now, all you need to do is call the wrapped function in your training loop, your validation loop, or wherever else you want. MissingLink records the result of the function whenever you call it (as long as the experiment is running); see the sketch after the following list.

    You can call the accuracy function in a number of ways:

    • If you call it inside the scope of a batch, then you'll get a point on a graph for every batch.
    • If you call it inside the scope of an epoch, then you'll get a point on a graph for every epoch.
    • If you call it inside the validation phase, then these points appear in the validation graph for this metric.
    • If you call it inside the test phase, then these points appear in the test graph for this metric.
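
    For example, a minimal sketch of the batch scope might look like this. The loop over train_loader, the model, and the optimizer are placeholder names from your own script, not part of the MissingLink API; only the wrapped loss_function and accuracy functions come from the steps above:

    for data, target in train_loader:
        optimizer.zero_grad()
        output = model(data)

        # Calling the wrapped loss inside the batch scope logs a point
        # on the training graph for every batch.
        loss = loss_function(output, target)
        loss.backward()
        optimizer.step()

        # Likewise, calling the wrapped accuracy here logs a point per batch.
        correct_count = (output.argmax(dim=1) == target).sum().item()
        accuracy(correct_count, target.size(0))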


You have now successfully added custom metrics to your experiment.

Depending on where you call the metric function, the resulting script can take several forms:

  • In one resulting script, a single metric (accuracy) is logged in the training phase and in the test phase.

  • In another, the metric (loss) is logged inside the batch loop in all three phases: training, validation, and test (a sketch of the validation portion follows this list).
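
As a rough sketch of that second variant, the wrapped loss might be called inside the batch loop of the validation phase as well. Here, model and val_loader are placeholder names from a typical PyTorch script, and the way your script marks the validation phase follows the setup in Getting Started for PyTorch:

    import torch

    model.eval()
    with torch.no_grad():
        for data, target in val_loader:
            output = model(data)
            # Each call made during the validation phase adds a point to the
            # validation graph for the 'loss' metric.
            loss_function(output, target)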

Run one of these scripts and see how the MissingLink dashboard helps you monitor the experiment, as described next.

Viewing the new functionality on the dashboard

You can see the custom metrics across different experiments on your MissingLink dashboard. Here's an example:

(Screenshot: PyTorch custom metrics in the MissingLink dashboard.)