Visualization of Custom Metrics in Generic Frameworks

This topic shows you how to set custom metrics for an experiment and what effect they have. The topic builds on Generic Integration with Network (With Steps).

The following steps are covered:

  • Create a custom metric function.
  • Wrap the function using MissingLink's callback.

Preparation

Go through Generic Integration with Network (With Steps).

Note

Ensure that you can successfully run the full code sample. In the steps that follow, the script is extended to include custom metrics.

Write code

  1. Create a custom metric function:

    Wherever you see fit, define a function that will serve as a custom metric. For example:

    def accuracy(correct_count, total):
        # Here you can put any calculation you would like.
        # The function may have any parameters you need,
        # but it must return a single numeric value.
        # In this example, we demonstrate it by calculating the accuracy of the model.
        return (correct_count / total) * 100.0
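
    Called on its own, the function simply returns a number. For example, with 90 correct predictions out of 100 samples it returns 90.0:

    batch_accuracy = accuracy(correct_count=90, total=100)  # 90.0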
    
  2. Create an experiment and pass the metrics:

    Just as you passed regular metrics to the experiment, you need to pass in the custom metric. Add the accuracy function to the metrics dictionary in the experiment call:

    with missinglink_project.experiment(
        model,
        metrics={'loss': loss, 'accuracy': accuracy},
        display_name='MNIST multilayer perceptron',
        description='Two fully connected hidden layers') as experiment:
    

    Then, get the wrapped accuracy function from the experiment object:

    loss_function = experiment.metrics['loss']
    accuracy = experiment.metrics['accuracy']
    
  3. Now, all you need to do is call the wrapped function in your training loop, in your validation loop, or wherever you want to. MissingLink will monitor the result of the function whenever you call it (as long as the experiment is running, of course).

    You can call the accuracy function in a number of ways (a sketch follows this list):

    • If you call it inside the scope of a batch, then you'll get a point on a graph for every batch.
    • If you call it inside the scope of an epoch, then you'll get a point on a graph for every epoch.
    • If you call it inside the validation phase, then these points appear in the validation graph for this metric.
    • If you call it inside the test phase, then these points appear in the test graph for this metric.
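
    For orientation, here is a minimal sketch of where such calls might sit. The loop helpers (experiment.epoch_loop, experiment.batch_loop, experiment.validation) and the training variables are assumed to come from the base tutorial and may differ in your script:

    # Sketch only: the helper and variable names below are assumptions
    # carried over from the base tutorial, not defined on this page.
    for epoch in experiment.epoch_loop(num_epochs):
        for batch_data, batch_target in experiment.batch_loop(train_batches):
            output = model(batch_data)
            loss = loss_function(output, batch_target)  # one point per batch on the loss graph
            accuracy(correct_count, total)              # one point per batch on the accuracy graph

        with experiment.validation():
            # Calls made here appear on the validation graph for this metric.
            accuracy(val_correct_count, val_total)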


You should have added custom metrics to your experiment successfully.

Inspect the resulting script after the addition.

Run the new script and see how the MissingLink dashboard helps you monitor the experiment. A description follows.

Viewing the new functionality on the dashboard

You can see the custom metrics across different experiments on your MissingLink dashboard. Here's an example:

[Screenshot: custom metrics for generic frameworks in the MissingLink dashboard]