
Visualization of TensorFlow Custom Metrics

This topic shows you how to set custom metrics for an experiment and observe their effects. It builds on the steps in Getting Started for TensorFlow.

The following steps are covered:

  • Create a custom metric function.
  • Set the custom metric function to be evaluated and monitored by MissingLink.

Preparation

Compare the base script with the resulting script.

Write code

  1. Create a custom metric function:

    Right before the train and validation scopes, add the following custom metric function. (A sketch of a real Sorensen-Dice computation appears after step 2.)

    def sorensen_dice():
        # Replace the body of this function with a real metric
        # computation, such as the Sorensen-Dice coefficient or
        # any other custom metric, instead of returning a constant 1.
        return 1
    
    # Use `experiment.train` scope before the `session.run` which runs the optimizer
    # to let the SDK know it should collect the metrics as training metrics.
    with experiment.train(
        monitored_metrics={'loss': loss, 'acc': eval_correct}):
        # Note that you only need to provide the optimizer op.
        # The SDK will automatically run the metric
        # tensors provided in the `experiment.train`
        # context (and `experiment` context).
        _, loss_value = session.run([train_op, loss], feed_dict=feed_dict)
    
    # Validate the model with the validation dataset
    if (step + 1) % 500 == 0 or (step + 1) == MAX_STEPS:
        with experiment.validation(
            monitored_metrics={'loss': loss, 'acc': eval_correct}):
            do_eval(session, eval_correct, images_placeholder,
                    labels_placeholder, data_sets.validation)
    
  2. Set the custom metric to be monitored for the experiment:

    In the base script, change the train and validation runs to the following, adding `sorensen_dice` through the `custom_metrics` argument.

    # TF tensors stay in `monitored_metrics`; the plain Python callable
    # `sorensen_dice` is passed through `custom_metrics`, matching the
    # validation scope below.
    with experiment.train(
        monitored_metrics={'loss': loss, 'acc': eval_correct},
        custom_metrics={'sorensen_dice': sorensen_dice}):
        _, loss_value = session.run([train_op, loss], feed_dict=feed_dict)
    
    if (step + 1) % 500 == 0 or (step + 1) == MAX_STEPS:
        with experiment.validation(
            monitored_metrics={'loss': loss, 'acc': eval_correct},
            custom_metrics={'sorensen_dice': sorensen_dice}):
            do_eval(session, eval_correct, images_placeholder,
                    labels_placeholder, data_sets.validation)
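
The placeholder above always returns 1. As a reference, here is a minimal sketch of what a real Sorensen-Dice computation could look like. It assumes you keep the latest batch predictions and labels as binary NumPy arrays; the `latest_predictions` and `latest_labels` names are placeholders for state you would maintain in your own training loop, not part of the MissingLink SDK.

    import numpy as np

    # Hypothetical module-level holders for the most recent batch results.
    # Populate these in your training loop before the metric is evaluated.
    latest_predictions = None
    latest_labels = None

    def sorensen_dice():
        # Dice = 2 * |X intersect Y| / (|X| + |Y|) for binary masks.
        preds = np.asarray(latest_predictions, dtype=bool)
        labels = np.asarray(latest_labels, dtype=bool)
        intersection = np.logical_and(preds, labels).sum()
        total = preds.sum() + labels.sum()
        if total == 0:
            # Both masks are empty: treat as perfect agreement.
            return 1.0
        return 2.0 * intersection / total

For example, with `latest_predictions = [1, 1, 0]` and `latest_labels = [1, 0, 0]`, the intersection is 1 and the sums total 3, so the function returns 2 * 1 / 3, approximately 0.67. Because the scopes above pass `sorensen_dice` itself rather than calling it, the function must take no arguments; reading module-level state, as sketched here, is one simple way to feed it fresh values.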
    


You have now added custom metrics to your TensorFlow visualization experiment.

  • Inspect the resulting script.
  • Run the new script and see how the MissingLink dashboard helps you monitor the experiment, as described next.

Viewing the new functionality on the dashboard

You can see the custom metrics across different experiments on your MissingLink dashboard.