
Integration with a TensorFlow Neural Network (With Steps)

This topic shows you how to integrate the MissingLink SDK with a multilayer perceptron TensorFlow neural network that is trained on the MNIST dataset.

The following steps are covered:

  • Create a TensorFlowProject instance with your credentials.
  • Create a new experiment.
  • Define an experiment context.
  • Change the loop.
  • Define a training context.
  • Define a validation context.
  • Define a testing context.

Note

You can also try the step-by-step tutorial for integrating the MissingLink SDK with an existing TensorFlow example.

Preparation

  • You must have TensorFlow installed in the same working environment in which the MissingLink SDK is installed; the SDK does not enforce TensorFlow as one of its dependencies.

  • You must have created a new project and have its credentials (owner_id and project_token) ready. Otherwise, follow the instructions in Creating a project.

Note

Ensure that you can successfully run the basic mnist.py training script. In the steps that follow, the basic script is integrated with the MissingLink SDK to enable remote monitoring of the training, validation, and testing process.

Compare the basic script with the integrated script.
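
For orientation, the relevant portion of the basic script looks roughly like this (a simplified sketch; graph construction is omitted, and helper names such as fill_feed_dict and do_eval follow the standard TensorFlow MNIST example):

    # Simplified skeleton of the basic training loop; the steps below
    # modify this code.
    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        for step in range(MAX_STEPS):
            feed_dict = fill_feed_dict(data_sets.train, images_placeholder,
                                       labels_placeholder)
            _, loss_value = session.run([train_op, loss], feed_dict=feed_dict)
            if (step + 1) % 500 == 0 or (step + 1) == MAX_STEPS:
                do_eval(session, eval_correct, images_placeholder,
                        labels_placeholder, data_sets.validation)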

Write code

  1. Import the SDK and define your credentials at the beginning of the file (before any function definition).

    import missinglink
    
    OWNER_ID = 'Your owner id'
    PROJECT_TOKEN = 'Your project token'
    
  2. Now create a TensorFlowProject instance with your credentials, which enables real-time monitoring of the experiment. In the run_training function, before the training loop, add the following statement:

    missinglink_project = missinglink.TensorFlowProject(OWNER_ID, PROJECT_TOKEN)
    
  3. Next, create a new experiment as a context that wraps your training loop:

    with missinglink_project.create_experiment(
        display_name='MNIST multilayer perceptron',
        description='Two fully connected hidden layers',
        monitored_metrics={'loss': loss, 'acc': eval_correct}) as experiment:
    

    Parameter descriptions

    • display_name (optional): Name to be displayed for the experiment
    • description (optional): Experiment description
    • monitored_metrics: Dictionary of all the metrics that will be tracked during the experiment
  4. Within the experiment context, change the for loop to use the experiment.loop generator instead of the range function.

    # replace
    # for step in range(MAX_STEPS):
    # with 
    for step in experiment.loop(max_iterations=MAX_STEPS):
    

    Note

    Additional ways to implement the iteration loop

    • Use the iterable parameter
      loop can also iterate over an iterable, using the iterable parameter:

    for step, data in experiment.loop(iterable=train_data):
        # Perform a training step on the data
    

    The iterable argument can be any iterable - a list, a file, a generator, and so on. When used with the iterable parameter, loop yields the index of the step and the data from the iterable, as in the sketch below.
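
    For instance, a minimal sketch using a generator as the iterable (assuming a fill_feed_dict helper like the one in the basic script, which builds a feed dictionary per batch):

    def training_batches():
        # Hypothetical generator; replace with your own batch source.
        for _ in range(MAX_STEPS):
            yield fill_feed_dict(data_sets.train, images_placeholder,
                                 labels_placeholder)

    for step, feed_dict in experiment.loop(iterable=training_batches()):
        _, loss_value = session.run([train_op, loss], feed_dict=feed_dict)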

    • Use a lambda condition

    The optional condition parameter can also control the training loop: iteration continues as long as the condition returns True.

    Note that loss_value below is not the actual loss value - it is a variable created for this example. The training runs as long as loss_value is greater than 0.5, so in a real loop you would update it on each iteration (for example, from session.run, as in the next step).

    loss_value = 0.55
    for step in experiment.loop(condition=lambda _: loss_value > 0.5):
        # Update loss_value here so the condition can eventually fail
    
  5. Next, create various contexts so that the SDK is aware of the different stages of your training cycle.

    Wrap the session.run call for a training step in the experiment.train context:

    with experiment.train():
        _, loss_value = session.run([train_op, loss], feed_dict=feed_dict)
    

    Note

    If you would like to monitor additional metrics at this level, beyond those the experiment already monitors from step 3, you can supply them here.

    For instance, to monitor an additional metric mean_squared_loss in the training stage only, write the following:

    with experiment.train(monitored_metrics={'mean_squared_loss': mean_squared_loss}):
        _, loss_value = session.run(
            [train_op, loss], feed_dict=feed_dict
        )
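
    Here, mean_squared_loss stands for any tensor you define yourself; it is not part of the basic script. A hypothetical definition might look like this (assuming integer labels, NUM_CLASSES output units, and tensorflow imported as tf, as in the basic script):

    # Hypothetical metric; not part of the basic mnist.py script.
    one_hot_labels = tf.one_hot(labels_placeholder, NUM_CLASSES)
    mean_squared_loss = tf.reduce_mean(
        tf.square(tf.nn.softmax(logits) - one_hot_labels))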
    
  6. Similarly, add the experiment.validation context.

    if (step + 1) % 500 == 0 or (step + 1) == MAX_STEPS:
        with experiment.validation():
            do_eval(session, eval_correct, images_placeholder,
                    labels_placeholder, data_sets.validation)
    

    Note

    If you would like to monitor additional metrics at this level, beyond those the experiment already monitors from step 3, you can supply them here.

    For instance, to monitor an additional metric mean_squared_loss in the validation stage only, write the following:

    with experiment.validation(monitored_metrics={'mean_squared_loss': mean_squared_loss}):
        do_eval(session, eval_correct, images_placeholder,
                labels_placeholder, data_sets.validation)
    
  7. Define the testing context by adding the experiment.test context.

    total_test_iterations = data_sets.test.num_examples

    with experiment.test(
            total_test_iterations,
            expected=labels_placeholder,
            predicted=logits):
        session.run(eval_correct, feed_dict=feed_dict)
    

    Parameter descriptions

    • total_test_iterations: Total number of iterations needed to cover the test dataset
    • expected: Tensor for expected values
    • predicted: Tensor for predictions


You have now successfully integrated the MissingLink SDK.

  • Inspect the resulting integrated script; a condensed sketch of the integrated loop follows this list.
  • Run the new script and see how the MissingLink.ai dashboard helps you monitor the experiment, as described in the next section.
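
Putting the pieces together, the integrated run_training loop looks roughly like this (a condensed sketch assembled from the snippets above; feed dictionary construction is omitted, and do_eval follows the basic script):

    import missinglink

    OWNER_ID = 'Your owner id'
    PROJECT_TOKEN = 'Your project token'

    missinglink_project = missinglink.TensorFlowProject(OWNER_ID, PROJECT_TOKEN)

    with missinglink_project.create_experiment(
            display_name='MNIST multilayer perceptron',
            description='Two fully connected hidden layers',
            monitored_metrics={'loss': loss, 'acc': eval_correct}) as experiment:
        for step in experiment.loop(max_iterations=MAX_STEPS):
            # feed_dict construction omitted; see the basic script
            with experiment.train():
                _, loss_value = session.run([train_op, loss], feed_dict=feed_dict)

            if (step + 1) % 500 == 0 or (step + 1) == MAX_STEPS:
                with experiment.validation():
                    do_eval(session, eval_correct, images_placeholder,
                            labels_placeholder, data_sets.validation)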

Web dashboard monitoring

You can monitor your experiment on your MissingLink dashboard.

Click on the experiment to view your metric graphs.

Next steps

Learn more about integrating with TensorFlow to enable the following MissingLink features: