
Generic Integration with Network (With Steps)

These steps show you how to integrate the MissingLink SDK with a multilayer perceptron neural network. The topic provides generic steps for situations where you are not using a framework, or are using a framework that is not supported.

The following steps are covered:

  • Create a new experiment.
  • Define an experiment context.
  • Change the loop.
  • Define a training context.
  • Define a validation context.
  • Define a testing context.


You can also try the step-by-step tutorial for integrating the MissingLink SDK with a generic framework.


You must have created a new project. If not, follow the instructions in Creating a project.


You can refer to the full code sample described below.

Write code

  1. Import the SDK and define your credentials at the beginning of the file (before any function definition).

    from missinglink import VanillaProject
  2. Now create a VanillaProject instance with your credentials so that you can monitor the experiment in real time. Before the training loop, add the following statement:

    project = VanillaProject(project_token=args.project_token)
  3. Create a new experiment as a context that contains your training loop:


    If you are using custom metrics, there are additional steps that you need to perform to see them in the dashboard. For more information, see Visualization of Custom Metrics in Generic Frameworks.

    with project.experiment() as experiment:

    Parameter descriptions

    • display_name (optional): Name to be displayed on the experiment
    • description (optional): Experiment description
    • metrics: Dictionary of all the metrics that will be tracked during the experiment
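    The experiment context follows the standard Python context-manager pattern: the training loop runs inside the `with` block, and the SDK finalizes reporting when the block exits. Here is a minimal, self-contained stand-in (not the real SDK; the names and fields are illustrative assumptions) that shows how such a context wraps your loop:

```python
from contextlib import contextmanager

@contextmanager
def experiment(display_name=None, description=None, metrics=None):
    """Illustrative stand-in for project.experiment() -- not the real SDK."""
    state = {"display_name": display_name, "metrics": dict(metrics or {}), "finished": False}
    try:
        yield state              # the training loop runs inside the `with` block
    finally:
        state["finished"] = True  # the SDK would report completion here

with experiment(display_name="demo", metrics={"loss": None}) as exp:
    exp["metrics"]["loss"] = 0.42  # update a tracked metric during training

print(exp["finished"])  # → True
```

    The `finally` clause is what guarantees the experiment is closed out even if the training loop raises an exception.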
  4. Within the experiment context, change the for loop to use the experiment.loop generator:

    for i in experiment.loop(1000):

    Additional implementations of iteration loop

    • Use iterable parameter
      loop can also iterate over an iterable, using the iterable parameter:
    for step, data in experiment.loop(iterable=train_data):
    # Perform a training step on the data

    The iterable argument can be any iterable, such as a list, a file, or a generator. When used with the iterable parameter, loop yields the index of the step and the data from the iterable.

    • Use lambda condition

    The optional condition parameter accepts a callable that controls how long the training loop runs.

    Note that loss_value below is not an actual loss value; it is a variable created for this example. The training runs as long as the loss value is greater than 0.5.

    loss_value = 0.55
    for step in experiment.loop(condition=lambda _: loss_value > 0.5):


    If you prefer to use your own iterator logic, simply tell MissingLink which batch of the loop you are currently in:

    for i in range(0, 5000, 2):
        with experiment.batch(i):
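    To make the semantics of the three loop forms above concrete, here is a simplified, self-contained stand-in (plain Python, not the real SDK) that mimics how such a generator could behave under those three call forms:

```python
def loop(max_iterations=None, iterable=None, condition=None):
    """Simplified stand-in for experiment.loop -- illustrates the three call forms."""
    if iterable is not None:
        # iterable form: yield (step, data) pairs
        for step, data in enumerate(iterable):
            yield step, data
    elif condition is not None:
        # condition form: keep yielding step numbers while the callable is truthy
        step = 0
        while condition(step):
            yield step
            step += 1
    else:
        # count form: a plain range of step indices
        yield from range(max_iterations)

counted = list(loop(3))                          # → [0, 1, 2]
paired = list(loop(iterable=["a", "b"]))         # → [(0, 'a'), (1, 'b')]
limited = list(loop(condition=lambda s: s < 2))  # → [0, 1]
print(counted, paired, limited)
```

    The real generator additionally reports each step to the dashboard; this sketch only reproduces the iteration behavior.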
  5. Next, create various contexts so that the SDK is aware of different steps of your training cycle.

    Add the experiment.train context.

    with experiment.train():


    If you would like to monitor additional metrics at this level, beyond what the experiment already tracks from step 3, you can supply them here.

    For instance, if you would like to also monitor another metric mean_squared_loss in the training stage only, write the following:

    with experiment.train(monitored_metrics={'mean_squared_loss': mean_squared_loss}):
        # sess is assumed to be your TensorFlow session
        _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
  6. Similarly, add the experiment.validation context.

    with experiment.validation():


    If you would like to monitor additional metrics at this level, beyond what the experiment already tracks from step 3, you can supply them here in the same way as in the training context.

    You can also log individual metric values inside the context with experiment.log_metric. For instance, to log two custom metrics in the validation stage only, write the following:

        experiment.log_metric('loggy', val_loggy)
        experiment.log_metric('sin', val_sin)
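    Conceptually, log_metric records a named value for the current step so the dashboard can plot it over time. A minimal stand-in (not the real SDK; the class and field names are illustrative) shows the idea:

```python
class MetricLog:
    """Illustrative stand-in for the metric storage behind experiment.log_metric."""
    def __init__(self):
        self.history = {}

    def log_metric(self, name, value):
        # append the value to the metric's history, as a dashboard would plot it
        self.history.setdefault(name, []).append(float(value))

exp = MetricLog()
exp.log_metric("loggy", 0.9)
exp.log_metric("loggy", 0.7)
exp.log_metric("sin", 0.1)
print(exp.history)  # → {'loggy': [0.9, 0.7], 'sin': [0.1]}
```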
  7. Similarly, add the experiment.test context.

    with experiment.test() as test:
    Parameter descriptions

    • total_test_iterations: Total iterations needed to go over test dataset
    • expected: Tensor for expected values
    • predicted: Tensor for predictions

    If you do implement a testing context, MissingLink automatically adds a confusion matrix and a table of standard test metrics, all viewable under the Test tab for the experiment.


You have now successfully integrated the MissingLink SDK.

Web dashboard monitoring

You can monitor your experiment on your MissingLink dashboard.


Click on the experiment to view your metric graphs.


Next steps

Learn more about integrating with generic code to enable the following MissingLink features: