


CNN vs RNN: Which Neural Network Is Right for You?


Neural networks, inspired by the human brain, are increasingly being used in the classification of complex information and the use of CNNs and RNNs is especially promising. To learn about these two neural networks, often used in conjunction with each other, and how they are being developed to advance computer identification and the prediction of visual and audio inputs, read on.

What Is a CNN?

A Convolutional Neural Network (CNN) is a multi-layer neural network used to analyze images for image classification, segmentation or object detection. CNNs work by reducing an image to its key features and using the combined probabilities of the identified features appearing together to determine a classification. One advantage that CNNs have over other classification algorithms is that they require fewer hyperparameters and less supervision.

What Is an RNN?

A Recurrent Neural Network (RNN) is a multi-layer neural network, used to analyze sequential input, such as text, speech or videos, for classification and prediction purposes. RNNs work by evaluating sections of an input in comparison with the sections both before and after the section being classified through the use of weighted memory and feedback loops. RNNs are useful because they are not limited by the length of an input and can use temporal context to better predict meaning.

CNN vs RNN Comparison: Architecture and Applications

Although CNNs and RNNs are both neural networks and can process some of the same input types, they are structured differently and applied for different purposes.


CNNs are made up of three layer types—convolutional, pooling and fully-connected (FC).

In the convolutional layers, an input is analyzed by a set of filters that output a feature map. This output is then sent to a pooling layer, which reduces the size of the feature map, cutting processing time by condensing the map to its most essential information.

The convolutional and pooling processes are repeated several times, with the number of repeats depending on the network, after which the condensed feature map outputs are sent to a series of FC layers. These FC layers then flatten the maps together and compare the probabilities of each feature occurring in conjunction with the others, until the best classification is determined.
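The conv-pool-FC pattern described above can be sketched in a few lines of Keras (one of the frameworks named later in this article). The input shape, filter counts, and the ten-class output here are illustrative assumptions, not details from the article:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),          # fixed-size image input
    layers.Conv2D(32, 3, activation="relu"),  # convolution: filters produce feature maps
    layers.MaxPooling2D(2),                   # pooling: condense each feature map
    layers.Conv2D(64, 3, activation="relu"),  # repeat the conv + pool pair
    layers.MaxPooling2D(2),
    layers.Flatten(),                         # flatten the maps for the FC layers
    layers.Dense(64, activation="relu"),      # fully-connected layer
    layers.Dense(10, activation="softmax"),   # probability of each class
])
```

Note that the `Input` layer pins the image dimensions, which is exactly the fixed-size constraint discussed below.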

Illustration of CNN architecture layers

This architecture allows CNNs to learn the position and scale of features in a variety of images, making them especially good at classifying hierarchical or spatial data and extracting unlabeled features. Unfortunately, this structure also limits CNNs to fixed-size inputs and fixed-size outputs.

CNNs are currently being applied to several applications, including:

  • Computer vision—medical image analysis, image recognition and face detection
  • Natural Language Processing (NLP)—semantic parsing, sentence modeling and search query retrieval
  • Drug discovery—the discovery of chemical features and prediction of medicinal benefits


In a simple RNN, each input is evaluated on a single layer and an output is given. This can occur on a one-to-one, one-to-many, many-to-one or many-to-many input to output basis.

As the RNN analyzes the sequential features of the input, each output is returned to the analysis step in a feedback loop, allowing the current feature to be analyzed in the context of the features that came before it. Since each step requires the result of the previous one, RNNs cannot exploit massively parallel processing the way CNNs can.

When an RNN is trained, gradient descent determines the weight assigned to each input feature, which in turn controls what information is carried forward through the feedback loop. Because errors must be propagated backward through every time step of the sequence, this training process is known as Backpropagation Through Time (BPTT); the weighted feedback it tunes acts as the RNN's "short-term memory."
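The feedback loop at the heart of an RNN can be sketched in plain NumPy. The weights below are random placeholders standing in for trained values, and the sizes are arbitrary assumptions; the point is that each step's hidden state mixes the current input with a weighted memory of all previous steps:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 4, 8
W_x = rng.normal(size=(n_features, n_hidden)) * 0.1  # input weights
W_h = rng.normal(size=(n_hidden, n_hidden)) * 0.1    # feedback (recurrent) weights

def run_rnn(sequence):
    h = np.zeros(n_hidden)                  # "short-term memory" starts empty
    for x_t in sequence:                    # each step needs the previous step's
        h = np.tanh(x_t @ W_x + h @ W_h)    # result, so steps can't run in parallel
    return h                                # many-to-one: one output per sequence

sequence = rng.normal(size=(6, n_features))  # sequence length is not fixed
final_state = run_rnn(sequence)
```

Because the loop accepts a sequence of any length, this also illustrates why RNNs, unlike CNNs, are not limited to fixed-size inputs.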

Illustration of an RNN's feedback loop, unrolled over time

RNNs are currently being applied to several applications, including:

  • Temporal analysis—time-series anomaly detection and time-series prediction
  • Computer vision—image description, video tagging and video analysis
  • NLP—sentiment analysis, speech recognition, language modeling, machine translation and text generation

RNN CNN Hybrids

CNNs and RNNs are not mutually exclusive, as both can perform classification of image and text inputs, creating an opportunity to combine the two network types for increased effectiveness. This is especially true if the input to be classified is visually complex with added temporal characteristics that a CNN alone would be unable to process.

Typically, when these two network types are combined into what is sometimes called a CRNN, inputs are first processed by CNN layers, whose outputs are then fed to RNN layers. CNN Long Short-Term Memory (LSTM) architectures are particularly promising, as they can analyze inputs over longer time spans than simpler RNN architectures.
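One common way to express this CNN-into-RNN pattern in Keras is to wrap the convolutional layers in `TimeDistributed`, so the same CNN runs on every frame before an LSTM reads the resulting feature sequence. The frame size, filter counts, and five-class output below are illustrative assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Input: a variable-length sequence of frames, each (32, 32, 3).
model = keras.Sequential([
    layers.Input(shape=(None, 32, 32, 3)),
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),  # CNN per frame
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Flatten()),  # one feature vector per frame
    layers.LSTM(32),                           # RNN over the per-frame features
    layers.Dense(5, activation="softmax"),     # e.g. scene labels (assumed)
])
```

This sketch follows the video-labeling use case mentioned below, where the CNN extracts spatial features from each frame and the LSTM supplies the temporal context.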

Currently, these hybrid architectures are being explored for use in applications like video scene labeling, emotion detection or gesture recognition, video identification or gait recognition, and DNA sequence prediction.

Running RNNs and CNNs with MissingLink

As neural networks become more complex, so does the management of data and resources, but this can be simplified through automation tools.

MissingLink can help with this process through platform features that facilitate:

  • Tracking experiment progress, hyperparameters and source code

    Neural networks have numerous hyperparameters and require constant tweaking. Testing each of these requires running an experiment and tracking its results.

    MissingLink can help you keep track of where you are in the process and prevent you from losing or duplicating work.

  • Running experiments across multiple machines and GPUs

    Neural networks are computationally intensive, and running multiple experiments on different data sets can take hours or days for each iteration.

    MissingLink can simplify running experiments on multiple machines or GPUs and facilitate provisioning, configuration and distribution.

  • Managing training datasets

    Neural networks typically use media-rich datasets with images and video, which amount to GBs or even TBs of storage. Each experiment requires modifying your dataset and syncing the updated version with your training machines.

    MissingLink can help reduce the amount of time it takes to refine your dataset and reduce copy errors when sets are updated.


The MissingLink deep learning platform can simplify the process of running your experiment, with support for TensorFlow, Keras, Pycaffe and PyTorch, giving you the flexibility to select what works best for your design and allowing you to focus on what matters.

Train Deep Learning Models 20X Faster

Let us show you how you can:

  • Run experiments across hundreds of machines
  • Easily collaborate with your team on experiments
  • Reproduce experiments with one click
  • Save time and immediately understand what works and what doesn’t

MissingLink is the most comprehensive deep learning platform to manage experiments, data, and resources more frequently, at scale and with greater confidence.

Request your personal demo to start training models faster
