Helping Babies and Their Parents Sleep Better
Nanit is a consumer product for parents of young babies and toddlers. It is an automated sleep adviser that uses computer vision, image recognition, and deep learning to monitor and analyze the baby’s sleep behavior and provide guidelines for improving the quantity and quality of the baby’s sleep.
Poor sleep in the early years of life is problematic both for the child and the parents, making Nanit an essential solution for the entire family. For babies, sleep is critical for mental and physical development. For parents, dealing with poor sleep is exhausting—and parents lose 44 nights of sleep on average in their baby’s first year. Nanit provides the parenting and extended caregiving team with expert tips and guidelines on how to improve their baby’s sleep, based on close and continual observation of their baby’s sleep patterns.
Nanit is a fast-growing company with offices in Tel Aviv and New York City. Its R&D team comprises developers, data engineers, and algorithm developers. Although currently focused on sleep issues related to babies, Nanit’s algorithm team is excited about building a smart platform with many other possible applications, such as adult sleep disorders, elder care, hospital care, and much more.
Why Deep Learning for Nanit
Nanit’s mission is a classic use case for deep learning applied to computer vision. The team works with psychologists and sleep and behavioral experts to gain a deep understanding of the sleep behavior of babies and their parents. They then run hundreds of experiments a month. Over the course of these iterative experiments, which run on tagged, representative data, the team trains an optimized model of the statistical correlations between parameters such as body and eye movements and the baby’s sleep patterns.
The Nanit deep learning model can then make accurate inferences about babies and their sleep patterns in the general population, providing parents with hands-on, personalized, expert sleep training guidelines. For example, Nanit may learn that a certain baby wakes up crying several times during the night, but usually falls back to sleep spontaneously after a couple of minutes. The advice to the baby’s parents, therefore, will be to let the baby cry for a few minutes, rather than rush in immediately and pick her up. Nanit helps the parents reinforce their baby’s ability to put herself back to sleep.
The Challenges of Running Hundreds of Experiments Concurrently
In its early stages, Nanit ran hundreds of experiments and trained dozens of models using TensorFlow and Keras, two popular open-source deep learning frameworks, as well as home-grown tools and processes. However, the Nanit team ran into fundamental challenges regarding scalability, data exploration, versioning, and tracking. They also encountered productivity issues, with their data scientists spending a lot of time on infrastructure and operations rather than on the core problem. Let’s take an in-depth look at some of the challenges Nanit faced.
Scalability and Streamlined Cycles
For a deep learning startup like Nanit, scalability was both very important and very challenging. Managing early models was fairly easy, with a small team running a limited number of experiments on just a few gigabytes of data. But as the business grew, they soon hit a wall. They added more data scientists; terabytes of data had to be prepared, tagged, explored, queried, managed, and tracked; more and more experiments were being run concurrently; and the models became more complex. For each experiment, the team had to keep careful track of the data (often from a variety of sources), as well as the hyperparameters and the code for running the experiment and evaluating the results.
At the beginning we were using naming conventions of folders to try to keep track [of all this] and pretty soon we hit a wall because the team was growing and we were running a lot of experiments concurrently, with many parameters...[We] got to a point when we realized that we must use some kind of database. But we also realized that building this database in-house would use a lot of in-house resources for something that is not our core technology.
Data Exploration and Querying
When building a deep learning model, you have to be able to explore the structure of your datasets and the relationships among various data elements so that you can choose the most appropriate data to run on each training model. In Nanit’s case, for example, the team wanted to be able to slice and cluster the data so that they could test different models based on various combinations of parameters, such as sleep times, specific sleep durations (e.g., from the first to the fourth hour), room temperature, the level of lighting, and so on. They knew that these decisions would have considerable impact on the performance of their models, but without the proper tools, they found the data exploration and querying process to be very challenging and time-consuming.
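To make the idea of slicing by combinations of parameters concrete, here is a minimal sketch using pandas. The column names and values are illustrative assumptions, not Nanit’s actual schema; the point is that once session metadata is queryable, selecting a training subset becomes a one-line filter.

```python
import pandas as pd

# Hypothetical sleep-session metadata; column names are illustrative,
# not Nanit's actual schema.
sessions = pd.DataFrame({
    "session_id": [1, 2, 3, 4],
    "sleep_start_hour": [19, 21, 20, 22],
    "duration_hours": [9.5, 6.0, 8.0, 5.5],
    "room_temp_c": [21.0, 24.5, 22.0, 19.5],
    "light_level_lux": [3, 15, 5, 40],
})

# Slice: sessions that started before 21:00, lasted at least 8 hours,
# and took place in a room between 20 and 23 degrees Celsius.
subset = sessions[
    (sessions["sleep_start_hour"] < 21)
    & (sessions["duration_hours"] >= 8)
    & sessions["room_temp_c"].between(20, 23)
]
print(subset["session_id"].tolist())  # → [1, 3]
```

Each combination of filters like this defines a candidate training set, which is why fast, expressive querying matters so much at this stage.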
Data Versioning and Experiment Tracking
There are numerous source control tools that you can use when writing and versioning code, including open-source frameworks like Git. In data science, however, versioning and source control are far more complex because you also have to track and version your data repositories. With numerous training models being run on different sources of data and hyperparameters, the Nanit team found it difficult to compare results across all experiments. It was even more complicated to reliably and easily reproduce the ones that gave the best results.
Once Nanit began running hundreds of experiments on terabytes of data to train complex deep neural networks, they knew that they needed: “…to track what data was used to train the model because it affects the performance. [However] the tools that we looked at didn’t have the ability built-in to support that.”
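The core idea behind tracking which data trained which model is content-addressed, immutable versioning. The sketch below shows the general technique under stated assumptions: it is a simplified illustration of hash-based dataset versioning, not MissingLink’s actual implementation.

```python
import hashlib
import json

# A minimal sketch of immutable, content-addressed dataset versioning.
# A "version" is a hash over the sorted (path, file_hash) manifest, so
# any change to any file yields a new version ID. This illustrates the
# general technique, not MissingLink's actual implementation.

def file_fingerprint(data: bytes) -> str:
    """Hash one file's contents."""
    return hashlib.sha256(data).hexdigest()

def dataset_version(files: dict[str, bytes]) -> str:
    """Compute a deterministic version ID for a set of files."""
    manifest = sorted((path, file_fingerprint(blob)) for path, blob in files.items())
    return hashlib.sha256(json.dumps(manifest).encode()).hexdigest()[:12]

v1 = dataset_version({"a.jpg": b"frame-1", "b.jpg": b"frame-2"})
v2 = dataset_version({"a.jpg": b"frame-1", "b.jpg": b"frame-2-edited"})
print(v1 == v2)  # → False: editing one file produces a new version ID
```

Because the version ID is derived purely from content, a committed version can never silently change, and recording that ID alongside an experiment pins down exactly which data trained the model.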
Data Privacy and Protection
Nanit’s customers entrust the company with highly sensitive data—images and videos of their sleeping babies. In return, they expect the company to strictly uphold the highest standards of data privacy. The servers and all data repositories must be highly secure, and all data must be anonymized and encrypted. No third-party tool or platform should ever have access to the data. Thus, when considering third-party version control solutions for their deep learning data, Nanit prioritized the ability to maintain in-house security and privacy practices. They also wanted to avoid redefining their custom workflow to adapt to the new tool.
Time, Focus, and Core Competencies
Nanit’s data science and engineering team wanted to focus on the core tasks and algorithms that would accelerate product improvement and the development of new features. Instead, however, they spent countless precious hours on peripheral tasks such as manually copying data files, maintaining their cloud infrastructure, and other operational activities.
The team spent a large portion of their time on infrastructure and activities that are not their core competency. We wanted the data scientists to focus on research and training networks and not finding tricks to handle data sources and versions...
How MissingLink Helped Nanit Overcome These Challenges
Nanit wanted a scalable platform that could streamline, manage, version, and query large volumes of data. They also wanted the platform to track experiments—without having to update their existing toolset and workflows, or restructure or give away access to their sensitive data. MissingLink’s modular, plug-and-play platform was a great fit.
In this section we learn how MissingLink helped Nanit overcome their deep learning challenges and achieve the scalability that they needed to accelerate development.
Data Versioning, Tracking, and Feedback Loops
Nanit’s data scientists and developers, who are the main consumers of the MissingLink platform, use it to precisely track experiments, model versions, and multiple data sources. MissingLink provides them with immutable data versions: once committed, a data version cannot be modified, and any subsequent changes are automatically captured in a new version.
Because MissingLink integrates seamlessly with other data platforms that Nanit uses, the team was able to implement highly automated data lifecycle management processes. For example, they can keep close track of data throughout a feedback loop in which they use production data to continue tuning the training model and then deploy the retrained models back into production. MissingLink gives the team constant full visibility into what model version is at what phase (tagging, training, production, and so on).
Advanced Slicing of Data Is One Query Away
Data exploration is one of the pillars of data science. In order to thoroughly test a hypothesis, the data scientist must have clear insight into the data and how it is structured prior to training. However, given the very large quantities of data and the fact that they are not typically stored in a relational database, data exploration and querying in deep learning are far from trivial tasks.
With MissingLink, Nanit’s data scientists can easily store metadata on their data—across different servers, different data types, different sizes of images and tables, and more. It then becomes as easy as using a relational database to explore, slice, aggregate, and cluster the data, with complete transparency into how any given dataset is structured.
Reproducing Experiments with One Click
Deep learning experiments are very complex. Each one comprises a specific dataset (perhaps from multiple data sources), hyperparameters, code for running the experiment and evaluating the results, and so on. It would save a lot of time if all of these elements could be easily and faithfully reaggregated in order to reproduce a successful experiment.
However, the Nanit team found that, in reality, reproducing experiments is a very time-consuming process—and not always successful. Today, thanks to the way MissingLink automatically stores information on all experiment-related elements, the Nanit team can seamlessly revisit, examine, and reproduce experiments.
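The elements listed above can be captured in a reproducibility manifest: record the dataset version, the code revision, and the hyperparameters together, and an experiment can be reassembled later. The sketch below is a simplified illustration of this pattern; the field names and file name are assumptions for the example, not MissingLink’s API.

```python
import json

# A minimal sketch of recording everything needed to reproduce an
# experiment: dataset version, code revision, and hyperparameters.
# Field names and values are illustrative, not MissingLink's API.

def record_experiment(path, *, dataset_version, code_commit, hyperparams):
    """Write a reproducibility manifest for one experiment run."""
    manifest = {
        "dataset_version": dataset_version,
        "code_commit": code_commit,
        "hyperparams": hyperparams,
    }
    with open(path, "w") as f:
        json.dump(manifest, f, indent=2, sort_keys=True)
    return manifest

def load_experiment(path):
    """Read a manifest back to rerun the experiment it describes."""
    with open(path) as f:
        return json.load(f)

saved = record_experiment(
    "exp_042.json",
    dataset_version="3f9c1ab2",   # hypothetical content hash of the dataset
    code_commit="a1b2c3d",        # hypothetical Git revision of the training code
    hyperparams={"lr": 1e-4, "batch_size": 64, "epochs": 30},
)
assert load_experiment("exp_042.json") == saved  # manifest round-trips exactly
```

With every run recorded this way, “reproduce the best experiment” reduces to loading its manifest and rerunning the pinned code on the pinned data.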
Data Privacy and Protection
The Nanit training dataset contains highly sensitive data provided by Nanit’s customers. Two key data management requirements for MissingLink were: the data never leaves Nanit’s premises, and only the Nanit team has access to it.
MissingLink’s unique data management implementation allows Nanit to manage its huge (millions of data points) and constantly evolving datasets without having to upload the raw data. MissingLink’s serverless hybrid architecture allows Nanit to manage its data on the cloud, but the data itself resides on Nanit’s storage facilities only.
Ease of Use and Integration
MissingLink provides both CLI and visual interfaces. The Nanit team has remarked that they find MissingLink’s CLI very flexible and easy to use. They’ve also noted how helpful it is that MissingLink presents the status of hundreds of experiments in easy-to-understand visual dashboards.
In addition, because MissingLink integrates seamlessly with other tools, code, servers, and JSON format data, the team can easily add and retrieve data. In general, MissingLink enhances, rather than replaces, their existing workflows and processes.
In short, with its precise and comprehensive tracking, as well as ease of integration, MissingLink has become a key part of Nanit’s automated data lifecycle—from data exploration to versioning and management.
MissingLink Accelerates Nanit’s Business
Just as DevOps tools and processes have dramatically accelerated the delivery of software products and features, MissingLink’s deep learning operations (DeepOps) platform has simplified and streamlined Nanit’s learning lifecycle, infrastructure, and operations so that their data scientists can focus on real solutions to real problems.
Accelerated time to market – Nanit believes that, thanks to MissingLink, its R&D pace has increased significantly, which translates into accelerated time to market for value-capturing product enhancements.
With MissingLink we eliminated a huge portion of the time that the algorithm team spent on infrastructure, and that time was converted into research. The research itself also accelerated because we could do more in less time.
Improved productivity and greater focus – With MissingLink, the data science team is now working optimally. Instead of expending time and resources on managing data and infrastructure, they can focus on core research and algorithms.
Like Slack, the real power of MissingLink is from the bottom up. It is an enabling tool that lets data scientists focus on what they really like: core issues and technology.
Expanded opportunities – By spending more time on core issues, Nanit can effectively explore the new features and applications that are essential to achieving their aggressive business growth strategy.
As we move to the future we are trying to solve more and more problems with more and more data. With MissingLink we are spending less time on infrastructure and other activities not related to the core value that we provide our users. Instead, we can focus on research into real problems, achieving insights that bring greater benefits to our customers.
MissingLink brings all of the advantages of DevOps to the field of deep learning. By streamlining data, experiment, and infrastructure management for data scientists, MissingLink frees them to focus on their core competencies. In the case of Nanit, the AI team has significantly accelerated the time-to-market for improvements and new features—bringing greater relief to young babies and their parents even faster.