Deep learning algorithms are the future of machine learning (ML), but they suit only applications that can satisfy their appetite for data and compute. Many ML algorithms work effectively on small data. Here, we'll examine some of these algorithms, such as neural networks and k-means clustering, and how they can be applied.
When we think about ML for the Internet of Things (IoT), we commonly think about ML in the cloud and large-scale algorithms that pore over massive amounts of data. That’s a use model for ML in the IoT, but it’s not the only one. ML can be applied everywhere within the IoT ecosystem, just attacking different problems with different algorithms (Figure 1).
Figure 1: The IoT Ecosystem and ML/Data Perspectives. (Source: Author)
From Figure 1, you can also see some differences in how data can be used. The cloud processes data from many sites, enabling it to explore patterns across the various instances of your IoT products. If you’re concerned with fault prediction at the endpoints, having data from all endpoints can help predict failures.
The gateway can process the data that the IoT endpoints at a given site generate, giving you a site-level view. For sensor fusion and for learning household energy-usage patterns, a gateway (or home-level) view of the data is appropriate.
Finally, the endpoints themselves view the sensor data in isolation (an endpoint could itself host multiple sensors, but its view remains a narrow one). At this level, ML can be applied to individual sensors as the lowest tier in a hierarchy of ML algorithms.
There are many reasons to use ML at the edge, from communication latency to the cloud to privacy of data and learned models. Let’s look at some of the ways ML can be applied at the edge.
The Nest thermostat, or "Google Nest," is considered a preeminent example of IoT, the smart home, and ML at the edge. The Nest thermostat learns to program itself, setting your home's optimal temperature based on your unique temperature preferences and in-home schedule.
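As a flavor of how that kind of schedule learning can work at the edge, here is a minimal sketch that clusters logged manual setpoint changes into a daily schedule with k-means. This is an illustration with invented data and parameters, not Nest's actual algorithm.

```python
# Hypothetical sketch: learn daily thermostat setpoints by clustering
# manual adjustments with k-means. Not Nest's actual algorithm.
import numpy as np
from sklearn.cluster import KMeans

# Each row: (hour of day, setpoint in deg F), logged whenever the user
# manually changed the temperature. Values are invented for illustration.
adjustments = np.array([
    [6.5, 70], [7.0, 70], [6.8, 69],   # mornings: warm the house
    [8.5, 62], [9.0, 63],              # leaving for work: save energy
    [17.5, 70], [18.0, 71],            # evenings: warm up again
    [22.5, 65], [23.0, 64],            # bedtime: cool down
])

# Scale temperature so both features contribute comparably to distance.
X = adjustments * [1.0, 0.1]
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Each cluster center becomes one entry in the learned daily schedule.
for hour, temp_scaled in sorted(kmeans.cluster_centers_.tolist()):
    print(f"At {hour:4.1f}h, set {temp_scaled / 0.1:.0f} deg F")
```

The same idea generalizes: accumulate a small log of user actions on-device, then periodically re-cluster to refresh the schedule without sending any data to the cloud.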
Another example is a residential rainwater collection system that learns when and how to prioritize and optimize lawn watering based on environmental sensor data, lawn-zone priority, time of day, the level of available water, local rainfall history, and rain forecasts.
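To ground that example, here is a hypothetical sketch of the kind of prioritization logic such a controller could start from; every name, input range, and weight below is invented for illustration, and a learning layer could later tune the weights from observed outcomes.

```python
# Hypothetical scoring sketch for a rainwater irrigation controller.
# All inputs, names, and weights are illustrative assumptions.

def watering_score(soil_moisture, zone_priority, tank_level, rain_prob):
    """Higher score means water this zone sooner.

    soil_moisture: 0.0 (dry) to 1.0 (saturated), from a zone sensor
    zone_priority: 1 (low) to 3 (high), set by the homeowner
    tank_level:    0.0 (empty) to 1.0 (full) available rainwater
    rain_prob:     0.0 to 1.0 chance of rain from the local forecast
    """
    dryness = 1.0 - soil_moisture
    # Hold off when rain is likely or the tank is nearly empty.
    supply = tank_level * (1.0 - rain_prob)
    return dryness * zone_priority * supply

# (soil_moisture, zone_priority) per zone; invented readings.
zones = {"front lawn": (0.2, 3), "back lawn": (0.5, 2), "garden": (0.7, 1)}
tank_level, rain_prob = 0.8, 0.1
ranked = sorted(zones,
                key=lambda z: watering_score(*zones[z], tank_level, rain_prob),
                reverse=True)
print("Watering order:", ranked)
```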
Table 1 illustrates some of the ML algorithms that can be applied at the IoT endpoint and the applications they can serve; one of them, k-means clustering for outlier detection, is sketched in code after the table.
Table 1: Machine Learning Algorithms and Their Applications
| ALGORITHM | APPLICATION |
| --- | --- |
| Feed-forward neural networks | Prediction |
| Recurrent neural networks | Time-series prediction |
| K-means clustering | Outlier detection |
| Support vector machines | Classification |
| Principal component analysis | Fault detection |
| Naïve Bayes | Pattern recognition |
| Density-based clustering | Anomaly detection |
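As noted above, here is a minimal sketch of one row of Table 1 in practice: k-means clustering used for outlier detection on endpoint sensor readings. The simulated data, cluster count, and threshold rule are all illustrative assumptions.

```python
# Sketch: k-means outlier detection on simulated endpoint sensor data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Simulated (temperature, vibration) readings from a healthy device.
normal = rng.normal(loc=[40.0, 0.2], scale=[2.0, 0.05], size=(200, 2))

# Learn the shape of "normal" as a handful of cluster centers.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)

# Calibrate a threshold from distances to the nearest center.
train_dist = kmeans.transform(normal).min(axis=1)
threshold = train_dist.mean() + 3 * train_dist.std()

# Score new readings; the last two are injected anomalies.
new = np.array([[40.5, 0.21], [39.0, 0.18], [70.0, 0.9], [15.0, 0.8]])
flags = kmeans.transform(new).min(axis=1) > threshold
print("Outlier flags:", flags)  # expect [False False True True]
```

Because the learned model is just a few cluster centers and a threshold, it fits comfortably in an endpoint's memory and can score each new reading cheaply.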
Deep learning is useful for many use models that involve video and audio information. Deep-learning coprocessors are making it possible to build deep-learning applications in resource-constrained embedded devices, and deep-learning algorithms tailored to embedded devices are available as well.
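To give a flavor of what embedded inference involves, the sketch below shows the 8-bit quantized arithmetic at the heart of many embedded neural-network runtimes; the weights, shapes, and scale factors are placeholders, and a real deployment would typically use a purpose-built runtime rather than hand-rolled code.

```python
# Minimal sketch of 8-bit quantized feed-forward inference, the style of
# arithmetic an embedded DL runtime performs. Weights are placeholders.
import numpy as np

def quantize(x, scale):
    """Map float values to int8 using a per-tensor scale factor."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dense_int8(x_q, w_q, x_scale, w_scale):
    """Integer matrix multiply; accumulate in int32, then rescale."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)
    return acc * (x_scale * w_scale)

rng = np.random.default_rng(0)
w1 = rng.normal(size=(8, 16))   # placeholder "trained" weights
w2 = rng.normal(size=(16, 4))

x = rng.normal(size=(1, 8))     # one sensor feature vector
h = np.maximum(dense_int8(quantize(x, 0.05), quantize(w1, 0.03),
                          0.05, 0.03), 0.0)   # dense layer + ReLU
y = dense_int8(quantize(h, 0.05), quantize(w2, 0.03), 0.05, 0.03)
print("Class scores:", y)
```

Storing weights as int8 cuts memory by 4x versus float32 and lets integer-only accelerators do the heavy lifting, which is exactly the trade that deep-learning coprocessors exploit.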
Therefore, the algorithm you use is a function of the application. If deep learning is what you need, that approach can apply within IoT endpoints with the right hardware and firmware implementation.
No single approach is a panacea in the IoT domain. The solutions that succeed are those that solve problems efficiently, as close to the data as possible. Whether you integrate ML at individual levels of an IoT system or distribute it across the architecture, ML will play an even more important role in IoT systems in the future.
M. Tim Jones is a veteran embedded firmware architect with over 30 years of architecture and development experience. Tim is the author of several books and many articles across the spectrum of software and firmware development. His engineering background ranges from the development of kernels for geosynchronous spacecraft to embedded systems architecture and protocol development.