Three years ago, this article would have been titled, “The Power of Artificial Intelligence (AI): What Machine Learning Can Do.” It would have listed subsets of Machine Learning (ML) like computer vision, natural language processing, sentiment analysis, and recommendation engines, highlighting the versatility of this technology. Today, we take this one step further by addressing the novel hardware that supports these applications. Gone are the days when ML models required a desktop, laptop, or cloud compute to run. By leveraging an ML-optimized edge device such as the NVIDIA Jetson, models can process data anywhere, anytime, at mission speed. Anywhere? Yes, anywhere: in a car, a plane, a drone, a boat, or a backpack.

Measuring only a few inches and weighing a few ounces, edge devices are small but powerful computers designed to work in disconnected environments: disconnected physically from an office and from networks such as WiFi and Bluetooth. These robust devices are engineered to operate in hard-use settings such as vehicles and drones and are built to withstand extreme temperatures (-40 to 185°F). They are also purpose-built for AI/ML in both modeling capability and hardware configuration, able to process still imagery, live video, audio files, signal data, and more in the field in real-time.

Out with the old, in with the new

Traditionally, field operators (military personnel, intelligence professionals, law enforcement officers, wildland firefighters, first responders) collect data and bring it back to the office for processing. In this paradigm, analytic and ML model outputs are outdated as soon as they are produced. The lag between data collection and processed results can stretch from hours to days to months. The process is slow, costly, demands significant bandwidth, and does not deliver the actionable intelligence field operators need on the ground.

Leveraging edge hardware provides real-time answers that drive time- and mission-sensitive decisions. Operators can process data in the field, in real-time, effectively removing the barriers to who can process data, when it can be processed, and where it can be processed. Because data is collected and processed in the field, it does not all need to be sent back to a central server; sending only actionable results reduces bandwidth requirements and saves on transfer and storage costs. When time, cost, and bandwidth matter and lives hang in the balance, edge computing is the key to success.

CASE STUDY

Real-time wildfire location & spread prediction

Numerous models and simulations exist for characterizing and predicting wildland fire behavior. The U.S. Forest Service (USFS) and other organizations have devoted years of research to identifying the parameters that affect fire movement, rate of spread, and direction of spread across geographic terrain. While this research is invaluable to the firefighting community, computational constraints mean these models do not run in real-time or against imagery at time of collection, so they do little to assist firefighters and first responders on the ground during a wildland fire event.

NT Concepts developed an end-to-end, real-time streaming pipeline that characterizes fire behavior and predicts the rate and location of spread across any geographic terrain. The first step is classification and segmentation of the wildland fire in Wide Area Motion Imagery (WAMI) using ML methods to answer, frame over frame, the question, “Where is the fire currently?” The second step is a robust, purpose-built Recurrent Neural Network (RNN) architecture, trained on many of the parameters the USFS has been studying for decades, that predicts where the fire will move in the coming minutes. Running these two models on an edge device mounted on aerial collection platforms provides real-time decision support for firefighting operations.
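To make the two-step loop concrete, here is a minimal Python sketch of how a streaming pipeline of this shape can be structured: a per-frame segmentation step followed by a recurrent predictor fed by the perimeter history. The names (segment_fire, SpreadPredictor) and the placeholder logic inside them are illustrative assumptions, not the production models.

```python
# Illustrative sketch of the two-step streaming loop: (1) segment the fire in
# each WAMI frame, (2) feed the perimeter history to a recurrent model that
# forecasts near-term spread. All model logic below is a stand-in.
from collections import deque

import numpy as np


def segment_fire(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the ML segmentation step: returns a binary fire mask."""
    # A real implementation would run a trained segmentation network;
    # here we simply threshold one band of the frame.
    return (frame > 0.8).astype(np.uint8)


class SpreadPredictor:
    """Stand-in for the purpose-built RNN that forecasts fire movement."""

    def __init__(self, history_len: int = 8):
        self.history = deque(maxlen=history_len)

    def update(self, fire_mask: np.ndarray) -> np.ndarray:
        self.history.append(fire_mask)
        # Toy forecast: if the perimeter is growing, blend the recent masks
        # to approximate where the fire is heading next.
        growth = self.history[-1].sum() - self.history[0].sum()
        if growth <= 0:
            return self.history[-1]
        return np.clip(sum(self.history) / len(self.history), 0, 1)


def stream(frames):
    predictor = SpreadPredictor()
    for frame in frames:                    # frames arrive from the WAMI sensor
        mask = segment_fire(frame)          # step 1: where is the fire now?
        forecast = predictor.update(mask)   # step 2: where will it move next?
        yield mask, forecast


if __name__ == "__main__":
    fake_frames = (np.random.rand(256, 256) for _ in range(30))
    for current, predicted in stream(fake_frames):
        pass  # in the field, masks/forecasts would be rendered on a map layer
```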

Imagine a mobile app that shows the minute-by-minute past, current, and predicted future movements of a wildland fire. Just as meteorologists, localities, and citizens use weather radar apps to watch storms and hurricanes move across their areas, so too could firefighters, localities, and concerned citizens track a fire. In emergency situations, knowledge is power. For localities, this means alerting citizens faster and with more accurate information to reduce panic. For citizens, it means increased awareness. For firefighters in the field, it delivers the real-time information they need to make critical decisions, such as where to move to avoid burnover and loss of life.


Meredith Gregory explains wildfire perimeter extraction and prediction at our Introduction to Data Science NEXT Talks.

Technical Challenges

Though the real-world applications of edge computing are seemingly boundless, operating these systems under real-time constraints presents technical challenges. Limits on processing capacity and power consumption must be addressed to ensure the application runs without lag.

Incorporating multithreading is key to bolstering model performance on an edge device. Multithreading and multiprocessing are two techniques for making better use of limited compute: multiprocessing spreads work across multiple processes running on separate CPU cores, while multithreading runs multiple threads within a single process so that tasks like frame capture, inference, and display can overlap. When all steps of the pipeline run in a single thread, the frame rate is roughly half that of a multithreaded application. Careful architecting can lead to major performance gains.
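As an illustration of the pattern, the sketch below runs frame capture and model inference on separate threads connected by a small queue, so a slow model never stalls the camera. The helper names (capture_frame, run_inference) and the timings are placeholder assumptions, not the actual application code.

```python
# Minimal multithreaded capture/inference pattern: one thread grabs frames,
# another runs the model, and a bounded queue connects them.
import queue
import threading
import time

frame_queue = queue.Queue(maxsize=4)  # small buffer; drop frames rather than lag


def capture_frame():
    """Stand-in for grabbing a frame from the camera pipeline."""
    time.sleep(0.03)   # simulate a ~30 fps source
    return object()


def run_inference(frame):
    """Stand-in for the detection/segmentation model."""
    time.sleep(0.1)    # the expensive step we keep off the capture thread
    return []


def capture_loop(stop: threading.Event):
    while not stop.is_set():
        frame = capture_frame()
        try:
            frame_queue.put_nowait(frame)   # never stall the camera
        except queue.Full:
            pass                            # drop the frame instead of lagging


def inference_loop(stop: threading.Event):
    while not stop.is_set():
        try:
            frame = frame_queue.get(timeout=0.5)
        except queue.Empty:
            continue
        detections = run_inference(frame)
        # hand `detections` to a display/telemetry thread here


if __name__ == "__main__":
    stop = threading.Event()
    threads = [threading.Thread(target=capture_loop, args=(stop,)),
               threading.Thread(target=inference_loop, args=(stop,))]
    for t in threads:
        t.start()
    time.sleep(2)
    stop.set()
    for t in threads:
        t.join()
```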

Another challenge is maintaining a stable power supply. The Jetson, for example, offers two power options: a micro-USB port and a DC barrel jack. The micro-USB port supplies up to 5V/2A; however, many USB power supplies cannot sustain that output, and the board browns out when voltage drops below 4.75V. For intensive tasks like the object detection application, powering from the DC barrel jack, which supplies 5V/4A, is required to keep the supply stable.

CASE STUDY

Real-time object detection & tracking

Object detection and tracking algorithms have proven time and again to be powerful ways to characterize scenes from imagery. One example is identifying and classifying objects of interest in aerial or ground-level imagery to improve situational awareness. Both the relevance and the timeliness of that capability increase substantially when the models are deployed on edge devices.

Situational awareness is crucial for law enforcement, military, and intelligence officers, and real-time object detection increases it. Imagine you are a soldier on patrol. While driving through a crowded market, orchestrated object detection models alert you to a potential ambush location, a person of interest, or a car following typical surveillance patterns. The detections, drawn over live video on a screen attached to an edge device or projected on the windshield, draw your attention to a target you might otherwise have missed, and you can react immediately. Now imagine you are not the only one who gets the alert: the detection, GPS coordinates, and other key information are sent back to dispatch or headquarters at the same time, so that all units, whether air, maritime, or ground, can converge and track the target. Whether deployed in a car, backpack, plane, or drone, edge devices give field operators the power of AI previously available only to those in offices with expensive, heavy, immobile processing units.

NT Concepts summer data science interns built a real-time object detection pipeline for a Jetson Nano in six weeks over Summer 2020. The final application runs inference on live video from a car dashboard camera using an off-the-shelf TensorFlow Lite (.tflite) model to locate objects of interest and presents the detections to the user on a screen. Built with modularity in mind, the pipeline can accept any TensorFlow Lite or ONNX model; ONNX compatibility is key because those models also run on Android Tactical Assault Kits. The objects to find and track are limited only by imagination, no longer by available compute power or bandwidth. With model optimizations and multithreading, the application processes 4-5 frames per second (fps) and produces a clean video for the user.
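For readers curious what such a loop looks like, here is a simplified sketch of TensorFlow Lite inference over live video frames, assuming a generic quantized SSD-style detection model whose outputs are boxes, classes, scores, and a count. The model path, camera index, input dtype, and output ordering are assumptions for illustration, not the interns' actual pipeline.

```python
# Simplified TFLite detection loop over live video (assumed SSD-style model).
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")  # assumed model file
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, in_h, in_w, _ = input_details[0]["shape"]

cap = cv2.VideoCapture(0)  # dashboard camera assumed at device index 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the model's expected input and add a batch dimension
    # (uint8 input assumed for a quantized model).
    resized = cv2.resize(frame, (in_w, in_h))
    interpreter.set_tensor(input_details[0]["index"],
                           np.expand_dims(resized, axis=0).astype(np.uint8))
    interpreter.invoke()
    # Assumed output order: 0 = boxes, 1 = classes, 2 = scores, 3 = count.
    boxes = interpreter.get_tensor(output_details[0]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    h, w = frame.shape[:2]
    for box, score in zip(boxes, scores):
        if score < 0.5:
            continue
        y1, x1, y2, x2 = box  # normalized [ymin, xmin, ymax, xmax]
        cv2.rectangle(frame, (int(x1 * w), int(y1 * h)),
                      (int(x2 * w), int(y2 * h)), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

In practice, the capture and inference steps above would live on separate threads, as described in the Technical Challenges section, to keep the displayed video smooth.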


Real-time object detection model running from a car. Cars, trucks, traffic lights, and trees are among the objects this model locates. The GPS coordinates of the vehicle appear in the lower-left corner and the frame rate (fps) in the lower-right corner. The map on the right shows the location of the vehicle as the video is recorded.

Looking ahead: Multi-INT at the tactical edge

Multi-INT fusion, the combination of multiple types of intelligence (MASINT, SIGINT, HUMINT, OSINT, etc.), is not a new idea. Used together, these sources generate a more complete and detailed view of a place or event, and thus greater situational awareness for field operators. Attempts at multi-INT fusion are plagued by the same challenges described above: results are outdated as soon as they are produced, and all data from all sources must be shipped or transmitted to base for processing and dissemination. Today, each source sends back its data in its entirety; the data are cleaned, combined, and analyzed; and only then are results sent back to the field. When four hours can mean the difference between life and death, placing the power of multi-INT fusion in the hands of field operators through edge devices is crucial.

By placing fusion and analysis at the tactical edge, the end-users (i.e., field operators) are the first rather than the last to see results, greatly reducing lag time as well as the cost and bandwidth of moving large amounts of data.

The faster the data is fused and processed, the quicker decisions can be made.

Meredith Gregory

MEREDITH GREGORY is a Technical Program Manager and ML Solutions Architect championing NT Concepts’ machine learning solutions: she curates datasets, trains models, and visualizes the results. An avid learner with an extensive GIS background, Meredith contributes best-in-class geospatial methodologies and technology to the team.