Efficient Simulation and Implementation of Neural Networks on Resource-Constrained Platforms
Abstract
Neural networks have been widely adopted in signal processing applications and systems. Embedded processing platforms are of increasing interest for deploying such systems because of the application scenarios enabled by their portability and growing computational capabilities. However, unlike high-performance computing platforms, which are well suited to computationally intensive neural-network-equipped systems, embedded platforms are often characterized by tight resource constraints. These constraints necessitate new types of optimization and trade-off analysis in the complex design spaces associated with neural network implementation. Resource constraints are also a major concern in the simulation of spiking neural networks (SNNs) on commercial-off-the-shelf (COTS) desktop or laptop computers. Such simulation capability opens up much greater access to accurate SNN simulation, which is conventionally carried out on supercomputers or specialized hardware. This thesis develops novel models and methods for efficient simulation and implementation of neural networks on resource-constrained platforms.
First, we present a novel approach for simulating SNNs that is based on timed dataflow graphs. Whereas conventional SNN simulators compute changes in spiking neuron state variables at each time step, the proposed approach focuses on evaluating spike timings. This focus on evaluating when a dataflow actor (spiking neuron) produces its next spike makes spike evaluation an event-driven computation. The resulting event-driven simulation approach avoids unnecessary computations at time steps that lie between spiking events, while also avoiding the large lookup-table overheads incurred by existing event-driven approaches. Our results show spiking behavior identical to that of a conventional (time-based) simulator while providing significant improvement in execution time. Furthermore, the event-driven simulations run on a low-cost COTS computer, whereas most SNN simulators target supercomputer-scale platforms or specialized hardware, as described above.
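The event-driven idea can be illustrated with a toy example (a hypothetical constant-input leaky integrate-and-fire neuron, not the thesis's dataflow implementation): instead of updating the membrane potential at every time step, the next spike time is computed in closed form and the simulation jumps directly from event to event.

```python
import math

# Illustrative parameters (not from the thesis): a leaky integrate-and-fire
# (LIF) neuron driven by a constant input current.
TAU = 20.0      # membrane time constant (ms)
V_TH = 1.0      # spike threshold (normalized)
V_RESET = 0.0   # reset potential
I_DRIVE = 1.5   # constant normalized drive (steady-state V without threshold)

def next_spike_time(t_now):
    """Closed-form time of the next spike for a constant-input LIF neuron.

    Starting from V_RESET at t_now, V(t) = I_DRIVE * (1 - exp(-(t - t_now)/TAU)).
    Solving V(t) = V_TH gives the spike time directly, with no per-time-step
    updates between spikes.
    """
    if I_DRIVE <= V_TH:
        return math.inf  # drive too weak: the neuron never reaches threshold
    return t_now + TAU * math.log(I_DRIVE / (I_DRIVE - V_TH))

def event_driven_spikes(t_end):
    """Jump from spike event to spike event instead of stepping through time."""
    spikes, t = [], 0.0
    while True:
        t = next_spike_time(t)
        if t > t_end:
            return spikes
        spikes.append(t)

def time_stepped_spikes(t_end, dt=0.001):
    """Conventional clock-driven simulation, for comparison."""
    spikes, v, t = [], V_RESET, 0.0
    while t < t_end:
        v += dt / TAU * (I_DRIVE - v)   # forward-Euler membrane update
        t += dt
        if v >= V_TH:
            spikes.append(t)
            v = V_RESET
    return spikes

ev = event_driven_spikes(200.0)
ts = time_stepped_spikes(200.0)
print(len(ev), len(ts))  # the two methods agree on the spike train
```

The event-driven version performs one logarithm per spike, while the clock-driven version performs hundreds of thousands of updates for the same 200 ms window, which is the source of the execution-time advantage described above.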
Second, this thesis investigates the implementation of deep neural networks (DNNs), deep convolutional neural networks in particular, on resource-constrained platforms. This study is carried out in the context of hyperspectral image processing, which has attracted increasing research interest in recent years, due in part to the high spectral resolution of hyperspectral images together with the emergence of DNNs as a promising class of methods for their analysis. An important challenge in realizing the full potential of hyperspectral imaging technology is deploying image analysis capabilities on resource-constrained platforms, such as unmanned aerial vehicles (UAVs) and mobile computing platforms. In this thesis, we develop a novel approach for designing DNNs for hyperspectral image processing that are targeted to resource-constrained platforms. Our approach involves optimizing the design of a single DNN for operation across a variable number of spectral bands. DNNs developed in this way can then be adapted dynamically based on the availability of resources and real-time performance constraints. The proposed approach incorporates the Dynamic Data Driven Application Systems (DDDAS) paradigm as an integrated part of the design and training process to enable dynamic, data-driven adaptation of the DNN structure, that is, the set of computational modules and connections that are active when the DNN operates. We demonstrate the effectiveness of the proposed class of adaptive and scalable DNNs through experiments using publicly available remote sensing datasets.
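The idea of one network serving a variable number of spectral bands can be sketched as follows (the band counts, layer shapes, and weight-slicing scheme are illustrative assumptions, not the thesis's architecture): a single spectral input layer is sized for the maximum band count, and its weights are sliced at run time to match however many bands are currently available.

```python
import numpy as np

# Hypothetical configuration (not from the thesis).
MAX_BANDS = 200   # maximum spectral bands the stored model supports
FILTERS = 16      # number of spectral features extracted per pixel

rng = np.random.default_rng(0)
# One set of stored weights serves every band count: a 1x1 "spectral stem"
# mixing all input bands into FILTERS feature channels.
stem_weights = rng.standard_normal((FILTERS, MAX_BANDS))

def stem_forward(pixels, n_bands):
    """Apply the spectral stem using only the first n_bands channels.

    pixels: (n_pixels, n_bands) array of spectra. Slicing the shared weights
    lets the same model adapt to resource and sensing constraints at run time.
    """
    w = stem_weights[:, :n_bands]   # keep only the weights for active bands
    return pixels @ w.T             # (n_pixels, FILTERS) features

full = stem_forward(rng.standard_normal((4, MAX_BANDS)), MAX_BANDS)
reduced = stem_forward(rng.standard_normal((4, 50)), 50)
print(full.shape, reduced.shape)  # both band counts yield FILTERS-dim features
```

Because every band count produces features of the same shape, the downstream layers are unchanged, which is what makes dynamic adaptation cheap at run time.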
Third, DNNs are adopted in numerous application areas of signal and information processing, with convolutional neural networks (CNNs) being a particularly popular class of DNNs. Many machine learning (ML) frameworks have evolved for the design and training of CNN models, and a similarly wide variety of target platforms, ranging from mobile and resource-constrained devices to desktop and more powerful machines, are used to deploy CNN-equipped applications. To help designers navigate the complex design spaces involved in deploying CNN models derived from ML frameworks on alternative processing platforms, retargetable methods for implementing CNN models are of increasing interest.
In this thesis, we present a novel software tool, called the Lightweight-dataflow-based CNN Inference Package (LCIP), for retargetable, optimized CNN inference on different hardware platforms (e.g., x86 and ARM CPUs, and GPUs). In LCIP, source code for CNN operators (convolution, pooling, etc.) derived from ML frameworks is wrapped within dataflow actors. The resulting coarse-grain dataflow models are then optimized using the retargetable LCIP runtime engine, which employs higher-level dataflow analysis and orchestration that is complementary to the intra-operator performance optimizations provided by the ML framework and the back-end development tools of the target platform. Additionally, LCIP enables heterogeneous and distributed edge inference of CNNs by offloading part of a CNN to additional devices, such as an onboard GPU or networked devices. Our experimental results show that LCIP provides significant improvements in inference throughput on commonly used CNN architectures, and that the improvement is consistent across desktop and resource-constrained platforms.
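The coarse-grain dataflow idea behind wrapping CNN operators can be sketched as follows (the `Actor` class, `connect` helper, and run-to-completion scheduler are an illustrative toy, not LCIP's actual API): each operator becomes an actor that fires when its input FIFO holds a token, and a scheduler orchestrates firings independently of each operator's internal implementation.

```python
from collections import deque

class Actor:
    """A dataflow actor wrapping one operator kernel behind a FIFO interface."""
    def __init__(self, name, func):
        self.name, self.func = name, func
        self.inbox = deque()   # input FIFO of tokens
        self.outbox = None     # set when this actor is wired to a successor

    def can_fire(self):
        return bool(self.inbox)  # fireable whenever an input token is queued

    def fire(self):
        token = self.inbox.popleft()
        result = self.func(token)        # invoke the wrapped operator kernel
        if self.outbox is not None:
            self.outbox.append(result)   # forward the result along the edge
        return result

def connect(src, dst):
    src.outbox = dst.inbox  # a FIFO edge from one actor to the next

# Stand-ins for framework-derived operator kernels (not real CNN layers).
conv = Actor("conv", lambda x: [v * 2 for v in x])
pool = Actor("pool", lambda x: max(x))
connect(conv, pool)
conv.inbox.append([1, 3, 2])  # one input token entering the graph

# A trivial run-to-completion scheduler; a real runtime would also handle
# multi-rate firings, scheduling order, and offload to other devices.
results = []
for actor in (conv, pool):
    while actor.can_fire():
        results.append((actor.name, actor.fire()))
print(results)
```

Because the scheduler sees only tokens and firings, the same graph-level orchestration applies regardless of which back end implements each kernel, which is the retargetability the tool is built around.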
Lastly, image classification is a fundamental task for many types of autonomous and smart systems. With advances in CNNs, the accuracy of image classification systems has improved dramatically. However, the escalating complexity of state-of-the-art CNN solutions poses significant challenges for implementing real-time image classification on resource-constrained platforms. The framework of elastic neural networks has been proposed to address trade-offs between classification accuracy and real-time performance by placing intermediate early exits in deep CNNs, allowing systems to switch among multiple candidate outputs while switching off inference layers that are not used by the selected output. In this thesis, we propose a novel approach for configuring early-exit points when converting a deep CNN into an elastic neural network. The proposed approach systematically optimizes the quality and diversity of the alternative CNN operating points provided by the derived elastic networks. We demonstrate the utility of the proposed elastic neural network approach on the CIFAR-100 dataset.
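The early-exit mechanism can be sketched as follows (the stage functions, exit heads, and confidence threshold are illustrative placeholders, not the thesis's trained configuration): inference runs the backbone stage by stage and stops at the first exit whose confidence clears a threshold, so all later layers are skipped.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def elastic_infer(x, stages, exits, threshold=0.9):
    """Run stages in order; return at the first sufficiently confident exit."""
    for depth, (stage, exit_head) in enumerate(zip(stages, exits)):
        x = stage(x)                   # backbone computation for this stage
        probs = softmax(exit_head(x))  # classifier attached at this depth
        if probs.max() >= threshold or depth == len(stages) - 1:
            return int(probs.argmax()), depth  # prediction and exit used

# Toy 3-stage "backbone" with a 3-class exit head after each stage.
rng = np.random.default_rng(1)
stages = [lambda x: 2.0 * x, lambda x: x + 1.0, lambda x: np.tanh(x)]
weights = [rng.standard_normal((3, 4)) for _ in range(3)]
exits = [lambda f, w=w: w @ f for w in weights]

x0 = rng.standard_normal(4)
pred, depth = elastic_infer(x0, stages, exits, threshold=0.5)
print(pred, depth)  # an earlier exit skips the cost of all later stages
```

Lowering the threshold trades accuracy for speed (more inputs leave at shallow exits), while raising it pushes inputs through deeper, more accurate layers; choosing where to place the exits so these operating points are high quality and well spread is the configuration problem addressed above.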