The adoption of sensory networks has been steadily increasing across technology domains including healthcare, environmental monitoring, industrial automation, smart homes, agriculture, transportation, security, and defense. The number of sensory nodes is projected to grow exponentially, reaching 75 billion by 2025 and 125 billion by 2030, generating a vast amount of raw data that needs to be processed. Typically, data in sensory networks passes through three main phases: 1) the sensing phase, which captures data in the analogue domain and performs analogue-to-digital conversion; 2) the transmission phase, in which data is transferred from the analogue sensing frontend to the processing backend; and 3) the high-level processing backend, which may involve tasks such as classification. The constant shuttling of raw data between the sensing frontend and the processing backend creates a Von Neumann-like bottleneck, imposing additional power and performance penalties on conventional technologies already struggling in the era of AI. To mitigate this growing bottleneck, it is crucial to adopt unconventional approaches, spanning emerging electronic/photonic devices and in-memory computing, that push computational capabilities closer to the edge.
On the other hand, tinyML is a machine learning technology optimized for on-device sensor data analytics at extremely low power, encompassing the hardware, algorithms, and software needed to perform analytics on battery-operated devices. While tinyML-capable hardware is already considered "good enough" for many commercial applications in defense and security, new architectures, such as in-memory compute and light-enabled sensing modules, are emerging that further enhance the capabilities of these devices. Advances in algorithms, networks, and models have led to remarkable reductions in size, with models now reaching 100 kB and below. These advances have paved the way for initial low-power applications in areas such as image and sensor data processing, which are crucial for defense and security. Typical tinyML models employ various optimization techniques, including weight reduction (pruning), quantization, clustering, encoding, and compilation, all aimed at producing smaller models with improved execution speed and memory efficiency.
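For illustration, a minimal sketch of one such optimization step, post-training quantization with the TensorFlow Lite converter, is shown below (the model file name and architecture are assumptions for illustration, not artefacts of this project):

```python
import tensorflow as tf

# Load a trained Keras model (hypothetical file; assumes a small CNN
# trained elsewhere on sensor/image data).
model = tf.keras.models.load_model("sensor_classifier.h5")

# Apply default post-training quantization while converting to a
# TensorFlow Lite flatbuffer: 32-bit float weights are reduced to 8-bit,
# shrinking the model towards the ~100 kB regime discussed above.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("sensor_classifier.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Quantized model size: {len(tflite_model) / 1024:.1f} kB")
```

The resulting flatbuffer can then be executed on a microcontroller-class runtime such as TensorFlow Lite Micro, which is the deployment path typically used in tinyML workflows.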
ProSensing aims to define a novel approach that embeds intelligence locally and enables training at the edge, by developing in-sensing processing elements (with electronic and photonic control) combined with tinyML technologies. We therefore propose an in-sensor processing architecture based on emerging RRAM devices for image classification, although the approach is equally applicable to other sensing domains such as light, RF, IR, and gas. The proposed architecture leverages the in-memory processing capabilities of RRAM devices to enable in-sensor feature extraction and near-sensor classification. This significantly reduces the data transmission penalty, since only relevant features are forwarded to the near-sensor high-level processing and tinyML layers. Furthermore, we aim to eliminate analogue-to-digital interfacing complexity by processing the extracted features directly as analogue vectors. RRAM-based Analogue Content-Addressable Memories (ACAMs) will store pretrained analogue templates and act as a near-sensor classifier.
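To make the intended data flow concrete, the following behavioural sketch in NumPy (illustrative only; the array sizes, conductance ranges, and range-based matching rule are assumptions, not the project's device models) shows how an RRAM crossbar could perform in-sensor feature extraction and how an ACAM could match the resulting analogue feature vector against stored templates:

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(pixel_voltages, conductances):
    """Analogue matrix-vector multiply: by Ohm's and Kirchhoff's laws,
    each crossbar column current is the dot product of the input voltages
    with that column's RRAM conductances (I = G^T V)."""
    return conductances.T @ pixel_voltages

def acam_match(features, low_bounds, high_bounds):
    """An ACAM row matches if every feature falls inside its stored
    analogue range [low, high]; return the first matching row (class)."""
    hits = np.all((features >= low_bounds) & (features <= high_bounds), axis=1)
    return int(np.argmax(hits)) if hits.any() else -1  # -1: no class matched

# Toy dimensions: a 16-pixel patch, 4 extracted features, 3 class templates.
pixels = rng.uniform(0.0, 1.0, size=16)            # sensed analogue voltages
G = rng.uniform(1e-6, 1e-4, size=(16, 4))          # RRAM conductance weights
features = crossbar_mvm(pixels, G)                 # analogue feature currents

templates_lo = rng.uniform(0.0, 5e-4, size=(3, 4)) # pretrained analogue
templates_hi = templates_lo + 1e-3                 # template ranges
print("Matched class:", acam_match(features, templates_lo, templates_hi))
```

In this sketch only the four-element feature vector, rather than the full pixel array, needs to leave the sensing plane, which is the data-reduction effect the proposed architecture targets.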