Current students


PAVAN MASSIMO
Cycle: XXXVII

Section: Computer Science and Engineering
Advisor: ROVERI MANUEL
Tutor: MARTINENGHI DAVIDE

Major Research topic:
Embedded and Edge AI for UWB-radar based detection and recognition of living beings and their activities in complex environments

Abstract:
Tiny Machine Learning (TinyML) is a novel research area aiming at designing machine and deep learning (MDL) models and algorithms that can be executed on tiny devices, such as Internet-of-Things (IoT) units, edge devices, or microcontroller (MCU) based systems.

Research in this area is nowadays taking giant steps in the fields of frameworks (e.g., TensorFlow Lite for Microcontrollers [2], ARM's CMSIS-NN [5]), algorithms (e.g., quantization [4] and pruning mechanisms [6]), models (e.g., [7], [1]), and learning paradigms (e.g., [3]). The advances obtained by this research allow MDL models to overcome the constraints on computation, memory, and energy consumption that characterize this technology, hence paving the way for a pervasive diffusion of TinyML applications in everyday life (e.g., smart homes and buildings, smart cars, e-health, Industry 4.0).
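As a rough illustration of the quantization techniques surveyed in [4], the sketch below shows symmetric per-tensor 8-bit post-training quantization of a weight array. This is a minimal, self-contained example for intuition only; it does not reproduce the API of any of the frameworks cited above, and the scale computation is one of several possible choices.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: w ~= scale * q, with q in int8."""
    scale = np.max(np.abs(weights)) / 127.0   # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Storing `q` instead of `w` cuts the memory footprint by 4x (int8 vs. float32), which is exactly the kind of saving that makes MDL models fit into MCU-class RAM budgets.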

This work aims at designing tiny learning algorithms, i.e., novel deep learning and advanced signal processing strategies and implementations able to meet the severe constraints on memory (the available RAM is on the order of megabytes), computation (the MCU clock frequency is on the order of megahertz), and power consumption (typically < 0.1 W) of IoT units, in both training and inference. In particular, the tasks we address are:
– Design of Tiny models and algorithms for IoT units
– Design of Incremental On-Device Supervised Learning algorithms for IoT units
– Design of Incremental On-Device Self-Supervised Learning algorithms for IoT units
This work will result in a framework that targets not a specific platform but any MCU-based embedded system, enabling incremental on-device supervised and self-supervised learning at the edge using deep learning algorithms. The solution will be designed to work also in nonstationary environments, i.e., environments in which it is not possible to apply the canonical train-and-deploy pipeline that characterizes standard MDL scenarios.
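To give a flavor of incremental on-device learning, the sketch below uses a streaming logistic-regression learner updated one sample at a time, so that no training batch needs to be kept in RAM. This is a deliberately simplified stand-in for the deep models targeted by the framework, not its actual design; the class name and hyperparameters are illustrative assumptions.

```python
import numpy as np

class StreamingLogisticRegression:
    """Tiny supervised learner updated sample-by-sample via SGD.

    Constant memory per update: only the weights are stored,
    never the stream of past samples.
    """
    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x: np.ndarray) -> float:
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def update(self, x: np.ndarray, y: int) -> None:
        # One SGD step on the log-loss for a single labeled sample.
        err = self.predict_proba(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

rng = np.random.default_rng(0)
model = StreamingLogisticRegression(n_features=2)
for _ in range(2000):
    x = rng.normal(size=2)
    y = int(x[0] + x[1] > 0)   # a simple, linearly separable concept
    model.update(x, y)         # learn as data streams in

acc = np.mean([
    (model.predict_proba(x) > 0.5) == (x[0] + x[1] > 0)
    for x in rng.normal(size=(500, 2))
])
```

The same one-sample-at-a-time structure is what makes on-device adaptation possible in nonstationary environments: when the data distribution drifts, the model keeps updating from the new stream instead of relying on a frozen, offline-trained snapshot.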

References
1. C. Alippi, S. Disabato, and M. Roveri. Moving Convolutional Neural Networks to Embedded Systems: The AlexNet and VGG-16 Case. In 2018 17th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), pages 212–223, Apr. 2018.
2. R. David et al. TensorFlow Lite Micro: Embedded Machine Learning on TinyML Systems.
arXiv:2010.08678 [cs], Mar. 2021. arXiv: 2010.08678.
3. S. Disabato and M. Roveri. Incremental On-Device Tiny Machine Learning. page 7, 2020.
4. A. Gholami et al. A Survey of Quantization Methods for Efficient Neural Network Inference. arXiv:2103.13630 [cs], June 2021. arXiv: 2103.13630.
5. L. Lai, N. Suda, and V. Chandra. CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs. arXiv:1801.06601 [cs], Jan. 2018. arXiv: 1801.06601.
6. J. Liu et al. Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey. arXiv:2005.04275 [cs, stat], May 2020. arXiv: 2005.04275.
7. P. Warden and D. Situnayake. TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers. Dec. 2019.