Thesis abstract:
The potential of the Internet of Things is leading to a paradigm shift with an ambitious long-term vision, in which battery-operated sensing nodes are empowered with sight and are capable of complex visual analysis tasks. Unfortunately, this is out of reach with current technology. This thesis presents a comprehensive set of new methodologies to empower wireless sensor networks with vision capabilities comparable to those achievable by power-hungry smart camera systems. The key tenet is that most visual analysis tasks can be carried out based on a succinct representation of the image, which entails both global and local features, while disregarding the underlying pixel-level representation. Still, under severe energy and bandwidth constraints it is imperative to optimize the computation, coding, and transmission of the visual features. On the coding side, this thesis tackles the problem by reversing the conventional compress-then-analyze paradigm: image features are collected by sensing nodes, compressed with suitable coding algorithms, and delivered to the final destination(s) in order to enable higher-level visual analysis tasks by means of either centralized or distributed detectors and classifiers, somewhat mimicking the processing of visual stimuli in the early visual system. The transmission of visual features is subject to tight application-dependent requirements (bandwidth/delay guarantees) and may be affected by network conditions. Therefore, on the communication side, this thesis addresses the design of energy-efficient tools for optimizing the operation of visual sensor networks: allocating the available resources to multiple wireless camera nodes, developing cooperative schemes for speeding up the in-network processing of visual features, and exploring the impact of spatio-temporal trade-offs on the energy consumption of each node in the visual sensor network.