Thesis abstract:
In this thesis we address a broad range of current audio signal processing problems through the study, representation, acquisition/construction and use of the plenacoustic data that captures the acoustic scene as "seen" from different points in space. In particular, for the modeling of acoustic propagation, the plenacoustic data takes the form of visibility information that specifies the visibility of geometric objects from generic points in space. This information is used for efficient and accurate simulation of acoustic propagation in complex environments. As far as the analysis of acoustic scenes is concerned, the plenacoustic data acquired by one or more microphone arrays is represented in the form of a plenacoustic image that captures the soundfield arriving from a given direction at a given point in space. This image carries a great deal of information about the acoustic scene.
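As a toy illustration of the kind of visibility query involved (a minimal sketch under the assumption that the environment is described by straight 2-D wall segments, not the thesis's actual construction), two points are mutually visible when no wall segment properly crosses the straight path between them:

```python
def _ccw(a, b, c):
    # Signed area test: >0 if a->b->c turns counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p, q, a, b):
    # True if segment p-q properly crosses segment a-b.
    d1, d2 = _ccw(a, b, p), _ccw(a, b, q)
    d3, d4 = _ccw(p, q, a), _ccw(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(p, q, walls):
    # p sees q iff no wall segment blocks the straight path between them.
    return not any(segments_intersect(p, q, a, b) for a, b in walls)
```

For example, a wall from (1, -1) to (1, 1) blocks the path from (0, 0) to (2, 0) but not the path from (0, 0) to (0.5, 0).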
Following the laws of geometrical acoustics, the plenacoustic data is represented in terms of acoustic rays in a space referred to here as the ray space. The adopted parameterization of the acoustic rays allows both efficient construction of visibility information and easy extraction of acoustic features from the acquired plenacoustic images. Its high regularity and generality make the ray space representation of plenacoustic data suitable for a variety of potential applications.
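One common chart for a 2-D ray space, used here purely as an illustrative assumption rather than the parameterization adopted in the thesis, maps the ray supported by the line y = m·x + q to the coordinate pair (m, q):

```python
import math

def ray_space_coords(p0, p1):
    # Map the 2-D ray through points p0 and p1 to the (m, q)
    # coordinates of its supporting line y = m*x + q.
    # Vertical rays are not representable in this single chart.
    (x0, y0), (x1, y1) = p0, p1
    if math.isclose(x0, x1):
        raise ValueError("vertical ray: not representable as y = m*x + q")
    m = (y1 - y0) / (x1 - x0)
    q = y0 - m * x0
    return m, q

def point_on_ray(m, q, x):
    # Back-project: the point of the ray (m, q) at abscissa x.
    return x, m * x + q
```

In such a chart, every ray of the scene becomes a single point of the (m, q) plane, which is what makes feature extraction from ray-space images a pattern-analysis problem.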
The applications examined in this dissertation show the validity of the proposed approach for purposes of both modeling of acoustic propagation and analysis of acoustic scenes. In particular, the examined applications follow a specific scenario in which an advanced spatial audio system, aimed at reproducing a desired soundfield within a region of space, is placed inside an unknown hosting environment. First, an acoustic source probes the environment, and the plenacoustic images acquired by a microphone array are examined in order to infer the geometry of the environment. The modeling engine then computes the reflective paths between the source and the array. The modeled paths are used by a second analysis algorithm, which compares them with the acoustic measurements in order to estimate the reflection coefficients of all reflective surfaces in the environment. Given the information on the geometry and reflection coefficients of the hosting environment, the modeling engine is used once again to model the sound propagation inside that environment. The rendering system uses this information to reproduce the desired soundfield by means of a loudspeaker array while simultaneously compensating for the natural reverberation of the hosting environment.
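The comparison of modeled paths with measurements can be illustrated by a deliberately simplified estimator (a hypothetical sketch, not the thesis's algorithm): if each measured path amplitude is assumed to be the corresponding unit-reflection modeled amplitude scaled by a single frequency-independent coefficient, that coefficient has a closed-form least-squares estimate:

```python
def estimate_reflection_coefficient(modeled, measured):
    # Least-squares fit of the scalar r minimizing
    # sum_i (measured[i] - r * modeled[i])**2, assuming each
    # measured amplitude is r times the unit-reflection model.
    num = sum(a * b for a, b in zip(modeled, measured))
    den = sum(a * a for a in modeled)
    return num / den
```

In practice one such estimate would be formed per reflective surface, from the subset of paths that bounce off that surface.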
Theoretical aspects, implementation issues and the statistical performance of the proposed algorithms are analysed. The validity of the proposed approach, however, is not limited to the presented applications. The efficiency, regularity and generality of the representation, together with the prospect of ever wider availability of inexpensive integrated microphone/loudspeaker arrays in the near future, make the proposed tools attractive for a broader range of possible applications, including source characterization and separation, wavefield extrapolation, etc.