Thesis abstract:
The problem of Simultaneous Localization and Mapping (SLAM) concerns the estimation of the pose of an observer (usually a robot) from sensor observations (i.e., local measurements), while building, at the same time, a consistent map of the observed environment. Visual SLAM aims at solving the SLAM problem using only visual sensors, i.e., cameras.
In this thesis we developed a multi-camera SLAM system able to operate in real time, in large environments, with an arbitrary number of cameras. The underlying methodology is the well-known EKF-SLAM (Extended Kalman Filter SLAM) algorithm, to which we introduce several improvements.
First of all, we thoroughly reviewed the parameterizations for monocular SLAM presented in the literature, highlighting some properties that allow a reduction of their computational complexity. Moreover, two new parameterizations have been introduced, and their differences with respect to previous parameterizations have been analyzed. A multi-camera Visual SLAM system has been developed that builds on monocular measurements, allowing the definition of a flexible system in which cameras are treated independently and may have reduced overlapping fields of view. The Conditionally Independent (CI) Submapping Framework is introduced into the system to handle large-scale problems. The properties of this system are investigated in depth, bringing to light problems with ill-conditioned matrices. To address these issues, we reformulated the EKF-SLAM algorithm in the Hybrid Indirect EKF-SLAM form, where two common approaches to Indirect EKF estimation (a.k.a. Error State EKF) are combined. As a final contribution, we developed a technique that refines each submap of the CI-SLAM system with a Bundle Adjustment (BA) optimization. Optimized submaps are then reinserted into the CI-SLAM submap collection, allowing the estimation process to continue. Extensive tests and analyses have been performed on simulated environments and on real datasets.