Thesis abstract:
Classical control theory has limited applicability in unstructured or partially known environments. Reinforcement Learning (RL) techniques may help to overcome such limitations: rather than requiring all possible scenarios to be preprogrammed, RL can learn the system dynamics during operation. At the same time, RL may benefit from control theory in terms of reduced learning time and formal guarantees.
Recently, RL approaches have been scaled to robotic systems where complete information is unavailable. However, these techniques have been applied only to simple problems, such as estimating the kinematics of the robot or imitating trajectories.
Despite the limited number of successful applications, industrial robotics is demanding the integration of RL techniques to solve problems that are difficult to address with classical control techniques.
For instance, many control applications require mitigating the impact of uncertainty while avoiding potentially dangerous strategies (risk-averse control). In this scenario, the use of predefined strategies is infeasible due to frequent changes in the environment.
Another possible scenario is task planning, where the robot can learn a set of basic tasks that it can later combine to achieve new and more complex behaviors (hierarchical and transfer learning).
My research will focus on investigating the limits of current models and on defining new solution concepts.