CICCONE MARCO | Cycle: XXXII
Section: Computer Science and Engineering
Tutor: SILVANO CRISTINA
Advisor: MATTEUCCI MATTEO

Major Research topic: Learning reusable skills for task decomposition

Abstract:
Over the last decade, deep neural networks have advanced the state of the art in several application areas, including computer vision, natural language processing, planning, and control, in some cases surpassing human performance. The key feature of such techniques is the ability to learn to solve a given task, e.g. classification, in a fully end-to-end, data-driven fashion. In most cases, once a problem is defined, an agent learns to solve the associated task end-to-end from observations by optimizing some objective. While this "task-driven learning" is an interesting paradigm, it has several pitfalls. Accurately solving a task usually requires mastering a variety of sub-tasks that can be hard to define a priori. When an agent learns a task, it has to implicitly decompose the associated problem into the skills necessary to solve it and, consequently, learn those skills either one at a time or simultaneously. This learning process is far from efficient. Reinforcement Learning (RL) applications such as complex control tasks, for instance, require sampling a massive number of trajectories from the environment, and convergence to optimal policies is slow and not guaranteed. Moreover, interaction with real environments is not always possible, so simulations are often needed, and transferring learned policies from simulated to real environments adds another level of complexity to the learning pipeline. Indeed, current RL algorithms fail to learn policies that generalize to new situations, even when tasks or environments partially share information with previously observed ones. We propose a framework to learn how to decompose tasks into sub-tasks while learning reusable components, hypothesizing that such a decomposition into primitive skills is key to improving learning efficiency in transfer and multi-task learning settings, also known as meta-learning scenarios.
We illustrate the issues of end-to-end learning for task-performance maximization and explain why, from our perspective, it is not the right paradigm for transfer learning.