Current students


ZAPPA ISACCO
Cycle: XXXVII

Section: Systems and Control
Advisor: ZANCHETTIN ANDREA MARIA
Tutor: FAGIANO LORENZO MARIO

Major Research topic:
Cobots understanding tasks programmed by demonstration.

Abstract:
The advent of the fourth industrial revolution brought a change of perspective on the role of robots in factories. Traditional robotic manipulators were conceived as general-purpose machines, hard-coded and deployed for specific tasks in confined workspaces; robots are now conceived as cyber-physical systems that can work next to human operators.
Collaborative Robots (cobots) are devised as the embodiment of this paradigm shift, offering companies a smarter, interoperable machine to employ in their production lines. Progress in cobot design and in the sensors they can be equipped with enables them to interact safely with human operators, sharing the same workspace to complete collaborative tasks. Moreover, cobot programming is simplified by user-friendly programming environments with growing sets of ready-to-use skills, allowing more flexibility and ease of deployment. These characteristics explain the cobots' growing market share. However, one factor still limiting the widespread adoption of cobots in factories is the need for a field expert to design and validate the robot's operation, which restricts how easily a cobot can be reconfigured for different tasks.
Several strategies have been designed to provide intuitive methods to program a robot. Programming by Demonstration teaches the robot how to execute a task by extracting the relevant information from a demonstration. The types of demonstration employed can vary, ranging from manually guiding the robot arm, to teleoperation, to letting the system observe an operator executing the task. Although these techniques partially address the problem of fast robot reconfiguration, they do not exploit the intrinsic modularity of industrial tasks or the transferability of previously acquired knowledge.
This research focuses on enabling the robotic system to acquire, through intrinsic and extrinsic sensors, a semantic understanding of the scene characterizing the workspace and of the actions that make up the demonstration. Semantics refers to the knowledge that we, as humans, build of the objects surrounding us and the relations among them, together with the organization of this knowledge and the strategies for taking advantage of it. Semantics applied to robotics tries to mimic the representation we create of the world: from the raw data acquired by the sensors, abstractions are built that describe objects' properties and their relations, while also characterizing robot actions. The objective of this research is to provide a framework that enables the transferability and scalability of the knowledge the robot gains from previous experience, thus allowing faster deployment in new tasks without the need for an expert operator.
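As a toy illustration of what such an abstraction layer might look like, the sketch below models a scene as symbolic objects, binary relations, and actions described by their post-conditions. All class names, predicates, and the state-update rule here are hypothetical choices made for illustration; they are not the framework described in the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    """A perceived object abstracted into symbolic properties."""
    name: str
    properties: dict = field(default_factory=dict)  # e.g. colour, shape

@dataclass
class Relation:
    """A binary relation between two named entities, e.g. cube on_top_of tray."""
    subject: str
    predicate: str
    target: str

@dataclass
class Action:
    """A demonstrated action, characterized by the relations it establishes."""
    verb: str                 # e.g. "pick", "place"
    obj: str                  # name of the object acted upon
    post: list                # relations that hold after the action

class SemanticScene:
    """Symbolic world state built on top of raw sensor data."""
    def __init__(self):
        self.objects = {}
        self.relations = []

    def add_object(self, obj: SceneObject):
        self.objects[obj.name] = obj

    def assert_relation(self, rel: Relation):
        self.relations.append(rel)

    def apply(self, action: Action):
        # Drop relations whose subject is the manipulated object,
        # then assert the action's post-conditions.
        self.relations = [r for r in self.relations if r.subject != action.obj]
        self.relations.extend(action.post)

# Example: a demonstration places a cube onto a tray.
scene = SemanticScene()
scene.add_object(SceneObject("cube", {"colour": "red", "shape": "box"}))
scene.add_object(SceneObject("tray", {"shape": "flat"}))
scene.assert_relation(Relation("cube", "on_top_of", "table"))

place = Action("place", "cube", post=[Relation("cube", "on_top_of", "tray")])
scene.apply(place)
```

Because the state lives at this symbolic level rather than in raw joint or pixel space, an action such as "place the cube on the tray" could in principle be reused in a new workspace where the same relations apply, which is the kind of transferability the abstract argues for.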