Model-based and model-free extraction of parameterised skills
This project extracts parameterised skills from human-level specifications (e.g., CAD models and Methods-Time Measurement (MTM) analyses) and from human demonstrations. The following methodological steps will be taken. First, a palette of basic robotic skills will be constructed. Human demonstrations will then be recognised as sequences of segmented skills, from which the skills and their task-relevant parameters are extracted; this parameterisation enables fast generalisation of a skill to different use cases. The developed algorithm aims to handle both visual and kinesthetic human demonstrations for training robotic skills, so that the extracted task parameters can be mapped directly between the visual modality and control inputs.
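The pipeline above (skill palette, segmentation of a demonstration, extraction of task parameters) can be sketched minimally as follows. This is an illustrative toy, not the project's actual method: the skill names, the `Skill` class, the greedy run-length segmentation, and `duration` as a stand-in task parameter are all assumptions for the sake of the example.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical skill representation; the real palette and parameter set
# are not specified in the project summary.
@dataclass
class Skill:
    name: str                                            # e.g. "reach", "grasp"
    params: Dict[str, float] = field(default_factory=dict)  # task-relevant parameters

def segment_demo(frame_labels: List[str]) -> List[Skill]:
    """Collapse a per-frame label sequence (e.g. from a skill recogniser)
    into a sequence of skill segments, recording segment length as a toy
    stand-in for an extracted task parameter."""
    skills: List[Skill] = []
    for lab in frame_labels:
        if skills and skills[-1].name == lab:
            skills[-1].params["duration"] += 1   # extend the current segment
        else:
            skills.append(Skill(lab, {"duration": 1}))  # start a new segment
    return skills

demo = ["reach", "reach", "grasp", "move", "move", "move", "place"]
print([(s.name, s.params["duration"]) for s in segment_demo(demo)])
# → [('reach', 2), ('grasp', 1), ('move', 3), ('place', 1)]
```

In a real system the per-frame labels would come from a recogniser over visual or kinesthetic data, and the extracted parameters would be quantities such as grasp poses or target positions rather than durations.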