Our 10 distinguished speakers, who work on different aspects of robot learning, such as developmental
robotics, visuomotor learning, symbol and language acquisition, and multi-modal
learning, will present their research, along
with an overview of the current state of the art in their respective fields.
We are confident that with such a distinguished and diverse group of experts, this workshop can
disseminate the current state of the art and plant the initial seeds for a research community
to investigate and develop interaction mechanisms among different levels of learning approaches.
|
Minoru Asada
is a Professor in the Department of Adaptive Machine Systems at
the Graduate School of Engineering, Osaka University, Japan.
The focus of his talk will be on cognitive development in robots.
Bio
He received his Ph.D. in control engineering from Osaka University in 1982.
Since 1997, he has been a Professor of the department of Adaptive
Machine Systems at the Graduate School of Engineering, Osaka
University. He was the president of the International RoboCup
Federation (2002-2008) and was the Research Director of "ASADA
Synergistic Intelligence Project" of ERATO (2005-2012). In 2012, the
Japan Society for Promotion of Science (JSPS) named him to serve as the
Research Leader for the Specially Promoted Research Project (Tokusui)
on Constructive Developmental Science Based on Understanding the
Process From Neuro-Dynamics to Social Interaction. Since 2013, he has
been the director of the division of cognitive neuroscience robotics,
the Institute for Academic Initiatives (IAI), Osaka University.
|
|
Tamim Asfour
is a Full Professor at the Institute for Anthropomatics and Robotics,
Karlsruhe Institute of Technology (KIT), Germany, and chair of Humanoid Robotics
Systems, High Performance Humanoid Technologies Lab.
The title of his talk is Affordance-based Grasping, Balancing and
Walking.
Bio
His major current research interest is 24/7 high performance humanoid
robotics. His major research topics include humanoid
mechano-informatics, grasping and dexterous manipulation, goal-directed
imitation learning, active vision and active touch, modeling and
analysis of human motion, software and hardware architectures and
system integration.
Abstract
Exploiting interaction with the
environment is a promising and powerful way to enhance humanoid robots'
capabilities and robustness while executing locomotion and manipulation tasks.
In this talk we first present an approach for autonomous interactive
segmentation of unknown objects in a complex scene through physical
interaction and strategies for reactive grasping using visual and haptic
information. Following the idea of duality between object grasping and
humanoid balancing, we show how joint object-action representations,
Object-Action Complexes, are used and extended for associating whole-body
actions of a humanoid robot with affordances of objects and environmental
elements in the scene. We show how affordance hypotheses are generated through
visual exploration and verified using haptic feedback as well as reachability
and stability measures of the robot. Results on grasping unknown objects, as
well as on the generation of whole-body actions for balancing and footstep
planning, will be discussed.
|
|
Angelo Cangelosi
is a Professor of Artificial Intelligence and Cognition and the Director
of the Centre for Robotics and Neural Systems at Plymouth
University, UK.
The title of his talk will be Developmental Robotics for Embodied
Language Learning.
Bio
Cangelosi studied psychology and cognitive science at the University of
Rome La Sapienza and the University of Genoa, and has been a visiting
scholar at the University of California San Diego and the University of
Southampton. Cangelosi's main research expertise is on language grounding
and embodiment in humanoid robots, developmental robotics, human-robot
interaction, and on the application of neuromorphic systems for robot
learning. He currently is the coordinator of the UK EPSRC project
"BABEL: Bio-inspired Architecture for Brain Embodied Language"
(2012-2016), and previously coordinated the Marie Curie ITN "RobotDoC:
Robotics for Development of Cognition" (2009-2014) and the FP7
Integrating Project "ITALK" (2008-2012). He also is Principal Investigator
for the ongoing projects "THRIVE" (US Air Force Office of Scientific
Research, 2014-2018), the FP7 projects POETICON++ and ROBOT-ERA, and the
Marie Curie projects SECURE, ORATOR and DECORO. Overall, he has secured
over £10m of research grants as coordinator/PI. Cangelosi has produced
more than 200 scientific publications, and has chaired numerous workshops
and conferences including the IEEE ICDL-EpiRob 2011 and 2013 Conferences
(Frankfurt 2011, Osaka 2013). In 2012-13 he was Chair of the IEEE
Technical Committee on Autonomous Mental Development. In January 2015 he
became Editor-in-Chief of the IEEE Transactions on Autonomous Mental
Development, and also is Editor (with K. Dautenhahn) of the journal
Interaction Studies. His latest book "Developmental Robotics: From Babies
to Robots" (MIT Press; co-authored with Matt Schlesinger) was released in
January 2015.
Abstract
Growing theoretical and experimental research on action and language processing
and on number learning and space representation clearly demonstrates the role of
embodiment in cognition and language processing. In psychology and neuroscience
this evidence constitutes the basis of embodied cognition, also known as
grounded cognition (Pezzulo et al. 2011). In robotics, these studies have
important implications for the design of linguistic capabilities in cognitive
agents and robots for human-robot communication, and have led to the new
interdisciplinary approach of Developmental Robotics (Cangelosi & Schlesinger
2015). During the talk we will present examples of developmental robotics models
and experimental results from iCub experiments on the embodiment biases in early
word acquisition studies, on word order cues for lexical development and number
and space interaction effects. The presentation will also discuss the
implications for the "symbol grounding problem" (Cangelosi, 2012) and how
embodied robots can help address the issue of embodied cognition and the
grounding of symbol manipulation in sensorimotor intelligence.
References:
(1) Cangelosi A. (2012). Solutions and open challenges for the symbol grounding problem. International Journal of Signs and Semiotic Systems, 1(1), 49-54 (with commentaries).
(2) Cangelosi A., Schlesinger M. (2015). Developmental Robotics: From Babies to Robots. Cambridge, MA: MIT Press.
(3) Pezzulo G., Barsalou L.W., Cangelosi A., Fischer M.H., McRae K., Spivey M.J. (2011). The mechanics of embodiment: a dialog on embodiment and computational modelling. Frontiers in Psychology, 2(5), 1-21.
|
|
Lorenzo Jamone
is an Associate Researcher in humanoid robotics at VisLab, Instituto Superior Tecnico, Portugal.
The focus of his talk will be on learning affordances.
Bio
Lorenzo Jamone received his MS in
Computer Engineering from the University of Genova in 2006 (with honors), and
his PhD in Humanoid Technologies from the University of Genova and the IIT in
2010. He was an Associate Researcher at the Takanishi Laboratory at Waseda
University from 2010 to 2012, and since January 2013 he has been an Associate
Researcher at VisLab (Instituto Superior Tecnico, Lisbon, Portugal). His
research interests include cognitive humanoid robots, motor learning and
control, force and tactile sensing.
Abstract
Inspired by the extraordinary ability
of young infants to learn how to grasp and manipulate objects, many works in
robotics have proposed developmental approaches to allow robots to learn the
effects of their own motor actions on objects, i.e., the objects' affordances.
While holding an object, infants also promote its contact with other objects,
resulting in object-object interactions that may generate effects not possible
otherwise. Depending on the characteristics of both the held object
(intermediate) and the acted object (primary), systematic outcomes may occur,
leading to the emergence of a primitive concept of a tool. This later
enables increasingly complex planning skills, eventually allowing for
problem solving.
I will discuss our attempts toward modeling this kind of knowledge acquisition
and exploitation in the humanoid robot iCub. The robot first learns a
probabilistic causal model of object affordances through the interactive
exploration of the environment, and then uses such model to make predictions,
take decisions and plan sequences of actions to achieve given goals.
The learned affordances are used to ground the planning rules, so as to adapt
them to the actual motor and perceptual capabilities of the robot and to the
properties of the surrounding objects; this is made possible by the use of
probabilistic techniques both for modeling affordances and for computing the
plans.
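The probabilistic affordance model described above can be illustrated with a toy sketch. This is my own hypothetical simplification, not the iCub implementation: the actions, shapes, and effects are invented names, and the model is a plain conditional frequency table rather than the probabilistic causal model used in the actual work.

```python
from collections import Counter

class AffordanceModel:
    """Conditional frequency table: P(effect | action, object shape)."""

    def __init__(self):
        self.counts = Counter()  # (action, shape, effect) -> occurrences
        self.totals = Counter()  # (action, shape) -> trials

    def observe(self, action, shape, effect):
        # Log one exploratory interaction and its observed effect.
        self.counts[(action, shape, effect)] += 1
        self.totals[(action, shape)] += 1

    def p_effect(self, action, shape, effect):
        n = self.totals[(action, shape)]
        return self.counts[(action, shape, effect)] / n if n else 0.0

    def best_action(self, shape, desired_effect, actions):
        # Pick the action most likely to produce the desired effect.
        return max(actions, key=lambda a: self.p_effect(a, shape, desired_effect))

model = AffordanceModel()
# Simulated exploration: spheres usually roll when pushed, boxes slide,
# tapping a sphere leaves it in place.
for _ in range(8):
    model.observe("push", "sphere", "rolls")
for _ in range(2):
    model.observe("push", "sphere", "slides")
for _ in range(9):
    model.observe("push", "box", "slides")
for _ in range(10):
    model.observe("tap", "sphere", "stays")

print(model.best_action("sphere", "rolls", ["push", "tap"]))  # push
```

Because the predictions are probabilities rather than hard rules, the same table can feed a probabilistic planner that chains actions by their expected effects, in the spirit of the abstract.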
|
|
Takayuki Nagai
is a Professor in the Intelligent Systems Laboratory at The University of
Electro-Communications, Japan.
The title of his talk is Toward robots that learn concepts and words through experience.
Bio
Takayuki Nagai received his BE, ME, and
DE degrees from the Department of Electrical Engineering, Keio University, in
1993, 1995, and 1997, respectively. Since 1998, he has been with the
University of Electro-Communications where he is currently a professor of the
Graduate School of Informatics and Engineering. From 2002 to 2003, he was a
visiting scholar at the Department of Electrical and Computer Engineering,
University of California, San Diego. Since 2011, he has also been a visiting
researcher at Tamagawa University Brain Science Institute.
Abstract
To interact naturally with humans,
robots need to understand human words and take actions based on the meaning
behind those words. Moreover, it is desirable for robots to express their
intentions through language when communicating with humans. To achieve this,
much work has been done on the symbol grounding problem in the field of
intelligent robotics. In this talk, a statistical model of concepts and
language, which makes it possible for robots to learn concepts, words, and
grammar in a bottom-up manner, is introduced. The key ideas behind the
framework are spatiotemporal segmentation and multimodal categorization.
|
|
Yukie Nagai
is a Specially Appointed Associate Professor at the Graduate School of Engineering, Osaka University, Japan.
The title of her talk is Predictive Learning of Sensorimotor Information as a Key for Cognitive Development.
Bio
She received her Ph.D. in Engineering from Osaka University in 2004 and
worked as a postdoctoral researcher at the National Institute of Information
and Communications Technology in Kyoto from 2004 to 2006. She then worked at
Bielefeld University, Germany for three and a half years until she
started her current position at Osaka University in October 2009. Her
research interests include the developmental mechanism of human social
cognition such as self-other cognition, imitation, joint attention, and
cooperation. She has been investigating how human infants acquire such
abilities through interaction with their caregivers by means of
constructive approaches. More information at:
http://cnr.ams.eng.osaka-u.ac.jp/~yukie
Abstract
Human infants acquire various cognitive abilities such as self-other
cognition, imitation, and cooperation in the first few years of life.
Although developmental studies have revealed behavioral changes in
infants, the underlying mechanisms of this development are not yet fully
understood. We hypothesize that predictive learning of sensorimotor
information plays a key role in infant development. Predictive learning
is defined as a process that minimizes the prediction error between
actual sensory feedback and predicted feedback. For example, minimizing the
prediction error enables infants to discriminate the self from others
because the self's body is controllable and thus recognized as a
perfectly predictable entity. Social behaviors such as imitation and
cooperation also emerge through predictive learning. A failure in another's
action, for example, induces a larger prediction error and thus triggers
the infant's own action to reduce the error, which results in
cooperative behavior. My talk will present our robotics studies
investigating how predictive learning reproduces infant cognitive
development. Furthermore, the potential of our hypothesis to explain the
mechanism of autism spectrum disorders (ASD) will be discussed. Our
research supports a recent hypothesis that ASD is characterized by a
difficulty in learning sensorimotor prediction rather than in social
interaction.
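The prediction-error idea can be sketched in a few lines, under my own assumptions: a scalar linear forward model and simulated motor data (the actual models in this line of work are far richer). Self-generated motion is fully caused by the motor command, so after learning the residual error separates "self" from "other".

```python
import random

random.seed(0)

# Forward model: predicted_sensation = w * motor_command.
# Training shrinks the prediction error, so w approaches the true gain (2.0).
w = 0.0
lr = 0.1
for _ in range(200):
    m = random.uniform(-1, 1)
    s = 2.0 * m                 # self-generated motion: fully caused by m
    err = s - w * m             # prediction error
    w += lr * err * m           # gradient step that reduces the error

def mean_abs_error(samples):
    return sum(abs(s - w * m) for m, s in samples) / len(samples)

# After learning, self-generated motion is almost perfectly predicted,
# while another agent's motion (unrelated to our command) is not.
self_motion = [(m, 2.0 * m) for m in (random.uniform(-1, 1) for _ in range(50))]
other_motion = [(m, random.uniform(-2, 2)) for m, _ in self_motion]

e_self = mean_abs_error(self_motion)
e_other = mean_abs_error(other_motion)
print(e_self < 0.01 < e_other)  # prints True: low residual error marks "self"
```

The same error signal that drives learning also serves as the discrimination cue, which is the core of the hypothesis in the abstract.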
|
|
Luka Peternel is a Ph.D. student in the Department of Automation,
Biocybernetics and Robotics at the Jozef Stefan Institute in Ljubljana.
The title of his talk is Human-in-the-loop approach for teaching
robots to dynamically interact with the environment and humans.
Bio
The main research topics of Luka Peternel are human-in-the-loop robot teaching and exoskeleton control.
He works with Dr. Jan Babic, who is a Senior Researcher at the Jozef Stefan Institute, Slovenia,
and an Assistant Professor at the Faculty of Electrical Engineering, University of Ljubljana, Slovenia.
Dr. Babic received his Ph.D. from the Faculty of Electrical
Engineering, University of Ljubljana, examining the role of biarticular
muscles in human locomotion. During 2006/2007 he was a visiting
researcher at the ATR Computational Neuroscience Laboratories in Japan. In
November 2014, he was a visiting professor at the Institute for Intelligent
Systems and Robotics, Pierre and Marie Curie University, France.
His current research is particularly concerned with understanding
how the human brain controls movement of the body. He uses these
neurophysical models to design biologically plausible solutions for a
broad spectrum of robotic systems such as industrial robots, humanoids,
exoskeletons and rehabilitation devices.
Abstract
We present a novel human-in-the-loop approach for teaching robots how
to perform dexterous tasks involving physical interaction with
unstructured and unpredictable environments. By including the human in the
robot control loop, the human learns to perform a given task using the
robot. The newly acquired human skill is then captured and added to the
robot control system. This enables the robot to perform the task
autonomously. To demonstrate the applicability of the approach, I will
present several experiments including reactive postural control of a
humanoid robot, human-robot cooperation in dynamic manipulation tasks,
and control of exoskeleton robots. Using our approach, we can collect a
large amount of sensorimotor skills for specific robotic tasks. To
achieve full robot autonomy, a cognitive-level algorithm is needed for
the robot to decide which skill to use in a given situation. In
addition, such an algorithm could enable the robot to merge different
sensorimotor skills and potentially create new skills. We hope that the
discussion and idea-sharing at this workshop will lead to possible
solutions of this issue.
|
|
Amit Kumar Pandey
is a Chief Scientist (R&D) at Aldebaran
Robotics, Paris, France.
The title of his talk is Development of Socially Intelligent
Robots and the need of Learning: an industrial
perspective and use cases.
Bio
Earlier, he worked for 6 years as a doctoral and postdoctoral researcher in Robotics and AI at
LAAS-CNRS (French National Center for Scientific Research),
Toulouse, France. His Ph.D. thesis in Robotics (title: Towards
Socially Intelligent Robots in Human Centered Environment,
2012) was the second prize winner (tie) of the prestigious Georges
Giralt Award for the best Ph.D. thesis in Robotics in Europe,
2013, awarded by the European Robotics Research Network (euRobotics).
His current research interests include Socially
Intelligent Robots, Human-Robot Interaction (HRI) and Robot
Cognitive Architectures. On these aspects, he has actively
contributed to various national and European Union (EU) projects, as well
as to their design and proposal. Among other responsibilities, he is the coordinator
of the Socially Intelligent Robots and Societal Applications (SIRoSA)
topic group of euRobotics, and an active contributor to the Multi-Annual
Roadmap of euRobotics, which aims to shape the future of robotics in Europe in
collaboration with the European Commission (EC) ( http://www.sparc-robotics.net/ ).
Abstract
With significant advancements in robotics, robots are now beginning
to coexist and work with us, to assist and accompany us, and to interact,
play, learn and teach. The time has arrived when social robots are being
deployed or made available for practical purposes in homes, stores, and
public places. For example, the Pepper robot of the Aldebaran SoftBank
group is planned for mass production and is already being used for social
interaction at SoftBank stores in Japan, and the Romeo2 project focuses on
the development and evaluation of a humanoid robot companion for everyday
life. However diverse the applications might be, the common requirement is
that such robots, beyond their short-term engaging effect due to novelty,
should actually be able to establish long-term social relations with
humans and with individuals, by behaving in socially expected and accepted
manners. For this, Social Intelligence, being the underlying engine for
reasoning, plays a crucial role. But the questions are how to embody such
capabilities and how to close the loop of interaction and learning. These
questions are also crucial from the commercial and industrial perspective
of exploiting social robots. The talk will emphasize these aspects and
highlight some of the R&D challenges and needs from an industrial
perspective, along with some use cases.
|
|
Justus Piater
is a Professor in Computer Science, University of Innsbruck, Austria.
The title of his talk is Stacked learning of affordances.
Bio
After eight years as a professor at the University of
Liège, Belgium, including one year as a
visiting scientist at the Max Planck Institute for Biological Cybernetics
in Tübingen, Germany, Prof. Piater
moved to Innsbruck in 2010. There he founded the IIS group at the
Institute of Computer Science, which
currently counts 6 postdoctoral researchers and 8 doctoral students. He
has written about 150 publications
in international, peer-reviewed journals, conferences and workshops, and
has been a principal investigator in
1 EU-FP6 and 6 EU-FP7 projects, of which 4 are currently active.
Abstract
General-purpose autonomous robots for deployment in unstructured domains
such as service and household settings require a high level of understanding of
their environment. For example, they need to understand how to handle
objects, how to operate devices, the function of objects and their
important parts, etc. How can such understanding be made available to
robots? Hard-coding is not feasible, and conventional machine
learning approaches will not work in such high-dimensional, continuous
perception-action spaces with realistic amounts of training data. One
way to get robots to learn higher-level concepts may be to focus on
simple learning problems first, and then learn harder problems in ways
that make use of the simpler problems already learned. For example,
learning problems can be stacked by making the output of lower-level
learners available as input to higher-level learning problems,
effectively turning hard problems into easier problems by expressing
them in terms of highly-predictive attributes. This talk discusses how
this can be done, including further boosting learning efficiency by
active learning, and automatic, unsupervised structuring of sets of
learning problems and their interconnections.
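The stacking idea above can be illustrated with a toy example of my own (names such as `is_light` and `liftable`, and all data, are invented for the sketch): a low-level learner compresses a raw measurement into a binary attribute, and a higher-level concept is then learned over the small attribute space rather than the raw continuous one.

```python
from collections import Counter, defaultdict

def learn_threshold(samples):
    """Low-level learner: choose t minimizing mistakes for the rule x < t -> True."""
    candidates = sorted({x for x, _ in samples})
    return min(candidates, key=lambda t: sum((x < t) != y for x, y in samples))

# Low-level problem: learn the "light" attribute from raw weight (kg).
weight_data = [(0.2, True), (0.5, True), (0.9, True),
               (1.5, False), (3.0, False), (4.0, False)]
t_light = learn_threshold(weight_data)

def is_light(w):
    return w < t_light

# High-level problem: learn "liftable" over the stacked attribute space
# (is_light(weight), has_handle) instead of over raw measurements.
training = [  # ((weight, has_handle), liftable)
    ((0.3, True), True), ((0.6, False), True), ((0.8, True), True),
    ((2.5, True), True), ((3.5, False), False), ((2.0, False), False),
]
votes = defaultdict(Counter)
for (w, handle), liftable in training:
    votes[(is_light(w), handle)][liftable] += 1

def predict_liftable(w, handle):
    # Majority vote within the (previously seen) attribute cell.
    return votes[(is_light(w), handle)].most_common(1)[0][0]

print(predict_liftable(0.4, False))  # True: light, even without a handle
print(predict_liftable(3.0, False))  # False: heavy and no handle
```

With only two binary attributes, six examples are enough for the high-level concept, which is the point of expressing hard problems in terms of highly predictive attributes.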
|
|
Sanem Sariel
is an Assistant Professor in Computer Engineering at Istanbul
Technical University, Turkey.
The title of her talk is Robots learn what they can not afford in every context.
Bio
Sanem Sariel received the BS, MSc, and PhD degrees in Computer
Engineering from Istanbul Technical University, in 1999, 2002 and 2007,
respectively. During her PhD studies, she worked as a researcher at
Georgia Institute of Technology, Atlanta, GA, USA from 2004-2006. She was
co-advised by Tucker Balch (Georgia Tech) and Nadia Erdogan (ITU).
Sanem's research interests include robot intelligence, robust cognitive
systems and multirobot systems. She is the PI of an ongoing project
(funded by TUBITAK, the Scientific and Technological Research Council of
Turkey) on automated reasoning, planning and learning methods for
autonomous mobile robots.
Abstract
Several studies present different methods for robots to learn to complete tasks more
efficiently, and task completion is always the highest priority in these studies.
However, especially in unstructured environments, there are cases where task completion
is not possible, or where certain precautions should be taken to ensure safety during
task execution. We study how learning helps a robot determine general or specific
limitations on task execution beyond its capabilities, and gain experience with these
cases in order to make safe decisions on future tasks.
In this talk, I will present our experiential learning framework for robots to build
online experience and transfer knowledge among appropriate contexts. We use Inductive
Logic Programming (ILP) to frame hypotheses, represented in first-order logic, that are
useful for further reasoning and planning processes. We analyzed the performance of the
learning method on our autonomous mobile robot and our robot arm, both of which build
their experience from action executions at runtime.
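As a rough, hypothetical illustration of the kind of hypothesis such a framework learns, the sketch below generalizes a failure rule by intersecting attribute facts over logged executions. This is a simple set-based stand-in, not actual ILP, and every context in it is invented.

```python
# Logged execution contexts, each a set of attribute facts (all invented).
failures = [   # contexts where a pick-up action failed
    {("surface", "wet"), ("weight", "heavy"), ("shape", "box")},
    {("surface", "wet"), ("weight", "heavy"), ("shape", "ball")},
]
successes = [  # contexts where the same action succeeded
    {("surface", "dry"), ("weight", "heavy"), ("shape", "box")},
    {("surface", "wet"), ("weight", "light"), ("shape", "ball")},
]

# Generalize: keep only the facts shared by every failure ...
rule = set.intersection(*failures)
# ... and verify the rule against the successes (it should cover none).
assert not any(rule <= s for s in successes)

def predicts_failure(context):
    # The action is predicted to fail if the context contains the whole rule.
    return rule <= context

new_task = {("surface", "wet"), ("weight", "heavy"), ("shape", "mug")}
print(sorted(rule))                # [('surface', 'wet'), ('weight', 'heavy')]
print(predicts_failure(new_task))  # True
```

A planner can consult such learned rules before committing to an action, which is how limitations beyond the robot's capabilities feed back into safe decisions on future tasks.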
|