|
08:55-09:00 |
Welcome: Emre Ugur, Innsbruck University, Austria |
09:00-09:25 |
Invited talk:
Is artificial emotion really emotional?
(Abstract)
Prof. Minoru Asada, Osaka University, Japan.
Emotion, a driving force for generating different behaviors, is one of
the most fundamental but difficult structures/functions to design for
robots. Secondary emotions may be differentiated from primitive ones,
and during this developmental process sociality plays an important role
in driving that differentiation. In this talk, I argue how artificial
emotion can be made more realistic in a social context by showing some
of our attempts, and discuss future scenarios depicted in science
fiction and comics.
|
09:25-09:50 |
Invited talk:
Shared body for self and others in the brain
(Abstract)
Prof. Akira Murata, Kinki University, Japan.
It is well known that the dorsal visual stream, the one of the two visual pathways that projects to the parietal cortex, is related to visuospatial perception. However, the parietal cortex is not the terminal station of the dorsal stream: it has strong, mutual anatomical connections with the premotor cortex. Spatial information in the parietal cortex is sent to the premotor cortex, and the final goal is visuomotor control. At the same time, an online representation of one's own body (the body schema) is formed in the sensorimotor process, and this map can change dynamically depending on sensorimotor experience and learning. This network is thought to integrate efference copy/corollary discharge with sensory feedback, which is an essential factor both for sensorimotor control and for the body schema. Our recent findings suggest that one's own body schema also provides the basic reference frame for mapping others' bodies. This means that the neuronal substrates for monitoring one's own actions are shared with the system for recognizing and understanding others' actions. In this lecture, I will discuss the body schema as a key link between sensorimotor control and a higher-order cognitive function: body recognition.
|
09:50-10:15 |
Invited talk:
Cognitive Interaction Technology for Helpful Robots
(Abstract)
Prof. Helge Ritter, Bielefeld University, Germany.
The perceived helpfulness of a robot, and as a result the
well-being of a human in the robot's presence, is only
partly determined by the robot's function.
An important co-determining factor is the robot's interface,
which shapes its appearance and its social interaction with the human.
This makes the creation of flexible, human-adapting
interfaces a key task in the development of robots that
are perceived as helpful.
We present ongoing work on the creation of interfaces
supporting cognitive interaction between robotic devices and
humans, emphasising the role of touch and its interplay with
vision. Examples include the development of flexible,
3D-shaped tactile sensors and their use in the analysis and
control of tactile-guided interaction, such as visuo-haptic
servoing, as well as an outlook on how to augment such
capabilities with further interfacing modalities towards
systems that can interact with humans in natural and rich ways.
|
10:15-10:35 |
Recurrent Slow Feature Analysis for Developing Object Permanence in Robots
(Abstract)
Hande Celikkanat*, Erol Sahin, and Sinan Kalkan
In this work, we propose a biologically inspired
framework for developing object permanence in robots. In particular,
we build upon previous work on a slowness-principle-based
visual model (Wiskott and Sejnowski, 2002), which was shown to
be adept at tracking salient changes in the environment while
seamlessly "understanding" external causes, and in which
structures resembling the human visual system self-emerge. We propose
an extension of this architecture with a prefrontal-cortex-inspired
recurrent loop that provides a simple short-term memory, allowing
the previously reactive system to retain information through time.
We argue that object permanence in humans develops in a similar
manner, that is, on top of a previously matured object concept.
Furthermore, we show that the resulting system displays the
very behaviors that are thought to be cornerstones of object
permanence understanding in humans. Specifically, the system is
able to retain knowledge of a hidden object's velocity, as well as
its identity, through (finite) periods of occlusion.
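
To make the slowness principle concrete, here is a minimal sketch of linear Slow Feature Analysis in Python. It is illustrative only: the function name, toy data, and parameters are our own, and the paper's recurrent, prefrontal-cortex-inspired memory loop is not reproduced here.

import numpy as np

def linear_sfa(x, n_slow=2):
    # Minimal linear Slow Feature Analysis (after Wiskott & Sejnowski).
    # x: (T, D) time series; returns a projection matrix whose columns
    # extract the slowest-varying directions of the whitened input.
    x = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    whiten = evecs / np.sqrt(evals)           # whitening transform (unit variance)
    z = x @ whiten
    dz = np.diff(z, axis=0)                   # temporal derivative
    d_evals, d_evecs = np.linalg.eigh(np.cov(dz, rowvar=False))
    return whiten @ d_evecs[:, :n_slow]       # smallest eigenvalues = slowest features

# Toy usage: a slow latent signal hidden among faster, noisier observations.
t = np.linspace(0, 20 * np.pi, 5000)
slow, fast = np.sin(0.05 * t), np.sin(7.0 * t)
obs = np.column_stack([slow + 0.1 * fast,
                       fast + 0.1 * slow,
                       0.05 * np.random.randn(len(t))])
W = linear_sfa(obs, n_slow=1)
recovered = obs @ W    # should correlate strongly with the slow latent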
|
10:35-11:00 |
Coffee break
|
11:00-11:25 |
Invited talk:
Human-derived sensor fusion principles used to control biped balancing of external disturbances in a humanoid robot
(Abstract)
Prof. Thomas Mergner, Neurologische Klinik, Germany.
Background:
Human sensorimotor control is very complex, and current research still faces problems when it comes to re-embodying the hypothesized control, in the form of control models, into robots for direct human-robot comparisons. We simplified the research task by studying reactive (sensor-driven) postural reactions to exactly known external disturbances. Control of human biped stance during external disturbances lends itself to this research as a simple sensorimotor prototype.
Material and Methods: System analysis approaches, with computer modeling used back and forth with human experiments, were applied in: (1) investigations of human sensory systems, mainly vestibular and joint proprioceptive, using open-loop psychophysics of self-motion perception; (2) investigations of human postural responses to tilt and translation of the support surface and to pull stimuli acting on the body; (3) modeling of the human postural data using sensor fusion principles derived from human perception, and comparison of model simulation data with the human postural data; (4) re-embodiment of the model into a humanoid robot for direct human-robot comparisons in the human laboratory.
Results: (1) Psychophysics suggested that human self-motion perception (a) uses sensory transducer signals to reconstruct the kinematic and kinetic variables of the body-world interaction and (b) then uses these variables to reconstruct the external disturbances acting on the body. (2) Using these sensor fusion algorithms allowed implementation of the human postural response findings in a simple sensory feedback model of human stance control. The model consists of (i) a servo loop for local joint control and, superimposed on the servo, (ii) long-latency loops for disturbance estimation and compensation. (3) Model simulations delivered data that resembled the human data. (4) This similarity also held when the model was used to control the robot, with its noisy and inaccurate sensors, and when the experiments were performed in the human test bed.
Discussion & Conclusion:
The approach of deriving sensor fusion principles from human self-motion perception and using these principles to model human sensor-based postural responses to external disturbances may help to better understand human sensorimotor control. Its extension into a 'neurorobotics' approach provided a proof of principle of the sensorimotor control model and demonstrated certain advantages of this control, such as versatility in the face of changing disturbance scenarios, high fail-safe robustness, low loop gain, and low passive resistance. Currently the model and robot are being extended to include voluntary movements, control policies (including fusion of predicted with sensor-derived disturbance estimates), and a modular architecture for multi-DOF systems.
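
As a rough single-DOF illustration of the servo-plus-disturbance-estimation idea (not the authors' model; all parameter values are illustrative assumptions), a sketch of an ankle-joint inverted pendulum whose servo receives a low-pass estimate of support-surface tilt rather than a raw sensory signal:

import numpy as np

dt, T = 0.01, 10.0
m, h, g, J = 70.0, 1.0, 9.81, 70.0     # body mass, CoM height, gravity, inertia
Kp, Kd = 1200.0, 300.0                 # servo gains (Kp > m*g*h for stability)
body = np.zeros(2)                     # [body-in-space angle, angular velocity]
est_tilt = 0.0                         # long-latency (low-pass) disturbance estimate

for k in range(int(T / dt)):
    t = k * dt
    tilt = 0.05 if t > 2.0 else 0.0    # support surface tilts by 0.05 rad at t = 2 s
    joint = body[0] - tilt             # proprioception: body-to-foot joint angle
    # Estimator: vestibular (body-in-space) minus proprioceptive (joint) angle,
    # low-pass filtered with a long time constant.
    est_tilt += dt / 0.5 * ((body[0] - joint) - est_tilt)
    # Servo acts on the joint angle; the disturbance estimate shifts its set point
    # so that the body stays upright in space despite the tilt.
    torque = -Kp * (joint + est_tilt) - Kd * body[1]
    acc = (m * g * h * np.sin(body[0]) + torque) / J   # inverted-pendulum dynamics
    body = body + dt * np.array([body[1], acc])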
|
11:25-11:50 |
Invited talk:
Motor primitives and central pattern generators: from biology to robotics
(Abstract)
Prof. Auke Ijspeert, EPFL, Switzerland.
The ability to move efficiently in complex environments is a fundamental property both for animals and for robots, and the problem of locomotion and movement control is an area in which neuroscience and robotics can fruitfully interact. Animal locomotion control is to a large extent based on central pattern generators (CPGs), neural networks capable of producing complex rhythmic or discrete patterns while being activated and modulated by relatively simple control signals; in vertebrate animals, these networks are located in the spinal cord. In this talk, I will present how we model the pattern generators of lower vertebrates (lamprey and salamander) using systems of coupled oscillators, and how we test the CPG models on board amphibious robots, in particular a salamander-like robot capable of swimming and walking. The models and robots were instrumental in testing novel hypotheses concerning the mechanisms of gait transition, sensory feedback integration, and the generation of rich motor skills in vertebrate animals.
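
A minimal sketch of such a chain of coupled phase oscillators producing a travelling wave, with illustrative parameters only; the published salamander model additionally uses second-order amplitude dynamics and a body/limb structure not reproduced here:

import numpy as np

N = 8                              # number of segmental oscillators
dt, T = 0.005, 10.0
freq = 1.0                         # intrinsic frequency [Hz], set by a simple "drive"
a = 20.0                           # amplitude convergence rate
R = np.ones(N)                     # desired amplitudes
w = 4.0                            # coupling weight
lag = 2 * np.pi / N                # constant phase lag -> travelling wave

theta = np.random.rand(N) * 2 * np.pi
r = np.zeros(N)
outputs = []

for _ in range(int(T / dt)):
    dtheta = 2 * np.pi * freq * np.ones(N)
    for i in range(N):
        for j in (i - 1, i + 1):   # nearest-neighbour coupling
            if 0 <= j < N:
                dtheta[i] += w * r[j] * np.sin(theta[j] - theta[i]
                                               - np.sign(i - j) * lag)
    r += dt * a * (R - r)          # amplitudes converge to the desired values
    theta += dt * dtheta
    outputs.append(r * (1 + np.cos(theta)))   # rhythmic setpoints for the joints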
|
11:50-12:10 |
Humanoids Learning to Crawl based on Natural CPG-Actor-Critic and Motor Primitives
(Abstract)
Cai Li*, Robert Lowe and Tom Ziemke
In this article, a new CPG-Actor-Critic architecture
based on motor primitives is proposed to perform a crawling
learning task on a humanoid (the NAO robot). Starting from
an interdisciplinary explanation of the theories, we present two
investigations to test the important functions of the layered
CPG architecture: sensory feedback integration and whole-body
posture control. Based on the analysis of the experimental results,
a generic view/architecture for locomotion learning is discussed
and introduced in the conclusion.
|
12:10-12:30 |
A Bio-inspired Modular System for Humanoid Posture Control
(Abstract)
Vittorio Lippi*, Thomas Mergner, and Georg Hettich
Bio-inspired sensorimotor control systems may be
appealing to roboticists trying to solve problems of multi-
DOF humanoids and human-robot interactions. This paper
presents a simple posture control concept from neuroscience,
called the disturbance estimation and compensation (DEC) concept.
It provides human-like mechanical compliance due to low
loop gain, tolerance of time delays, and automatic adjustment
to changes in external disturbance scenarios. Its outstanding
feature is that it uses feedback of multisensory disturbance
estimates, rather than 'raw' sensory signals, for disturbance
compensation. After proof-of-principle tests in 1- and 2-DOF
posture control robots, we present here a generalized DEC
control module for multi-DOF robots. In the control layout, one
DEC module controls one DOF (modular control architecture).
Modules of neighboring joints are synergistically interconnected
using vestibular information in combination with
joint angle and torque signals. These sensory interconnections
allow each module to control the kinematics of the more distal
links as if they were a single link. This modular design makes
the complexity of the robot control scale linearly with the
number of DOFs and yields high error robustness compared to
monolithic control architectures. The presented concept uses
Matlab/Simulink (The MathWorks, Natick, USA) for both model
simulation and robot control, and will be made available as an open library.
|
12:30-12:55 |
Invited talk:
Mind-Controlled Humanoid Robots and Physical Embodiment
(Abstract)
Prof. Abderrahmane Kheddar, AIST, Japan.
This talk will address our ongoing research in robotic embodiment and thought-based control of a humanoid robot using a brain-computer interface (BCI). We efficiently integrate techniques from computer vision and task-function-based control together with the brain-computer interface into an immersive and intuitive control application, despite the well-known shortcomings of BCI. Our approach is based only on steady-state visual evoked potential (SSVEP) patterns. The user is fed back online with the video stream recorded from the humanoid's embedded camera. Images are then segmented and clustered, and learned objects are recognized from them. 3D models of the recognized objects are used to superimpose their computer-graphics representations using augmented-reality techniques. The 3D models flicker at frequencies automatically assigned by our system. Once the user's attention is directed to a given object, the SSVEP classifier reports it. Based on the affordance concept, the task associated with the object of interest is sent to the stack-of-tasks controller of the humanoid robot. This approach is assessed in a user experiment involving several subjects who successfully controlled the HRP-2 humanoid robot in a scenario involving both grasping tasks and steering. The user experiences and the interface performance are presented and give rich insight into future research to improve and extend such interfaces.
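
As a rough idea of how SSVEP-based selection can work, here is a toy single-channel sketch that compares EEG power at each object's assigned flicker frequency (plus its harmonic) and reports the strongest one. Frequencies, sampling rate, and the synthetic signal are illustrative assumptions, not the actual system's classifier.

import numpy as np

fs = 256                                   # sampling rate [Hz]
flicker = [6.0, 8.0, 10.0, 12.0]           # one flicker frequency per recognized object
t = np.arange(0, 4.0, 1.0 / fs)

# Synthetic EEG: the user attends the object flickering at 10 Hz, plus noise.
eeg = 0.8 * np.sin(2 * np.pi * 10.0 * t) + np.random.randn(len(t))

def band_power(x, fs, f0, bw=0.5):
    # Power of x in a narrow band around f0, via the periodogram.
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= f0 - bw) & (freqs <= f0 + bw)].sum()

scores = [band_power(eeg, fs, f) + band_power(eeg, fs, 2 * f) for f in flicker]
attended = int(np.argmax(scores))          # index of the attended (selected) object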
|
12:55-13:55 |
Lunch
|
13:55-14:20 |
Invited talk:
Robots under Neural Control: How to create a neuron-based learning & memory system for behaving machines?
(Abstract)
Prof. Florentin Worgotter, Univ. of Goettingen, Germany.
For several years we have been working to show the power of implicit, neural control for behaving artificial systems. We have been able to demonstrate reactive as well as adaptive control in our 18-DOF hexapod robot AMOS WD6, which leads to more than 10 different behavioral patterns in response to combinations of the robot's sensory input signals. This as such is a difficult problem, as there are no rules or explicit control laws present in this system. Rather, AMOS behaves like many insects by directly responding appropriately to the requisite variety of its world as represented by its many sensors. This, however, is not enough. Even simple insects can learn and memorize to some degree. Here we specifically show how a working memory can be implemented using purely neural mechanisms directly linked to the behavior of the robot. Typical conditioning situations can thereby be learned and memorized for some time, very similar to, e.g., odor conditioning in insects.
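
As a toy illustration of how a purely neural mechanism can hold a memory (not the authors' network; values are illustrative), a single rate neuron with strong self-excitation is bistable: a brief stimulus switches it into a persistently active state until an inhibitory reset arrives.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

dt, tau = 0.01, 0.1
w_self, bias = 8.0, -4.0          # strong self-excitation + negative bias -> bistability
r, trace = 0.0, []

for k in range(2000):
    t = k * dt
    stim = 5.0 if 2.0 < t < 2.2 else 0.0        # brief conditioning stimulus
    reset = -10.0 if 15.0 < t < 15.2 else 0.0   # inhibitory reset much later
    r += dt / tau * (-r + sigmoid(w_self * r + bias + stim + reset))
    trace.append(r)
# r stays high long after the stimulus has ended, until the reset switches it off.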
|
14:20-14:40 |
Invited talk:
Adaptive robot skill synthesis through human sensorimotor learning
(Abstract)
Dr. Jan Babic, Jozef Stefan Institute, Slovenia.
In this talk, I will introduce a concept for obtaining complex robot motions
based on human sensorimotor learning capabilities. The idea is to
include the human in the robot control loop and to consider the target
robotic platform as a tool that can be iteratively controlled by a human.
Provided with an intuitive interface between human and robot, the human
learns to perform a given task using the robot. The skilled control of the
robot by the human provides data that are used to construct
autonomous controllers that control the robot independently of the human.
To demonstrate the applicability of the concept, I will present several
examples, including statically stable reaching, a cooperative dynamic
manipulation skill, and adaptive control of exoskeleton robots. In addition,
I will explain how the interfaces built for robot skill synthesis
can be effectively used in the opposite direction, to investigate the
motor control mechanisms employed by the central nervous system during
full-body motion.
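
A minimal sketch of the underlying idea, going from logged human commands to an autonomous controller; the synthetic data, the 4-D state, and the plain ridge regression are placeholders, not the actual setup.

import numpy as np

rng = np.random.default_rng(1)

states = rng.normal(size=(500, 4))                 # logged robot states
true_policy = np.array([0.8, -0.3, 0.5, 0.1])      # the human's (unknown) skill
commands = states @ true_policy + 0.05 * rng.normal(size=500)

lam = 1e-2                                         # ridge regularization
w = np.linalg.solve(states.T @ states + lam * np.eye(4), states.T @ commands)

def autonomous_controller(state):
    # Replays the skill learned from the human demonstrations.
    return state @ w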
|
14:40-15:00 |
Invited talk:
Brain-Machine-Interface Improves Recovery Time from Perturbation in Flight Attitude on a Novel Complex Piloting Task
(Abstract)
Dr. Daniel Callan, NICT, Japan.
The goal of this research is to develop adaptive automation that can improve the response speed of a pilot's motor commands to an unexpected event by using a brain-machine interface (BMI) to decode perceptual-motor intention. The experiment consisted first of a task in which subjects piloted an airplane from the first-person perspective over the ocean. The task was to let the plane fly straight without moving the joystick; at some point there might be a perturbation in flight attitude pushing the nose of the plane toward the water. The presence of a perturbation on a trial was randomly determined. Before each trial the subject decided whether they were going to respond to a possible perturbation by pulling back on the control stick or whether they would passively observe the trial and do nothing in the case of a perturbation. Brain activity during the task was recorded using magnetoencephalography (MEG). Three 10-minute sessions of the perturbation task over the ocean were conducted. An additional session was conducted in which the task was to pilot an airplane through the Grand Canyon, closely following the river below. In some cases there was a perturbation of the elevator forcing the nose down. Subjects were required to recover from the perturbation without crashing while attempting to maintain tracking along the river. The challenge is to decode motor intention to an unexpected perturbation while ignoring ongoing motor control related to the tracking task.

Independent component analysis was conducted on trials from the first two sessions to separate environmental and physiological artifacts from task-related brain activity. For each of the 7 subjects a single independent component was found that showed an averaged evoked response to the perturbation occurring prior to movement of the control stick. For each trial, RMS amplitude was calculated within two consecutive 40 ms windows prior to the time of the peak of the averaged evoked response and one 40 ms window after. The three amplitude values served as features to train a decoder (least-squares probabilistic classification) to classify between trials of the first two sessions in which the pilot intentionally pulled back on the stick in response to a perturbation versus passively watching the perturbation. The spatial filter of the task-related independent component and the weights of the decoder were applied to sessions 3 and 4. The decoder was able to significantly classify the perturbation trials with a motor response versus those with only passive viewing on test session 3, with an average accuracy of 70%. For the Grand Canyon session the 120 ms window of the decoder was incremented in 8 ms steps through each trial, and the first occurrence of decoded motor intention was used as the point at which adaptive automation could be implemented. Average classification of trials with a perturbation versus no perturbation was 73%, with an improvement in response time of 72 ms from implementation of the adaptive automation.

This research demonstrates that a BMI can generalize to more complex, novel tasks and differentiate motor intention in response to an unexpected perturbation from that used during normal maneuvering. Adaptive automation can be used to significantly enhance flight performance without taking control away from the pilot.
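
For readers unfamiliar with this kind of decoding pipeline, a hedged sketch of the windowed-RMS feature extraction followed by a simple stand-in classifier; the data here are synthetic, and the study itself used ICA-cleaned MEG and least-squares probabilistic classification rather than the plain regularized least-squares shown.

import numpy as np

rng = np.random.default_rng(0)
fs = 500                                   # sampling rate [Hz]
win = int(0.040 * fs)                      # 40 ms window = 20 samples

def rms_features(trial):
    # RMS in three consecutive 40 ms windows at the end of the trial segment.
    segs = trial[-3 * win:].reshape(3, win)
    return np.sqrt((segs ** 2).mean(axis=1))

def make_trial(respond):
    # Synthetic trial: "intend to respond" trials carry a larger evoked deflection.
    x = rng.normal(0, 1.0, 200)
    if respond:
        x[-2 * win:] += 2.0
    return x

X = np.array([rms_features(make_trial(r)) for r in [True] * 60 + [False] * 60])
y = np.array([1.0] * 60 + [-1.0] * 60)
Xb = np.hstack([X, np.ones((len(X), 1))])  # add a bias column
w = np.linalg.solve(Xb.T @ Xb + 1e-2 * np.eye(4), Xb.T @ y)

decode = lambda trial: np.sign(np.append(rms_features(trial), 1.0) @ w)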
|
15:00-15:20 |
Coffee break
|
15:20-15:40 |
Invited talk:
Brain Exoskeleton-Robot Interface for Rehabilitation Assistance
(Abstract)
Dr. Tomoyuki Noda, ATR, Japan.
I have been working on developing an assistive robot system with
bio-signal interfaces, such as electroencephalography (EEG) and
surface electromyography (sEMG), that can contribute to brain-machine
interface (BMI) rehabilitation. For BMI rehabilitation, we believe an
EEG-exoskeleton robot system can enhance neuro-connectivity training,
where the exoskeleton robot is connected to the EEG system so that
users can control the exoskeleton robot with their brain activity.
Our exoskeleton platform combines pneumatic and electric
energy sources to provide powerful and compliant force-controlled
actuation. We consider assisting the stand-up movement, which is
one of the most frequent movements in daily life and
also a standard movement in rehabilitation training. The results
show that the exoskeleton robot successfully assisted the user's
stand-up movements, with the assist system activated solely by the
user's motor imagery.
|
15:40-16:00 |
Invited talk:
A Waypoint-based Framework and Data-driven Decoder for Brain-Machine Interface in Smart Home Environments
(Abstract)
Dr. Motoaki Kawanabe, ATR, Japan.
The noninvasive brain-machine interface (BMI) is anticipated to be an
effective communication tool not only in laboratory settings but
also in our daily lives. The direct communication channel created by
BMI can assist aging societies and people with disabilities, and
improve human welfare. In this talk we propose and experimentally
evaluate a BMI framework that combines BMI with a robotic house and
an autonomous robotic wheelchair. Autonomous navigation is achieved
by placing waypoints within the house, and the user performs BMI to
issue commands to the house and wheelchair. This waypoint framework
can offer essential services to the user with an effectively improved
information-transfer rate. Furthermore, a data-driven decoder
utilizing large databases has been developed to deal with the complex
and multi-modal data acquired in the house. Open issues of our system
will also be discussed.
|
16:00-16:20 |
Invited talk:
Brain and body machine interfaces for assistive robot technology
(Abstract)
Joern Vogel, DLR, Germany.
For many people with upper-limb disabilities, simple activities of daily living, such as drinking, opening a door, or pushing an elevator button, require the assistance of a caretaker. A BCI-controlled assistive robotic system could enable these people to perform such tasks autonomously and thereby increase their independence and quality of life. In this context we investigate various methods for giving disabled people control over the DLR Light-Weight Robot, while supporting task execution with the capabilities of a torque-controlled robot.
In this talk I will present our research using two different interfacing techniques. The first is based on the BrainGate2 Neural Interface System developed at Brown University. Using this invasive BCI, we were able to enable a person with tetraplegia to control the DLR LWR-III. In our collaborative study, a participant was able to control the robotic system and autonomously drink from a bottle for the first time since suffering a brainstem stroke 15 years earlier.
The second is surface electromyography (sEMG), which we investigate as a non-invasive control interface, e.g. for people with spinal muscular atrophy (SMA). While people with muscular atrophy at some point are no longer able to actively move their limbs, they can still activate a small number of muscle fibers. We measure this remaining muscular activity and employ machine learning methods to transform these signals into continuous control commands for the robot.
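
A minimal sketch of mapping residual sEMG activity to continuous robot commands; channel count, the envelope feature, and the linear model are illustrative assumptions, not DLR's pipeline.

import numpy as np

rng = np.random.default_rng(2)
n_ch = 4

def emg_envelope(window):
    # Mean absolute value per channel: a standard sEMG amplitude feature.
    return np.abs(window).mean(axis=0)

# Synthetic calibration data: envelope features and the motion the user intended.
envelopes = rng.gamma(2.0, 0.5, size=(300, n_ch))
mixing = rng.normal(size=(n_ch, 2))                # unknown muscle-to-motion map
velocities = envelopes @ mixing + 0.05 * rng.normal(size=(300, 2))

# Ridge regression: continuous 2-D velocity command from the EMG envelopes.
W = np.linalg.solve(envelopes.T @ envelopes + 1e-2 * np.eye(n_ch),
                    envelopes.T @ velocities)

def emg_to_command(raw_window):
    # raw_window: (samples, n_ch) of sEMG -> 2-D velocity command for the robot.
    return emg_envelope(raw_window) @ W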
|
16:20-16:40 |
Zero-calibration BMIs for sequential tasks using error-related potentials
(Abstract)
Jonathan Grizou*, Inaki Iturrate, Luis Montesano, Manuel Lopes, and Pierre-Yves Oudeyer
Do we need to explicitly calibrate Brain Machine
Interfaces (BMIs)? Can we start controlling a device without
telling this device how to interpret brain signals? Can we learn
how to communicate with a human user through practical
interaction? It sounds like an ill-posed problem: how can we
control a device if that device does not know what our signals
mean? This paper argues, and presents empirical results showing,
that under specific but realistic conditions this problem can be
solved. We show that a signal decoder can be learnt automatically
and online by the system, under the assumption that both human
and machine share the same prior on the possible signals'
meanings and on the possible tasks the user may want the device to
achieve. We present results from online experiments on a Brain
Computer Interface (BCI) and a Human Robot Interaction (HRI)
scenario.
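
A toy illustration of the core idea, in which the decoder and the intended target are inferred jointly from unlabeled feedback signals; the synthetic features and the simple separability score are stand-ins, not the authors' algorithm. Each candidate target implies a labelling of the recorded signals, and only the true target yields labels under which the two signal classes (correct vs. error) separate cleanly.

import numpy as np

rng = np.random.default_rng(0)
n_targets, n_steps, true_target = 5, 80, 2

actions = rng.integers(0, n_targets, n_steps)        # device acts, initially at random
correct = actions == true_target                     # the user's (hidden) assessment
feats = rng.normal(0, 1, (n_steps, 2)) + np.where(correct[:, None], 1.5, -1.5)

def separability(feats, labels):
    # Between-class distance over within-class spread (Fisher-like score).
    a, b = feats[labels], feats[~labels]
    if len(a) < 2 or len(b) < 2:
        return -np.inf
    return np.linalg.norm(a.mean(0) - b.mean(0)) / (a.std(0).mean() + b.std(0).mean())

# For each hypothesized target, the implied labels are "this action matched the target".
scores = [separability(feats, actions == h) for h in range(n_targets)]
estimated_target = int(np.argmax(scores))            # recovers true_target here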
|
16:40-17:00 |
Detection of event-less error related potentials
(Abstract)
Jason Omedes*, Inaki Iturrate, and Luis Montesano
Recent developments in brain-machine interfaces
(BMIs) have proposed the use of error-related potentials as a type
of cognitive information that can provide a reward or feedback to
adapt the BMI during operation, either to directly control devices
or to teach a robot how to solve a task. Due to the nature of
these signals, all the proposed error-based BMIs work under the
assumption that the response is time-locked to the known onset of
the event. However, during the continuous operation of a robot,
there may not exist a clear event that elicits the error potential.
Indeed, it is not clear whether such a potential will appear and
whether it can be detected online. Furthermore, calibrating such
a system is not trivial due to the unknown instant at which
the user detects the error. This paper presents a first study
towards the detection of error potentials from EEG measurements
during continuous trajectories performed by a virtual device. We
present an experimental protocol that allows us to train the decoder
and detect the errors in single trials. Further analyses show that
the brain activity used by the decoder comes from brain areas
involved in error processing.
|