IROS 2013 Workshop on Neuroscience and Robotics
Towards a robot-enabled, neuroscience-guided healthy society
Tokyo, Japan, November 3rd, 2013
The list of confirmed invited speakers is as follows (in alphabetical order):
- Prof. Minoru Asada, Osaka University, Japan:
Is artificial emotion really emotional?
(Abstract)
Emotion, a driving force for generating different behaviors, is one of the most fundamental yet most difficult structures/functions to design for robots. Secondary emotions may be differentiated from primitive emotions, and during this developmental process sociality plays an important role in deriving the differentiated emotions. In this talk, I argue how artificial emotion can become more realistic in a social context by presenting several attempts, and discuss future scenarios depicted in science fiction and comics.
- Dr. Jan Babic, Jozef Stefan Institute, Slovenia:
Adaptive robot skill synthesis through human sensorimotor learning
(Abstract)
In this talk, I will introduce a concept for obtaining complex robot motions based on human sensorimotor learning capabilities. The idea is to include the human in the robot control loop and to consider the target robotic platform as a tool that can be iteratively controlled by a human. Provided with an intuitive interface between the human and the robot, the human learns to perform a given task using the robot. The skilled control of the robot by the human provides data that are used to construct autonomous controllers that control the robot independently of the human. To demonstrate the applicability of the concept, I will present several examples, including statically stable reaching, a cooperative dynamic manipulation skill and adaptive control of exoskeleton robots. In addition, I will explain how the interfaces built for robot skill synthesis can be effectively used in the opposite direction, to investigate the human motor control mechanisms employed by the central nervous system during full-body motion.
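A minimal sketch of this idea, under assumed names and toy dynamics: commands recorded while a human tele-operates the robot are regressed onto the robot's state, yielding a controller that can then run without the human in the loop. The abstract does not specify the learning method; a small neural-network regressor is used here purely for illustration.

```python
# Hypothetical sketch of "human-in-the-loop demonstrations -> autonomous controller".
# The toy state/command dimensions and the regressor choice are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend log of a tele-operation session: robot state -> human command.
states = rng.uniform(-1.0, 1.0, size=(2000, 4))              # e.g. joint angles/velocities
human_commands = np.tanh(states @ rng.normal(size=(4, 2)))   # unknown human "policy"

# Fit an autonomous controller on the demonstration data.
controller = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
controller.fit(states, human_commands)

# The learned controller can now drive the robot without the human in the loop.
new_state = rng.uniform(-1.0, 1.0, size=(1, 4))
print("autonomous command:", controller.predict(new_state))
```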
- Dr. Daniel Callan, NICT, Japan:
Brain-Machine-Interface Improves Recovery Time from Perturbation in Flight Attitude on a Novel Complex Piloting Task
(Abstract)
The goal of this research is to develop adaptive automation that can improve the response speed of a pilot's motor commands to an unexpected event by using a brain-machine interface (BMI) to decode perceptual-motor intention. The experiment first consisted of a task in which subjects piloted an airplane from the first-person perspective over the ocean. The object of the task was to let the plane fly straight without moving the joystick; at some point there might be a perturbation in flight attitude pushing the nose of the plane toward the water. The presence of a perturbation on a trial was randomly determined. Before each trial the subject decided whether they would respond to a possible perturbation by pulling back on the control stick, or whether they would passively observe the trial and do nothing in the case of a perturbation. Brain activity during the task was recorded using magnetoencephalography (MEG). Three 10-minute sessions of the perturbation task over the ocean were conducted. An additional session was conducted in which the task was to pilot an airplane through the Grand Canyon, closely following the river below. In some cases there was a perturbation of the elevator forcing the nose down. Subjects were required to recover from the perturbation without crashing while attempting to maintain tracking along the river. The challenge is to decode motor intention in response to an unexpected perturbation while ignoring the ongoing motor control related to the tracking task. Independent component analysis was conducted on trials from the first two sessions to separate environmental and physiological artifacts from task-related brain activity. For each of the seven subjects a single independent component was found that showed an averaged evoked response to the perturbation occurring prior to movement of the control stick. For each trial, RMS amplitude was calculated within two consecutive 40 ms windows prior to the time of the peak of the averaged evoked potential and one 40 ms window after it. The three amplitude values served as features to train a decoder (least-squares probabilistic classification) to classify trials of the first two sessions in which the pilot intentionally pulled back on the stick in response to a perturbation versus those in which the pilot passively watched the perturbation. The spatial filter of the task-related independent component and the weights of the decoder were applied to sessions 3 and 4. The decoder was able to significantly classify the perturbation trials with a motor response versus those with only passive viewing in test session 3, with an average accuracy of 70%. For the Grand Canyon session, the 120 ms window of the decoder was incremented in 8 ms steps through each trial, and the first occurrence of decoded motor intention was used as the point at which adaptive automation could be implemented. Average classification of trials with a perturbation versus no perturbation was 73%, with an improvement in response time of 72 ms from implementation of the adaptive automation. This research demonstrates that a BMI can generalize to more complex novel tasks and differentiate motor intention in response to an unexpected perturbation from that used during normal maneuvering. Adaptive automation can be used to significantly enhance flight performance without taking control away from the pilot.
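A hedged sketch of the feature-and-decoder step described above, with an assumed sampling rate, window placement and synthetic single-component data; logistic regression stands in for the least-squares probabilistic classifier used in the study.

```python
# RMS amplitude in two 40 ms windows before the evoked-response peak and one
# after it serve as features; a classifier separates "pulled back on the stick"
# from "passive viewing" trials. All numbers below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

fs = 1000                       # assumed sampling rate (Hz)
win = int(0.040 * fs)           # 40 ms window length in samples
peak = 300                      # assumed sample index of the evoked-response peak

def rms_features(trial):
    """RMS amplitude in two 40 ms windows before the peak and one after."""
    windows = [trial[peak - 2 * win:peak - win],
               trial[peak - win:peak],
               trial[peak:peak + win]]
    return [np.sqrt(np.mean(w ** 2)) for w in windows]

rng = np.random.default_rng(0)
# Toy single-component trials: "active" trials carry an extra evoked burst.
passive = rng.normal(0.0, 1.0, size=(100, 600))
active = rng.normal(0.0, 1.0, size=(100, 600))
active[:, peak - win:peak + win] += 1.5

X = np.array([rms_features(t) for t in np.vstack([active, passive])])
y = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```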
- Prof. Auke Ijspeert, EPFL, Switzerland:
Motor primitives and central pattern generators: from biology to robotics
(Abstract)
The ability to efficiently move in complex environments is a fundamental property both for animals and for robots, and the problem of locomotion and movement control is an area in which neuroscience and robotics can fruitfully interact. Animal locomotion control is in large part based on central pattern generators (CPGs), which are neural networks capable of producing complex rhythmic or discrete patterns while being activated and modulated by relatively simple control signals. In vertebrate animals, these networks are located in the spinal cord. In this talk, I will present how we model the pattern generators of lower vertebrates (lamprey and salamander) using systems of coupled oscillators, and how we test the CPG models on board amphibious robots, in particular a salamander-like robot capable of swimming and walking. The models and robots were instrumental in testing novel hypotheses concerning the mechanisms of gait transition, sensory feedback integration, and the generation of rich motor skills in vertebrate animals.
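As an illustration of the coupled-oscillator abstraction (not the authors' actual CPG model), the following sketch chains phase oscillators with a fixed inter-segment phase lag so that a travelling wave, of the kind that drives anguilliform swimming, emerges from simple coupling. All parameters are assumed.

```python
# Chain of coupled phase oscillators as a minimal CPG abstraction.
import numpy as np

n = 8                  # number of body segments (assumed)
freq = 1.0             # intrinsic frequency in Hz (assumed)
coupling = 4.0         # coupling strength (assumed)
phase_lag = 2 * np.pi / n
dt = 0.001

phases = np.random.default_rng(0).uniform(0, 2 * np.pi, n)
for _ in range(int(5.0 / dt)):             # integrate 5 s with Euler steps
    dphi = np.full(n, 2 * np.pi * freq)    # common intrinsic drift
    for i in range(n):
        if i > 0:       # pull toward a fixed lag behind the previous segment
            dphi[i] += coupling * np.sin(phases[i - 1] - phases[i] - phase_lag)
        if i < n - 1:   # and toward a fixed lead over the next segment
            dphi[i] += coupling * np.sin(phases[i + 1] - phases[i] + phase_lag)
    phases += dt * dphi

outputs = np.sin(phases)                   # rhythmic drive sent to each joint
# After convergence, the inter-segment phase differences are nearly constant,
# i.e. a travelling wave runs along the body.
print(np.round(np.mod(np.diff(phases), 2 * np.pi), 2))
```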
- Dr. Motoaki Kawanabe, ATR, Japan:
A Waypoint-based Framework and Data-driven Decoder for Brain-Machine Interface in Smart Home Environments
(Abstract)
The noninvasive brain-machine interface (BMI) is anticipated to be an effective tool of communication not only in laboratory settings but also in daily living. The direct communication channel created by BMI can assist aging societies and people with disabilities and improve human welfare. In this talk, we propose and evaluate a BMI framework that combines BMI with a robotic house and an autonomous robotic wheelchair. Autonomous navigation is achieved by placing waypoints within the house; on the user side, the user issues BMI commands to the house and wheelchair. This waypoint framework can offer essential services to the user with an effectively improved information-transfer rate. Furthermore, a data-driven decoder utilizing large databases has been developed to deal with the complex and multi-modal data acquired in the house. Open issues of our system will also be discussed.
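A toy illustration of the waypoint idea, with purely assumed room names and coordinates: a small set of decoded BMI commands selects pre-defined waypoints, and the autonomous navigation layer does the rest, which is what raises the effective information-transfer rate.

```python
# Hypothetical dispatcher mapping decoded BMI commands to waypoints or appliances.
waypoints = {
    "kitchen":  (2.0, 4.5),
    "bedroom":  (7.5, 1.0),
    "entrance": (0.5, 0.0),
}

def on_bmi_command(command: str) -> None:
    """Dispatch a decoded BMI command to the autonomous navigation layer."""
    if command in waypoints:
        x, y = waypoints[command]
        print(f"navigating autonomously to {command} at ({x}, {y})")
    else:
        print(f"house appliance command: {command}")   # e.g. lights, TV

on_bmi_command("kitchen")
on_bmi_command("lights_on")
```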
- Prof. Abderrahmane Kheddar, AIST, Japan:
Mind-Controlled Humanoid Robots and Physical Embodiment
(Abstract)
This talk will address our ongoing research in robotic embodiment and thought-based control of a humanoid robot using a brain-computer interface (BCI). We efficiently integrate techniques from computer vision and task-function-based control together with the BCI into an immersive and intuitive control application, despite the well-known shortcomings of BCI. Our approach is based only on steady-state visual evoked potential (SSVEP) patterns. The user is fed back on-line with the video stream recorded from the humanoid's embedded camera. Images are then segmented and clustered, and learned objects are recognized from them. 3D models of the recognized objects are used to superimpose their computer-graphics representations using augmented reality techniques. The 3D models flicker at frequencies automatically assigned by our system. Once the user's attention is directed to a given object, the SSVEP classifier reports it. Based on the affordance concept, the task associated with the object of interest is sent to the stack-of-tasks controller of the humanoid robot. This approach is assessed in a user experiment in which several subjects successfully controlled the HRP-2 humanoid robot in a scenario involving both grasping tasks and steering. The user experiences and the interface performance are presented and give rich insight into future research to improve and extend such interfaces.
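A hedged sketch of the SSVEP selection step, with assumed flicker frequencies, sampling rate and synthetic EEG: the object whose flicker frequency dominates the EEG spectrum is taken as the attended one, after which its associated task would be dispatched to the robot controller.

```python
# Frequency-power SSVEP detection on a synthetic signal (illustrative only).
import numpy as np

fs = 256                                                   # assumed EEG sampling rate (Hz)
object_freqs = {"can": 6.0, "door": 8.0, "switch": 10.0}   # assumed flicker frequencies

t = np.arange(0, 4.0, 1.0 / fs)            # 4 s analysis window
rng = np.random.default_rng(0)
# Synthetic EEG: the user attends the object flickering at 8 Hz.
eeg = 0.8 * np.sin(2 * np.pi * 8.0 * t) + rng.normal(0, 1.0, t.size)

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)

def band_power(f0, half_width=0.5):
    """Summed spectral amplitude in a narrow band around a flicker frequency."""
    band = (freqs > f0 - half_width) & (freqs < f0 + half_width)
    return spectrum[band].sum()

selected = max(object_freqs, key=lambda name: band_power(object_freqs[name]))
print("attended object:", selected)        # the associated task is then dispatched
```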
- Prof. Thomas Mergner, Neurologische Klinik, Germany:
Human-derived sensor fusion principles used to control biped balancing of external disturbances in a humanoid robot
(Abstract)
Background: Human sensorimotor control is very complex, and current research still faces problems when it comes to re-embodying the hypothesized control, in the form of control models, into robots for direct human-robot comparisons. We simplified the research task by studying reactive (sensor-driven) postural reactions to exactly known external disturbances. Control of human biped stance during external disturbances lends itself to this research as a simple sensorimotor prototype.
Material and Methods: System analysis approaches, with computer modeling iterating back and forth with human experiments, were used in: (1) investigations of human sensory systems, mainly vestibular and joint proprioceptive, using open-loop psychophysics of self-motion perception; (2) investigations of human postural responses to tilt and translation of the support surface and to pull stimuli having an impact on the body; (3) modeling of the human postural data using sensor fusion principles derived from human perception and comparison of model simulation data with the human postural data; (4) re-embodiment of the model into a humanoid robot for direct human-robot comparisons in the human laboratory.
Results: (1) Psychophysics suggested that human self-motion perception (a) uses sensory transducer signals to reconstruct the kinematic and kinetic variables of the body-world interaction and (b) then uses these variables to reconstruct the external disturbances having an impact on the body. (2) Using these sensor fusion algorithms allowed implementation of the human postural response findings in a simple sensory feedback model of human stance control. The model consists of (i) a servo loop for local joint control and, superimposed on the servo, (ii) long-latency loops for disturbance estimation and compensation. (3) Model simulations delivered data that resembled the human data. (4) This similarity also held when the model was used to control the robot, with its noisy and inaccurate sensors, and when the simulations were performed in the human test bed.
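A much-reduced sketch of the two-level structure in result (2), with assumed parameters and single-link dynamics rather than the authors' published model: a local PD servo holds the joint, while a slower, delayed loop estimates the external disturbance torque and feeds a compensating torque forward.

```python
# Inverted-pendulum stance with a local servo plus delayed disturbance compensation.
# All numbers and the estimator are illustrative assumptions.
import numpy as np

dt, T = 0.001, 5.0
J, m, g, h = 80.0, 80.0, 9.81, 1.0        # inertia, mass, gravity, CoM height
kp, kd = 2000.0, 400.0                    # servo (PD) gains
delay = int(0.18 / dt)                    # ~180 ms "long-latency" loop delay

theta, omega = 0.0, 0.0                   # body lean angle and angular velocity
est_buffer = [0.0] * delay                # delayed disturbance estimates
log = []

for k in range(int(T / dt)):
    t = k * dt
    disturbance = 20.0 if 1.0 < t < 3.0 else 0.0       # external pull torque (N m)

    servo = -kp * theta - kd * omega                    # local joint servo
    compensation = -est_buffer.pop(0)                   # delayed disturbance compensation
    torque = servo + compensation

    # Single inverted-pendulum dynamics with gravitational toppling torque.
    alpha = (m * g * h * np.sin(theta) + disturbance + torque) / J
    # Estimate the external torque from the measured acceleration and known terms.
    est_buffer.append(J * alpha - m * g * h * np.sin(theta) - torque)

    omega += alpha * dt
    theta += omega * dt
    log.append(theta)

print("max lean angle (rad):", round(max(np.abs(log)), 4))
```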
Discussion & Conclusion: The approach of deriving sensor fusion principles from human self-motion perception and of using these principles to model human sensor-based postural responses to external disturbances may help to better understand human sensorimotor control. Its extension into a 'neurorobotics' approach provided a proof of principle of the sensorimotor control model and demonstrated certain advantages of this control, such as versatility in the face of changing disturbance scenarios, high robustness in terms of fail-safety, low loop gain and low passive resistance. Currently the model and robot are being extended to include voluntary movements, control policies (including fusion of predicted with sensor-derived disturbance estimates), and a modular architecture for multi-DOF systems.
- Prof. Akira Murata, Department of Physiology, Kinki University Faculty of Medicine, Japan:
Shared body for self and others in the brain
(Abstract)
It has long been known that the dorsal visual stream of the two visual pathways, projecting to the parietal cortex, is related to visuospatial perception. However, the parietal cortex is not the terminal station of the dorsal visual stream; it has strong reciprocal anatomical connections with the premotor cortex. The spatial information in the parietal cortex is sent to the premotor cortex, and the final goal is visuomotor control. On the other hand, an on-line representation of one's own body (the body schema) is formed in this sensorimotor process, and this map can dynamically change depending on sensorimotor experiences and learning. This network is considered to integrate efference copy/corollary discharge with sensory feedback, an integration that is an essential factor both for sensorimotor control and for the body schema. Our recent findings suggest that one's own body schema also provides a basic reference frame for mapping others' bodies. This means that the neuronal substrates for monitoring one's own actions are shared with the system for recognizing and understanding others' actions. In this lecture, I will discuss the body schema as a key link between sensorimotor control and higher-order cognitive functions such as body recognition.
- Dr. Tomoyuki Noda, ATR, Japan:
Brain Exoskeleton-Robot Interface for Rehabilitation Assistance
(Abstract)
I have been working on developing an assistive robot system with bio-signal interfaces such as the electroencephalogram (EEG) and surface electromyogram (sEMG), which can contribute to brain-machine interface (BMI) rehabilitation. For BMI rehabilitation, we believe an EEG-exoskeleton robot system can enhance neuro-connectivity training, where the exoskeleton robot is connected to the EEG system so that users can control the exoskeleton robot using their brain activity. Our exoskeleton platform combines pneumatic and electric energy sources to provide powerful and compliant force-controlled actuation. We consider assisting the stand-up movement, which is one of the most frequent movements in daily life and also a standard movement in rehabilitation training. The results show that the exoskeleton robot successfully assisted users' stand-up movements, with the assist system activated only by the user's motor imagery.
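An illustrative sketch, using assumed band-power features and synthetic data, of the trigger logic: a classifier detects motor imagery from EEG features, and the exoskeleton's stand-up assist is enabled only when imagery is decoded.

```python
# Motor-imagery detection gating an (assumed) exoskeleton assist mode.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Toy band-power features (e.g. mu/beta power over motor channels):
# motor imagery typically shows event-related desynchronization (lower power).
rest = rng.normal(loc=1.0, scale=0.2, size=(200, 8))
imagery = rng.normal(loc=0.7, scale=0.2, size=(200, 8))

X = np.vstack([imagery, rest])
y = np.array([1] * 200 + [0] * 200)
clf = LinearDiscriminantAnalysis().fit(X, y)

def exoskeleton_update(features):
    """Enable stand-up assist torque only when motor imagery is decoded."""
    if clf.predict(features[None, :])[0] == 1:
        return "assist ON: ramp up stand-up support torque"
    return "assist OFF: stay transparent (zero-torque mode)"

print(exoskeleton_update(rng.normal(0.7, 0.2, size=8)))   # imagery-like trial
print(exoskeleton_update(rng.normal(1.0, 0.2, size=8)))   # rest-like trial
```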
- Prof. Helge Ritter, Neuroinformatics, Bielefeld University, Germany
- Joern Vogel, DLR, Germany:
Brain and body machine interfaces for assistive robot technology
(Abstract)
For many people with upper limb disabilities, simple activities of daily living, such as drinking, opening a door, or pushing an elevator button, require the assistance of a caretaker. A BCI-controlled assistive robotic system could enable these people to perform such tasks autonomously and thereby increase their independence and quality of life. In this context we investigate various methods to provide disabled people with control over the DLR Light-Weight Robot, while supporting task execution with the capabilities of a torque-controlled robot.
In this talk I want to present our research using two different interfacing techniques. The first is based on the BrainGate2 Neural Interface System developed at Brown University. Using this invasive BCI, we were able to enable a person with tetraplegia to control the DLR LWR-III. In our collaborative study, a participant was able to control the robotic system and autonomously drink from a bottle for the first time since she suffered a brainstem stroke 15 years earlier.
The second is the use of surface electromyography (sEMG) as a non-invasive control interface, e.g. for people with spinal muscular atrophy (SMA). While people with muscular atrophy are at some point no longer able to actively move their limbs, they can still activate a small number of muscle fibers. We measure this remaining muscular activity and employ machine learning methods to transform these signals into continuous control commands for the robot.
- Prof. Florentin Worgotter, Univ. of Goettingen, Germany:
Robots under Neural Control: How to create a neuron-based learning & memory system for behaving machines?
(Abstract)
For several years we have been working to show the power of implicit, neural control for behaving artificial systems. We were able to demonstrate reactive as well as adaptive control in our 18-DOF hexapod robot AMOS WD6, which leads to more than 10 different behavioral patterns in response to combinations of the robot's sensory input signals. This as such is a difficult problem, as there are no rules or explicit control laws present in this system. Rather, AMOS behaves like many insects by directly responding appropriately to the requisite variety of its world as represented by its many sensors. This, however, is not enough. Even simple insects can learn and memorize to some degree. Here we specifically show how a working memory can be implemented using purely neural mechanisms directly linked to the behavior of the robot. Typical conditioning situations can thereby be learned and memorized for some time, very similar to, e.g., odor conditioning in insects.
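A tiny sketch of how a purely neural working memory can behave, with illustrative parameters unrelated to the AMOS controller itself: a single rate neuron with strong self-excitation is bistable, so a brief cue switches it on and it keeps firing after the cue ends, until an inhibitory reset arrives.

```python
# Bistable rate neuron acting as a one-bit working memory (illustrative only).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_self, bias, tau, dt = 10.0, -5.0, 0.05, 0.001
r = 0.0                                     # firing rate of the memory neuron
trace = []

for k in range(int(3.0 / dt)):
    t = k * dt
    cue = 6.0 if 0.5 < t < 0.6 else 0.0     # brief conditioning cue
    reset = -8.0 if 2.5 < t < 2.6 else 0.0  # inhibitory reset input
    drive = w_self * r + bias + cue + reset
    r += dt / tau * (-r + sigmoid(drive))
    trace.append(r)

# Activity stays high long after the cue, i.e. the network "remembers" it,
# and drops back to baseline after the reset input.
print("rate at t=2.0 s:", round(trace[int(2.0 / dt)], 2))
print("rate at t=2.9 s:", round(trace[int(2.9 / dt)], 2))
```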