Cognitive Robots that Imagine Others’ Dreams and Make Them Come True

In the Boğaziçi University Department of Computer Engineering, under the supervision of Asst. Prof. Dr. Emre Uğur, there are ongoing projects on cognitive robots and their learning methods. Drawing on the tools of developmental psychology, Uğur focuses on robotics and on how to train robots, taking the way children learn as his inspiration.


After receiving his undergraduate, graduate and doctorate degrees from the Middle East Technical University Department of Computer Engineering, Emre Uğur conducted research at the Advanced Telecommunications Research Institute in Japan between 2009 and 2013. He also worked as a senior research scientist at the University of Innsbruck from 2013 to 2016 and visited Osaka University as an assistant professor in 2015 and 2016.

Emre Uğur is currently working on numerous projects on the sensory, motor and cognitive learning abilities of robots. One of these studies is named “Imagining Others’ Goals in Cognitive Robots” (IMAGINE-COG). Incorporating developmental psychology into his work, he studies developmental robotics, focusing on how robots can learn from children’s behavior. In this project, behaviors are taught to robots along a path similar to the learning process of babies and children. The goal is to enable the robots to imagine, and to draw conclusions from, the actions of a human.

Can we start with a summary of what you do? What kind of projects do you have on developmental robotics and cognitive robots?

Emre Uğur- We have an EU H2020 project named IMAGINE. It is a project with seven collaborators, including one industrial partner. While working on the IMAGINE project to see how we can make use of robots in industrial settings, we are simultaneously working on the IMAGINE-COG project, an interdisciplinary study of the cognitive abilities of robots.

If we look at robots in general, nowadays you can see robots that can walk or run. But their capability to comprehend their environment and to reason, which they need in order to be intelligent and solve problems, is still in its infancy. You can give robots certain orders, like “make this meal.” But a robot that has been taught to cook can only cook. A robot that has been taught to drive can only drive. Let’s imagine a waiter robot. It has to communicate with humans, take their orders, then make a plan and carry the plates from point A to point B. Along the way, it has to deal with unexpected problems, obstacles and new requests. There is no such robot.

There are some ideas about what such a robot could look like, but as I said, the field is still in its infancy. There are no robots with artificial intelligence of the kind we see in the movies.

But in the meantime, big strides are being made in artificial intelligence.

Yes, artificial intelligence is a hot topic. There have been huge improvements in the sub-branches of artificial intelligence (AI). There are AI systems that can recognize images better than humans. There are cars that can drive on their own. But each of them performs a single task; they are specialized in one task. My interest is in carrying robots to a more advanced cognitive stage.

I divide my field into “cognitive robotics” and “robots learning cognitive processes.” These can also include social communication abilities, but my area is more about robot manipulation. For example, let’s say there are some tools in a room and the robots have to learn how to use them. This is a quite difficult problem…

After all, what separates humans from other beings is the ability to use tools effectively. But once you teach a robot how to use a certain tool, how can it transfer that knowledge to another, similar tool? Can we make robots understand the functions of the tools around them and use those tools to perform difficult tasks effectively? That is my main focus.

How can these robots learn?

Let’s talk about learning in manipulator robots first. These robots are designed around a common template: robotic hands with two or five fingers, placed at the end of one or two arms, working with tools those hands can grasp. Of course, this part only covers the robots’ actions. On top of that, camera and perception systems are built so that the robots can sense their environment. As computer engineers, we expect the robots to perceive the environment correctly, take the orders they are given, and execute complicated tasks using the tools around them.

Manipulator robots of this kind can learn tasks ranging from playing ping pong to working on a factory assembly line. In our EU project IMAGINE, we are trying to figure out how to train robots to automate the recycling of electronic devices. Electronic devices contain materials that are dangerous, such as lithium-ion batteries, or precious, such as gold. Traditional recycling follows the “compress and break to pieces” method, which makes it impossible to recover the dangerous or valuable materials without ruining them. There are many companies in Europe that disassemble electronics such as televisions, laptops and even cell phones by hand. We want to include robots in this process.

But it must be said that there is a big difference between putting together and taking apart. When assembling, the objects and the task are fairly simple; the robot can put the pieces together without serious reasoning. When disassembling, however, the robot is handed an object it may or may not have seen before, such as a hard disk. If it takes the hard disk, it has to know how to open it, how the pieces will come apart, and what will come out of the device when it opens them. Then it has to imagine a process for disassembling the disk. This act of imagination is complicated, with many parameters: the parts of the device, their connections to one another, the properties of the materials, the tools the robot is using, and so on. What makes it even more complicated is that we expect the robot to make realistic predictions about disassembling devices it has not encountered before.
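As a toy illustration of this kind of imagined planning, a disassembly sequence can be sketched as a search over learned preconditions, i.e. which parts must come off before which. The part names and dependencies below are invented for illustration and are not from the IMAGINE project’s actual models.

```python
# Toy "imagined disassembly": which parts must be removed before which.
# Part names and dependencies are illustrative assumptions only.
preconditions = {
    "screws": [],                  # nothing blocks the screws
    "lid": ["screws"],             # lid comes off only after the screws
    "platter": ["lid"],
    "circuit_board": ["lid"],
}

def imagine_disassembly(preconditions):
    """Repeatedly remove any part whose preconditions are already removed,
    producing a feasible disassembly order without touching the device."""
    removed, plan = set(), []
    while len(removed) < len(preconditions):
        ready = [p for p in preconditions
                 if p not in removed
                 and all(q in removed for q in preconditions[p])]
        if not ready:
            raise ValueError("no feasible disassembly order exists")
        part = sorted(ready)[0]    # deterministic choice for the sketch
        plan.append(part)
        removed.add(part)
    return plan

print(imagine_disassembly(preconditions))
# → ['screws', 'lid', 'circuit_board', 'platter']
```

The point of the sketch is only that the plan is computed entirely in “imagination”: the robot rolls out consequences of removals before acting, which is the hard part when the device has never been seen before.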

I am told you have another project in the framework of IMAGINE-COG: “Imagining Others’ Goals in Cognitive Robots.” Can you talk about this project with the intriguing title?

The IMAGINE-COG project has a slightly different perspective. Behind it lies a field named developmental robotics, which aims to teach robots abilities through a process similar to the learning process of children and babies. In this field, roboticists want to work together with cognitive scientists, developmental psychologists and neuroscientists. By applying ideas inspired by developmental psychology to robots, the aim is to put them through a learning process similar to a baby’s and make them perform as intelligently as young children do.

Children as young as 8–12 months have the ability to help. Let’s assume you are holding more than one book. Your arms are full of books and you want to put them in a cupboard, but you can’t because your hands are occupied. Children spontaneously go and open the cupboard door. Or say you want to hang up laundry and the clothes peg in your hand drops to the floor. If a child sees that you can’t reach it, they come and hand you the clothes peg. Experts try to explain this with the concept of empathy, but the behavior doesn’t require much empathy. There is an experiment done with balls instead of humans. There is a barrier in between; the balls are moved by magnets underneath, and one ball keeps crashing into the barrier. The child, intending to help the ball, picks it up and puts it on the other side of the barrier, because from watching the ball’s trajectory the child can imagine that it is trying to cross over.

Can a robot guess what another person is trying to do?

Humans instinctively understand what is going on around them at all times. And just like in the example I gave, if what they anticipate doesn’t happen, they take action to make it happen themselves. So humans can read an action and the kind of impact it will have, and when we do something we can simultaneously imagine its outcome. For example, if I take a cup and turn it upside down, I can imagine the coffee inside spilling out. I can watch it while doing it, and even if I close my eyes I can see it happening, because I use my imagination. We owe that to the feedback loop between our actions and our senses, and to our ability to predict our next sensorimotor state.

In short, we can imagine the next step and use it to guide our own actions. Likewise, even if you are the one acting and not me, I can still imagine your next step and your intention using the same mechanism: not by modeling you, but by using myself as a model in my head. Different parts of the brain take on different tasks in this. This is the mechanism we are inspired by: making the robot use its own sensorimotor prediction mechanisms to understand the motives of humans by imagining their next moves.
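The “eyes-closed” prediction described here is essentially a forward model rolled out in imagination. A minimal sketch, with one-dimensional toy dynamics and action names invented purely for illustration:

```python
# Minimal forward-model sketch. The dynamics and action names are toy
# assumptions, not the project's actual learned models.

def forward_model(state, action):
    """Predict the next state from the current state and an action.
    Toy dynamics: the state is a 1-D position; actions shift it."""
    effects = {"push_left": -1, "push_right": +1, "stay": 0}
    return state + effects[action]

def imagine(state, actions):
    """Roll the forward model out over a planned action sequence
    without executing anything -- i.e. imagine the outcome."""
    trajectory = [state]
    for action in actions:
        state = forward_model(state, action)
        trajectory.append(state)
    return trajectory

# The robot "closes its eyes" and predicts where the object ends up.
print(imagine(0, ["push_right", "push_right", "push_left"]))  # → [0, 1, 2, 1]
```

The same rollout machinery serves two purposes: predicting the consequences of one’s own next action, and, as described next, explaining someone else’s observed actions by simulating them with one’s own model.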

Let me give a simple example. Think of a human and a robot sitting opposite each other. The robot has learned how its actions can change the environment and the objects around it. With this kind of knowledge, the robot should be able to tell that if a cup lying on its side is pushed it will roll, and that if an upright cup is pushed it will slide across the surface. Now let’s say another human enters this scenario. That person tries to take an object and put it inside another one, but has trouble reaching the second object. The robot, having learned how to grasp an object and put it inside another, can infer what the human is trying to do. It can exhibit the behavior of the baby in the previous example and help the human reach their goal.
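A hedged sketch of this self-modeling idea: the robot plans with its own forward model toward each candidate goal, then picks the goal whose imagined plan best explains the actions observed so far. The dynamics, action names and scoring heuristic below are illustrative assumptions, not the project’s method.

```python
# Goal inference by self-simulation. All names and dynamics are toy
# assumptions for illustration.

def forward(state, action):
    # Toy 1-D hand dynamics: each action shifts the hand position by one.
    return state + {"reach_left": -1, "reach_right": +1}[action]

def plan_to(start, goal):
    """Greedy plan using the robot's own forward model: step toward the goal."""
    actions, state = [], start
    while state != goal:
        action = "reach_right" if goal > state else "reach_left"
        actions.append(action)
        state = forward(state, action)
    return actions

def infer_goal(start, observed_actions, candidate_goals):
    """Pick the goal whose imagined plan best matches the observed actions,
    scored (as a simple heuristic) by the fraction of the plan carried out."""
    def score(goal):
        plan = plan_to(start, goal)
        if not plan:
            return 0.0
        return sum(p == o for p, o in zip(plan, observed_actions)) / len(plan)
    return max(candidate_goals, key=score)

# The human has reached right twice: which object position are they after?
print(infer_goal(0, ["reach_right", "reach_right"], [-3, 2, 5]))  # → 2
```

Once the most likely goal is identified, the robot can complete the remainder of the imagined plan itself, which is exactly the helping behavior of the baby in the example.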

Is the robot here acting within a predefined scenario?

First of all, the robot has to learn how its actions and their consequences affect the environment. The more its actions and their consequences are in accord with a human’s actions and their consequences, the more efficiently this system can work. That is how it can understand the essence and aim of the human’s actions.

Robots can be trained to learn

So is it possible for robots to act outside of what they are taught and do something completely independent? This has been a topic of debate: can robots act autonomously, and would that be a dangerous thing?

Everything is based on learning; artificial intelligence is also about learning. Say you are training the robot to push and pull, and you put big objects in front of it. If the robot, having learned to push and pull objects, sees me standing in front of it, it may try to push me toward the object instead of pushing the object toward me. That would be something I hadn’t designed. It is called “emergent behavior,” and it is a real possibility; in fact, it is something developmental roboticists are trying to achieve. As I said earlier, robots can be trained for certain objectives, like driving: they can learn scenarios and execute them. The interesting part is that while you are teaching them a scenario, there is a chance they will figure out and do things you haven’t taught them but that they have picked up during the process.

Is there an example of robots acting outside the script?

During my research in Japan, a robot was learning how to manipulate objects, for instance how far an object moves when pushed. After the training was completed, I asked the robot to drop an object. It knew how to push and could perceive the environment and the table. What I expected was for the robot to push four or five times and drop the object off the edge of the table. Instead, the robot replied, “I will take the object and push it.” That didn’t seem logical to me, because I couldn’t see how it could hold and push at the same time. But I wanted to see what it would do. The robot took the object, and when it opened its hand while pushing, the object fell. This was an answer I couldn’t have predicted. But the goal was for the object to end up on the floor, and the robot had found a solution I hadn’t thought of.

But isn’t a robot behaving independently a safety issue in this situation?

In my opinion, there is a long way to go before we encounter safety problems. Because this is a popular topic, and because some trained systems have been successful at solving problems, people tend to assume that artificial intelligence has rapidly entered our lives. But as I’ve said, artificial intelligence and the study of cognitive abilities are in their infancy; it is more accurate to say that they are crawling successfully. So I don’t think this is a problem for the near future. Asimov proposed certain rules to keep humans safe. That could be an option, but it doesn’t seem very realistic to me, because we are not going to code these rules into the robots ourselves; they will learn on their own.

If a robot can learn like a child, then, just as we teach a child, we can teach it ethics. We will probably have to teach robots about ethics and safety.

You can also run realistic simulations. Even if you teach the robot things about ethics and safety, because of emergent behavior the robots can figure things out on their own without your noticing. At the end of the day, what a robot learns is stored inside a computer. You can take that information, place it into different virtual worlds, try different combinations and observe the robot’s actions. I’m not sure humans can simply code such rules into robots. After learning things on their own, robots can become black boxes whose inner workings are unknown to humans. What I am describing is a complicated process, and it is unclear how well it can be made to perform. These systems are very complicated and will become even more complicated in the future.

“We can model children’s development on a robot and in the meantime contribute to developmental psychology”

In which areas are your robots useful? For example, can they be used to make disabled people’s lives easier?

The IMAGINE-COG project doesn’t have a specific target audience. It could be used in the area you mention, but we haven’t set such a target. In developmental robotics we are trying to make child-like robots using developmental psychology. Experiments on humans are usually done with the help of psychology: within a balance of input and output, you try to predict what a person will do. We can’t change people in order to understand their cognitive and behavioral mechanisms, but we can use robots as models to better understand how the human mind works. In this project we are interested in the relationship between learning and imagining. We cannot change what children have learned, but we can change what the robots have learned. We can observe how the robots’ behaviors change when what they have learned is changed, and we can discuss how this bears on models of children’s development. So this is not just about using children and children’s development to make intelligent robots; we can also take the robots as models that help psychologists and theories of the development of human intelligence.

Are we talking about children of a certain age group?

We are trying to play with different parameters and see how this affects prediction, the act of helping, and performance, and to understand what is most effective in this process.

Children in different age groups show different helping behaviors, and this is another thing we are looking into. A similar developmental process can be run with robots to see when this kind of helping intention develops. For a child to act with the intention of helping, the child has to be able to act independently, understand how their actions affect the objects around them, and also understand the relationship between themselves and the adult next to them. All of these are different parameters, and as we work on these parameters with the robots, a behavioral pattern emerges.

I think this project might be more useful for elderly and disabled people, because the robots can predict what a human wants to do but cannot manage. Still, it is hard to place robots in the real world to do complicated tasks. It is also important for robots to be placed in environments such as factories and to work with humans under more flexible conditions.

Isn’t it already common practice to use robots in certain workplaces?

Yes, but all of them are programmed to work in environments without humans. In our case, the robot learns and acts using its own cognitive processes. Such robots working in factories would be a whole new situation.

Just as a human would go and help someone struggling to open a jar, our aim is to make a robot do the same. The most important part is to make the robot use the same cognitive mechanisms to help a person struggling with a task. If we can get to that point, then we can pass the crawling stage and start discussing baby steps.

This article was initially published at the following link:

