The shape of things to come: Direct brain control of humanoid robot demonstrated
In a move with implications for first responders and the military alike, researchers show that a robot can be controlled by its human "master's" brain waves
First responders, firefighters, and soldiers have been waiting for this piece of news: University of Washington researchers can control the movement of a humanoid robot with signals from a human brain. Professor Rajesh Rao, who teaches computer science and engineering, and his students have demonstrated that an individual can "order" a robot to move to specific locations and pick up specific objects merely by generating the brain waves that reflect the individual's instructions. The results were presented last week at the Current Trends in Brain-Computer Interfacing meeting in Whistler, British Columbia, Canada.
The controlling individual — a graduate student in Rao's lab — wore a cap fitted with thirty-two electrodes. The electrodes pick up brain signals from the scalp using a technique called electroencephalography (EEG). The person watches the robot's movements on a computer screen via two cameras, one mounted on the robot and another above it. Right now, the "thought commands" are limited to a few basic instructions: a person can direct the robot to move forward, choose one of two available objects, pick it up, and bring it to one of two locations. Preliminary results show 94 percent accuracy in choosing the correct object. The objects available to be picked up are seen by the robot's camera and conveyed to the user's computer screen, where each object lights up at random. When the person looks at the object that he or she wants to pick up and sees it suddenly brighten, the brain registers surprise. The computer detects this characteristic surprised pattern of brain activity and conveys the choice back to the robot, which then proceeds to pick up the selected object. A similar procedure determines the user's choice of a destination once the object has been picked up.
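The "surprise" response described above is commonly handled by averaging the EEG recorded after each flash and picking the option whose average shows the strongest positive deflection a few hundred milliseconds later (the so-called P300 response). The following is a minimal illustrative sketch of that idea, not the UW team's actual system: all names, sampling rates, and the simulated signals are assumptions made up for the example.

```python
# Hypothetical sketch of P300-style object selection, as commonly used in
# brain-computer interfaces. Numbers and signals are simulated, not real EEG.
import random

SAMPLE_RATE_HZ = 250          # assumed sampling rate (samples per second)
EPOCH_SAMPLES = 200           # 0.8 s of EEG recorded after each flash
P300_WINDOW = (75, 125)       # roughly 300-500 ms post-flash, in samples

def simulate_epoch(is_target: bool) -> list[float]:
    """Fake one post-flash EEG epoch; the watched (target) object gets
    a positive bump around 300 ms, mimicking the 'surprise' response."""
    epoch = [random.gauss(0.0, 1.0) for _ in range(EPOCH_SAMPLES)]
    if is_target:
        for i in range(*P300_WINDOW):
            epoch[i] += 3.0   # simulated surprise deflection
    return epoch

def select_object(epochs_by_object: dict) -> str:
    """Average each object's epochs across trials, then pick the object
    with the largest mean amplitude inside the P300 window."""
    def score(epochs: list) -> float:
        n = len(epochs)
        avg = [sum(e[i] for e in epochs) / n for i in range(EPOCH_SAMPLES)]
        lo, hi = P300_WINDOW
        return sum(avg[lo:hi]) / (hi - lo)
    return max(epochs_by_object, key=lambda obj: score(epochs_by_object[obj]))

# Each of two objects flashes 20 times; the user is watching the "cube".
random.seed(0)
epochs = {
    "cube":   [simulate_epoch(True) for _ in range(20)],
    "sphere": [simulate_epoch(False) for _ in range(20)],
}
print(select_object(epochs))  # prints "cube": averaging lifts the target's bump above noise
```

Averaging across many flashes is the key design point: a single epoch is too noisy to classify, but the surprise deflection is time-locked to the flash while the noise is not, so it survives the average.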
The team plans to extend the research to use more complex objects and equip the robot with skills such as avoiding obstacles in a room. This will require more complicated commands from the “master’s” brain and more autonomy on the part of the robot. One goal is to make the robot’s behavior more adaptive to the environment, which means the robot’s programming must enable some kind of learning to occur.
For the demonstration at the conference, the robot was in a different room in the same building as its human master. Physical proximity, however, is not a requirement for this brain-computer system to work: the individual and the robot can be anywhere in the world as long as there is Internet connectivity between their two locations.
Rao’s work has been financed by grants from the Packard Foundation, the Office of Naval Research and the National Science Foundation.
Read more in the Technology News Daily report and on Rao's home page.