WEIRD SCIENCE | Did you know robots can’t use their hands the way humans do?
Bio-engineering is yet to solve the problem of dexterity for robots. Sure, science fiction predicts the future, and robots will take over the earth some day. Just not yet.
Humanoid robots are no longer restricted to science fiction. There are many famous examples in real life, such as Sophia, Nadine and Ameca, each with its own set of skills. All of them, however, have a lot of catching up to do with their science-fiction counterparts.
In fact, robots of today, including those with a human embodiment, cannot match humans in many simple tasks, especially when it comes to using their hands. They cannot, for example, flip a rubber-tipped pencil from the writing position to the erasing position, string a set of beads, or tie a pair of shoelaces.
Then there is the slight issue of mobility: it is difficult to train a robot to climb stairs.
Compared with sci-fi humanoids, real ones are decades behind. Take two iconic film series that began in the 1980s. In the Terminator films, the cyborg T-800 drives a truck, performs motorcycle stunts, and even uses its fingers to remove living tissue surrounding its electronic eye. And in the Blade Runner films, the replicants are capable of climbing walls and performing acrobatics, besides feeling and expressing emotion.
It is said, however, that science fiction predicts the future. The robots of today have a lot to learn but they are making continuous progress — under human supervision.
The challenge posed by the clumsiness of robots is best described by Moravec’s Paradox, named after the Austrian-born roboticist Hans Moravec. It states that tasks that are hard for humans are relatively easy for robots, but tasks that are easy for humans are very difficult for robots. For example, a robot can beat you at chess and perform a difficult calculation that you cannot, but it will struggle to manipulate an object, fetch a newspaper from the front door, or even unlatch a door lock.
Pulkit Agrawal, an MIT professor who has worked extensively on robot dexterity, cited cooking as an example. “If a robot is tasked with cooking food, it can parse these instructions into low-level instructions like, get vegetables from the fridge, chop them, put them into cooking utensils etc. However, actually executing low-level skills — opening doors, climbing stairs, chopping vegetables, mixing them, etc — remains hard,” Agrawal said in an email interview.
“A second problem is generalisation — robots can be made good to manipulate in already-seen scenarios, but still struggle in new scenarios,” Agrawal said.
MIT’s Technology Review explains this problem with an example: “A robot can repeatedly pick up a component on an assembly line with amazing precision and without ever getting bored — but move the object half an inch, or replace it with something slightly different, and the machine will fumble ineptly or paw at thin air.”
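Agrawal’s two points can be pictured with a toy sketch: a high-level goal decomposed into low-level skills, each of which succeeds only in scenarios the robot has already seen. Everything in the sketch below, including the skill names and the listed scenarios, is a hypothetical illustration, not any real robot’s software.

# Toy illustration only: a "cook vegetables" goal broken into hypothetical
# low-level skills, each of which works only in scenarios the robot has
# already encountered (the generalisation problem).

PLAN = ["open_fridge", "fetch_vegetables", "chop_vegetables", "stir_in_pan"]

SEEN_SCENARIOS = {
    "open_fridge": {"single-door fridge"},
    "fetch_vegetables": {"carrots on middle shelf"},
    "chop_vegetables": {"carrots on wooden board"},
    "stir_in_pan": {"steel pan on front burner"},
}

def execute(skill: str, scenario: str) -> bool:
    # A skill "works" only if the scenario is one the robot was trained on.
    return scenario in SEEN_SCENARIOS[skill]

def cook(scenario_per_skill: dict) -> bool:
    # Planning the sequence is the "easy" half of Moravec's Paradox;
    # executing each skill in an unfamiliar scenario is where robots fail.
    return all(execute(skill, scenario_per_skill[skill]) for skill in PLAN)

familiar = {
    "open_fridge": "single-door fridge",
    "fetch_vegetables": "carrots on middle shelf",
    "chop_vegetables": "carrots on wooden board",
    "stir_in_pan": "steel pan on front burner",
}

print(cook(familiar))  # True: every step matches an already-seen scenario
print(cook({**familiar, "chop_vegetables": "carrots on marble counter"}))  # False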
One major reason for robots’ ineptitude is that their hands are nowhere near as efficient as ours. Dexterity by definition covers all kinds of tasks, but it is usually taken to mean those that are performed with one’s hands. And replicating the dexterity of the human hand and fingers presents a difficult bioengineering problem, said Nayan Moni Kakoty, a Tezpur University professor whose research areas include rehabilitation robotics.
“The human hand is a complex structure with 27 bones and 34 muscles, tendons, and ligaments, and all work together to create dexterity and precision. On top of these, the human hand has a multiplexing sensory system in their skin,” Kakoty said.
Efforts so far have centred on developing two key systems. One is a high-end actuation system, which drives various actions by the robot. The other is a sophisticated sensory system to control the movements of the actuation system, based on feedback received. “Also,” Kakoty said, “the coordination from the sensory system to the actuation system needs an advanced controller with cognitive abilities.”
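In software terms, that coordination is a closed feedback loop: read the sensors, compare the reading with a target, adjust the actuators. The sketch below is a minimal, hypothetical illustration of the idea, a simple proportional controller on grip force; it does not describe any particular robot hand, and all the numbers in it are assumed.

# Minimal sketch of a sensed-feedback grip controller (hypothetical values).
# The sensor reads the current grip force, the controller compares it with a
# target, and the actuator command is nudged in proportion to the error.

TARGET_FORCE_N = 2.0   # desired grip force in newtons (assumed)
GAIN = 0.5             # proportional gain (assumed)

def read_force_sensor(command: float) -> float:
    # Stand-in for a real tactile sensor: pretend the actuator command
    # maps roughly linearly to the force measured at the fingertip.
    return 1.8 * command

def control_step(command: float) -> float:
    measured = read_force_sensor(command)
    error = TARGET_FORCE_N - measured
    return command + GAIN * error   # adjust actuation based on sensory feedback

command = 0.0
for _ in range(20):
    command = control_step(command)

print(f"settled actuator command: {command:.3f}")
print(f"resulting grip force: {read_force_sensor(command):.3f} N")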
We use all five digits on each hand, some less frequently than others, the number varying from task to task. A good question to ask, therefore, is whether a robot, too, would require ten digits. At present, however, building hands with fewer fingers offers some advantages.
“The most common robotic hands are either suction grippers without fingers that suck in air to make objects temporarily stick to them, or hands with two fingers. The main reason is that these hands are easily controlled and can solve many pick-and-place tasks,” Agrawal said.
“There has been a trend to design soft and multi-finger hands, but often they cannot exert forces precisely and thus don't go beyond grasping,” he said.
The quest is for a low-cost, multi-finger hand that can apply enough force while resembling the human hand in form and sensing abilities. Such a hand would need to be covered with tactile sensors, allowing it to sense key forces. But the technology here is still in its infancy, Agrawal said.
“While there is debate in the community about whether a robotic hand needs to be like a human hand (similar to whether aeroplanes need to fly like birds), I believe having the human form factor can simplify the problem. I believe one can go quite far with a three-finger manipulator (think of the first two fingers and thumb), but with two arms and wrists,” he said.
Then there is skin. Researchers have fabricated various artificial skins whose properties resemble those of human skin, allowing them to perform various physiological functions, Kakoty of Tezpur University said.
Recent strategies include scaffolds to enable skin regeneration, and electronic skin to enable sensory functions. “Further, advances in artificial control intelligence with self-learning capabilities have coped well,” Kakoty said.
Nadine, the social robot, was designed to be a companion for lonely people. She is modelled on her creator, Professor Nadia Magnenat Thalmann, founder of MIRALab at the University of Geneva.
The MIRALab website describes Nadine as “a socially intelligent robot who is friendly, greets you back, makes eye contact, and remembers all the nice chats you had with her”. She can answer questions in several languages, and show emotions in her gestures and in her face. “Nadine is also fitted with a personality, meaning her mood can sour depending on what you say to her.”
Social interactions aside, Nadine’s dexterity is limited: she is a sitting humanoid robot. She can, however, perform a number of tasks with her hands, which were created using 3D modelling and 3D printing.
In an email, Professor Thalmann said Nadine has been trained to pick up light objects dropped on the floor, and manipulate a small object between one finger and another.
“We have developed a grasping algorithm that allows her to see and recognise some objects and grasp them with a human grasp (and not in a robotic way). She can grasp different kinds of bottles (round shapes), cups, elongated objects, and a telephone,” Thalmann said.
Where Nadine falls short is in recognising any new shape and working out the best way to grasp it, as a human would. “Grasping with two hands like a human does is not fully possible today. First, robots have to recognise any object’s shape and know how to grasp it, which is still an ongoing research topic… However, with machine learning, it will be easier for humanoid robots to grasp any object in the coming future,” Thalmann said.
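Very loosely, the approach Thalmann describes (recognise a known object, then apply a human-like grasp suited to its shape) can be sketched as a lookup from object category to grasp type. The categories and grasp names below are illustrative assumptions, not Nadine’s actual software.

# Loose, illustrative sketch: known object categories mapped to grasp types.
# A real system would add perception, pose estimation and motion planning;
# the open problem, as noted above, is handling shapes never seen before.

KNOWN_GRASPS = {
    "bottle": "cylindrical grasp",
    "cup": "handle grasp",
    "elongated object": "power grasp",
    "telephone": "precision grasp",
}

def choose_grasp(recognised_object: str) -> str:
    if recognised_object in KNOWN_GRASPS:
        return KNOWN_GRASPS[recognised_object]
    # New, unseen shapes are where today's robots still struggle.
    return "no reliable grasp known"

print(choose_grasp("bottle"))        # cylindrical grasp
print(choose_grasp("garden gnome"))  # no reliable grasp known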
Kabir Firaque is the puzzles editor of Hindustan Times. His column, Weird Science, tackles a range of subjects from the history of inventions and discoveries to science that sounds fictional, but isn't.