Exclusive: How can robot ERICA laugh like humans? Here's what a project researcher said
Exclusive: Divesh Lala, a project member who programmed the script to find and classify solo and shared laughter for robot ERICA, shares the significance of research that could enable robots to laugh like humans
Humans are all set to lose their exclusive claim to humour, an emotion usually attributed only to human beings. A team of research scientists from Kyoto University in Japan has developed an artificial intelligence system for a robot named ERICA that can now emulate human laughter - in the right amount and in the right situation.
The research, ‘Can a robot laugh with you?: Shared laughter generation for empathetic spoken dialogue’, was recently published in the journal Frontiers in Robotics and AI. In an email interview with HT, Divesh Lala, co-author of the research, discusses the essence, importance, and intricacy of the work.
Q. Tell us about the latest development in robot ERICA
We've developed a shared laughter model for conversational robots. When a person laughs during a conversation with our robot, the model decides if the robot should laugh along with the person and also what kind of laugh it should use (a boisterous “mirthful” laugh or a quieter “social” laugh). It does this by analysing how the laugh sounds (e.g. the tone of the voice). We programmed this model inside a robot, so it can laugh along with people when talking with them.
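The decision described above can be sketched in code. This is only an illustrative outline, not the researchers' actual model: the feature names, thresholds, and laugh categories here are hypothetical stand-ins for whatever the trained system learns from real conversational data.

```python
# Hypothetical sketch of a shared-laughter decision rule.
# Feature names and threshold values are illustrative assumptions,
# not the actual model from the ERICA research.
from dataclasses import dataclass

@dataclass
class LaughFeatures:
    mean_pitch_hz: float  # average fundamental frequency of the user's laugh
    energy_db: float      # loudness of the laugh
    duration_s: float     # how long the laugh lasts

def choose_robot_response(f: LaughFeatures) -> str:
    """Decide whether the robot laughs back, and with which laugh type."""
    # Very short or quiet chuckles are often conversational fillers,
    # so the robot stays silent rather than laughing at the wrong moment.
    if f.duration_s < 0.3 or f.energy_db < 45:
        return "no_laugh"
    # Loud, high-pitched laughter suggests genuine amusement,
    # so the robot responds with a boisterous "mirthful" laugh.
    if f.energy_db > 60 and f.mean_pitch_hz > 250:
        return "mirthful_laugh"
    # Otherwise a quieter "social" laugh keeps the conversation warm.
    return "social_laugh"

print(choose_robot_response(LaughFeatures(mean_pitch_hz=300, energy_db=65, duration_s=1.2)))
# mirthful_laugh
```

In the actual system these decisions come from a model trained on recorded dialogues rather than hand-written thresholds, but the input-output shape - acoustic features in, laugh decision out - is the same.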
Q. What is the significance of the research?
To make conversational robots more human-like, they need to behave similarly to people. One of these behaviours is laughter, which people engage in regularly. Until now, most robots haven't had the capability to laugh, let alone choose the right laugh to use. Our model provides a way for them to do this at the right time. Hopefully this becomes a common feature in robots and people can enjoy chatting with them.
Q. What are the major difficulties faced by the researchers during the study?
One of our biggest difficulties was finding good examples of laughter behaviour that we could teach the robot. We had to observe interactions between people and discover when shared laughter happens. This isn't easy, since many laughs are not shared between people; instead they might be quiet chuckles which people use to fill time while they speak. We had to carefully decide the conditions under which a laugh begins and ends, and whether it is shared with the other person.
Another difficulty was deciding which laugh the robot should use. The robot only has a limited number of laughs available, so we had to carefully pick ones that don't sound unnatural. For example, if it responds with a very boisterous laugh, it won't match the conversation and people may feel uncomfortable.
Q. What is your role in the research?
Our team shared many roles in this work, from the initial idea, to gathering data to teach the model, to managing the evaluation experiment. One of my roles was to program a script which could automatically find and classify solo and shared laughter in these videos. The script had to be reliable enough to classify these as accurately as possible.
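The core of such a classification script is deciding whether two people's laughs overlap or follow closely enough to count as shared. The sketch below is a hypothetical simplification, not the researchers' actual script: the function name, interval representation, and one-second window are all illustrative assumptions.

```python
# Hypothetical sketch: label each user laugh as "shared" if the partner
# also laughs within a short window around it, otherwise "solo".
# The window size and interval format are illustrative assumptions.
def classify_laughs(user_laughs, partner_laughs, window_s=1.0):
    """user_laughs, partner_laughs: lists of (start, end) times in seconds,
    as might be produced by annotating laughter in conversation videos."""
    labels = []
    for u_start, u_end in user_laughs:
        # A laugh counts as shared if any partner laugh overlaps the
        # user's laugh, padded by window_s on each side.
        shared = any(
            p_start < u_end + window_s and p_end > u_start - window_s
            for p_start, p_end in partner_laughs
        )
        labels.append("shared" if shared else "solo")
    return labels

print(classify_laughs([(2.0, 3.0), (10.0, 10.5)], [(3.2, 4.0)]))
# ['shared', 'solo']
```

A real annotation pipeline would first have to detect laugh segments in the audio at all, which is the harder part; once the intervals exist, the shared/solo decision reduces to interval logic like the above.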
Q. What are some limitations or scope of further studies in the research?
There's still a lot that we have to do. For example, the robot does not consider what the human has said that made them laugh, but this information could obviously help the robot decide whether to laugh.
We also only used Japanese conversations to teach the model, but as you know, laughter is influenced by culture (Indian people may laugh at different things than Americans or Europeans, and so on), so maybe we need to make laughing behaviour depend on this. Hopefully, though, this work inspires more research into this area of human-robot conversation.