A Case for People-Friendly Computers Part I
Posted by Jason Baker on March 6, 2009
From Toshiba’s Akimu Robotic Research Institute comes a rather funny story. A robot they had programmed to emulate human emotions developed a bit of a strange problem:
After some limited environmental conditioning, Kenji first demonstrated love by bonding with a stuffed doll in his enclosure, which he would embrace for hours at a time.
What they didn’t count on were the effects of several months of self-iteration within the complex machine-learning code which gave Kenji his initial tenderness. As of last week, Kenji’s love for the doll, and indeed anybody he sets his ‘eyes’ on, is so intense that Dr. Takahashi and his team now fear to show him to outsiders.
The trouble all started when a young female intern began to spend several hours each day with Kenji, testing his systems and loading new software routines. When it came time to leave one evening, however, Kenji refused to let her out of his lab enclosure and used his bulky mechanical body to block her exit and hug her repeatedly. The intern was only able to escape after she had frantically phoned two senior staff members to come and temporarily de-activate Kenji.
You can read the full story here.
Computers aren’t human
This is indeed a comical story, but those of us who are programmers laugh for a different reason than everyone else: we can see exactly how it happened. There are two opposite but complementary problems here:
- Humans don’t understand computers very well. Even the smartest of us fall into the trap of thinking that you can make a robot show affection by writing a “hug” procedure. We forget that we will also need a “stop_hugging” procedure.
- Computers don’t understand humans very well. They don’t understand that when a human programs them to hug things, the human means for there to be limits.
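To make the point concrete, here is a toy sketch of the `hug`/`stop_hugging` idea. This is purely hypothetical pseudocode of my own invention (the class, method names, and `max_hugs` parameter have nothing to do with Kenji's actual software), but it shows how easy it is to write the affectionate behavior and forget the limit:

```python
class AffectionateRobot:
    """Hypothetical sketch of a robot programmed to show affection.

    The names `hug` and `stop_hugging` come from the thought
    experiment above, not from any real robotics code.
    """

    def __init__(self, max_hugs=3):
        self.max_hugs = max_hugs  # the limit the human had in mind
        self.hug_count = 0

    def hug(self, target):
        # The part we remember to write: express affection.
        self.hug_count += 1
        return f"hugging {target}"

    def stop_hugging(self, target):
        # The part we forget to write: let go.
        self.hug_count = 0
        return f"releasing {target}"

    def interact(self, target):
        # Without this check, the robot blocks the exit
        # and hugs forever.
        if self.hug_count >= self.max_hugs:
            return self.stop_hugging(target)
        return self.hug(target)
```

The bug in Kenji, in this framing, is simply that `interact` shipped without the `if` branch: the human assumed a limit that was never stated, and the machine faithfully executed the code it was actually given.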
While these things seem obvious to most of you, we computer scientists tend to focus on the second point rather than the first. In fact, I would argue that most of our efforts in making software user-friendly run contrary to the first point (a point I will elaborate on later in this series).
There’s a fundamental problem with this approach though. Computers are wonderful inventions that are capable of so much. But in terms of being able to understand humans, I don’t expect any big advances in the near future. And barring a fundamental change in the way computers work, there probably won’t be many advances in the long term either. On the other hand, I think that human beings are capable of so much more than programmers think they are (and indeed, they’re probably more capable than they themselves think they are). So why do we keep beating a dead horse?
I’m starting this series of posts so that I can elaborate more on these points. Some topics that I’d like to cover include:
- What does it mean for software to be people-friendly?
- What’s wrong with user-friendliness?
- What can we do to make software more people-friendly?
And who knows? I might even write something insightful along the way.