“Of course, robots are a common sight these days.” “She’s – she’s not like a machine. She’s like a person. A living person. But after all, she’s much more complex than any other kind. She has to be.”
Philip K. Dick, Nanny
Our perception of the world is a tricky thing. We think of the sight our brain derives from our eyes as connected to reality as such. But in fact, many of the shapes we believe we see are formed in our mind, which extrapolates the sensory stimuli into something that has already been sitting there. When we see a gradient of darker space on the wall of our room, we build from it the image of a straight edge, a line – which is obviously a reduction of the complexity of the colored pixels that originally arrived on our retina. Schemata – pre-set shapes that are partly inherited genetically, partly acquired in our first days after birth – help us reduce the deluge of visual information by several orders of magnitude.
No schema in our perception is stronger than the face. We cannot help but see faces in any shape that has two dots and a line beneath them. And we immediately start communicating with those faces: we interpret their mood from the expressions we perceive, we regard them as happy, sad, or angry. And with very little interaction, such artificial faces become fully accepted as living beings. This is, of course, not only true for humans; most mammals don’t care much whether they interact with a robot or a living creature. I have seen this when working in primate research with great apes; and you can try it yourself: just let a cheap wind-up toy pet walk towards your dog or cat.
Now our devices are becoming more and more versatile. The biomorphic behavior we can make our machines display has long since become convincing.
Bell Labs researchers John Kelly and Carol Lochbaum were the first to teach a computer to sing: “Daisy Bell”, performed in 1961 by an IBM 7094, became the first convincing demonstration of a computer interacting through a human voice.
Things that talk are much more convincing than things giving us text output on a screen. An impressive proof is Procter & Gamble’s talking toothbrush, which persuades its users to brush their teeth more than three times longer.
The same is true for user interfaces. Why would we place smileys in our explore app, if not because they lead to much better interaction? This works whether people are aware of it or not.
The whole thing gets creepy when people develop deeper emotions and bonds towards technology. We might think of robots like Mark 13, looking as bad as they behave. In A.I., Spielberg pictures android robots differently, however: the raging mob of anti-robot activists stops its robot-killing when it encounters David, the fake kid with a child-template face that still convinces everybody even after an X-ray reveals the metal skull beneath.
Recent research shows that Spielberg was perfectly right in his prediction: people find it offensive to be shown violence against “neat” machines. And people have moral concerns about “hurting” an android or animal-like robot if it displays signs of pain – no matter that they are fully aware the machine is not capable of feeling anything at all.
Using this anthropomorphic trick to make products more appealing is nevertheless no guarantee of success for brands. Microsoft’s Bob is a nice example. Bob was supposed to add a layer of “kindness” to the Windows operating system. It was a complete failure, and nobody would remember it save for the fact that Comic Sans was introduced to the world that way. Bob was far too frumpish and ridiculous to stay.
The art of anthropomorphism in machines thus lies not in assaulting people with kitschy faces, but in deploying nonverbal, gesture-like symbols that make our human-machine interactions more efficient.