There’s a discussion today on Slashdot regarding the threats and benefits of using robots to monitor both children and employees. As reported by ABC News, Microsoft is researching such technology:
The teddy bear sitting in the corner of the child’s room might look normal, until his head starts following the kid around using a face recognition program, perhaps also allowing a parent to talk to the child through a special phone, or monitor the child via a camera and wireless Internet connection.
The plush prototype, on display at Microsoft Corp.’s annual gadget showcase Wednesday, is one of several ideas researchers have for robots. The idea is to create a virtual being that can visit the neighboring cubicle for a live telephone chat even as its owner is traveling thousands of miles away, or let the plumber into the house while its owner enjoys a pleasant afternoon in the sun.
The issues and concerns related to the interaction between humans and robots in intimate surroundings relate directly to an informal seminar I attended yesterday with Prof. Sherry Turkle of MIT’s Program in Science, Technology, and Society. Prof. Turkle spoke about her research for her forthcoming book on “evocative objects” – technologies we use to think with, to think about ourselves and our relationships. Her work has focused on “relational artifacts,” robots designed to forge relationships with people – especially useful for both children and the elderly. Examples include the therapy robot Paro (a baby seal) and Hasbro/iRobot’s My Real Baby.
During our discussion, important value and ethical issues arose in the design and use of such “relational robots.” These robots are meant to create bonds and simulate “authentic” relationships. They react to voices, track their owner’s eyes, respond to and project emotions, and so on. Yet, they remain robots – all these actions and reactions are programmed, pre-determined. So, how do the designers decide which emotions to program and which to omit? In an effort to be realistic, My Real Baby gets happy as well as sad. If you bounce her when she’s happy, she gets happier; if you bounce her when she’s fussy, her fussiness only increases. How should she react, then, if she is abused? It is not hard to imagine a child (especially one who is herself a victim of abuse) violently shaking, striking, or otherwise “abusing” the doll. How should this evocative object respond? Should she show pain? Begin to cry? Eventually “pass out” or even “die” if the abuse continues? How “real” should the robot be in order to create an “authentic” relationship?
[In the end, the designers wanted the doll/robot to react as a child would, with pain and sadness. However, the company’s lawyers stepped in and were concerned that any type of response by the doll might encourage further abuse (stimulus-response theory), and they didn’t want to be accused of actually encouraging abusive behavior. In the end, the doll simply did not react to abuse.]
Other ethical dilemmas related to the design of such robots included whether they should be capable of deception or betrayal, two common features of human relationships. Or, should they “die”? On one hand, the experience of death as part of the life cycle is an important part of psychological development and would add to the “authenticity” of the relationship. On the other hand, one of the benefits of these robots seems to be the avoidance of the emotional damage that can happen when a “real” companion (whether a human friend, or even a companion dog) dies.
Joseph Reagle has blogged his reactions to Turkle’s talk.