Lecturers: Luke Franzke & Florian Bruggisser
...
The topic of Anthropomorphism in Robotics is as old as the field itself. Can and should a robot look like us? Can interactions between humans and machines be more powerful if we can empathise with the machine because of its human-like form or behaviour? We are social animals, and a large portion of our brain is dedicated to social tasks, from recognising emotions to predicting the thoughts, intentions and future actions of the people around us. It therefore makes sense to exploit these capabilities when designing interactions. Anthropomorphism is an intrinsic human tendency (Heider and Simmel 1944), but deliberate attempts at anthropomorphic objects can lead to the uncanny valley phenomenon or to interactions that appear insincere, so care must be taken.
But what of everyday interactive devices that may be informed by anthropomorphic characteristics? Would we consume electronics more sustainably if the devices were more like people? Would we be healthier if our Fitbit got angry with us? What would an envious Roomba act like? This year's Physical Computing major project will attempt to answer some of these questions while physically prototyping interactive devices with empathetic qualities and anthropomorphic behaviours.
Extended description:
There are three common explanations for our tendency to anthropomorphize things:
- It is a perceptual strategy: we assume the world is composed of higher-level, human-like agents rather than simpler ones (Rosch et al. 1976).
- We anthropomorphize to make sense of the world through what we are most familiar with, i.e. ourselves.
- We are searching for relationships and comfort.
"Is That Car Smiling at Me? Schema Congruity as a Basis for Evaluating Anthropomorphized Products"
Topic Readings:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.297&rep=rep1&type=pdf
...