Designing Interpersonal Intelligence and Ownership Models for Social Agents
This doctoral thesis investigates how intelligent personal agents and social robots should be designed to behave and interact in social environments. Fifty million Americans now own smart speakers, and over 40% use chatbots regularly. These agents are gaining access to people's personal information, and they need increasingly sophisticated rules about how to behave and how to both share and protect that information. Yet, at the moment, they are designed as one-on-one devices (one agent, one user), whereas in reality they exist in socially complex spaces.

This body of work uses design research approaches to examine how designers might break through the current underlying assumptions of agent and robot design, map a broader design space for future personal agents and robots, and suggest considerations and guidelines for more sophisticated, transparent, and trustworthy social agents.

One aspect of agent design revealed in this work is that of ownership. A sense of ownership over artifacts provides individuals with a sense of control, trust, and comfort. In current designs, it is not clear who an agent belongs to, or whether and how agents create a sense of ownership for their users. Do agents belong to one individual or to a group? Do they belong to the person who uses them, or to the company that provides them? The second part of this thesis examines design opportunities within this space and suggests how different ownership models might affect how agents are perceived and interacted with.
History
Date
- 2022-02-04
Degree Type
- Dissertation
Department
- Human-Computer Interaction Institute
Degree Name
- Doctor of Philosophy (PhD)