To design or not to design a character for your conversational AI?

Should we define a character for a digital instance?

Or are we thereby promoting the uncanny valley effect?

Or, in the worst case, does this encourage deceiving users, who might then read a consciousness into the conversational AI?

Only recently, a Google engineer once again clearly demonstrated that people tend to read human characteristics into artificial instances – #LaMDA (Google engineer claims LaMDA AI is sentient | Live Science).

In fact, many find the idea of defining a character for, say, a simple chatbot designed to assist with mundane tasks rather strange. And yes, sometimes character definitions go very far and create a complete backstory for the assistant, including where it grew up and how much it earns… One can indeed find this strange.

However, from a design perspective, it is crucial to think about the character of the assistant.

When we communicate with a digital assistant, whether we want to or not, we interpret a character into what is said or written. This attribution of a human personality or human characteristics to something non-human, such as an animal or an object, is called anthropomorphism.

Anthropomorphism is the ability of humans to attribute human motivations, beliefs, and feelings to nonhuman beings. Researchers have found anthropomorphism to be a normal phenomenon in human-computer interaction (Reeves & Nass, 1996; Cohen et al., 2004; Lee, 2010).

According to the “Computers Are Social Actors” (CASA) research paradigm, despite their knowledge that computers do not warrant social treatment, people nevertheless tend to have the same social expectations of computers and exhibit the same responses to computers as they do to human interaction partners (Lee, 2010).

This means that ignoring the character of the assistant does not result in the assistant being perceived as neutral, i.e., as having no character at all.

It is just not necessarily the character that you might have had in mind for it.

In addition, we need a character definition in order to be able to formulate all dialogs and voice outputs according to a consistent logic.

For example, should the assistant be an expert on a certain topic, or can it sometimes not have an answer ready and search a database or forward the question to a human in the form of a ticket? Is it important that the assistant is distant, or should it express a lot of empathy?
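To make this concrete, such a character definition can be captured in one small, explicit structure that every dialog author works from. The following Python sketch is purely illustrative: the names and fields (Persona, Fallback, the empathy label) are my own assumptions, not an established schema.

```python
# A minimal sketch of a character definition as a single source of truth for
# dialog design. All names and fields here are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Fallback(Enum):
    SEARCH_DATABASE = "search_database"     # look the answer up before responding
    CREATE_TICKET = "create_ticket"         # forward the question to a human
    ADMIT_UNCERTAINTY = "admit_uncertainty" # say plainly that no answer is available


@dataclass(frozen=True)
class Persona:
    name: str
    expertise_domain: str   # what the assistant is an "expert" in
    fallback: Fallback      # what it does when it has no answer ready
    empathy: str            # e.g. "distant", "neutral", "empathetic"
    uses_humor: bool


# Example: a support assistant that is empathetic but clearly scoped.
support_bot = Persona(
    name="Ada",
    expertise_domain="billing questions",
    fallback=Fallback.CREATE_TICKET,
    empathy="empathetic",
    uses_humor=False,
)
```

Making these decisions explicit in one place is what allows every prompt and voice output to follow the same logic, instead of each dialog author answering these questions differently.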

Designing the assistant’s dialogs and voice outputs to be coherent is important because people are very sensitive to inconsistencies in conversation. When we communicate with a digital assistant and the dialog logic or voice outputs are not consistently coherent, we get the feeling that we are dealing with a kind of “split personality” in the assistant and find this unpleasant.

So, character is important and should always be defined.

What character traits we should define for an assistant really depends on the particular goal of the assistant, i.e., what kind of task performance it should assist users with and what kind of character would be useful in doing so.

But how far we go with the design and implementation of the character can have a positive or negative effect on the perception of the end users.

If the assistant appears human to a certain extent, for example, if it expresses empathy, uses natural language or humor, then this increases the trust of the users (Smestad, 2018).

At a certain point, however, this tips over into the “uncanny valley” (Mori, 1970) and the effect turns negative (read more about the uncanny valley effect here: Impact of natural and/or human design of conversational AI. – The Psychology of Conversational AI (psyconai.com)).

Furthermore, even if we define essential character traits for the assistant, the assistant must at no time give the impression that it is a human being or a being with consciousness. There are many ways in dialog design to create this important transparency, starting with the first prompt in which the assistant introduces itself. For a more complex assistant, it is also worthwhile to provide factual answers to questions about consciousness and the like.
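As an illustration, here is a minimal sketch of how that transparency could be built into the dialog logic: the first prompt identifies the assistant as software, and questions about consciousness or being human get a factual, non-evasive answer. The keyword check, the wording, and the handle_task_intent handler are hypothetical and only stand in for a real intent model.

```python
# A hedged sketch of transparency in dialog design, assuming a simple
# keyword-based check; a production assistant would use a proper intent model.
INTRO_PROMPT = (
    "Hi, I'm Ada, a digital assistant for billing questions. "
    "I'm a piece of software, so I may hand trickier questions over to a human colleague."
)

CONSCIOUSNESS_KEYWORDS = (
    "are you human", "are you alive", "are you conscious", "do you have feelings",
)

FACTUAL_DISCLOSURE = (
    "No - I'm a computer program. I don't have consciousness or feelings, "
    "but I'm happy to help you with your billing questions."
)


def respond(user_utterance: str) -> str:
    """Route questions about the assistant's nature to a factual disclosure."""
    text = user_utterance.lower()
    if any(keyword in text for keyword in CONSCIOUSNESS_KEYWORDS):
        return FACTUAL_DISCLOSURE
    return handle_task_intent(text)  # hypothetical handler for the actual task dialogs


def handle_task_intent(text: str) -> str:
    # Placeholder for the assistant's normal task handling.
    return "Let me look into that for you."
```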

So, my recommendation would be not to design a digital assistant to be as human as possible – even if we have to define some character traits. However, the challenge then is to design the interaction with the digital assistant in a “natural” way.

This means, for example, that we should incorporate as many rules of human communication as possible into the dialog design and formulate the outputs in natural language. Even though we have to avoid the uncanny valley, users also react negatively when an assistant does not fulfill their expectations of a human conversation.

Cohen, M. H., Giangola, J. P., & Balogh, J. (2004). Voice user interface design. Addison-Wesley Professional.

Lee, E. J. (2010). The more humanlike, the better? How speech type and users’ cognitive style affect social responses to computers. Computers in Human Behavior, 26(4), 665-672.

Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7, 33-35.

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people. Cambridge University Press.

Smestad, T. L. (2018). Personality matters! Improving the user experience of chatbot interfaces: Personality provides a stable pattern to guide the design and behaviour of conversational agents (Master’s thesis, NTNU).
