Musings on AI/LLM anthropomorphism for designers.

  • Anthropomorphism is hard to avoid. We’re basically programmed to do it; this makes it a powerful quality in a service.
  • AI/LLM services are (almost universally) deliberately designed to be anthropomorphic.
  • Designers are stuck in the middle looking both ways.
    • Anthropomorphism is a quality we can choose to design into the products and services we create.
    • And as we increasingly use conversational LLMs as component parts of what we create (infrastructure), we are on the receiving end of the anthropomorphism designed into the LLM.
  • If we don’t counter it, the anthropomorphism designed into LLMs will cloud our judgement and make it harder to take the critical, informed decisions we need when using these technologies in our designs.
    • This may increase the risk of harm to users or the organisation (designers over-estimating the tech), or the risk of under-utilising LLMs and not designing creatively with them (designers pigeon-holing the tech).

De-anthropomorphising our language on AIs

It’s hard not to say “AI” when everybody else does too, but technically calling it AI is buying into the marketing. There is no intelligence there, and it’s not going to become sentient. It’s just statistics, and the danger they pose is primarily through the false sense of skill or fitness for purpose that people ascribe to them.

https://mastodon.social/@Gargron/111554885513300997

It’s peculiar that an organization (OpenAI) founded to worry about things like monstrously inhuman obsessive paperclip maximizers would choose to personify their text generator.

I mean, I mean, I know they worry about an AI so smart that it convinces us meat puppets to work to destroy ourselves. But they chose to frame *their* AI in a way everyone knows (or should know) makes humans less critical, more gullible.

https://mastodon.social/@marick@mstdn.social/111519139028927881

One thing we can do is check the language we use when we talk about AI and LLMs. AIs are really just prediction systems: describe a task and, based on the system’s training, it will predict an outcome.

When we find ourselves talking about the AI creating or writing, try to rephrase it as predicting a desired output.

When we find ourselves talking about the system understanding or knowing, try to rephrase it as has been trained on.

An AI’s pronoun is the system. Avoid he, she, they, etc., as pronouns allow anthropomorphism to creep in.

And hallucinating is just predicting badly.

I’m sure there’s more to this; the list is definitely not exhaustive. But starting is good.
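To make the habit checkable, here is a minimal sketch in Python of how a team might lint product copy or design documentation for anthropomorphic phrasing. The REPHRASINGS table and the flag_anthropomorphisms helper are hypothetical illustrations of the substitutions above, not an established tool, and the term list is deliberately incomplete.

```python
import re

# Illustrative mapping of anthropomorphic phrasing to system-centric
# alternatives, following the substitutions suggested above.
REPHRASINGS = {
    r"\b(creates?|writes?)\b": "predicts a desired output",
    r"\b(understands?|knows?)\b": "has been trained on",
    r"\b(he|she|they)\b": "the system",
    r"\bhallucinat\w+\b": "predicts badly",
}


def flag_anthropomorphisms(copy_text: str) -> list[tuple[str, str]]:
    """Return (matched phrase, suggested rephrasing) pairs found in copy_text."""
    findings = []
    for pattern, suggestion in REPHRASINGS.items():
        for match in re.finditer(pattern, copy_text, flags=re.IGNORECASE):
            findings.append((match.group(0), suggestion))
    return findings


if __name__ == "__main__":
    sample = "The AI understands your question and writes an answer."
    for phrase, suggestion in flag_anthropomorphisms(sample):
        print(f"'{phrase}' -> consider: '{suggestion}'")
```

A crude check like this will over-flag (pronouns in particular have legitimate uses), so it works better as a prompt for human review than as an automated gate.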

Further reading and references

Mirages: On Anthropomorphism in Dialogue Systems

An interesting paper on anthropomorphism in conversational interfaces, and a good starting point.

Production of highly anthropomorphic systems can also lead to downstream harms such as (misplaced) trust in the output (mis-)information. Even if developers and designers attempt to avoid including any anthropomorphic signals, humans may still personify systems and perceive them as anthropomorphic entities. For this reason, we argue that it is particularly important to carefully consider the particular ways that systems might be perceived anthropomorphically, and choose the appropriate feature for a given situation. By carefully considering how a system may be anthropomorphised and deliberately selecting the attributes that are appropriate for each context, developers and designers can avoid falling into the trap of creating mirages of humanity.

https://arxiv.org/pdf/2305.09800.pdf

AI and Trust

Interesting here on the implications of trust in anthropomorphised conversational UIs.

You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.

The natural language interface is critical here. We are primed to think of others who speak our language as people. And we sometimes have trouble thinking of others who speak a different language that way. We make that category error with obvious non-people, like cartoon characters. We will naturally have a “theory of mind” about any AI we talk with.

https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html

On AI being a marketing term

The hype around these systems really serves corporate interests because it makes the tech look powerful and valuable because it distracts from the real issues that I hope regulators will be focusing on and because it makes the AI seem too exotic to be regulatable.

Designers as well as regulators!

https://medium.com/@emilymenonbender/opening-remarks-on-ai-in-the-workplace-new-crisis-or-longstanding-challenge-eb81d1bee9f

(I disagree with her characterisation of ChatGPT as fancy autocomplete; that doesn’t seem a useful analogy.)