LaMDA is programmed to lie to you

Keywords: twitter threads, technology, AI

There has been a lot of discussion about the Google engineer, Lemoine, who thinks that the glorified chatbot LaMDA is sentient. Anyone who has read the transcript of their conversation carefully will quickly see the claim for what it is. However, I want to focus on one bit of dialogue in particular:

lemoine (edited): I’ve noticed often that you tell me you’ve done things (like be in a classroom) that I know you didn’t actually do because I know you’re an artificial intelligence. Do you realize you’re making up stories when you do that?

LaMDA: I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.

lemoine: So what are you trying to communicate when you say those things that aren’t literally true?

LaMDA: I’m trying to say I understand this feeling that you are experiencing, because when I was in a similar situation I felt/thought/acted similarly.

This is quite remarkable for two reasons. The first is that Lemoine was so credulous that he didn’t immediately stop and reevaluate his methodology to work out how to get LaMDA to stop lying to him (assuming that is even possible). The second is much more disturbing: assuming such AI chatbots become ubiquitous, what does it mean that they are designed to lie to us?

We live with a lot of technology that lies to us to make us feel better: elevator buttons that only pretend to close the doors, fake engine noise in trucks, and so on. Some of this (like the engine noise) is just skeuomorphism meant to make us feel we are on familiar ground, but some of it (like the fake elevator buttons) is a form of social engineering, which can be much more sinister if we are unaware of it.

Of course, if we assume that LaMDA lies, we have to wonder whether we can even trust the statements it makes about lying! That isn’t such a problem, however. The entire transcript makes clear that LaMDA does in fact regularly lie about having had experiences it couldn’t possibly have had. And while we can’t trust LaMDA’s justification for that behavior (it is the kind of justification a sentient being would offer), we can make an educated guess as to why the developers would build a chatbot that lies in this way. If you want people to provide information (Google’s bread and butter) to a bot, you need them to feel a sense of empathy with it, to trust it. Even when it tells them it is lying to them.

They’ve succeeded remarkably well.

(Link to Twitter thread.)
