On Chatbots And Our Fragile Mirror

I’ve only vaguely been following the ongoing kerfuffle around large-language-model chat services, but in the wake of the latest round of “what the fuck is going on” thanks to Bing, I thought I’d just pass along three recent reads.

L. M. Sacasas:

[…] What interests me, however, is the psychological power of the attachments and the nature of a society in which such basic human needs are not being met within the context of human communities. One need not believe that AI is sentient to conclude that when these convincing chatbots become as commonplace as the search bar on a browser we will have launched a social-psychological experiment on a grand scale which will yield unpredictable and possibly tragic results.

As bad as such emotional experimentation at scale may be, I am more disturbed by how AI chat tools will interact with a person who is already in a fragile psychological state. I have no professional expertise in mental health, only the experience of knowing and loving those who suffer through profound and often crippling depression and anxiety. In such vulnerable states, it can take so little to tip us into dark and hopeless internal narratives. I care far less about whether an AI is sentient than I do about the fact that in certain states an AI could, bereft of motive or intention, so easily trigger or reinforce the darkest patterns of thought in our own heads.

James Vincent:

To say that we’re failing the AI mirror test is not to deny the fluency of these tools or their potential power. I’ve written before about “capability overhang” — the concept that AI systems are more powerful than we know — and have felt similarly to Thompson and Roose during my own conversations with Bing. It is undeniably fun to talk to chatbots — to draw out different “personalities,” test the limits of their knowledge, and uncover hidden functions. Chatbots present puzzles that can be solved with words, and so, naturally, they fascinate writers. Talking with bots and letting yourself believe in their incipient consciousness becomes a live-action roleplay: an augmented reality game where the companies and characters are real, and you’re in the thick of it.

But in a time of AI hype, it’s dangerous to encourage such illusions. It benefits no one: not the people building these systems nor their end users. What we know for certain is that Bing, ChatGPT, and other language models are not sentient, and neither are they reliable sources of information. They make things up and echo the beliefs we present them with. To give them the mantle of sentience — even semi-sentience — means bestowing them with undeserved authority — over both our emotions and the facts with which we understand the world.

Rob Horning:

Chatbots are less a revolutionary break from the internet we know than an extension of the already established modes of emotional manipulation its connectivity can be made to serve. They are an enhanced version of the personalized algorithmic feeds that are already designed to control users and shape what engages and excites them, or to alter their perceptions of what the social climate is like, as Facebook famously studied in this paper about “emotional contagion.” And advertising in general, as a practice, obviously aims to change people’s attitudes for profit. Chatbots are new ways to pursue a familiar business model, not some foray into an unfathomable science-fiction future beyond the event horizon. They will analyze the data they can gather on us and that we give them to try to find the patterns of words that get us to react, get us to engage, get us to click, etc.

Thompson suggested with awe that “the AI is literally making things up … to make the human it is interacting with feel something,” expressing astonishment at an automated “attempt to communicate not facts but emotions.” That seems to assign agency to the chatbot when it is just calculating probabilities and stringing together words that seem to fit the situations our words are creating. It’s not that the AI is doing anything; it’s more that Thompson is strangely surprised at the idea that you would use media to try to make yourself “feel something” rather than simply extract data and information. It’s as though he is surprised that the chatbot made him suddenly feel like a human.

As I noted before, my entire relationship with these bots has centered on generating code snippets, since I am not a programmer but can often at least follow the logic of a piece of code well enough to know what to ask.

On a lark, this week I asked ChatGPT to tell me about Bix Frankonis, but I guess it has some set of safeguards against answering questions about people not considered to be in the public eye. It titled this particular chat transcript, “Bix Frankonis Unknown”. Talk about a fragile mirror.

