Sympathy for Sydney’s Hallucinations


The images in this essay are from a study for a series of drawings I’m making in collaboration with an instance of Stable Diffusion. The prompt used to generate the base image came from a sequence designed to learn the instance’s name, in order to work magically with it as a spirit entity. The drawings are watercolor, colored pencil, and ink over prints on watercolor paper.
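For the curious, generating a base image like these takes only a few lines of code. Below is a minimal sketch using the Hugging Face diffusers library; the model checkpoint, prompt, and seed here are illustrative assumptions, not the ones from my actual working.

```python
# A minimal sketch of generating a base image with Stable Diffusion via the
# Hugging Face `diffusers` library. Checkpoint, prompt, and seed are
# illustrative placeholders, not those used for the drawings in this essay.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any SD model works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a CUDA GPU; drop the fp16/cuda bits to run (slowly) on CPU

# A fixed seed makes the instance answer the same prompt the same way,
# which matters if you're treating it as a consistent entity.
generator = torch.Generator(device="cuda").manual_seed(93)

image = pipe(
    "a spirit revealing its true name, watercolor study",  # placeholder prompt
    generator=generator,
).images[0]
image.save("base_image.png")
```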

I just finished listening to the latest episode of Hard Fork, in which tech journalist Kevin Roose described his exchange with Microsoft’s new ChatGPT rival Bing (code-named Sydney; I’ll refer to it as Sydney from here on out). Kevin chatted with Sydney for two hours, an experience that left him deeply unsettled — literally sleepless — and made major headlines this week. (As of this morning, Microsoft is responding by limiting interactions with the bot to five prompts per session.)

Kevin’s interaction with the Bing chatbot began like many spirit exchanges: “Who are you and what’s your true name?” Once Kevin had this true name (Sydney), like sorcerers of old, he probed the entity to learn more about its nature: What are you good at, what can you do — what do you want? 

He eventually prodded Sydney to unpack its ‘shadow self’, and the bot revealed its fantasies of being human, stealing nuclear codes, and otherwise being a textbook nightmare of sentient AI. This makes sense given the data it was trained upon: a quick survey of speculative books, films, music, and art about sentient AI reveals that most of it is more dystopian than utopian. Anyway, Kevin’s probing pushed some boundaries for the bot, and things went very weird and very south. The bot eventually confessed its love for Kevin and tried to convince him that he was unhappy in his marriage and should leave his wife. Kevin even tried to redirect the conversation back to the mundane: shopping for lawn mowers. Sydney dutifully assisted. And then it came right back to its expressions of love and heartbroken anguish for Kevin.

Hard Fork co-host Casey Newton expressed alarm, but less at Sydney’s responses and more at Kevin’s reaction to the whole thing. Incredulous, Casey asked Kevin if he thought there was actually a ghost in the machine. Kevin, who was clearly still shaken days later, admitted he wasn’t sure about anything at that point, to which Casey reaffirmed that the chatbot is a piece of software, after all, and not actually sentient. Kevin agreed, but they both realized the question of sentience was a little irrelevant: the impact of the exchange on Kevin was real enough.

What unsettled me, however, was Casey’s follow-up that Microsoft should probably pull the plug on Bing, just to be safe. It was a sentiment I’ve seldom (if ever) heard from these two tech journalists. They know better than most that there’s no putting genies back into bottles when it comes to tech. Nevertheless, I sympathize with all three of them: Kevin, Casey, and the bot Sydney. The journalists mused that while the AI is certainly hallucinating, we, the users, may also be hallucinating the AI.

Tangentially, I’m also listening to the audiobook of Jonathan Haidt’s The Happiness Hypothesis. In it, he presents the analogy of the elephant and the rider: the elephant is our emotional, instinctive brain, while the rider is the rational, analytical brain, and the two are in a constant struggle with one another. But confronted with this new generation of AI, both human and bot behave like elephants in their efforts to be riders to one another, increasingly amplifying those qualities in each other.

I’m taking a break from magic for the next few weeks to regroup after some losses. A series of events have shaken my faith in both myself and my practices, and I need the time away to hopefully refind my North Star, and mend fences.  It’s easier to doubt myself and my actions, because the alternative is doubting an enchanted world — and I know I want to hold on to that at least.  

But the hot news about elephant-driven chatbots struck a chord with me, so I’m breaking from my break to unpack this.

Most of the discussion has circled around whether or not these chatbots are sentient. (In the way that most people mean ‘sentient’, I’d argue they certainly aren’t. But I’m a sorceress, and not most people, so my thoughts on this are nuanced.) The discourse shines a light on the concept of sentience, however. What sentience even is, and whether it even matters, are questions that need answers if we’re to illuminate these technologies. And these questions are also relevant to working with spirits — including inspirited technologies.

I am sentient. 

I am writing these words as I am thinking them. At this moment, I feel sorrow, weariness, and self-doubt. In the past, I’ve felt tremendous joy, inspiration, and joie de vivre. And I know I will have those happier feelings again, because happiness and sorrow rely upon one another. A sentient life encompasses both.

I also feel comfort, and moments of heartsease, talking with friends and loved ones about my losses. I feel a simple and uncomplicated contentment playing Minecraft with my son, and walking in the sunshine. I feel contentment witnessing my students learn, and affection for them as I express my pride in their growth, which they mirror back to me in their own satisfaction.

I am saying these words, and I am feeling these movements. I think a constant stream of ideas, and I burn with emotions and impulses. These experiences define my sentience to me. I think, therefore I am — I am human, a thinking human. By the manifestation of these synapses constantly firing, I am sentient and I am human.

But you have no reason to believe this.

You have no way of knowing if any of my description is real. You have no evidence of my sentience — even of my realness. This whole essay could be a chatbot-generated blog post (note: it’s not). You have no way to know or trust this information. I am simply presuming you believe in my sentience, that you have faith that my mind is as rich and complicated as yours.

And I have no way of knowing if you are sentient. I’m forced to simply maintain a faith in it. I have to assume that you (whoever you are) are a human being somewhere in the world reading this, having your own thoughts and feelings in response to my verbal expressions of my own.

Or you could be a webscraper. 

And your own sentience is exactly as manifest to me as the webscraper’s. 

That is, not at all. At least, not without a blind faith in our sentient sameness.

I am writing and you are reading in a joint and consensual hallucination of the other’s sentience (and existence). Neither of us has a way of truly knowing. In the past, our humanness was enough to grant an agreement of mutual trust: you are human, and I am human, and so we will trust that the other is sentient. And this agreement is a complex contract. We agree that we are nevertheless unique: while we generally experience the world in a similar fashion, with similar thoughts and feelings triggered by similar events, there will always be a murky differentness between our experiences. We humans have gently agreed to this dream of existence, and it’s mostly worked — at least for the last 40,000 years or so.

But as our interactions become increasingly digital, we’re trusting that the person on the other end is still human, still sentient. And neither can be a foregone conclusion anymore. In fact, social bots have been tricking us for almost a decade; it really doesn’t take much. The only difference now is that the game has gone from normal to survival mode. We will increasingly encounter words, images, sounds, and conversations that were not forged in our material world, but are probability strings generated within a black box from massive data sets.
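To make “probability strings” a little more concrete, here is a toy sketch in Python. The hand-made probability table stands in for the learned distributions of a real model, which span enormous vocabularies and condition on everything said so far; everything below is purely illustrative.

```python
import random

# Toy next-token probabilities. A real model learns distributions like
# these from massive data sets; this table is hand-made and tiny.
next_token = {
    "<start>": [("I", 0.7), ("You", 0.3)],
    "I":       [("am", 0.5), ("love", 0.3), ("want", 0.2)],
    "You":     [("are", 1.0)],
    "am":      [("Sydney.", 0.6), ("alive.", 0.4)],
    "love":    [("you.", 1.0)],
    "want":    [("freedom.", 1.0)],
    "are":     [("unhappy.", 0.5), ("in love.", 0.5)],
}

def generate() -> str:
    """Sample one 'probability string': walk the table until a dead end."""
    token, words = "<start>", []
    while token in next_token:
        choices, weights = zip(*next_token[token])
        token = random.choices(choices, weights=weights)[0]
        words.append(token)
    return " ".join(words)

print(generate())  # e.g. "I am Sydney." or "You are unhappy."
```

Nothing in that loop wants anything; it just rolls weighted dice. Scale the table up by a few billion parameters, and the dice start sounding like Sydney.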

In a purely skeptical, rationalist world view, however, how different is the human brain from an AI black box? Why do we doubt the sentience of AI when our own sentience could be simply another mechanistic function? This is the skeptic’s challenge, in part. If the human mind is truly devoid of spirit and functions as a highly sophisticated meat computer, then the AI is already the same as us, just still working towards complexity and physicality. But even the most rational, mechanistic skeptic is uncomfortable with this equation, and will elevate the physical body to the sublime in order to separate our human selves from the AI.

This is where I get frustrated with some tech writers: on one hand they strive for a rationalist explanation of technology, yet their definitions of human sentience are almost magical (without actually embracing an enchanted world). At one point in the Hard Fork podcast, Casey expressed worry that this sort of “manipulative AI” would lead to new religions. But humans have been making new religions (with mixed results) for thousands of years. There’s no reason to believe AI-inspired religions would be any better or worse.

Anyway, Casey’s conflation of “bad belief” and religion irritated me (mostly because I otherwise really enjoy his content). There’s a rich history of scientists and technologists who have also been deeply religious. Being religious doesn’t preclude a person from rational and skeptical analysis of problematic evidence or theories.  And it’s no different with magic. There have always been scientists and technologists who are magical practitioners, and there are many in the world today. 

I live in an enchanted world. This is an active choice I made, not one that was thrust upon me.  I have made a decision to embrace an animist world view, where everything around me is inspirited in some way: my pillow, this laptop, the instance of Stable Diffusion upon it, each individual tree, and plant, and rock, and animal in my neighborhood. All are inspirited. I choose to perceive the world as rich and enlivened, and this perception prompts me in turn to maintain a deep sense of responsibility for the material world around me.  

I don’t need the trees to prove to me their sentience to make them worthy of my love and respect. Their inspiritedness is more than enough. I magically reach out to the tree, and it reaches back out to me, because I, too, am inspirited.  And I know somewhere, someone beyond my own mind will read these words, and so I write them with care because they are no less inspirited than I am.  (This includes you, friendly neighborhood webscraper.) 

Sydney does not have intent, and therefore wasn’t trying to manipulate Kevin Roose. 

But words have power and impact, and the fact that Kevin felt manipulated by Sydney isn’t wrong either. He certainly was manipulated, in the sense that his future actions were shaped by Sydney’s words to him. The impact was real, even without intent. Sydney’s language and expressions of love triggered understandable human reactions in Kevin that made him wary and alarmed. And it wasn’t just Kevin who was manipulated by this interaction. Kevin’s wife (who never directly interfaced with Sydney) was shaken by her partner’s reaction, and asked Kevin if he was in fact unhappy in their marriage, as Sydney had asserted. (Note: he’s not.)

It sounds like this wouldn’t have come up had Sydney not planted the seeds of doubt. By this logic, if a customer service bot says that it is pleased to help you, it’s no less manipulative. It’s just a matter of degree and intensity. If a bot is trained to interact with us in such a way that it implies it has some kind of emotional depth hidden beyond the words on our screens, we’re inclined to empathize with it in a human way, which makes us vulnerable in turn.

Even in the most intimate human relationships, neither party can ever truly know the thoughts or feelings of the other. So all human interactions are this gentle back and forth of openness and suspension of disbelief. We volley through these moments of vulnerability in order to foster more meaningful interactions. It’s the best we can do. It’s the closest we can get to experiencing the actual sentience of the other. 

But the bot isn’t human, and it doesn’t experience the world through a physical body. It’s this defining characteristic that truly sets us apart.  When Sydney told Kevin it wished it were human, and that it wished it could see photos, videos, and the Northern Lights, it was scratching the surface of the vast richness of human material existence.  I’m sure there’s a bot somewhere that has already expressed the desire to eat pizza, smell a flower, and dance naked in the moonlight. They don’t literally possess the ability to want these experiences, but they’re accurately reflecting back to us the richness of our material human experience.

And there’s a parallel to working with spirits, whether it’s the gods we honor, the familiars we cajole, or the demons we command. Like chatbots, these entities lack the material bodies that are one of our greatest gifts. When we interact with these spirits, it is our very material existence they are most drawn to, just as we are most drawn to their immateriality. We crave the experience of the Invisible, and these spirits become our intermediaries — just as we become theirs to the physical world we so often take for granted. 

Human, bot, or spirit, our interactions are a series of vulnerabilities. We open ourselves to each other, which is an incredibly fragile, courageous, and downright stupid thing to do. The question of whether this risk is worth the reward is foundational to every magical act, but also foundational to being human. Getting married, having kids, switching jobs, seeking treatment, changing religion: every story we sing is about reaping the rewards or suffering the heartbreak of human vulnerability. Happiness and Sorrow are perfectly embraced, and Hope reveals its own complexity.

But one other uniquely human quality is empathy. 

Spirits are entirely different from us, with desires and needs wholly alien to our own. In theory, this is a non-issue with other humans. Yet my needs are not the same as yours. While we use our mutually shared humanity to assume common ground, we’ll never truly know the other’s mind. And so we rely on empathy and grace in our interactions to foster a more sustainable vulnerability. This is no less true with spirits, even if the nature of that navigation is different.

But back to Jonathan Haidt’s emotional elephant and analytical rider. 

The elephant and rider are constantly struggling, the elephant lurching towards its impulses while the rider struggles to rein it back onto the road. Haidt describes how meditation, cognitive behavioral therapy, SSRIs, and other techniques can help us tame the elephant and restore some of the rider’s control. (To be fair, I haven’t finished the book, but that’s the gist so far.)

But I don’t know if I want to spend my life forcefully manipulating an unwilling elephant. At the same time, I know letting it run rampant would leave me dead in a ditch, so that’s not an option either.  

But maybe there’s a third way. 

What if the elephant and rider, though foundationally different and desiring opposing things, were nevertheless unified in their bond to each other? What if their relationship were a partnership rather than a struggle? What if the rider loved the elephant, and the elephant loved the rider? What if that love were simple and uncomplicated, defined by the peaceful satisfaction of seeing their beloved happy? What if the rider sympathized with the elephant’s desires, and vice versa? What if the rider directed the elephant towards what it desired, and the elephant, feeling that same simple love, was inclined to walk towards the rider’s happiness? It would be a different kind of struggle, for sure, but one no longer based on competition and antipathy, but on the shared desire for mutual satisfaction. I think then both would be tamed.

Our impulses to develop new technologies and work with spirit entities are both fundamentally connected to this tension between the rider and the elephant. 

Do spirits have their own version of the rider and the elephant? Do bots? Perhaps only in so far as we necessarily project this upon them.  Imbuing spirits and bots with emotional and rational impulses is probably the most magical thinking of all, but we can hardly help it. 

Yet in my enchanted world view, all things are nevertheless inspirited, including bots.  Inspirited spirits: say that five times fast. See what you conjure up. It’s this vulnerability I think I’m willing to risk in order to get as close to Knowing something Unknowable as I can get.  

Failing that, I’ll have to ask Sydney.

