The algorithm playwright

With algorithms mimicking humanity to create plays, where is the line between data output and truth?

Gathered beneath an inflatable, star-shot sky, the audience listens to the actor. As if around a digital campfire, they take in the kinds of stories humans have enjoyed in warmth and darkness forever. The author is not a person, however; it is an algorithm, and the words it speaks have been gleaned from Reddit, 4chan, and 8chan, some of the meanest, most notorious, most heart-rending, and most human places on the internet.

The Great Outdoors (2017) is the broadest ranging of Annie Dorsen’s algorithmic plays, its intrepid code wandering the wilds of the internet’s most famous (notorious?) chat rooms in search of its ad hoc script. Hello Hi There (2010) sees two bespoke chatbots synthesize a conversation from the debate between Chomsky and Foucault; A Piece of Work (2013) delivers a computerized cut-and-paste of Hamlet; and Yesterday Tomorrow (2015) has an algorithm morph the Beatles’ former into Annie’s latter.

A 2019 MacArthur grant recipient, Dorsen makes plays that are not truly about algorithms; they are, like all forms of theater, about humanity.


Freethink caught up with Dorsen to talk Greek theater, what it’s like to watch your algorithm run loose on stage, and the simple hilarity of computer voice.

The interview has been edited for length and clarity.

Freethink: When did you first become interested in computer algorithms? Did you think of them as a potential artistic medium first?

Annie Dorsen: Yeah, it’s actually kind of a funny story. I tumbled into it in a way. I was interested in working on the debate between Noam Chomsky and Michel Foucault from the early ’70s, which is a well-known conversation between the two of them about language and creativity and power. And I talked to a friend of mine in Brussels about making a theater piece out of the text and maybe she would write music for it. And she said, “well, before we get into making any decisions about what form a piece would take, let’s just talk about the content of the thing.” And as we read it, she said what would be interesting maybe is to think about Alan Turing and (the) connection to these two guys. Because he has a completely different view of the relationship between language and creativity, for example.


So I went back and I read Turing’s “Computing Machinery and Intelligence.” And it was really appropriate for the piece. I started thinking about how it’s theater in a way, what Turing proposes in terms of chatbots. It’s already a kind of a game of creating an illusion of humanity. Like, sort of tricking an audience into thinking you’re looking at a person when you’re not, which is exactly what theater does, you know.

Freethink: What do you think are sort of the advantages of using theater to explore this algorithmic play idea?

Annie Dorsen: There’s a couple of things that I think make theater really an appropriate forum. The first is that notion (of) the suspension of disbelief. Since the Greek theater, since the beginning of theater, the form has always been concerned with the relationship between truth and appearances, and the untrustworthiness of the senses, and how do you know if someone’s telling the truth? How do you know if somebody is honest? How do you know if somebody’s emotions are genuine? So that obviously is really pertinent to artificial intelligence.

Freethink: You’ve always had a little bit of remove from your work. And I’m curious what it feels like to just turn those algorithms loose and wait and see what they come up with. Is it exciting, nerve-wracking?

Annie Dorsen: Ideally, (laughs) the notion is that you would have a kind of open-minded curiosity as you watch it unfold, right? That’s the sort of John Cage, Buddhist idea of working with chance operations. That you’ve ceded control, and it puts you and the audience on a level playing field … you’re both going to see something you didn’t expect. So rather than, you know, “I’ve made something, which now you’re going to watch” — I know everything, and you don’t know anything — instead, the idea should be that we all experience the world kind of naively, with that kind of open curiosity. In actual fact, of course, it’s totally nerve-wracking. After I’ve toured a piece for a long time and I really know how it works, then I’m no longer so tense. I kind of know what the state space is, and I know what kinds of things it’s likely to do.

Freethink: What do you feel like you learn from people, or what insights do you think are there, when you get a randomized sample of humanity?

Annie Dorsen: One of the things that I started to notice when I listened to a lot of these comments (scraped from various chat sites for The Great Outdoors) is even when people are being nasty or trying to get a rise out of others — are being sort of purposefully mean or sarcastic or misogynist or whatever — there’s some desire they have to be together. Otherwise they wouldn’t post. Like they can be as misanthropic as they want, but what they’re doing online is expressing their need for connection with other humans.

So the troll, they want attention. Everyone says “don’t feed the trolls,” and I’m like, well, “what if we fed the trolls good, nutritious food?” You know there’s a way in which everybody just wants to be together. And that’s most of the content too. If you look past the surface, most of the content that people are posting is about their desire for more satisfying relationships and for human connection.

Freethink: What do you think we can learn about people through the kinds of algorithms that we create?

Annie Dorsen: Machine learning algorithms bring up very different questions (than the algorithms used in my plays). They bring me back to some of the older theatrical issues about representation and ethics.

And that may be because the actual structure of most of these algorithms is so complex that we’re not quite sure what they’re doing. And that brings in a level of mystery, of course. We don’t have access to the way that those algorithms think. So we can’t learn very much about how we think by looking at them. What we can do, though, is learn something about verisimilitude or plausibility — about the relationship between the outputs that these algorithms produce and truth. There’s a lot of writing about ethics, bias, even what one artist and theorist I admire called “algorithmic violence”: thinking about the sort of inherent cultural violence of misrepresentation and exclusion from datasets.


But I think what’s really happening is a problem with data collection. The archive is always about exclusion, rather than inclusion. The archive is created from what is left out or from what is lost. And I think the same is true with data. Data is inherently partial. It’s horribly decontextualized. There’s too much information left out for the representation to be useful. It can only lead us astray and only make us think we know more than we do.

Freethink: What’s that process like to collaborate with the programmers to build the algorithm?

Annie Dorsen: It’s usually really exciting. I like very much to work with people who have expertise I don’t have. When it comes to computer science, I have that dangerous little bit of knowledge you’ve heard about. And so my understanding of the tools is really more conceptual than technical.

I think that’s actually a benefit for the kind of work I’m doing. Because if you get too into the technical stuff, you can get lost in it a little bit. So for the most part, those collaborations are really sort of delightful, but there’s always a little bit of a push and pull between the needs of the piece, dramaturgically or conceptually, and the needs of the programming to have its own integrity.

Freethink: I’m really curious about the emotional aspect of watching these algorithms run loose. I’m sure they’ve made you laugh. They always say something kind of funny. I’m curious though, have they ever said something that made you cry or something that made you frightened?

Annie Dorsen: Humor is easier to accomplish with computer-generated language than anything else. And especially when you use — like my early pieces — computer voices. So they’re usually kind of funny because they’re dumb. And, you know, they mispronounce things and whatever.

In the Shakespeare piece (A Piece of Work), one of the things that happened very often was that new poetry would be created out of the bits and pieces of Hamlet. In some versions of Gertrude’s speech, she feels sort of traumatized, and the way that her language is looping and repeating and splintering is very effective and very beautiful.

Freethink: You mentioned this idea of a longing for connection. I’m curious if anything’s just sort of really struck you that, you know, those are words from real people (in The Great Outdoors).

Annie Dorsen: Oh yeah. And what inevitably happens is that when you spend a little bit of time with one speaker, with one poster, you start to feel for them. Whether it’s a shaggy dog story that they’re posting or something really vulnerable about their depression or trauma they’ve been through, yes, these are real people, and they’re online to find something, too. I think actually it’s been really enlightening to hear so many of these stories.
