CORECURSIVE #041

Beautiful and Useless Coding

With Allison Parrish

Generative art involves using the tools of computation to creative ends. Adam talks to Allison Parrish about how she uses word vectors to create unique poetry. Word vectors represent a fundamentally new tool for working with text.

Adam and Allison also talk about creative computer programming, building Twitter bots, and what makes something art.

Transcript

Note: This podcast is designed to be heard. If you are able, we strongly encourage you to listen to the audio, which includes emphasis that’s not on the page

Allison Parrish: Computer programming is beautiful and useless. That’s the reason that you should want to do it. It’s not because it’s going to get you a job, not because it has a particular utility, but simply for the same reasons that you would pick up oil paints or do origami, or something. It’s something that has an inherent beauty to it that is worthy of studying.

Intro

Adam: Hello, this is Adam Gordon Bell. Join me as I learn about building software. This is CoRecursive. That was Allison Parrish. I’ve been trying to get her on the podcast for a while. She is a teacher at NYU. She teaches a class called Computational Approaches to Narrative. She’s also a game maker. She’s a computer programmer, and she’s a poet. Her poetry is generative. She uses the tools of NLP and computers to create her poetry. So today we talk about word vectors, and also about poetry and art. Allison was nice enough to answer my somewhat blunt questions about the nature of art. And she also does a couple of readings of her computer-generated poetry. And we end with some tips on how people can get started creating things with text. Her thoughts on computer programming for just the pure beauty of it, they also really resonate with me. So I hope you enjoy the interview. Allison, thanks for coming on the podcast.

Allison Parrish: Mm-hmm (affirmative).

Punching Language in the Face

Adam: So I have this quote from you here that I think this is a reason enough for me to try to get you on the podcast that says, “I want to punch language in the face, and I want to use a computer to do it.”

Allison Parrish: Uh-huh (affirmative).

Adam: So what does that mean?

Allison Parrish: I don’t know. I think it’s pretty clear. There are a couple of parts to that quote. So I call myself a poet. And one of the roles of poetry, in my opinion, is to sort of destabilize language, or call attention to the ways that language shapes the way that we think, the ways that language enforces particular worldviews, the ways that language shapes the world, right? The English language in particular has a history of being a force for particular forms of power, and that comes in lots of different forms. Notably, things like movements in the United States to make English the first or the official language of the country are one example of language being used as a platform for politics, for particular kinds of projections of power.

So that ability of language to do things, to facilitate power, the kinds of power that I’m not interested in facilitating, that’s the kind of language that I want to punch in the face. Right? Anything about language that forces us to see the world in a particular way, maybe not with our consent. Anything that limits the expressiveness of language, that’s what I want to punch in the face.

Now, the computer part, that’s a little bit more difficult to explain. Among poets there’s, I would say, a kind of Luddism, which I think in a lot of ways is justified. Computers historically have also been tools that have been used to project certain forms of power, and not good forms of power. So I understand the skepticism surrounding computation and its use in the arts. But on the other hand, I like computers. I like programming. That is my expressive medium. When I do things intuitively, the way that I do things intuitively is through computer programming.

And I also think that computational thinking and the tools that it gives you for breaking the world down and thinking through processes, rethinking processes, I think that has the potential to be a really interesting artistic tool. And so when I say I want to punch language in the face and I want to do it with computers, that’s what I’m getting at. I think language needs to be punched in the face. It needs to be cut down to size, so that we realize the ways in which it is arbitrary. I want to do it with computers, because that’s what I like. That’s what I think is an effective tool, especially for the kinds of ways that language is used in this contemporary context.

Birth of a Computer Poet

Adam: So were you a computer programmer first who became a poet? Is that your medium? Or how did you come to this approach?

Allison Parrish: Yeah, I got my first computer as a Christmas present when I was five. And it came with a programming manual, and I started doing computer programming for it. I’ve always been interested in language, since I was a kid. My dad gave me a copy of The Hobbit to read when I was 10, and I loved reading it. The thing that I loved especially about it was Tolkien’s invented languages. And I think those two interests: I was captivated with computer programming, and I loved the way that Tolkien used language in his books. That, plus in high school, my creative writing teacher had us read Gertrude Stein, the modernist poet.

And so I think those three interests combined together to make this the inevitable outcome of my artistic practice. So I wouldn’t point to any one of the things that came before over any of the others. It’s just sort of this bubbling up of all of these interests simultaneously. I have the computer science degree from a really, really long time ago. But my main undergraduate degree was in linguistics. I think only more recently, in the past five to 10 years, have I been calling myself a poet, actually, as a thing that I would put in my bio. But my interest in language and creative uses of language comes way, way, way before that.

Poetic Language Models

Adam: How do you combine the worlds of poetry and computer programming?

Allison Parrish: Mad Libs, I think, sort of count in the realm of computational language. If you look at the history of computational poetics, some of the earliest projects that got canonized, that became part of art history and literary history, are things that superficially resemble Mad Libs. Or actually, they more than superficially resemble Mad Libs: they resemble Mad Libs formally, but have a very different tone. Alison Knowles’ House of Dust is a poem that was generated by a FORTRAN program. So things like randomizing words is a really simple technique that you can do computationally, and that just adds new energy to a text.

But it’s also a simple language model. A language model is a statistical model that predicts what the next word in a text is going to be. So something like autocomplete on your phone is an example of that, or GPT-2 is a really sophisticated example of that. And so my goal is to take an existing corpus and predict the next word. Tristan Tzara’s How to Make a Dadaist Poem is also a language model, one that just predicts the next word by the frequency of all of the words in the text. Right? So those are a number of well-known techniques: something like a Mad Lib or a word replacement, things like language models, and then more mechanical operations on language like randomizing and stuff like that. Those are kind of all of the techniques that you have at your disposal in one form or another. Does that answer help, or does that just make it more confusing?

Adam: Can I say both?

Allison Parrish: Yeah, sure. That’s probably the best answer actually.
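
A minimal sketch of the frequency-based idea Allison mentions, in Python. This is illustrative only, not her code: Tzara's cut-up treats the text itself as the model, so drawing words in proportion to how often they occur is enough to generate a "poem."

```python
import random
import re

def dadaist_words(text, n=12):
    """Draw words from a text with probability proportional to their frequency,
    the way Tzara's newspaper cut-up does by hand: the corpus itself is the model."""
    words = re.findall(r"[a-z']+", text.lower())
    return random.choices(words, k=n)

source = "two roads diverged in a yellow wood and sorry I could not travel both"
print(" ".join(dadaist_words(source)))
```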

What Is Art and What Is Poetry?

Adam: No, it is helpful. What makes something poetry? I mean, you’re not going to be able to answer this in our time. But if I select randomly from a bag of words made out of an article, what makes it art?

Allison Parrish: That’s a difficult and fraught question.

Adam: I’m asking-

Allison Parrish: A question that’s-

Adam: … it in the kindest way, by the way.

Allison Parrish: I don’t care if you’re being kind about it. This question can be asked in a kind way; it can be asked in an aggressive way. But I have confidence in my techniques as an artist. I think that the things that I do are art. And I’m also not attached to the label of art. It’s still important and interesting, even if people choose to say that it’s not art. But art is literally things that you have made. Right? Art has the same root as artificial. It’s a thing that has been created, right? So anything that meets that bare definition counts as art. It’s a word that has connotations of being high society or high status. And that I think is kind of a useless thing to attach to it. Poetry is the same way.

The word, “Poetry,” has this sort of genteel, academic, educated feeling to it. And a way that you might compliment a text is by saying that it’s poetic. Right? It’s considered to be this higher form of language. That’s not what I mean when I talk about poetry. When I talk about poetry, I’m talking about forms of language that call attention to language’s materiality. They call attention to the surface form of language, rather than just its semantics. Semantics being the so-called meaning of a text. There are multiple ways for me to ask you to pass me the salt shaker, right? And I can mean that same thing in multiple ways. And I would say narrative art forms, like the novel and things like that, call attention to the underlying story structure and characters, and plots and things like that.

So to contrast narrative creative writing from poetry as a creative writing form, poetry would be forms of creative writing that aren’t about underlying structures of meaning necessarily, but are about what actually happens on the page when language is read, and what actually happens in the mouth when language is spoken out loud, things like that. Creative language is more poetic to the extent that it calls attention to those dimensions of language, by that definition. There are lots of poets who don’t care about that stuff in their practice. I’m not saying that it’s not poetry. But for me, when I’m talking about poetry, that’s what I’m talking about.

Working With Word Vectors

Adam: The talk I saw of yours, you used word vectors, which I wasn’t familiar with to do some neat things with text. So what are word vectors?

Allison Parrish: One of the main operations that people want to do in natural language processing, and this has been the case even since the 50s, is determine how similar two documents are, or two units of language, whatever that unit happens to be. Or they want to be able to say, “These documents are all like one another. They form a cluster, versus this other set of documents, or units of language, or whatever, which form a different cluster.” And the main mathematical operation that we have for things like similarity and clustering, and classes for categorization, is Euclidean distance, or other measures of distance between points.

All right? So if you want to know how similar two points on a Cartesian plane are, you just get the distance, or the length of the line that connects those points. Right? And that math is the same thing if you wanted to … Any data that you can represent as two spreadsheet columns, as two dimensions of data, you can use that same math to figure out whether those two items are similar, or whether they form clusters, and so forth. So that’s the underlying statistical and mathematical need: to take phenomena in the world and represent them as points on a plane, or points in three dimensions, or points in however many dimensions there are, however many features there are of the data that you’re looking at.
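
The distance math she describes is the same in any number of dimensions. A minimal sketch:

```python
import math

def euclidean(a, b):
    """Distance between two points with any number of dimensions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The same formula works for two spreadsheet columns or a 300-dimensional word vector.
print(euclidean((0, 0), (3, 4)))        # 5.0
print(euclidean((1, 2, 3), (2, 2, 5)))  # ~2.24
```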

So a word vector is exactly that. It’s a way of representing a word as a point in space of some number of dimensions. And there are lots of ways of making word vectors. Like I said, if you go back through the literature, you can find stuff from the 50s, 60s, and 70s that is trying to hand-assign values to spreadsheets of words. Say like, “Put a one in this column if it’s a word for an animal. Put a one in this column if it’s a noun,” so that they can do these distance operations, to do things like clustering. In more contemporary machine learning, those word vectors aren’t constructed by building the data set carefully.

They’re constructed through processes in a neural network, building a generative model and taking the calculated latent space value for each of those words. Or just doing something like principal components analysis on a word co-occurrence matrix, or something like that to give you a vector representation of that word. So that you could say for example, once you have a vector representation for the word, you can say like, “What are all of the words that are most similar to blue?” It would give you green and sad, and purple and sky, all of the words that occur in similar contexts.
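
A sketch of that “most similar to blue” lookup. The vectors here are invented purely for illustration; real ones come out of a trained model and have hundreds of dimensions:

```python
import math

# Toy vectors, made up for illustration; a real model would supply these.
vectors = {
    "blue":   (0.90, 0.10, 0.80),
    "green":  (0.80, 0.20, 0.70),
    "sad":    (0.70, 0.05, 0.90),
    "purple": (0.85, 0.15, 0.75),
    "torque": (0.10, 0.90, 0.20),
}

def most_similar(word, n=3):
    """Rank the other words by how close their vectors sit to this word's vector."""
    target = vectors[word]
    others = [w for w in vectors if w != word]
    return sorted(others, key=lambda w: math.dist(vectors[w], target))[:n]

print(most_similar("blue"))  # the color and mood words cluster together
```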

And then you can do weird mathematical operations on them. Classification and categorization is one thing. But the big finding from the famous Word2vec paper from that group of researchers was that you could do things like analogies. If you draw a line from France to Paris, and you transpose that line to start on Germany, the line will point to Berlin. So there’s this space that gets generated. And they were presenting this as a finding specifically about their method of making word vectors. But I think it probably applies more generally. Some of the research that I’ve been doing has been using much less sophisticated techniques that still show some of those properties.

Adam: So we cast all this thing into this multi-dimensional space. And the arrow in that space represents some sort of relationship? Is that what you’re [inaudible 00:13:39] example?

Allison Parrish: Yeah. Just imagine on a two dimensional plane, it’s the line that connects two points, plus the direction. So it’ll be the direction and the magnitude of that line. You can easily think of it as just the line that connects two points. That’s the way that I usually explain it at least.
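
The analogy trick is that same line-and-direction idea written as arithmetic: take the offset from France to Paris, transpose it onto Germany, and look for the nearest word. A toy sketch with invented vectors, chosen so the geometry is visible:

```python
import math

# Made-up vectors chosen so the offsets line up; a real model learns them from text.
v = {
    "france":  (1.0, 0.0, 0.2),
    "paris":   (1.0, 1.0, 0.2),
    "germany": (0.0, 0.0, 0.8),
    "berlin":  (0.0, 1.0, 0.8),
    "tuesday": (0.5, 0.5, 0.5),
}

def analogy(a, b, c):
    """a is to b as c is to ?: add the a->b offset to c, return the nearest word."""
    target = [vb - va + vc for va, vb, vc in zip(v[a], v[b], v[c])]
    candidates = [w for w in v if w not in (a, b, c)]
    return min(candidates, key=lambda w: math.dist(v[w], target))

print(analogy("france", "paris", "germany"))  # 'berlin' with these toy numbers
```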

What Color Is a Work of Literature?

Adam: Yeah, it makes sense. It’s not a way I had thought about words before.

Allison Parrish: Right. That’s what’s interesting.

Adam: Yeah. That talk that I saw you gave, you started off with a very simple example, which was RGB colors. So you had … I think that you took each color and mapped it to an RGB value. And then you answered some strange questions about, “What is the average color of the book Dracula?” I remember thinking like-

Allison Parrish: Right.

Adam: … that doesn’t seem like a type of question that, first of all, I would know to ask. Or that could have a logical answer.

Allison Parrish: Right. And for me, that’s the … When I say that I want to punch language in the face with computers, that’s the utility of doing these things computationally, is that if you’re critical about what computers can do, and about where these data sets come from, you can let kind of the natural affordances of computation lead you to these unusual questions that can reveal things about the world and about texts like Dracula. About just anything in the world. They can reveal things about those things that you might not have otherwise known to look for.

Adam: Right.

Allison Parrish: But the way that a computer program, the way that a data set casts those phenomena, those are the results of particular decisions that people have made. But in that combination, in that synthesis, you can end up with these really amazing and bizarre questions and ways of looking at the world that other techniques can’t give you. There are worldviews and things that you notice that I think are particular to computation in some ways.
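
The “average color of Dracula” question from the talk comes down to a small pipeline: map color words to RGB values, find every color word in the text, and average. A toy sketch; the word list and values here are assumptions, not the talk's actual data:

```python
import re

# A few color words and rough RGB values, assumed for illustration.
color_rgb = {
    "red": (255, 0, 0), "crimson": (220, 20, 60), "black": (0, 0, 0),
    "white": (255, 255, 255), "grey": (128, 128, 128), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "green": (0, 128, 0),
}

def average_color(text):
    """Average the RGB values of every color word that occurs in the text."""
    hits = [color_rgb[w] for w in re.findall(r"[a-z]+", text.lower()) if w in color_rgb]
    if not hits:
        return None
    return tuple(sum(channel) // len(hits) for channel in zip(*hits))

sample = "The crimson light fell on the black cloak and the white face."
print(average_color(sample))  # one number per channel: the passage's "average color"
```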

The Phonetic Similarity of Text

Adam: So are there other interesting questions you have asked of the text?

Allison Parrish: Well, one of the things that I talked about in that talk that’s on my mind, because I was preparing a little bit for this interview, is phonetic similarity between texts. And that’s one of the big things that I’ve been working on in my research for the past couple of years: how to determine whether two stretches of text, whether it’s a word or a sentence, or a line of poetry, sound similar to each other. Approaching that question without computation I think would be interesting, but difficult. The way that answering these questions might happen back in the Middle Ages is, you would just get a whole monastery of monks on the question if you wanted to do something like, “Find me every verse of the Bible that has to do with clouds.”

You would have to feed a monk for a couple of days to go through the book and find all of the verses that have to do with clouds. If I wanted to answer the question, “What sounds similar to something else?” without a computer, I would have to establish criteria, and then make a database of those words, and then pay a grad student to go through and think, does each of these lines match these criteria? With computation, I can do it much faster. And I can iterate on ideas of what sounding similar means.

Right? And because it’s kind of easy to do that, or more or less easy, it’s a one person job instead of a monastery’s job, I can pick and choose my techniques and develop the technique in a way that ends up being most aesthetically pleasing to me. Aesthetically pleasing because I’m a poet and an artist, and that’s what I’m interested in. So yeah, phonetic similarity has been the big thing lately. It’s a kind of thing that I don’t think I would have been able to do as easily without computation. And it was definitely informed by the computational tools that I had at hand when I was developing algorithms to do that kind of thing.

Adam: Do you have wave samples of words and you map them? Or how do you determine what a word sounds like?

Allison Parrish: If you want to know how to pronounce a word, let’s say in a paper dictionary or on Wiktionary, or something like that, there’s the phonetic transcription that shows you how to pronounce the word. So if you look up a word like, “Torque,” for example, T-O-R-Q-U-E. How do you pronounce that word? Well, you don’t sound it out letter by letter really, because that doesn’t work for those last two letters. Right? Those letters are silent. How do you know that? Well, you learn it in school. And if you’re not sure, you look it up in the dictionary, and you look at the phonetic transcription. Right? There’s a big database of phonetic transcriptions for English words called the CMU Pronouncing Dictionary. And I used that as a dataset for a lot of this work, because it’s computationally readable. It has the word and then also a language for transcribing the phonetics of those words. My own personal idea and what’s interesting to me is that we can talk about sounds and qualities of sounds without resorting to audio files. Right?

I can tell you like, “This poem has a really whispery feel.” Or, “That talk was really … They used a lot of words that were really heavy,” or something like that. Or we can even talk about individual sounds, like the M sound in English, or the nasal sounds of French, or the trilled R of Spanish. We don’t have to resort to audio files to know what we’re talking about. We have a language for discussing it that’s not purely based on recall or imitation. So there’s this level of thinking about language that’s more about an internal, mental representation of the sounds than it is about the audio itself.
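
The CMU Pronouncing Dictionary is easy to poke at from Python through the pronouncing library, a wrapper around it that Parrish herself wrote. A small sketch; treat the exact calls as assumptions if your version differs:

```python
import pronouncing  # pip install pronouncing; wraps the CMU Pronouncing Dictionary

print(pronouncing.phones_for_word("torque"))  # ['T AO1 R K']: the silent letters disappear
print(pronouncing.phones_for_word("here"))    # ['HH IY1 R']
print(pronouncing.phones_for_word("there"))   # ['DH EH1 R']: same spelling ending, different sound
```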

Adam: So what do you do with that if you have this mapping of words? I mean, you personally, what excites you and what can you do with this information?

Allison Parrish: So I use it a lot. It’s a file that I basically always have open on my computer in one form or another. I think for the stuff that was in the talk that you watched, I was just making word vectors using the features in the CMU Pronouncing Dictionary as the dimensions of the spreadsheet. So counting basically co-occurrence of phonemes. But not just phonemes, but underlying phoneme features, as a way of basically making a spreadsheet where every word had a different vector associated with it. And that gives me a really easy way to tell whether two words sound similar or whether two sentences sound similar, just based on counting up the phonetic features from the phonetic transcription of the words that comprise that sentence or that line of poetry, or that individual word.

Other ways that I’ve used it: one is just finding words that rhyme in English, right? The naive solution to finding words that rhyme is, well, we find words that have the same last couple of letters. “Here” and “there” are really good examples: here and there both end with the same four letters, but they don’t rhyme. And so to know whether two words rhyme, you need a phonetic transcription. And then you can look at everything from the last stressed syllable up to the end of the word. If two words share that, then they rhyme. And the CMU Pronouncing Dictionary gives you the ability to find words that look like that.
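
That rhyme test, matching everything from the last stressed syllable to the end, is a few lines once you have the phonetic transcriptions. A sketch using the same pronouncing library (an assumption about tooling; you could also parse the raw CMU dictionary file yourself):

```python
import pronouncing

def rhymes(word_a, word_b):
    """Two words rhyme if their pronunciations match from the last stressed vowel onward."""
    pa = pronouncing.phones_for_word(word_a)
    pb = pronouncing.phones_for_word(word_b)
    if not pa or not pb:
        return False  # word missing from the dictionary
    return pronouncing.rhyming_part(pa[0]) == pronouncing.rhyming_part(pb[0])

print(rhymes("here", "there"))  # False: same last four letters, different sounds
print(rhymes("here", "fear"))   # True
```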

The Road Less Kikied

Adam: So are you willing to share an example of something with us?

Allison Parrish: So one of the things that you can do with word vectors is sort of tint a word. And the way that you can do that is, say that I have … If we were working with semantic word vectors, like word2vec or GloVe, I could say, I want a word like basketball, but I want it to be … Or, to take a better example, I want a word like computer, but I want it to be a little bit more sad, or something. So you find the word for computer and the word for sad, and then basically draw a line between the two, and find a point on that line and see if there are any words that are close to that point on the line. And then you might end up with … I can’t think of a word that’s computer, but more sad. Maybe abacus or calculator, or something like that.

And I call this tinting the words. With the word vectors that I’ve made, they’re based on sound. So you can tint a word by saying, “I want you to find a word that’s like this word, but it sounds more like some other word.” The words that I use as examples a lot for tinting are, “Kiki,” and, “Bouba.” These are nonsense words, but they’ve been shown in different studies, like anthropological and psychological studies, to have this sort of constant emotional valence across cultures. So the word, “Kiki,” is perceived as sharp. And the word, “Bouba,” is perceived as round. And people will match up pictures to these words. It doesn’t matter where you grew up or what language you spoke when you grew up, they’re almost always seen in that way.
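
The “computer, but more sad” move is linear interpolation between two vectors plus a nearest-word lookup. A toy sketch with invented two-dimensional vectors (hers are built from phonetic features and have far more dimensions):

```python
import math

# Invented vectors, just so the geometry is visible.
vectors = {
    "computer":   (0.9, 0.1),
    "calculator": (0.7, 0.4),
    "abacus":     (0.6, 0.6),
    "sad":        (0.1, 0.9),
}

def tint(word, toward, amount=0.4):
    """Move partway from `word` toward `toward` and return the nearest other word."""
    a, b = vectors[word], vectors[toward]
    point = [x + amount * (y - x) for x, y in zip(a, b)]
    candidates = [w for w in vectors if w != word]
    return min(candidates, key=lambda w: math.dist(vectors[w], point))

print(tint("computer", "sad"))  # 'calculator' with these toy numbers
```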

Adam: You can hear it, right? Kiki sounds very crunchy or angly, I don’t know.

Allison Parrish: Yeah, and bouba seems like roundness, right? So there may be these universals in phonaesthetics, where language has particular synesthetic properties, which I think is super interesting. One of the exercises that I did is that I took Robert Frost’s The Road Not Taken, which is the “two roads diverged in a yellow wood” one. “I took the one that has been less traveled by, and that has made all the difference.” Well known if you did your primary or secondary education in the United States and took an English class here. It’s one of the canonical texts.

And rewrote it with the phonetic word vectors, replacing words with words that sound … There are two versions of this. One version where it’s replacing every word with a word that sounds more like kiki. And then another version where it’s replacing words with words that sound more like bouba. We’ll do a little experiment here, and I’ll read all three versions.

The first is without any modifications, and it goes like this. “Two roads diverged in a yellow wood. And sorry I could not travel both and be one traveler. Long I stood and looked down one as far as I could to where it bent in the undergrowth. Then took the other, as just as fair. And having perhaps the better claim, because it was grassy and wanted wear. Though as for that, the passing there had worn them really about the same. And both that morning equally lay in leaves no step had trodden black. Oh, I kept the first for another day. Yet knowing how way leads on to way, I doubted if I should ever come back. I shall be telling this with a sigh somewhere ages and ages hence. Two roads diverged in a wood, and I, I took the one less traveled by. And that has made all the difference.” So that’s the original. Thank you, Robert Frost.

So here is the version of that poem plus the word, “Kiki.” So tinting all the words to sound a little bit more like, “Kiki.” It goes, “Cookie roads diverged in a yellow key, and sarti I goki kican kible booth, and pi one traveler. Long I stooki and loki down one as far as I goki, tuki whykiki ik bik in the undergrowth. Then kupak the other, as cheeky as fichara. And having perhaps the bekki came, the ki ik was kici in whykiki waki. Though as for pik, the kiking kikik haki worn them killy kabuki the safekeeping. And booth pik morning kiki lay in teaves no teki haki te garden black. Oh, I kaki the firsti for another ki. Pikiti kiown how way teaks on tucky way, I tiki if I should kever come backi. I kishi pe leki kith withi a siki squeky kazi, and kazi hence. Kooki roads diverged in a woodki and I, I kupak the one let kivelled bi. And pik has peek all the difference.”

Adam: That’s wild, right? I don’t know, it makes me smile. I don’t know how to process that. It’s super interesting though. It makes me smile. It definitely highlights the sound things. I don’t know.

Allison Parrish: Do you feel as though language has been punched in the face?

Adam: I definitely feel like you have computationally represented some sort of mouth sound. You’re saying it’s above the level of mouth sound. But that’s how I appreciate it, as some sort of … You’ve kicked the poem in a certain direction, in some weird dimensional space.

Allison Parrish: Well, that’s part of what I think is so interesting about it, right? I didn’t use any audio data to make this happen, right? I just used the phonetic transcriptions, the data of the phonetic transcription.

Adam: Yeah. I think it’s cool.

Allison Parrish: Should I read the other one?

Adam: Yeah. Let’s do it.

Allison Parrish: The bouba one?

Adam: Let’s do it.

Allison Parrish: Okay. So this is the same poem, plus bouba. “Chubu roads barbers filled a yellow wood. And borry I couva not travel both and bobi one bosler, dong I stova. And jukebox bode one as fad as I coba, to bowel it bant in the bogard. Babou bo kook the bother as baby as fair. And having perhaps the babbet claim, bages it’s a bawa barissi and wan ba bowel. Though as for bogatch, the bago babou hab worn them bably abut the same. And both bogatsh booming equally lay in babs no babet hab batten blob. Oh, I babkat the first for bather joy. Babet knowing baha how way babs on to way, I boubt if I should ever cabab bab. I shall babi babu bages with sabu. Babi babis and babis hence, tubu roads barbers fil in a wood, and I, I bokuk the one bagu traveled ba. And bobath has bibabo all the balats.”

Creative Computer Programming

Adam: That’s very good. Thank you so much.

Allison Parrish: [inaudible 00:27:02]. Hey, you’re welcome.

Adam: I had a computer when I was a kid too. I built some things. And I feel like the world is a little bit … The world of people who computer program is constrained, and everything’s about building to-do apps, or making super optimized web services, or something. And so you’re doing something super different. And that impresses me. And I don’t know how … Do you think that the world should be more creative in the use of computer programming?

Allison Parrish: I’m trying to pick through that question. Because I mean, the answer’s obviously yes. Right? And it’s yes in a way that I have come to see, or I’ve come to feel as fairly unproblematic. But I have strong opinions about this that are maybe not backed up by anything practical. But one of the things that I tell my students, I teach in a program that’s a combination of design, and technology and the arts. And I teach our introduction to computer programming class. It’s called Introduction to Computational Media.

One of the things that I tell them on the first day is, computer programming is beautiful and useless. That’s the reason that you should want to do it: not because it’s going to get you a job, not because it has a particular utility, but simply for the same reasons that you would pick up oil paints, or do origami, or something. It’s something that has an inherent beauty to it that is worthy of studying. And that beauty stems from a whole bunch of things. One is just the pure mathematics of computer programming, which I think is super interesting.

To the extent that pure mathematics is a field of practice that lends itself to that kind of artistic thinking, that is joyful just for the sake of doing it, I think computer programming is like that. And I mean, when you’re making a to-do app, you are doing something creative, in the sense that you’re applying your skills and your interest in making those kinds of decisions. It’s just that they’re attached to this very uninteresting problem, right?

The same way that you might be really good at oil painting, you could still use oil painting to paint a portrait of a dictator, right? That’s not a good use of that skill. But it is a use that you can put it towards. So yes, I think in general, the world would be a better place if we used computer programming for what, to me, seems like its intended purpose: of being artistic, of being creative, of building communities, of being citizens of the world, of trying to make the world a good and beautiful place to be. And I think it’s a real shame that this delicate artistic process, delicate even though I’m proposing it could be used to punch someone in the face. You can punch someone in the face delicately and still do a lot of damage, I have to say. It’s a shame that it has been turned towards these other applications that, let’s say, are at best uninteresting, and at worst very much worse than that.

Adam: That’s a great answer. So I know a ton of people who would be writing computer programs whether you paid them to or not. But they get paid well to do it. It’s like the world of oil painting, if everybody could get a job for $100,000-plus doing portraits of people on … Like tourist portraits, right? That would negatively influence the world of oil painting, I assume.

Allison Parrish: I mean, maybe. It’s hard to speculate about that precisely. I’m certainly not trying to say that if computer programming is something that you do for a job, it also needs to be your passion. That’s not part of this equation. It could be interesting to you for other reasons, other than being something that you feel fixated on for whatever reason. And that’s also true for artists, right? There are lots of professional artists who feel passionate about their medium, but who don’t necessarily feel like they have to do it after office hours, or after their day at the office is over. Something can be your professional practice, and you can feel really passionate about it, without it being something that consumes your soul in this very romantic, “You’re an artist,” kind of way. I just think that … It’s not a bad thing to just program for your job.

Joyful Coding

Adam: Oh, yeah. Totally. And also, I would say you can have very practical things that you do, and they can be super fun and interesting, and intellectually stimulating.

Allison Parrish: Oh, yeah. Absolutely. Absolutely. But for my purpose, as an artist and as someone who teaches programming to artists and designers, I want to emphasize that it’s not only a vocational thing. It’s not only a way for building things like to-do apps. For that matter, it’s not only a way to write useful applications that help to organize communities, or help to do scientific work and other good applications of programming and software engineering. But there is this very essential, very core part of computer programming that is just joyful. That’s about understanding your own mind in different ways. And understanding the world in different ways.

Adam: Yeah. It’s a great sentiment. That was my big tangent. So that was a vector space of the sounds that we did. What other vector spaces are useful in creation?

Allison Parrish: Usually the way that word vectors are used is for semantics. Because most often, for whatever reason, people want to, say, cluster documents or sentences based on how similar they are in meaning. So you can say like, “Here are all of the tweets that mention Pepsi. And here are all the tweets that mention Coke,” right? And you can do that without paying someone to read every tweet. And so it’s more about meaning than about other aspects of language. So that’s the original word2vec research, or other word vector stuff like the GloVe vectors from the Stanford Natural Language Processing Lab.

Or more recent things like Google’s Universal Sentence Encoder, which is a neural network that turns sentences with similar meanings into similar vectors. So that’s the actual academic research in this area. Academic and corporate research in that area. And I just sort of hijacked some of those techniques to make my phonetic vectors, because that was more interesting to me as a poetic thing. But I’ve also done stuff with word2vec, pre-trained vectors like the GloVe vectors and so forth, for doing things like grouping lines of poetry by meaning. And chat bots are easy to make once you have a way of judging the semantic similarity of two sentences. You can make a really easy chat bot just by having a corpus of conversation, and then always responding to the user with whatever the response was to something that’s semantically similar to that user’s contribution to the conversation. It’s a convoluted thing to try to explain, but that’s basically how it works, right?

Chat Bots Using Word Vectors

Adam: So it’s like, if I said to the chat bot, “I’m feeling sad.” Then it would find some statement near that and say like, “I’m feeling down.” Or something. Is that the idea?

Allison Parrish: Well, it would find whatever the response was to something that is similar to what you said. So if you say, “I’m feeling sad,” it looks through its corpus and it says, “What’s the closest sentence to, ‘I’m feeling sad?’” And that might be, “I’m feeling down.” And then it responds with whatever the line that comes after that was. So if you train it on a movie script, it wouldn’t say, “I’m feeling down.” It would say, “Buck up, chap,” or whatever the response was to that. I mean, you can go back and forth like that. That’s a very elementary application of this technology, but it still shows how effective it can be.
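
A toy version of that retrieval chat bot, with word overlap standing in for semantic similarity so the structure stays visible. A real one would compare sentence vectors from something like GloVe or a sentence encoder:

```python
corpus = [
    ("I'm feeling down.", "Buck up, chap."),
    ("What time is it?", "Half past nine."),
    ("I'm going to get lunch.", "Bring me back a sandwich."),
]

def similarity(a, b):
    """Crude stand-in for semantic similarity: proportion of shared lowercase words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def respond(user_line):
    """Find the corpus line most similar to the user's line; reply with what followed it."""
    prompt, reply = max(corpus, key=lambda pair: similarity(pair[0], user_line))
    return reply

print(respond("I'm feeling sad"))  # 'Buck up, chap.'
```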

Finding Meaning Using Co-Occurrence Matrices

Adam: So one thing I don’t get is how do you get meaning out of this? How do you group words by their meaning in this multidimensional space?

Allison Parrish: So the way that it’s generally done is through co-occurrence matrices. It’d be like if you go through an entire text, and make a really big spreadsheet that has one row for every word, and then one column for every possible context that a word can occur in. That might just be the word before and the word after. And then just calculate, for every word, how many times it occurs between, “The,” and, “Of.” How many times does it occur between, “Abacus,” and, “Alphabet.” How many times does it occur between, “That,” and, “Going.” Or whatever. So you have all the possible contexts, and then every word. And then you just count how many times the word occurs in that context.

Each row of numbers is then a vector, and words that occur in similar contexts end up with similar vectors, right? So if two words have similar contexts, that’s a cue, according to this theory of semantics, that those two words share a meaning, or have similar semantic characteristics. And that bears out if you think of words like days of the week, right? I’ll say something like, “This Tuesday, I am,” or, “Last Wednesday, we will.” So contexts like, “This,” and, “Last,” and, “Next,” are shared by days of the week, and maybe not by other things. You would never say, “This puppy, I’m going to the store.” Or, “This yesterday, I’m going to the store,” right?
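
A sketch of that co-occurrence counting, with the context defined as just the word before and the word after. A real pipeline would follow this with something like PCA or SVD to squeeze the counts down to a few hundred dimensions:

```python
from collections import defaultdict

text = "this tuesday I am busy and last wednesday we will see and this friday I am free".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, word, nxt in zip(text, text[1:], text[2:]):
    counts[word][(prev, nxt)] += 1  # how often `word` appears between `prev` and `nxt`

# Words that share contexts (here, the weekdays) end up with similar rows.
for word in ("tuesday", "wednesday", "friday"):
    print(word, dict(counts[word]))
```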

Adam: With those days of the week examples though, they don’t actually mean the same thing.

Allison Parrish: No, they don’t.

Adam: They in fact mean different things. But I guess they’re related.

Allison Parrish: But they are semantically related, right. And this is one of the trade-offs of word vectors once you calculate these things automatically: one of the drawbacks is that there’s no way for it to understand those really subtle differences between the meanings of words, except by making the window of the context much bigger. So the idea would be, if the context is just one word before and after, I’m never going to get that meaning. But if the context was a million words before and after, then you would by necessity almost come to capture the unique context of those words. I mean, a million isn’t practical, but there are other ways of getting around it. And all of the more recent research in this field has basically been about, how do you find tricks to make that window bigger and bigger without actually using up all of the RAM in the universe.

Adam: It makes me wonder if in this vector representation, where you have a whole bunch of numbers, there’s one … Like whether the 73rd one in is a time or a place, or something.

Allison Parrish: Yeah. There’s been some research with prebuilt vectors. Because the dimensions don’t actually mean anything on their own. But researchers have shown that some of the individual dimensions of the vector do actually correspond to particular semantic properties. Which is interesting to think about. But it’s just sort of an [inaudible 00:37:28] phenomenon of the way that the dimensionality reduction works. Whether it’s done with a neural network, or with a technique like principal components analysis, or something like that.

Adam: Oh, there’s no stability to what these dimensions are? Each process produces some different-

Allison Parrish: Yeah, exactly.

Adam: Yeah.

Allison Parrish: [inaudible 00:37:43] exactly.

Adam: Because there’s the big five personality test, it has five dimensions. And I believe that the way they were arrived at was using principal component analysis. And then they retrospectively gave them names like, “This one is extraversion.”

Allison Parrish: Yeah. That kind of thing I imagine is fairly common. Even something like t-SNE, right? You’re retroactively assigning … These dimensions don’t actually mean anything, but you’re assigning, “Well, this means the X axis. And this means the Y axis.” Right?

Adam: With what? Sorry, what was your example?

Allison Parrish: t-SNE. It’s the dimensionality reduction algorithm that’s commonly used for doing 2D visualizations of high-dimensional data. And it makes weird swirly patterns. It was popular four years ago, I guess. But often when you’re doing a visualization of GAN results or something, and you want to show them … Well, a GAN isn’t a good example, because that’s a cleaner latent space. But say you’re showing results of your auto-encoder for handwritten digits, or whatever. And the underlying model has 50 dimensions, but obviously our screens only show two dimensions. So you want to show it in two dimensions. t-SNE is a good way of taking 50-dimensional data and reducing it to just two dimensions, so you can easily show it as a visualization.

Adam: Very cool.

Allison Parrish: Yeah, and I didn’t invent it or anything like that. It’s just something that I know about.
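
A minimal t-SNE sketch with scikit-learn (an assumption about tooling): squeeze 50-dimensional points down to two so they can go on a scatter plot:

```python
import numpy as np
from sklearn.manifold import TSNE

points = np.random.rand(200, 50)                      # stand-in for 50-dimensional vectors
flat = TSNE(n_components=2, perplexity=30).fit_transform(points)
print(flat.shape)                                     # (200, 2): ready to plot
```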

Twitter Poetry Bots

Adam: So another thing you do is make Twitter bots. That is a form of your art?

Allison Parrish: I haven’t made Twitter bots in a while, actually. But for a while, it was pretty important. The benefit of a Twitter bot is that it’s a really easy way to publish any kind of writing really. But especially generative, like computer generated writing. It’s a really easy way to publish that. Because it has sort of the low barrier of attention, but it can still reach a wide audience.

Adam: What is Every Word Twitter bot?

Allison Parrish: So Every Word is a Twitter bot that I started when I was in grad school in 2007. I was inspired by a piece called Every Icon by John Simon, which is a piece that’s a 32 by 32 monochrome grid. And it’s gradually iterating through every possible permutation of that grid. So basically every possible combination of those pixels being on or off. And it’s doing one every 1,000th of a second, and it’s going to keep doing this, obviously, for millions of years. Because a 32 by 32 grid has 2 to the power of 32 times 32 possible states. So it’s a huge, huge number.

And so we were learning about that in class. And that was about the same time that Twitter was kind of kicking off. And people were saying like, “Well, Twitter is useless. People just talk about their sandwiches,” or whatever dismissive thing people were saying about it then. So I was like, “Well, I’m going to do this project. And it’s going to be like Every Icon, but it’s going to be every word. I’ll tweet every word. So if people think language on Twitter is useless, we’ll see whether that’s the case. I’ll just tweet every possible word, and thereby make every possible statement.”

So it lasted for seven years. It ended in June 2014. It went through every word in a word list that I happened to find somewhere. I don’t remember where the word list was from. At its peak, it had a little bit more than 100,000 followers, which doesn’t seem like a big deal now. Because yeah [crosstalk 00:40:57].

Adam: It seems like a big deal to me.

Allison Parrish: Well, but the standard for Twitter now is much higher. Like Barack Obama has 100 million followers, or something like that, right? But for an experimental writing project made by a grad student, it was a pretty big following. So yeah, and then I published a book version of Every Word with Instar Press a couple of years ago, that has every word the Twitter bot tweeted, along with the number of favorites and retweets that it got.

Adam: I feel like I need to unpack this. So you have, on this bot, a hundred times the number of followers that I have. And you tweet each word. And then you also published it. It sounds a little bit like a successful joke that’s taken off, I don’t know.

Allison Parrish: Yeah. It was a tongue in cheek project. It was like a lot of art projects that are kind of in the avant-garde. It has its roots in a little bit of satire, a little bit of just like, “What if we did this? Let’s see what happens.” So yeah, it was definitely a joke, and it was funny. And I had fun. And I think it showed a lot of different things. It was a successful experiment as well, in the sense that it was sort of this ultimate project in the decontextualization of words. What do words actually do when you forcefully take them out of language? How do people respond to them in that context?

And the fact that people had used Every Word like … People still favorite and retweet the words even today. What was interesting while it was running, and this was back before Twitter really leaned into the algorithmically moderated feeds, is that the tweets would just come up in the middle of your feed, right? Like your friend might be tweeting about their sandwich, or … It’s hard to imagine anybody using Twitter for anything except either spreading fascism or being unhappy about fascism right now.

But back in the day, if you cast your minds back to 2009, people actually used Twitter for actual purposes and not for self-promotion and trying to either destroy or save the world. So these words would come up between two people’s tweets, and the tweet for, “Happy,” might come up. And the next tweet after that might be a friend that got good news. Or a tweet might come up, and there would be like, “Lunch,” and you’d be like, “Oh, maybe I do want to get lunch.” So it’s sort of this heartbeat. It was tweeting every half hour. So it’s kind of this weird heartbeat that was injecting your Twitter feed with a little bit of not randomness, but serendipity. So I think it was successful from that perspective.

Adam: I would totally take a clock that, instead of the actual time, just showed words. I think that would be interesting.

Allison Parrish: And it was also about, how does reading on Twitter actually work. Can Twitter be used as an artistic medium? Every Word was one of the first … Maybe not one of the first Twitter bots, but is sometimes recognized as one of the first specifically artistic Twitter bots. So it kind of was participating in this idea of, can social media be a canvas for artworks?

Adam: Yeah. It’s amazing. I remember there used to be this thing … I mean, it probably still exists. So I’ll just try and Google it. It was Garkov, it was Garfield with Markov chain written text. Have you ever-

Allison Parrish: Yeah. Yes, I think … I know at least Josh Millard made a version of that. There might be others. But yeah.

Adam: Yeah. No, you’re right. It’s Josh Millard. Yeah. Okay.

Allison Parrish: Yeah. Josh is brilliant. Josh is another internet prankster/artist who makes really great work.

Building Text Bots With Students

Adam: So you teach students. And if somebody is fluent in computer programming and wants to make things using text, where would they start? Do you have a recommendation for what they should play around with, what they should try to create?

Allison Parrish: So there are a couple of really interesting, easy resources. All of my class material I put online. So if you go to decontextualize.com, which is my website, there’s a whole bunch of materials. Mainly in Python, that’s the language that I work with the most. For a really easy and really powerful tool for working with generative text, I would recommend Tracery, which is a tool that Kate Compton made. Kate Compton’s a recent PhD graduate from UC Santa Cruz’s Computational Media program. And also just a brilliant all-around person. And so she made this kind of simple programming language called Tracery that makes it really easy to write Mad Libs style text generators, but also text generators that have recursive syntactic structure.

And there’s a tool that goes along with that called Cheap Bots Done Quick, which makes it really easy to turn your Tracery grammar into a Twitter bot. And that workflow of like, “I learned Tracery, and then I learned Cheap Bots Done Quick. And then I made Twitter bots.” That to me is sort of the gateway drug path of getting involved in this kind of work.
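
A tiny Tracery grammar, run here with the pytracery port (pip install tracery). The grammar format is Kate Compton's; the exact Python calls are an assumption if your version differs:

```python
import tracery
from tracery.modifiers import base_english

rules = {
    "origin": ["The #adjective# #animal# #verb.s# at #time#."],
    "adjective": ["useless", "beautiful", "whispery", "round"],
    "animal": ["kiki", "bouba", "abacus"],
    "verb": ["sing", "diverge", "shine"],
    "time": ["midnight", "half past nine"],
}

grammar = tracery.Grammar(rules)
grammar.add_modifiers(base_english)   # enables modifiers like .s and .capitalize
print(grammar.flatten("#origin#"))    # e.g. "The whispery bouba shines at midnight."
```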

Adam: I think that gives me more context to some things you said earlier. Because that would be very easy for me to just take somebody I don’t like, and build some bot that constantly harasses them.

Allison Parrish: Yeah. I mean, what you have just described is contemporary global politics. And that’s part of the reason I don’t make Twitter bots anymore. First of all, because Twitter locked down the developer rules to make it more difficult to make Twitter bots. For good reasons, right? Because bots are a huge vector for harassment and gaming engagement algorithms, and stuff like that. But it just made it harder for me as an artist. And also because Twitter is not good anymore. Twitter is not a good influence on the world. And I didn’t want to be in the business of adding legitimacy and beauty to a platform that I don’t think is actually contributing to the world being a better place.

But it really is easy to get started making Twitter bots with Tracery and Cheap Bots Done Quick. And it’s still, I think, an important place for artists and for experimental artists to publish their work. And for artists to intervene and make things that question the way that social media is supposed to be used.

Computational Media and Generative Art

Adam: And you mentioned this phrase, “computational media.” If I want to learn more, is that what I should be Googling? Or what’s the term of art for this area?

Allison Parrish: I don’t know. I should know that. Computational media is more of a broad phrase that includes just anywhere that computation and media intersect. My own interest is in generative stuff. That’s the word that’s usually used for it, even though that term has different meanings in different contexts. Generative art is art that’s generated with computer algorithms. So that’s sort of the phrase that I would go for: generative text, generative poetry, things like that.

Adam: Generative art. That’s awesome. I heard the old story about Brian Eno, and he would get sound loops that all had different lengths. And he would get them all playing, so it was just constantly generating music. And a lot of times it would be bad. But sometimes it would get in a weird offset, and produce something cool.

Allison Parrish: Yeah, exactly. And the thing that I like to emphasize when I’m talking about this kind of work is that it’s not new, right? I mean, you’re talking about Brian Eno, who was making art that used these techniques before he had his hands on computers. But then even before that, you had Steve Reich. You had artists like John Cage, Alison Knowles who I mentioned earlier, Jackson Mac Low, Tristan Tzara. Going back even before the 20th century, you had artists working not necessarily with computers, but with techniques that we could label as computational. So it’s not like this is a new thing. The first computer generated poem didn’t happen yesterday when the newest model from Google came out. It happened a hundred years ago when Tristan Tzara was pulling words out of a hat. And maybe even earlier than that.

Adam: That’s awesome. Yeah, I mean, it’s obviously not new. But it’s a bit new to me. So I’m learning. And I think it’s-

Allison Parrish: Oh, that’s fine.

Adam: Yeah. I think I just want people to make more weird stuff, that’s my perspective.

Allison Parrish: Yeah. I agree.

Adam: So you mentioned your book. And I think we’re running out of time. I was wondering, is there anything you’d like to share with us from the book?

Articulations Reading

Allison Parrish: Okay. Yeah, so I will read a short selection from Articulations, which is the book that I wrote. It’s part of Nick Montfort’s Using Electricity series, which is a series of books that are computer generated. So it’s from Counterpath Press. You can buy it online at Small Press Distribution, or Amazon. And this is just a short section from it.

Allison Parrish: “A shape of the shapeless night, the spacious round of a creation shake the seashore, the station of the Grecian ships. In the ship them in, she stationed between the shade and the shine, between the sunlight and the shade. Between the sunset and the night. Between the sunset and the sea. Between the sunset and the rain. A taint in the sweet air when the setting sun, the setting sun. The setting day a snake said, it’s cane. It’s a kill. It is like a stain like a stream, like a dream. And like a dream sits, like a dream sits like a queen shine, like a queen when like a flash, like a shell fled like a shadow. Like a shadow still. Lies like a shadow still. I like a flash of light, shall I like a fool [inaudible 00:50:03] he, you shine like a lily, like a mute shall I languish. And still I like Alaska. Lies like a lily white is, like a lily white. Like a flail like a whale, like a wheel, like a clock. Like a pea like a flea, like a mill like a pill, like a pill like a pall. Hangs like a pall. Hands like a bull. Bounds like a [inaudible 00:50:26].”

“Falls like a locust swarm on bows who [inaudible 00:50:29] was like a cloak for me. This form is like a wedge. But I was saved like a king. This lifted like a cup, where leave a kiss [inaudible 00:50:36] cup, the cup she fills again, up she comes again. Until she comes back again. Until he comes back again. Until I come back again.”

Ending

Adam: That was great. Thank you very much.

Allison Parrish: Thank you.

Adam: Awesome. Thank you for punching language with computers.

Allison Parrish: Sure. Anytime.

Adam: All right. That was the talk with Allison. I hope you liked it. Back in 2017, I went to Strange Loop Conference. It was the first time I’d been there. The only time so far. And I was kind of doing a little bit of podcasting then, but not for CoRecursive. I did some episodes for SE Radio, and so podcasting was kind of on my mind. And the conference was super interesting, because there were a lot of people talking about scaling X, or type-level programming. But then also people just doing interesting things. Just building fun things with computer programming, and showing the code and walking through it.

And Allison was one of those people. She did a talk that included manipulating text using pronunciations in some of the readings that we did today. And at the time I was thinking like, “This is just something that’s very well suited to an audio format,” like a reading, manipulating the pronunciations of words. Coming up with this multidimensional representation of words, and then kind of moving things in certain directions in that multidimensional space, and then hearing the results. I thought, “This is something that would work great in an audio format.” So I hope everybody enjoyed it. I think it was a little bit of a different type of show. But I hope it encourages people to play around with creating art with computers, computer programming without a specific purpose. Yeah, let me know what you think. I thought it was a great episode.

Support CoRecursive

Hello,
I make CoRecursive because I love it when someone shares the details behind some project, some bug, or some incident with me.

No other podcast was telling stories quite like I wanted to hear.

Right now this is all done by just me and I love doing it, but it's also exhausting.

Recommending the show to others and contributing to this Patreon are the biggest things you can do to help out.

Whatever you can do to help, I truly appreciate it!

Thanks! Adam Gordon Bell
