When I applied to get an M.F.A. in poetry, in the ancient times before the financial crisis of 2008, if you had an M.F.A. from a reasonably good school and a poetry book under your belt, you could expect a tenure-track job offer. By the time I graduated in 2009, with the financial crisis in full swing, my job prospects and life plan had been basically decimated, the social contract I had entered into when I matriculated all but burned, its ashes fed to the gaping maw of the gig economy to follow. There were maybe a grand total of eight tenure-track poetry jobs in the nation, if that. I managed to get a few adjunct jobs, where I was paid the handsome sum of several thousand dollars to teach a writing class, without benefits. I did the math, and after accounting for gas, coffee, printing fees, and medical fees to deal with all the anxiety, I basically paid the school to let me teach. I had tens of thousands of dollars in student loans and an advanced degree in poetry, a field in which even the best poets could hope to earn high four figures a year in book sales.
I found other ways to make money: as a college admissions tutor, as a blog writer for law firms, as a travel writer for travel companies. I’ve managed to find a niche for myself in capitalism’s sea of troubles. The plight of the worker in capitalism is the plight of living in a constant state of precarity. Now, I read in the New York Times that another force aims to take my job—an A.I. Last week the New York Times made the announcement: “Meet GPT-3. It Has Learned to Code (and Blog and Argue).” The tech universe is reasonably worried. If you Google “A.I. that can code,” you’ll find a smattering of articles written by coders and their sympathizers musing on a future where their jobs will go the way of those who worked Henry Ford’s assembly line. Since the beginning of the industrial revolution, workers have had to constantly face and survive revolutions that threatened their obsolescence. I’m not surprised that coding is facing a similar revolution, though I imagine it will be quite some time before GPT-3 is able to fire all the brains behind Google, Facebook, and the other nameless tech companies that keep Silicon Valley buzzing and Elon Musk sending rockets into space. And yet, the fact that this A.I. is partially funded by those same Silicon Valley giants should leave lower-level developers and coders on high alert. An A.I. that can repair or produce basic code could supplant entry-level and lower-level jobs. GPT-3 seems able to create simple applications reasonably well. It could even create its own version of Instagram.
But GPT-3 can also write. As a writer, should I be scared? Should bloggers and the small army of content creators out there think about going back to school for new skills?
It depends.
GPT-3 seems to have learned “natural language” by basically reading the entirety of the Internet and as many digital books as possible (I assume all the digital books available in the public domain). The New York Times notes that “GPT-3 is what artificial intelligence researchers call a neural network.” Unlike the network of neurons in your brain, GPT-3 creates networks of meaning out of the vast array of information and text available on the Internet. It finds connections between things and exploits them, much like the brain, but GPT-3’s connections are digital. And yet, a system whose power is based on the Internet will also suffer from the same limitations as the Internet. Biological systems are fed information from the real world. GPT-3 forms its meaning from the vast oceans of text and images available on the web.
And this is why I’m perhaps not too worried about GPT-3 taking my writing job anytime soon. GPT-3 is a universal language model, and its limitations are also the limitations of the Internet, where the quality of writing and the quality of research often leave much to be desired. GPT-3 can’t distinguish between Shakespeare and shitty sponsored content from Bud Light.
GPT-3 can imitate natural language and even certain simple stylistics, but it cannot reason and it cannot perform in-depth research. It cannot perform the deep-level analytics required to make great art or great writing.
GPT-3 seems to have learned how to write from Wikipedia and internet blogs. When it comes to writing clickbait, the New York Times suggests that GPT-3 may have the skills to supplant this type of writer. Several blog posts “generated by GPT-3… were read by 26,000 people, and considered good enough to get 60 people to subscribe to GPT-3’s blog.” The blog was about how one could increase one’s productivity. GPT-3 may be excellent at spewing out platitudes, but this may be as far as it will go.
GPT-3’s greatest flaw may be the Internet’s greatest flaw. Because the program has “learned” to write from the Internet, when the A.I. writes its posts, its pieces are unsurprisingly often biased and racist. And that’s the problem. When left to its own devices, GPT-3 is much like a toddler, making manifest the soul of the Internet.
Leaders who study the use of artificial intelligence in business note that any company that chose to use GPT-3 to run its blog would face “reputational” and “legal” risks.
When a writer writes, she uses conscious memory, but also her unconscious learning. This play between the conscious and unconscious is also difficult to emulate. Could we ever create an A.I. with an unconscious? I don’t know. More disturbing is this question: What would that unconscious look like?
GPT-3’s great skill, if we want to use the word “skill” when referring to an A.I., is its ability to identify patterns in given text and imagery. If great writing and great art were merely the identification of patterns, I’d be scared for my writing career. But I think that there’s more to it than this. GPT-3 may be able to imitate a style or writer, much in the way a beginning writer can imitate a style, but great writers do more than just imitate.
But what is it, exactly, that great writers do? And what is it, exactly, that GPT-3 cannot do?
I venture to guess that storytelling plays a major role in what separates humans from silicon. Oliver Sacks, in his beautiful essay “The Creative Self,” writes about the difference between mere imitation and creativity: “Voracious assimilation, imitating various models, while not creative in itself, is often the harbinger of future creativity.” GPT-3 may perhaps be on the verge of true creativity, but it certainly isn’t there yet, and it isn’t clear whether it ever will be.
For me, GPT-3’s ultimate “Turing Test” of creativity will be its ability to create a meaningful narrative, to tell a structured story, without prompting or help. (Turing was a mathematician who speculated that we would call a computer “intelligent” only if the computer could imitate a human, or even be mistaken for a human; some systems have arguably passed the Turing test since Turing came up with this formulation, but we still, as a whole, don’t consider computers intelligent or as having properties that truly can be called “thinking,” though many eager A.I. developers are quick to describe the “emergent” qualities of their systems.)
There is something more to creativity than merely making connections, or making random connections. There is more to it than even “guided” connections. Sacks writes about “the energy, the ravenous passion, the enthusiasm, the love with which the young mind turns to whatever will nourish it…” This is where a computer fails. GPT-3 can make connections and can be fed the entirety of the Internet, but this “feed” is encyclopedic and complete. What makes the human mind fascinating is its incompleteness, the fact that in its incompleteness and finitude, it must make a decision about how to allocate resources, and this decision-making is the seat of desire, is the source of desire, and yes, I’ll argue, the seat of love. GPT-3 will never be temporally limited. Its abundance makes it poor.
GPT-3 may be thorough, but it is not obsessive. It can never be. That is its ultimate limitation.
Sacks distinguishes between technical mastery and true creativity and innovation. He distinguishes between craftsmanship and art. GPT-3 may be a craftsman, but it is no artist, it is no writer.
In fact, Sacks makes a point to differentiate mimicry from creativity. Mimicry is an act performed by humans, but also by animals. Mimesis is something else. It requires the assimilation of meaning, something which A.I. cannot do. We have yet to create an A.I. that truly understands what it is reading, that can feel what it has read, that can emotively create from its experience. When we can create an A.I. that can do this, I will have officially lost my job.
Still, I have to admit that GPT-3 is fairly good. When the system was asked to create a poem about Elon Musk in the style of Dr. Seuss, it did surprisingly well. You can read the poem here: https://arr.am/2020/07/14/elon-musk-by-dr-seuss-gpt-3/. That said, Arram Sabeti admits that the system doesn’t rhyme well and that he had to “delete and retry lines.” “The whole process took several hours of trial and error,” Sabeti notes. It appears that the poem that was produced was less the work of an A.I. than the work of Sabeti with A.I. assistance. I don’t know how much of the poem was GPT-3’s skill and how much was Sabeti’s cleverness.
GPT-3 is haunting, in the way a young artist, still stuck in the throes of imitation, is haunting. Sacks writes: “All young artists seek models in their apprentice years, models whose style, technical mastery, and innovations can teach them.” If Sacks is correct that “imitation and mastery of form or skills must come before major creativity,” perhaps some writers should be afraid, very afraid; and perhaps I should be, too.
Even if GPT-3 were ever to become a great writer, great writers would have nothing to fear. Greatness is by its very definition unique. GPT-3’s greatness will be unique to it, leaving enough room for the future Virginia Woolfs and Shakespeares.
If any writer should be scared today, it is the writer that deals in platitudes. GPT-3 may soon be able to help marketers write promotional e-mails and tweets. But the system still appears to need a lot of babysitting and will likely need it for some time.
Will GPT-3 ever take away my job? I doubt it. Even though the system has been used to create blogs, I doubt the system will ever be able to replicate critical thinking, which is something our culture has in very short supply. And while the system has been used to create a tutoring application called LearnFromAnyone, where the system imitates famous people, answering users’ questions, I don’t see a future in which tutors are replaced. Students often don’t know which questions they need to ask. The system seems best at answering questions when the “learner” already has a great deal of knowledge about the subject and the person teaching. The magic of a good teacher is that he or she can adapt to a student’s gaps and fill them. A.I. can’t do that.
How ironic that a system with so few gaps should have such difficulty finding our own.
About the Writer
Janice Greenwood is a writer, surfer, and poet. She holds an M.F.A. in poetry and creative writing from Columbia University.