Want a poem in the style of Frank O’Hara or a story in the style of Flannery O’Connor? Your wish is ChatGPT’s command: Just send the request and the words unfurl across your screen. In about thirty seconds, ChatGPT can unleash a 494-word “Flannery O’Connor–style” story about a quiet, plain-looking girl in a small Southern town hurling a Bible at a preacher’s head. A five-stanza “Frank O’Hara–style” poem, rhapsodizing the bustle of music and strangers on city sidewalks, takes ChatGPT less than fifteen seconds to write.
A chatbot created by the San Francisco–based research lab OpenAI, ChatGPT was released for use by the general public in November. Although artificial intelligence (AI) has been in development for decades, ChatGPT has struck a cultural nerve—particularly among those who make their living with language. ChatGPT’s learning model was trained on a mind-boggling 570 gigabytes of text, mostly English-language material from various sources and time periods. To generate a response, ChatGPT uses a probability-based language-prediction model: it produces each word in a sentence by estimating which word is most likely to follow the ones that came before, yielding coherent content in a conversational tone. Its poems and stories demonstrate an understanding of basic narrative structure and different genres while also incorporating customizable details about setting, style, and character.
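For readers curious what "probability-based language prediction" means in practice, the idea can be sketched in a few lines of Python. This is a toy illustration only, with a hand-made probability table standing in for the neural network; it is not OpenAI's actual method, and the words and probabilities here are invented for the example.

```python
import random

# Toy table of next-word probabilities. In a real model like ChatGPT's,
# these probabilities come from a neural network conditioned on the
# entire preceding context, not from a small fixed lookup table.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "rain": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "rain": {"fell": 1.0},
}

def generate(start, max_words=5, seed=0):
    """Build a sentence one word at a time, sampling each next word
    from the probability distribution over candidate continuations."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no known continuation: stop generating
            break
        candidates, weights = zip(*dist.items())
        words.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(words)
```

Each call picks the next word by weighted chance rather than by fixed rule, which is why the same prompt can produce different outputs, and why a model trained on biased text will reproduce the statistical patterns of that text.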
These capabilities have raised fears that AI could render the human writer obsolete. ChatGPT has, indeed, already replaced some human authors for a certain set of readers. Through Amazon’s Kindle Direct Publishing program, users have been selling AI-generated picture books and quick-premise, high-concept works of “literature.” Reuters reported that there were more than “200 e-books in Amazon’s Kindle store as of mid-February listing ChatGPT as an author or co-author.” And there may be more than that, since Amazon does not require writers to “disclose in the Kindle store that their great American novel was written wholesale by a computer,” according to Reuters.
For teachers of writing, many of whom are writers themselves, ChatGPT also poses concerns. Commentators have written diatribes about ChatGPT’s threat to education in publications such as the Atlantic and the Chronicle of Higher Education. Some teachers have discovered students turning in AI-written assignments, complete with factual errors, and presenting the work as their own original thinking. It stands to reason, however, that creative writing students derive pleasure from the writing process itself, so they may have less incentive to turn in an AI-generated story or poem to a workshop.
Even if the existential threat ChatGPT poses to authorship is overblown, the chatbot has already wreaked havoc in parallel realms. In February the popular sci-fi magazine Clarkesworld announced that it would have to temporarily close to submissions because of an influx of AI-generated stories. Neil Clarke, Clarkesworld’s publisher and editor, tweeted about the decision, saying he thought the phenomenon was financially driven: With a pay rate of 12 cents per word and stories of as many as 22,000 words, Clarkesworld authors can earn up to $2,640 per story.
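The payout figure follows directly from the quoted rate. A quick check of the arithmetic (working in cents to avoid floating-point rounding; the function name here is just for illustration):

```python
def story_payment_dollars(word_count, cents_per_word=12):
    """Dollars earned for a story of the given length at a per-word rate.

    Computed in integer cents, then converted, so the result is exact.
    """
    return word_count * cents_per_word / 100

# A maximum-length 22,000-word story at 12 cents per word pays $2,640.
```

So a single accepted maximum-length story pays more than many literary magazines offer for a year of work, which helps explain the financial incentive Clarke identified.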
It’s not clear how lower-paying or nonpaying magazines will be affected, but editors of those publications are talking about it on the online member forum of the Community of Literary Magazines and Presses (CLMP). “We’ve decided to proactively ban the use of AI in works we publish unless specifically invited,” wrote Joshua Wilson, editor and publisher of the Fabulist. “Along with this stated policy, we’ve also updated our contract to include a no-AI section.” Miracle Jones, an editor at Epiphany and a contributing editor at the Evergreen Review, suggested that nominal submission fees could deter people from running blanket-submission schemes with AI-authored work. And new apps designed to detect AI-generated writing, such as GPTZero, could help overwhelmed editorial staff by filtering out such submissions.
Not all editors are raising red flags, though. Liza St. James, a senior editor at Noon, said the uproar over ChatGPT has prompted her to think about Oulipian constraint-based practices. “I guess my first thought is that if a bot-generated text merits publishing, we would likely credit the human who called it forth from the machine—who made it exist in the first place,” she wrote in an e-mail. “Most published works are already the result of some amount of collaboration. Why not find creative ways to embrace this, or even to showcase it?”
“I want to be open-minded about what imaginative people can do with new tools,” Jacob Smullyan, founding editor of Sagging Meniscus Press, wrote on the CLMP forum. “One reaction I hope to see is a critique of human practice akin to how artists reacted to photography. How much of what we already do is revealed by these new tools to have less inherent dignity, to be already mechanical and superficial?”
Writers dazzled by ChatGPT, however, may not be aware of the potentially harmful biases embedded within the texts it generates. When I asked the chatbot directly about the kind of texts it draws from, it told me that the majority was “likely to be from the public domain.” For books, that typically refers to volumes published in the first two decades of the twentieth century and earlier. “It is possible that a large portion of the literature in the public domain was written by white male authors, especially in earlier time periods when women and people of color had fewer opportunities to publish their work,” ChatGPT conceded. This is troubling because it means that ChatGPT’s language algorithm contains all the biases, including racist and sexist tropes, entrenched within older literature. (I requested a story “written in the style of Raymond Carver, if he were Chinese” and received as a first sentence: “He looked at the bowl of rice in front of him, feeling empty and lost.”)
Margaret Rhee, a feminist scholar and new media artist and poet, conjectured that writers who are new to experimenting with AI might not have thought about the ethical implications of working with biased technology, which should be reckoned with for a project to be effective. Often the best AI-assisted projects, Rhee said, will have a turn of some kind, using technology to serve a subversive agenda. One example is artist Rashaad Newsome’s recent “Digital Griot” projects, in which Newsome trained a chatbot with texts by writers such as bell hooks and James Baldwin, allowing users to move through indexes, archives, and history in a counter-hegemonic way.
In the New York Times and MIT Technology Review, various people associated with OpenAI and Microsoft’s Bing chatbot have said that the only way to continue improving AI technology is to release versions that are bound to make mistakes, which they will then address. AI is a new frontier, after all, with the many challenges and unimagined outcomes—both troubling and exciting—that a new frontier entails.
Bonnie Chau is the author of the short story collection All Roads Lead to Blood (Santa Fe Writers Project, 2018). She currently serves on the board of the American Literary Translators Association, teaches fiction writing and translation at Columbia University and Fordham University, and edits for 4Columns, Public Books, and the Evergreen Review.