I have been teaching creative writing for forty years, from first-year composition to the most advanced fiction and nonfiction MFA seminars. In the past few months I’ve been asked many times if I’m worried my students will resort to ChatGPT, LLaMA, or other AI programs to write their essays or stories, or if our best creative writers will be replaced by their artificially intelligent counterparts.
At first I scoffed at the possibility. Not that I’m suspicious of newfangled technology. I might be old, but my undergraduate degree is in physics. In the mid-1970s I taught myself the programming language Fortran and developed primitive simulations of particle interactions and radioactive emissions from power-plant smokestacks. I might have been the first student in the Iowa Writers’ Workshop to write a thesis on the university’s mainframe computer. (When I turned in a story on fanfold paper, my adviser sneered that my prose must be “very cold.” A few months later she begged me to help her set up her new word processor.)
But I recently spent a few weeks fooling around with ChatGPT (the free version rather than the souped-up model available for a fee). Immodestly, I fed in prompts that challenged it to churn out paragraphs that matched the humor, pathos, and originality of an essay of my own, as well as a passage by one of my former students, the prodigiously gifted Jesmyn Ward, who went on to win the National Book Award not once but twice and whose latest novel, Let Us Descend, was published by Scribner in October. Although ChatGPT performed far better imitating me than my former student, I was stunned by the quality of its output. When I expressed my amazement to a friend, he said, “And ChatGPT is operating only at the level of a six-month-old child. Imagine what it will be doing a few years from now.” Let alone in our grandchildren’s lifetimes.
And yet I stand convinced that no artificially intelligent “author” will ever produce a work of literary genius. I don’t want to get bogged down in definitions of genius or creativity. But I hope to use my own experience as a writer and teacher of creative writing to illustrate the crucial limitations of any program like ChatGPT.
Put simply: A computer will never suffer the shitty childhood that endows many humans with the treasure trove of unique material upon which they can draw for the remainder of their writing lives. Even if they don’t focus on their own triumphs and travails, writers rely on their idiosyncratic observations of the world around them to provide the eccentricity of detail that allows their prose to seem so vibrant, persuasive, and three-dimensional. Of course, writers also derive inspiration from listening to other people describe their experiences, reading other people’s books, watching movies and TV shows, visiting art galleries, and surfing YouTube, which an AI program might do to accumulate its raw material. Such a program might even be able to use its “imagination” to combine the flotsam and jetsam in its memory in unique ways. But humans can mine their embodied experiences in an offline world to produce fresher, more varied, and more accurate descriptions than a computer.
Beginning writers often assume readers will connect with their work only if they create “relatable” characters and situations, a tactic that results in stereotypes and clichés. Readers connect to narratives as much through comparison as recognition. Presented with a plump, rosy-cheeked grandmother who never utters an unkind word and spends her days baking cookies, a reader will be more inclined to think of Mrs. Claus than their own foul-mouthed, chain-smoking Meemaw. If I tell you that my nana in Florida forced me to eat boiled hamburgers and cold canned string beans, that she stomped giant palmetto bugs with her bare feet, that she was stone deaf and could be communicated with only by written notes, and that once, after I handed her a scrap of paper on which I thanked her for allowing me to visit, she snorted, “You can wipe your tuchus with your thanks!” then crumpled the note in her fist and tossed it to the floor, wouldn’t you immediately contrast her with your own kinder or more obstreperous grandmother? And wouldn’t you be curious to find out whether my nana meant my gratitude was so inadequate I could use it to wipe my butt or whether she was saying I needn’t thank her because having a grandchild visit was her greatest joy?
Of course, a wealth of eccentric detail isn’t enough to produce great literature. The writer must also infuse every description, scene, and interior monologue with the appropriate complexity of emotion. To test how well ChatGPT might carry out both objectives, I prompted it to compose a paragraph in which the narrator describes her parents’ obsession with maintaining neatness and order in the 1950s ranch house in which she grew up (through no coincidence whatsoever, this mimics an essay of my own called “Ranch House,” which was originally published in the New England Review in 2011). As I would do with any novice writer, I advised ChatGPT to include examples of the parents’ obsessive behavior.
To my amazement the program reeled off a paragraph that would warrant high praise if one of my undergraduates turned it in:
In my parents’ 1950s ranch house, neatness and order reigned supreme, shaping our daily lives with an unwavering grip. The perfectly aligned cushions on the pristine sofa served as a testament to their obsession with tidiness. Not a single crumb dared to linger on the spotless kitchen countertops, a domain ruled by their meticulousness. The floors gleamed under the attentive strokes of my mother’s mop, and not even a strand of hair could escape my father’s relentless vacuum cleaner. This quest for flawless order extended beyond the walls of our home; the meticulously organized toolshed and precisely trimmed lawn attested to their dedication. As a child, I marveled at their ability to maintain this immaculate universe, yet I couldn’t help but yearn for a hint of chaos or the joy of unruly spontaneity that seemed to elude the confines of our picture-perfect dwelling.
The grammar is perfect, the syntax varied; the language, if not sparkling with originality (“picture-perfect dwelling”), isn’t riddled with clichés. The mother’s impatience with crumbs might be predictable, but the hair the father relentlessly pursues would earn “nice detail!” in the margins, especially because he is the one to vacuum rather than the mother. ChatGPT even reveals its yearning for “a hint of chaos or the joy of unruly spontaneity,” demonstrating an ability to generate the appropriate fake emotion.
Still more startling, when I requested that ChatGPT rewrite the paragraph in an angrier tone, the bot erupted in a convincing burst of fury:
In our stifling 1950s ranch house, my parents’ obsession with neatness and order reached infuriating heights. … God forbid anyone left a book or a magazine askew on the meticulously polished coffee table, for it would be met with a torrent of reprimands. Even the air seemed to tremble in anxiety, fearful of disrupting the perfectly aligned family photos that adorned the walls. And the kitchen, oh, the kitchen! It was their hallowed ground, where a misplaced fork or a slightly crooked cereal box triggered a hysterical lecture on the virtues of cleanliness.
The details here are less predictable (“the air seemed to tremble in anxiety”), and the bot seems to have acquired a distinct voice and personality (“God forbid…,” “And the kitchen, oh, the kitchen!”).
Less successful was my request that ChatGPT take a humorous approach to the parents’ foibles. The tone grew lighter, but the bot’s method of creating humor lay in piling up a jumble of conflicting metaphors:
… My parents wielded their dishwashing prowess like culinary maestros, turning every post-dinner cleanup into an immaculate symphony. The cutlery lined up in perfect soldierly rows, and the Tupperware lids were matched with the precision of a puzzle mastermind. No wonder I had to eat my sandwiches outside; the crumbs in the kitchen could trigger a state of emergency! In that orderly kingdom, I was an unruly jester, a tornado in a china shop, forever challenging their zeal for tidiness—a battle they fought valiantly, armed with feather dusters and disinfectant wipes.
The language derives not from the specific time and place in which the author grew up, but from a communal grab bag of metaphors related to sports and war and fairy tales. Like a person learning a foreign language, ChatGPT mistakes an ability to use idioms and clichés for eloquence. (Strangely, if I asked the bot to avoid clichés, it tossed in even more.)
Allow me to compare ChatGPT’s version of growing up with obsessive parents with a passage of my own—a testament less to my literary prowess than to my screwy childhood:
[My parents insisted] that at all times there be a roll of toilet paper on the spindle, with the tissue coming from the bottom, and on the tank a spare roll, its true appearance disguised beneath the dress of a plastic doll, said dress having been crocheted in hot-pink yarn by my grandmother down in Florida. Whoever used the last sheet of toilet paper on the roll on the spindle was obliged to replace that roll with the spare from the top of the toilet, then retrieve a fresh roll from the closet, jam the doll’s legs down the center, pull down her skirt, and place that roll on the tank. Once, my father scolded me so excessively for failing to perform that obligatory final step that I took five rolls from the closet, piled them on the tank, and stuffed the doll’s legs in the uppermost roll, but purposely neglected to pull down her skirt. Instead, I yanked down the bodice of her dress, dabbed nail-polish nipples on her breasts, and smeared a sensuous leer on her dollish lips. To which my father—no tyrant he—responded by holding up his hands and telling me that I had won.
Was I angry at my father for imposing these ridiculous rules? Was I annoyed at his insistence that we hide the basic human need to wipe our butts beneath the skirts of a prim Victorian doll? Did I sense he wanted me to repress my own budding female sexuality? Did his upbringing at a Borscht Belt hotel and his propensity for telling filthy jokes render his insistence on my propriety hypocritical? Looking back as I wrote my essay, did I allow my love for my father—and my terrible grief at losing him—to seep through my anger?
If that paragraph has any power, it derives from the complexity of my feelings toward my father, and I fail to see that any AI program might acquire the ability to simulate such a delicate web of emotions, if only because ChatGPT didn’t grow up with a maddeningly complex and hypocritical parent of its own. Nor could ChatGPT simulate an adult’s sense of discovering what she felt as a child and how those feelings might have evolved in the intervening years, if only because ChatGPT would need to simulate everything that might have altered its perspective in the meantime.
Most vital of all: Would a bot be able to simulate a convincing reason for writing such an essay in the first place? Despite the profusion of virulent arguments on social media, writers don’t usually write to persuade their readers to believe what the writers already know. The most powerful, engaging essays, stories, and novels spring from the writer’s perplexity over some aspect of their own experience, the natural world, or human behavior. I admit I was shaken when, at the end of its response to one of my prompts, ChatGPT seemed to sense a question would make its musings seem more human:
As I stared at my reflection in the mirror, I couldn’t help but wonder if there was a way to embrace [my parents’] devotion to tidiness without losing the spirit of spontaneity that made life an adventure. In that moment, I resolved to find a harmonious balance, where the charm of our 1950s ranch house could coexist with the delightful messiness of life’s cherished moments.
And yet the question here is rhetorical. ChatGPT already knows a harmonious balance between order and spontaneity is the best way to raise a child. Every response the app provided made the point that parents shouldn’t be so obsessed with tidiness that they make their children miserable. Such an essay might be funny or relatable, but the writer—and therefore the reader—learns nothing they didn’t already know. Why are some parents panicked by an errant crumb while others raise their children amid food-crusted dishes, mildewed towels, and carpets stained with a dozen pets’ urine and feces? Do children raised in obsessively neat households grow up neurotically neat themselves—or outrageously careless and disorderly? You might instruct ChatGPT to interrogate its own hypotheses. But the program could do that only by calling on the information already in its data banks and any research it might access on the internet. Certainly, ChatGPT would have no way of figuring out that my own parents were desperate to fend off the danger and disease they’d learned to fear as the offspring of Jewish immigrants who’d fled the pogroms of eastern Europe, growing up amid the poverty of the Great Depression and the perils and deprivations of World War II, and then raising us kids during the polio epidemic of the 1950s.
ChatGPT is touted for its ability to brainstorm ideas, but when I asked it to get me started on a memoir, it spit out the laughably vague suggestion that I try writing “a gripping memoir of a person’s life journey from a troubled past filled with hardships and poor choices to a path of redemption, self-discovery, and transformation,” followed by seven paragraphs of equally abstract plot summary, ending with this ludicrous advice: “Remember, a memoir is a deeply personal story, and this idea can be adapted and enriched with your unique experiences and perspective. Happy writing!” You might as well have advised Leonardo to paint “an artistic representation of a person,” then supplemented that Wikipedia definition with the caveat that he add the deeply personal interpretation that might result in the Mona Lisa.
Similarly, ChatGPT’s suggestions for a literary short story would make any writing teacher groan: “A reclusive elderly woman lives alone in a small apartment, seemingly forgotten by the world. When a curious young girl moves into the adjacent unit, she becomes determined to break through the woman’s barriers and uncover the extraordinary life hidden beneath her seemingly mundane existence.” The app performed better when I asked it to provide suggestions for a novel, but its preference clearly lay in mystery, romance, sci-fi, and fantasy—genres that lend themselves to standardized plots and archetypal characters.
Undoubtedly, ChatGPT could be taught to subvert the expectations associated with a given genre. But it must remain limited to the plots and characters earlier writers have created, or combinations of those possibilities. A bot will never be inspired by its own embodied experience in the unpredictable physical world to experiment with new literary forms or content or to seek answers to previously unexplored questions.
In George Orwell’s famous essay “Shooting an Elephant,” the author looks back on his stint as an officer for the British raj in Burma and tries to figure out why he shot an elephant that belonged to a native laborer, even though he knew the elephant didn’t need to be shot. No computer would ever be moved to consider such a question; nor could it rely on its own experience to reason its way to Orwell’s remarkable conclusion:
To come all that way, rifle in hand, with two thousand people marching at my heels, and then to trail feebly away, having done nothing—no, that was impossible. The crowd would laugh at me. And my whole life, every white man’s life in the East, was one long struggle not to be laughed at.
This is why I so rarely worry about my undergraduates plagiarizing their assignments. Rather than provide a topic, I ask them to come up with a question they genuinely want to answer, whether about their own experience, the texts we’re reading, or a historical, scientific, or cultural event or issue. They need to explain why they settled on this question and why its answer might be significant to themselves and others; then they must describe the sources they’ve consulted, the experts they’ve interviewed, the places they’ve visited, the experiments and surveys they’ve carried out, and the documents or artifacts they’ve turned up and studied, ending with the conclusions they’ve drawn and why we should believe them. Perhaps ChatGPT could be trained to search its memory and formulate a question based on a gap in its understanding. The bot might be better than a human at finding and digesting all possible online resources relevant to its research. But even if the AI program were embedded in a robot, it would remain limited in the real-world sources it might consult—at the very least, the bot would be constrained by its relative lack of mobility—and I doubt it could fake its excitement as it conveyed to its readers the evolution of its beliefs and knowledge.
Besides, if a student chooses to rely on ChatGPT, who is cheating whom? A classmate who comes up with her own question, does her own thinking and research, and agonizes over the most lucid, engaging way of presenting her findings will gain insights and skills that can’t be provided by ChatGPT, not to mention she might discover the mysteries and joys of writing. The first time I taught undergraduates, I asked them to close their eyes and imagine what their families might be doing at that moment and then, without losing that daydream, to capture on the page what they saw and heard. A few minutes into the exercise, a burly farm boy started crying. “This feels like magic,” he choked out. “I feel as if I am making my mother put those dishes on the table and say what she is saying to my sister because I’m writing it.”
I’m confident the students in my graduate workshops would never think of using ChatGPT, except in some experimental, metafictional way. The whole reason for applying to an MFA program is to express your own identity and experience the rush of creativity. Years ago I read the first line of an applicant’s manuscript—“My father was a coal miner and a taxidermist”—and immediately wanted to accept him. A few lines later the female narrator described washing the coal dust from her father’s hair. Could a computer have come up with that odd combination of coal mining and taxidermy? I suppose it could have. Could a bot have guessed a reader would be drawn in by a daughter washing coal from her father’s hair? I somehow doubt it.
Even more striking was Jesmyn Ward’s application to our MFA program. She was young, and the manuscripts in her portfolio didn’t necessarily work as stories. But I will never forget her description of two Black boys playing with a litter of newborn pit bulls that would be trained to compete in dogfights. The sentence describing the fragility of one boy’s wrist was all I needed to convince me that here was a writer who was alive to the beauty and sadness of human existence and had a poet’s eye for detail.
Those boys and pit bulls reappeared in Ward’s novel Salvage the Bones (Bloomsbury, 2011), a riveting account of a pregnant teenager and her family living through Hurricane Katrina. Here is the narrator describing her brothers Skeetah and Randall as they help Skeetah’s beloved pit bull, China, give birth to her first litter:
China buries her face between her paws with her tail end in the air before the last push for the first puppy. She looks like she wants to flip over into a headstand, and I want to laugh, but I don’t. Blood oozes from her, and Skeetah crouches even closer to help her. China yanks her head up, and her eyes snap open along with her teeth.
“Careful!” Randall says. Skeetah has startled her. He lays his hands on her and she rises. I went to my daddy’s Methodist church one time with my mama, even though she raised us Catholic, and this is what China moves like; like she has caught the ghost, like the holiest voice moves through her instead of Skeetah’s. I wonder if her body feels like it is in the grip of one giant hand that wrings her empty.
Could that scene have been written by a computer that scanned its memory for a video of a pit bull giving birth? Would the computer have thought to describe the dog as trying to turn a headstand? Would it have compared a boy laying hands on his dog to a woman catching the holy ghost? A human might prompt ChatGPT to compose such metaphors, but if the human needs to micromanage the program to such a detailed extent, why not write the passage to begin with? Salvage the Bones could have been composed only by a human who possesses a bone-and-blood understanding of Black life in Mississippi, of love and loss and survival, of Greek tragedy and myth, of the way Mississippians of all kinds talk and think and act.
When I asked ChatGPT to write a passage in which a Black boy in Mississippi plays with the pit bulls his father is raising to fight, the program was astonishingly adept at paralleling the puppies’ likely trajectories from innocence to violence with the boy’s own destiny. But the paragraphs it churned out were gracelessly heavy-handed and dripping with a sentimentality Ward would never indulge in:
The world beyond the kennel seemed distant, lost in a haze of cotton fields and poverty. But within those four wooden walls, Elijah’s heart knew only the pure, untainted love he shared with the puppies. They would tumble and wrestle, tiny paws and hands intertwining in a dance of innocence, as if they were oblivious to the dark destinies awaiting them. In those fleeting moments, Elijah could forget the weight of expectations and the heavy burden that life had placed on his young shoulders.
No matter how hard I pushed ChatGPT toward subtlety, it couldn’t resist blurting out its “message,” then repeating that message again and again. Nor could I persuade it to accept a darker outlook. Those puppies (which, in one version, the boy insisted on naming Courage, Justice, and Hope) always survived, always triumphed. The boy always learned a lesson that allowed him to withstand the brutal realities of poverty and racism. (“As the pups grew, Elijah’s heart swelled with pride at their strength and compassion. They became symbols of resilience and love, gentle souls molded in a world that sought to crush them. The winds of change whispered through the cotton fields, and the once-faded dreams of a young boy found wings, soaring above the dark shadows of dogfighting.”)
Never did ChatGPT offer its readers a hint of the parvovirus—or the hurricane—that would doom the pups in Ward’s novel. Nor did it occur to ChatGPT that a Black teenager in Mississippi would fall in love with the Greek classics she had read in her high school English class. When I asked the app to include some dialogue, it mercifully avoided any particularly cringeworthy Black dialect.
“Daddy, look how fast they’re growing!” Jamal exclaimed, his eyes bright with wonder.
Marcus nodded, a hint of pride in his gaze as he watched his son interact with the pups. “Yeah, they’re gonna be fierce fighters one day.”
But how would an algorithm ever think to have the narrator tell her little brother to stop being “orner,” a word he mistakes for “horny” and Ward’s protagonist realizes was her mother’s mangled pronunciation of “ornery”?
It made me wonder if there were other words Mama mashed like that. They used to pop up in my head sometimes when I was doing the stupidest things: tetrified when I was sweeping the kitchen and Daddy came in dripping beer and kicking chairs. Belove when Manny was curling pleasure from me with his fingers in mid-swim in the pit. Freegid when I was laying in bed in November, curled to the wall like I was going to burrow into another cover or I was making room for a body to lay behind me to make me warm.
According to an eye-opening article in the September 25, 2023, edition of the Atlantic, ChatGPT schooled itself not only on Salvage the Bones, but on all five of Ward’s books (along with 183,000 other titles). And yet I defy any AI program to match the power of Ward’s work, especially Men We Reaped (Bloomsbury, 2013), her memoir of losing five Black men she loved, or Sing, Unburied, Sing (Scribner, 2017), soaked with the bitterness of its characters’ histories of poverty, racism, addiction, and incarceration.
Don’t get me wrong. AI is a marvelous innovation. I already have used ChatGPT to clarify the dense, technical arguments in a book about evolution I was reading. A professor friend raved to me about ChatGPT’s ability to provide a template for a recommendation for a less-than-stellar postdoctoral fellow. But even ChatGPT admits its creative powers are “a result of pattern recognition and probabilistic associations in the data it was trained on, rather than true conscious creativity” and “its abilities are still limited compared to the creativity and understanding exhibited by human writers.”
Lazy students, hack writers, and cost-cutting bosses might well employ AI to churn out mediocre essays, novels, newspaper articles, and television shows. But the world has never been short on mediocrity. Humankind has long traded excellence for convenience, making do with frozen TV dinners rather than meals prepared with the freshest ingredients by a loving parent or an innovative chef. As many writers fear—and lawsuits already are seeking to prevent—training AI bots on published texts, one of my own books among them, might lead to an increase in plagiarism of those same texts and a loss of income for their authors. AI-powered bots might one day take over an air-traffic-control tower and wreak havoc on our navigation system. They might put millions of workers out of jobs. But a program like ChatGPT will never produce truly original and moving art or literature, if only because it is unable to think and feel and move its body through our baffling and chaotic universe.
Developers could provide ChatGPT with an avatar and release it into the metaverse, where it might be subjected to abusive parents, racism, bullying, romance, virtual bad sex, virtual mind-blowing sex, financial success, or bankruptcy, after which an encouraging professor might offer it a place in their creative writing seminar. But the avatar would be able to write only about the experiences the developers had programmed into the metaverse, which must always be more limited than the possibilities the real world provides (as well as less deeply felt, if felt at all, since a simulation cannot be as authentic as what it is attempting to simulate). Unlike Dostoevsky, who was reprieved from execution at the last instant, a computer will never be able to write a passage as powerful as Prince Myshkin’s monologue about the physical and mental anguish of a man who knows he is certain to be beheaded. To believe that a computer—if only its creators endow it with enough fake neurons and set it loose to scan enough previously produced human verbiage, photographs, and videos—will ever achieve the sentience required to describe its own unique sensations seems as fantastic as the medieval theory that mice might spontaneously be generated from a pile of rags.
Besides, the whole point of great literature is connecting one human mind and heart with another. Not long ago I began dating a mathematician. We traded flirty texts, and the next thing I knew my beau had sent me a long, beautifully written story about Alexander Knottingham, whose mind was consumed by the abstract beauty of mathematics yet who yearned for “someone who could share in his love for numbers and the secrets they held.” After meeting and wooing a mysterious woman named Elena, the professor discovered an intimacy he had never known. With each caress, he and Elena “found new ways to weave their bodies together, drawing inspiration from knot theory’s elegant mathematics. They reveled in the art of knotting and unknotting, delighting in the symphony of sensations that unfolded. … It was as if the mathematical secrets they had unraveled had permeated their beings, guiding them toward an exquisite unity of mind, body, and soul.”
I was stunned by this scientist’s ability to compose such a witty, charming story. Then I realized he must have used ChatGPT as his Cyrano. As impressed as I was by his ingenuity, I felt cheated. I wanted to get to know the inner workings of this mathematician’s mind and heart. I wanted to knot my body with his, not with ChatGPT’s. And that’s why, even if someone develops an app that can spit out an essay or a novel whose uniqueness and specificity of details, complexity of emotion, and subtlety of thematic significance fool me into believing it has been created by a human writer, I will refuse to consider it a work of great literature because I have absolutely no desire to connect with the silicon mind and digital heart of an AI program.
Eileen Pollack is a former director and longtime faculty member of the Helen Zell MFA Program at the University of Michigan. Her latest books are the essay collection Maybe It’s Me: On Being the Wrong Kind of Woman (Delphinium, 2022) and the novel The Professor of Immortality (Delphinium, 2019).