When I first discovered earlier this year that Meta had trained its flagship large language model (LLM) Llama 3 on books from LibGen, one of the largest pirated libraries on the web, containing over 7.5 million books and 81 million research papers, I felt the same surge of betrayal as many authors. How dare this tech behemoth exploit the books into which we had poured our hearts, using them as training data without our consent, when such LLMs might one day pose an existential threat to our art.
Overnight my social media feeds exploded with fellow authors vehemently criticizing Meta, posting screenshots of their titles on the pirated-books platform. It was a dramatic outpouring of collective outrage, but despite our anger, and apparent sympathy from the public, conversations with friends in Silicon Valley soon convinced me that none of this was likely to slow the tremendous rate of AI adoption. Traditional media may claim to support authors and even file lawsuits against AI companies. Last April eight newspaper publishers in California, Colorado, Florida, Illinois, Minnesota, and New York sued Microsoft and OpenAI for copyright infringement, saying millions of their articles had been used without payment or permission to develop the artificial intelligence models behind ChatGPT and other products. Yet those same media companies will all eventually adopt AI.
My novel, These Memories Do Not Belong to Us, set in a dystopian world where memories are commodified and under surveillance, grapples with themes of resistance in the face of overwhelming odds. I see today a more urgent struggle emerging: our fight as artists to protect storytelling from the advancing encroachment of AI.
Not even our publishers are immune to the temptation of integrating AI into their businesses, as I recently witnessed firsthand.
I was fortunate that my publisher greenlit the audiobook for These Memories Do Not Belong to Us to feature fourteen voice actors who together would reflect the constellation of characters and narratives in the novel. I soon discovered that before voice actors begin recording, they often enter the words they aren’t sure how to pronounce into an online pronunciation tool. After my editor sent me a link to confirm the final pronunciations (both phonetic and via voice recordings), I noticed while correcting them that many entries already carried prerecorded suggestions. At first I wasn’t sure where those suggestions came from; I thought that perhaps certain actors had recorded their initial takes. But when I played the prerecorded pronunciation for the Chinese character Jiang, which means “River,” I was shocked to realize that the system had appended two additional syllables to the clip: Jiang Zemin.
Jiang Zemin was the president of China from 1993 to 2003. He also has nothing to do with my story. While it may be considered a small mistake, and one that I was fortunate to catch, it revealed AI’s utter inability to understand the context of a passage, relying instead solely on word associations. It became clear to me then that AI had already infiltrated our publishing processes, and if my publisher, an imprint of a Big Five house, is using it, it’s not a stretch to believe that every large publisher is integrating AI into its workflows in some shape or form.
One week later Audible announced that it soon planned to offer “fully integrated, end-to-end” AI narration and production for audiobooks, including machine translation capabilities. In the face of such corporate inevitability, authors may choose to keep up the fight and resist AI—and I hope they do—but we need to be aware that our publishers, being businesses themselves, will not necessarily be allies in this struggle. Moreover, we should also expect large AI companies to soon launch LLMs specializing in fiction, as demonstrated by OpenAI CEO Sam Altman’s March 11 post on X of a short metafictional story generated by “a model that is good at creative writing.” Regardless of whether you agree with Altman that the model “got the vibe of metafiction so right,” it is evident that such LLMs will one day be released. In the eyes of AI companies, creative writing is no longer seen as the domain of humans alone.
Beyond the threat of LLMs directly eliminating the professional pursuit of creative writing, Dario Amodei, CEO of the AI giant Anthropic, warned this summer that AI could abolish “50 percent of entry-level white-collar jobs within the next five years.” In his message Amodei asked governments and Anthropic’s AI competitors to stop “sugarcoating” the potential mass job layoffs, which would invariably also impact most writers, who cannot rely on book advances alone to survive. When I spoke to various tech professionals over the past six months, however, every person privately told me that they were far more nervous about not integrating AI enough into their companies, and falling behind competitors as a result, than about prioritizing an ethical transition.
“The AI transition? Taking care of employees? Isn’t that the government’s job instead?” one tech investor told me when I confronted him. “Anyway, your genre of literary fiction will take longer to replace than more commercial writing,” he said. While I appreciated that my friend was only trying to comfort me, I couldn’t help but marvel that the concept of being in solidarity with authors from other genres was foreign to him.
In May 2025, ChatGPT had nearly eight hundred million active weekly users and more than a hundred million active daily users. In terms of web traffic, the AI chatbot increased from 3.9 billion visits in February to 5.1 billion in April—elevating ChatGPT to the fifth spot in traffic share, below only Google, YouTube, Facebook, and WhatsApp.
The momentum of AI adoption is extraordinary. But what will happen to those impacted? Using Audible’s recent announcement as a starting point, will we simply forget about the voice actors losing their livelihoods? When will we stop to consider what our society loses from abandoning the more nuanced and human interpretations of stories in our future audiobooks? What about the potential for bias and stereotyping in AI-generated content or performances, as demonstrated by my publisher’s own AI system? What if I had not caught that error?
There has never been a more urgent time for us authors to unite, to push for more ethical AI practices and pressure our governments to take care of those innocents affected by such tectonic shifts.
As a first-generation immigrant, I wrote my novel with survival as a core theme. I would ordinarily be the last to judge any author who wishes to latch on to the AI revolution to improve their creative work. But given all the recent developments, I believe that collective resistance is necessary for our long-term survival as artists. At minimum that requires demanding transparent, ethical AI guidelines. All training data must be properly licensed, and its creators must be fairly compensated for their work, both now and in the future. There can be no progress without addressing those two points first.
In Ai Jiang’s Hugo Award– and Nebula Award–nominated novelette I AM AI (Shortwave Publishing, 2023), none of the above happens. In a metafictional touch, the protagonist is a cyborg named Ai, like the author, who lives beneath a bridge at the edge of a futuristic city called Emit. Owing a significant debt to the megacorporation governing the city, Ai struggles as a freelance writer pretending to be an AI to survive in the dystopian society, all while using her battery to take care of others under the bridge.
As the already-impossible expectations for her work continue to rise (e.g., writing a 180,000-word book in twenty-four hours), the cyborg Ai increasingly considers selling parts of her human body in exchange for new technology (in the form of a brain implant) that can improve her productivity. In the end it is only through the kindness and generosity of her community that Ai is saved from losing her physical heart forever.
When I spoke to Jiang for this essay, I was moved by her strong position on the collective nature of humanity being central to the making of art. “I have always seen artistic creation as a conscious conversation that spans generations, between different cultures and individuals—like oral storytelling by our ancestors transformed over time and infused with new voices and ideas, something molded by humanity’s evolution,” she said. “That is the core of art that I hope we never forget.”
At the risk of sharing a very unpopular opinion, I don’t believe that resistance necessarily means rejecting AI outright. There is a pragmatic part of me that sadly knows it’s already too late. Chatbots from OpenAI and others are gaining more than a million new users a day. We can do our best to highlight the staggering environmental impacts of AI via the electricity and water consumption required to power the models, but we are unlikely to dissuade the public from their increasing reliance. And if we haven’t reached this moment already, I believe that soon we may be in a world in which the freedom to not use AI in our work will be a privilege.
Nearly every company is investing in AI integrations today, including many of our publishers.
I won’t pretend to have all the answers at this existential moment. I do believe that speaking up as individual authors will be less effective than combining our voices through existing author guilds, literary communities, and advocacy groups. I also think that we need to engage sooner rather than later, and that we would benefit from focusing on what we artists wholeheartedly believe in rather than debating for too long the best ways to fight back.
There is a narrow window of time for writers to advocate that AI tools should be used to augment creativity (for instance, with research) rather than replace our humanity. Whenever AI is used, publishers and tech companies should disclose it, and creators must have the right to opt out of including their work in any training sets. In the best-case scenario, I wonder if AI can be trained to better reflect our collective values—of empathy and resistance against dehumanization—rather than undermine them. Maybe AI can even amplify diverse perspectives rather than silence them. But all of this is only possible if we resist inertia, organize now, and refuse to surrender our voices.
Our stories, memories, and experiences are too precious to be consumed by algorithms that do not understand or value them. The path forward will not be easy, but neither is storytelling; it never has been, and the same will be true for the stubborn work of maintaining our humanity in art.
Yiming Ma is the author of These Memories Do Not Belong to Us (Mariner Books, 2025), a dystopian novel set in a world where memories are bought and sold, the audiobook of which was named a Spotify Editor’s Pick. Born in Shanghai, he attended Stanford University for his MBA and holds an MFA from Warren Wilson College, where he was named a Carol Houck Smith Scholar. His stories and essays appear in the New York Times, the Guardian, Hazlitt, the Florida Review, and elsewhere. His story “Swimmer of Yangtze” won the 2018 Guardian 4th Estate Story Prize.