The Fully Fact-Checked Memoir: Backing Up Facts, Standing Behind Truth

Sarah Fay

My dream came true on a dismal March afternoon. Snow still crusted the sidewalks outside my apartment window, and it was raining. It was early pandemic—very early pandemic. Quarantine made it feel as if we’d all been cast in a surreal episode of Black Mirror, and almost everyone I knew, myself included, assumed it would—like a television drama—last for a finite amount of time. A couple of weeks. A month, tops.

In my twenties, I’d dreamed about the day when a powerhouse agent would get in touch to tell me that she was two-thirds of the way through my manuscript and loving it and not to talk to any other agent until she could finish and discuss representation with me. (In true want-to-be-a-writer fashion, I’d imagined this even before I had a manuscript.) I assumed it would happen with my first manuscript. Then with my second. Then my third, my fourth, and my fifth. By the time I was in my forties, I started to think it was never going to happen.

But there I was, standing in my kitchen, my face lit by the blue light of my cellphone screen, reading just such an e-mail. Another e-mail arrived saying that the agent was cc’ing her assistant to schedule a phone meeting.

An almost anguished elation came over me. I pumped my fist into the air and went to the window, as if to shout my good news—almost twenty years in the making—from the rooftops, but the streets were empty.

The phone meeting occurred a couple of days later. Just when I thought it couldn’t get better, it did. The manuscript, she said, needed no revisions. It was ready to be shopped to publishers. This was unheard of. I asked about the pandemic. She said it hadn’t slowed down publishing.

Before we got off the phone, she said, “One thing. You’re going to need support.”


“You make pretty strong claims in the book.”

It was true. My memoir was partly about the six diagnoses I’d received since the age of twelve—anorexia, major depression, obsessive-compulsive disorder, anxiety disorder, ADHD, bipolar disorder—and partly an exposé of the book from which all mental health diagnoses come: the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM), often referred to as psychiatry’s bible. While in my forties, I learned that the diagnoses I’d received—and all mental illnesses/mental health diagnoses—are not scientifically valid: Aside from diagnoses like dementia and rare chromosomal disorders, there are no biological markers to prove a person has what the DSM calls “major depressive disorder” or “social anxiety disorder” or “ADHD” or “schizophrenia” or any other diagnosis. They’re wholly subjective—based entirely on self-reported symptoms and a clinician’s opinion. They aren’t even reliable—most of the time two clinicians won’t give the same patient the same diagnosis based on the same symptoms, even if they meet with the patient at the same time.

(If we were to put a number on the reliability of a diagnosis, researchers estimate you have a 50 percent chance of being misdiagnosed. In other fields of medicine, in which misdiagnosis can actually be proved by a biological marker like a blood test, X-ray, or scan, that likelihood is said to be anywhere from 5 to 20 percent. But, remember, in psychiatry we’re always being given theoretical diagnoses.)

My purpose for writing the memoir was to reach the 46 percent of American adults and 20 percent of children and adolescents who will receive unproven DSM diagnoses in their lifetimes and to warn them that those diagnoses are, essentially, invented. They exist as theoretical symptom lists and diagnostic labels created by “task force” committees of primarily white, cisgender, heterosexual men based not on data, but opinions. (When asked why a person needs five of nine symptoms to receive a depression diagnosis, Robert Spitzer, chair of the DSM-III task force committee that developed the book’s third edition, said, “It was just consensus. We would ask clinicians and researchers, ‘How many symptoms do you think patients ought to have before you give them a diagnosis of depression?’ And we came up with the arbitrary figure of five…because four just seemed like not enough. And six seemed like too much.”) I wove into my story a historical account of the DSM and the lies we’ve been told about mental health diagnoses. After reading my book a reader would be able to walk into a mental health care professional’s office—or counsel a friend or sister or parent or child who’d just walked out of one—and understand how flimsy diagnoses like “clinical depression” and “anxiety disorder” and “bipolar disorder” and any other “disorder” really are. Before they ended up like I did—spending their lives believing in and identifying with one diagnosis after another, perhaps even becoming suicidal as a result—I’d make sure they knew the truth, which would give them agency over their mental health treatment.

Since at least 2009 some of the most prominent psychiatrists and researchers in the field have been sounding the alarm, urging us not to trust the DSM-5 or its diagnoses. Former director of the NIMH Thomas Insel called DSM diagnoses “constructs” with “no reality.” Steven Hyman, another former NIMH director, called them “fictive diagnostic categories” and the DSM itself “a scientific nightmare.” Psychiatrist and DSM-IV task force chair Allen Frances has spoken and written at length about the falsehoods of DSM diagnoses, going so far as to tell us not to buy, teach, or use the DSM anymore.

Their warnings have yet to make it into the mainstream. Most Americans still believe in DSM diagnoses. They abide by them, accept them as truth. Nearly half of all Americans think they know “a great deal” about mental illness and DSM diagnoses, some (in my experience) without knowing what the initialism DSM even stands for. According to a 2019 poll by Universal Health Services, 98.6 percent of Americans say that mental health diagnoses “represent legitimate medical illnesses.” While mental illness itself is absolutely real, relying on the diagnoses of the DSM to understand these conditions can do more harm than good to the millions of people who would benefit from proper treatment.

I studied the DSM and read substantial parts of it. (Parts, because I find it hard to believe anyone has read the DSM’s more than nine hundred pages cover to cover; such a feat would surely drive someone to madness, pun intended.) I read Allen Frances’s book and Thomas Insel’s NIMH blog posts and Steven Hyman’s academic publications. Books upon books upon articles and peer-reviewed studies followed until I believed I was enough of an expert on the DSM and the APA’s sordid history to write about both.

Which was why I didn’t understand the reason my agent was asking for sources. “They’re hyperlinked,” I told her. It turned out she’d read it on her Kindle and the hyperlinks hadn’t come through.

Over the next week I received the contract and some interest from other agents I’d queried, all of whom I turned down without a second thought. The agent I’d signed with e-mailed a list of editors we’d approach. A week passed. Then another. Editors passed. Others showed interest. We met with them via phone and Zoom. The book sold in a month. It was a sunny early-April afternoon when my agent called to tell me that it was settled. I had a publisher. And an editor. And it was all suddenly very, very real.

That night, as I sat on my couch staring out at the Chicago skyline, doubts flooded in. I started to cry. Why was I crying? The best thing in my life had just happened. I’d written for reputable magazines and newspapers—the New York Times, the Atlantic, the New Republic, the Paris Review—and knew how intense fact-checking could be. And I had a PhD in English literary studies. Academia has its faults, but it’s the best research-integrity training ground available. I’d spent six years wandering the dusty stacks of a dimly lit library. My program had trained me—practically Navy SEAL fashion, albeit in an ivory tower—to construct literature reviews, interpret data, look for flaws in studies, and articulate research trends over time. And to cite. Citations were everything.

My fear wasn’t really about my book. The sources I used primarily came from peer-reviewed journals and books from academic publishers. (I knew better than to cite anything published by the mass media, save a handful of news outlets.) But even with over five hundred citations, I was afraid no one would believe me.

Partly it had to do with stigma. Not only was I a woman admitting to being diagnosed with not one but six mental illnesses, I’d also had several suicidal episodes. These admissions could lead some readers to wonder if I had an “unstable” mind and question my credibility.

Much has been made of memoir’s inconsistent track record when it comes to telling the truth. Best-selling memoirs, all advertised as accounts of true events, have been shown to be falsified: Asa Earl Carter’s The Education of Little Tree (Delacorte Press, 1976), Margaret Seltzer’s (aka Margaret B. Jones’s) Love and Consequences: A Memoir of Hope and Survival (Riverhead, 2008), Misha Defonseca’s Misha: A Mémoire of the Holocaust Years (Mt. Ivy Press, 1997), and, of course, James Frey’s A Million Little Pieces (Doubleday, 2003). These weren’t literary hoaxes like the one played by JT LeRoy or autofiction like some of Tao Lin’s work. They were intended as memoirs, or what Ben Yagoda defines as a “book understood by its author, its publisher, and its readers to be a factual account of the author’s life.” They were lies.

Memoirists, readers, and critics make a habit of deconstructing the genre, asking if it even belongs in the category of nonfiction. The trend was (and still is) to question truth’s place in memoir. Three assumptions justify the truth-isn’t-important-in-memoir position: (1) memory is fallible, (2) actual truth isn’t as important as “emotional” or “interior” “truth,” and/or (3) life doesn’t follow tidy narrative arcs, but coherent storytelling often relies on them. Even the most talented memoirists, like Kiese Laymon, seem to accept that no memoir can perfectly capture the truth, so it’s best to resign oneself to “write a lie.” As John D’Agata writes in The Lifespan of a Fact (Norton, 2012), an account of his seven-year battle with Believer fact-checker Jim Fingal: “By taking these liberties, I’m making a better work of art—a truer experience for the reader—than if I stuck to the facts.” Memoirist Vivian Gornick writes, “Truth in a memoir is achieved not through a recital of actual events; it is achieved when the reader comes to believe that the writer is working hard to engage with the experience at hand.” She goes on: “What happened to the writer is not what matters; what matters is the large sense that the writer is able to make of what happened.”

Much of this truth-lie argument is about memory. Because memory is imperfect, the memoirist is left to rely on “emotional truth” and even imagined experience. Emotional truthiness is more important than the truth, so memoirists need only concern themselves with themselves, the thinking goes. The goal isn’t truth; it’s to speak “your truth” using literary techniques that produce a convincing narrative or experimental work from the author’s point of view.

I didn’t believe in emotional truthiness, and I trusted my recollection of events. Although much of psychology and scientific research has led us to believe that memory is untrustworthy and highly suggestible, at least one more recent study indicates these claims are exaggerated. Memory is limited and can be manipulated, but not to the point of unreliability.

The issue lay with the claims I was making. Was I backing them up enough to take on the mental health–industrial complex and change the minds and lives of 150 million Americans?

In no way did I think of myself as the type of memoirist Yagoda describes: “Memoirists remember; no one truly expects them to engage in careful investigation and research.” I thought of myself as a kind of hybrid memoirist-journalist–medical historian. As a memoirist I’d be true to the facts of my personal history. As a journalist I’d follow the No. 1 piece of advice for journalists in the Nieman Foundation for Journalism’s Nieman Reports and “back up with a document every claim, anecdote, or scene involving another person named or unnamed in your book.” As a medical historian I’d analyze the DSM with the precision and depth of an academic.

Once my manuscript reached the copyediting stage, my editor and I were tasked with deciding how to present my research. Memoirs aren’t supposed to include citations—not in the academic sense. Some writers, like Dave Eggers and Martin Amis, have used footnotes in nonfiction to be clever or erudite. Footnotes can give a text the meta feeling of David Foster Wallace’s work or seem to complicate reality and narrative, as in Jenny Boully’s essays, or appear scholarly yet funny, as in Carmen Maria Machado’s In the Dream House (Graywolf Press, 2019). Academic citations risk pulling a memoir from that ambiguous category of creative nonfiction, which relies not on evidence but on ideas.

I insisted on superscript citations and not just a vague “Notes” section at the end of the book. My editor thought that footnotes were unsightly, and I agreed. (Supposedly other publishers do as well. Chuck Zerby describes in The Devil’s Details: A History of Footnotes [Simon & Schuster, 2003] how in recent years presses have asked authors to remove their citations entirely and sequester them on a separate website.) Originally footnotes were developed to make a text more pleasing. Prior to their innovation, commentary appeared in the margins of a text. By the Renaissance, marginalia had become so commonplace that even the pages of the Bible were overrun with words, leaving little white space. Each comment corresponded to a point in the text, but the aesthetics left much to be desired. Queen Elizabeth’s senior printer, Richard Jugge, ordered all commentary into a single section down the side of the page of the Bishops’ Bible. His real triumph came with his inclusion of two notes neatly tucked away at the bottom of the page. Centuries later, footnotes would become a scholarly tool used not just for commentary but to cite sources.

Because footnotes could make my writing appear cramped and overly academic, endnotes seemed like the most unintrusive yet concrete way to support my claims. Like footnotes, they were first used to include commentary rather than act as citations. (Their likely origins are traced to nineteenth-century author George Crabbe.) Endnotes, too, could make a text seem academic, but they’d provide the authority I needed.

My preoccupation with my own credibility speaks to a deeper problem in our treatment and understanding of memoir: power. Those with privilege—with respect to race, gender, socioeconomic status, and ability, among others—can disregard the truth and never even wonder if readers will trust them. They already have authority and know they will, on some level, be believed in a way that those without power don’t.

The day I received the first paycheck from my advance made the book even more of a reality. Not the egocentric I-have-a-book-coming-out aspect of reality, but the I-am-responsible-for-every-word-in-it aspect.

A few nights later I sat at the kitchen table Googling “fact-checking” and found an article by Emma Copley Eisenberg in Esquire. Fact-checking isn’t the norm in book publishing. Eisenberg describes what it was like for her to be “forced to hire her own fact-checker” when her publisher wouldn’t cover this expense for her debut book. She’d worked as a fact-checker and still didn’t trust herself to do a thorough enough job on her own work. A fact-checker, as Eisenberg writes, reads for accuracy, researches any facts presented, assesses sources, checks quotations, and flags plagiarism.

So it didn’t come as a surprise to learn that my publisher wouldn’t be providing a fact-checker. Like most nonfiction books published by a major imprint, my memoir would undergo several rounds of copyediting but not fact-checking, per se. “Legal” would have a pass at it, checking mainly for glaring inaccuracies, plagiarism, and any moments that might lead to a lawsuit against the publisher.

Unlike Eisenberg, who views the absence of a publisher-funded fact-checking stage as a betrayal, I thought it was right. My contract stated that I was accountable for each comma, every and. Plus, I—not the publisher—would own the copyright.

I hired a fact-checker who freelanced for New York magazine and elsewhere. (Many fact-checkers operate on a freelance basis.) Her fee came out of my modest advance. It took months. We went chapter by chapter. Some weeks we’d have to stop because she’d have an assignment for New York, a situation she’d made me aware of before I hired her.

Dread filled me each time I sent a chapter. My thoughts would spin: I must have gotten something wrong. I didn’t read that article carefully enough. Those statistics may not have been from the article I said they were from. I thought fact-checking was about getting everything right, about earning her approval and receiving an A-plus. Or getting an F and beating myself up and doubting my book and spiraling into uncertainty.

But the process was far more interesting than that. When she questioned my use of certain studies to make certain claims, it forced me to refine my argument. If she pushed me to find a more current study, it allowed me to better understand how the discussion surrounding the DSM and diagnoses had evolved. Every time she spotted a misspelled name, I looked it up and learned more about the source or subject. With each round I became more confident in my book.

Why wouldn’t all memoirists do this? Why don’t we all back up the claims we make about health and illness and politics and history? If memoirists decline to fact-check their work (or to hire someone else to), we could do as journalist Gabriel Mac suggests and let the reader know: “Maybe there should be a warning, like on a pack of cigarettes: ‘This book has not been fact-checked at all.’”

When my fact-checker and I finished the epilogue of my book, I felt…sad. Here was a person I’d never met but whom I respected so much and felt such a bond with. Fact-checkers are the unsung heroes of publishing. In my last e-mail to her, I wrote, “We’ve done it!”

Later, during one of the copyediting rounds, I fact-checked the book again myself. The fact-checker I’d hired had inspired me that much. I read The Chicago Guide to Fact-Checking (University of Chicago Press, 2016), wherein author Brooke Borel writes that half of all journalists teach themselves how to fact-check; the rest learn on the job rather than in any formal setting. I wanted to do that too.

It quickly became clear that the fact-checker I’d hired had done a beyond thorough job.

I felt triumphant, knowing I’d spent my advance well. More than that, I could back up and stand behind every word I’d said about the DSM and its invalid diagnoses. My book could change people’s lives—could save their lives. Whereas before I’d worried about people doubting me and questioning what I’d said, now I wasn’t just ready for pushback—I craved it.


Sarah Fay is an author and activist. She is the author of Pathological: The True Story of Six Misdiagnoses (HarperCollins, 2022) and the founder of Pathological: The Movement, a public awareness campaign devoted to educating people about the unreliability and invalidity of DSM diagnoses and the dangers of identifying with an unproven mental health disorder.