I am occasionally asked why the nation's MFA programs were not simply contacted directly, by mail or telephone, to fill in what few gaps exist in the table of data that follows. Apart from the time and resources such a survey would require, the important point to make here is that MFA programs have not traditionally been forthcoming about program particulars. Indeed, one reason the rankings rely on programs' promotional materials is to encourage programs to view their own, self-generated online documentation as the best venue for informing prospective applicants of program features, and in the past three years the rankings have been extremely effective in this regard.

But efficiency and the promotion of transparency are not the only reasons for relying primarily on programs' online materials in collecting data; program Web sites are also, in the Internet age, the great equalizer among the nation's graduate creative writing programs. Every program has one, and, just as important, every program has exclusive control over its content. Telephone- and mail-based surveys necessarily advantage those programs that respond, and while that may seem like rough but fair justice to some, the purpose of these rankings is not merely to reward the programs that support them through active engagement but to encourage all programs to better serve their applicants and students. My belief in this approach has only been confirmed by several fruitless attempts to contact individual programs about omissions or ambiguities on their Web sites; while from time to time I encounter a program responsive to such queries, more commonly they are met with no response or a terse dismissal. The hundreds of e-mails I have received from MFA applicants over the past three years reveal that they have had similar experiences in trying to coax information from programs.
Only putting something at stake in a program's nonresponse, and letting programs know that they are being assessed not merely on their quantifiable aspects but also on their transparency, has led to any significant movement on this score. To those among the faculty and staff of the nation's programs who question this methodology, the response is clear enough: Adding missing data to online promotional materials has the triple benefit of being entirely within a program's control, being likely to benefit the program in the now-annual, highly popular MFA rankings, and being necessary to a better-informed and therefore more satisfied applicant pool and student body.
In the table that follows the individual programs are ranked on the basis of their votes in the online poll; these votes have then been broken down by genre, resulting in a genre-specific ranking for each program. Each program's placement in three feature-specific rankings is also included. Because there are 140 full-residency MFA programs in the United States, any school whose numerical ranking is in the top fifty in any of the ranked categories (the overall rankings; the rankings in the poetry, fiction, and nonfiction genres; or the rankings by funding, selectivity, and postgraduate placement) should be considered exceptional in that category. Practically speaking, this means that every program listed in the table is exceptional in one or, frequently, several areas.
In rare instances, unavailable data, as well as data that falls beneath the cutoff for inclusion in the table, is denoted by special markings. For instance, fewer than half of all full-residency programs offer the nonfiction genre, so in that category an asterisk, indicating that a program does not offer nonfiction, appears alongside many programs. In the funding, selectivity, and placement rankings, an asterisk indicates that a program is unranked in that category. The poetry, fiction, and funding categories include a special notation, a plus sign, to indicate programs that received honorable mention (programs ranked between fifty and sixty nationally).