One other category of consideration excluded from these rankings is long-term alumni success. In the past, articles have been written asserting that a strong program must, ipso facto, graduate strong writers. This may be the case, but it is not necessarily so. Most programs concede in their promotional literature that they cannot teach talent, only (at best) craft; consequently, most programs know better than to take direct credit for graduate successes that may occur many months or even years after a student has left the program. More important, though, there is no viable method for measuring alumni success. There are simply too many tenure-track teaching positions, spots at writers' colonies, book prizes, and miscellaneous writing-world laurels: To poll any appreciable percentage of the institutions offering such accolades for the biographies of their recipients—particularly when such biographical data is scarce online—would be impossible. Even if available, the use of such information would be limited. Does the success, in 2009, of a writer or poet who graduated from an MFA program in 1996 tell us anything about the present quality of that program? Given faculty turnover, and the other hard-to-quantify factors that inform a program's success or failure with respect to any one student, it seems unlikely—that is, if terms like success and failure are even appropriate or relevant at all. Likewise, and beyond the impossibility of linking any one achievement to any one period of instruction, how would we competently control for or weigh such factors as the size of a graduating class, the degree of each achievement, and when an individual's MFA study took place? The only postgraduate assessment considered in this ranking is the determination of which programs have the most success (controlled for program size) in placing graduates in the few highly regarded, short-term post-MFA fellowships that exist.
As the most pressing issue for graduating poets and writers is generally securing postgraduation employment, prospective applicants are likely to weigh seriously what fellowship placement statistics suggest about cohort quality and program reputation.
So what is measured by these rankings, and how has the data for these measures been compiled? The most important element in the table that follows is a poll of more than five hundred current and prospective MFA applicants taken between October 2008 and April 2009. This survey was conducted on two of the largest online communities for MFA applicants, the Suburban Ecstasies and the MFA Weblog, and it differentiated among applicants on the basis of information they supplied for their individual user accounts on these communities. The data was also subsequently reviewed to remove the rare duplicate entry or multiple response. All poll respondents were asked to list, along with their genre of interest, either the programs to which they planned to apply or, if they were not yet applicants but expected to be in the future, the programs they believed were the strongest in the nation. Finally, data from the 2008-2009 application season was compared with data from the preceding two application cycles to spot any significant unexplained deviations; fortunately, there were none. While certain programs have ascended in the rankings and others have descended over the past three years this poll has been conducted, the most dramatic movements can be linked to, variously, the hiring of new faculty, the creation of new programs at highly regarded universities (currently, an average of six new programs are founded each year), significant amendments to program funding packages, and improvements to the transparency of programs' online promotional materials.
While the response to this poll from applicants and the MFA programs themselves has been overwhelmingly positive, what few criticisms have emerged generally run along one of two lines: that the poll "merely" measures the popularity of a program among current and prospective applicants, and that such individuals are not, in any case, the best arbiters of program quality, having not yet experienced either the benefits or the shortcomings of any program. These concerns have been addressed in myriad forums online over the past three years, but, generally speaking, the most succinct answer to these charges is that the 2009 poll, like its two previous iterations, does not purport to measure the sort of subjective, highly individualized assessments that current and former students of the various MFA programs can supply. Nor does the poll rest on the view, once taken by U.S. News & World Report, that MFA faculties know better than their students or applicants which programs are the most esteemed. Neither MFA faculties nor current or former students of the programs themselves are tasked with determining the current state of affairs in the field of creative writing MFA programs; this is the unique province, and the special task, of current applicants. MFA faculties are not paid to follow the minute, year-to-year details of the scores of full-residency MFA programs in the United States, nor is there any particular reason for them to do so, as they are, first and foremost, working writers. Current and former MFA students, likewise, can be considered expert only in their own program's particularities, and even with regard to those particularities they are not especially good poll respondents, given the significant possibility of observer bias.
Applicants, in contrast, are far more likely to have no particular horse in the race, and to have acknowledged the importance of the matriculation decision to their own futures by rigorously researching a wide variety of programs.
Some may wonder why these rankings do not address MA programs in English that offer creative writing concentrations, low-residency MFA programs, or creative writing PhD programs. Apart from the fact that the time and resources available for this rankings project were necessarily finite, the applicant pools for these other types of programs are much smaller than the one for full-residency MFAs and are therefore extremely difficult to sample accurately. Moreover, low-residency programs in particular are not amenable to the same type of categorical assessment as full-residency programs: Generally speaking, low-residency programs do not offer much, if any, financial aid, cannot offer teaching opportunities to students, employ highly tailored Internet-based pedagogies and instructional schemes, are less likely to be gauged on the basis of their locales (as applicants spend only the briefest of periods on campus), and, because their faculties are part-time, are more likely to feature star-studded faculty rosters. It would be unfair to these programs, and to their full-residency counterparts, to attempt a straight comparison between the two groups. These same types of concerns also apply, to a varying extent, to non-MFA creative writing degrees. For instance, MA degrees in creative writing (or in English with a creative writing concentration or creative thesis) are not terminal degrees, and so are structured as much to prepare students for future doctoral study as for immediate immersion in the national creative writing community.
"Because there are 140 full-residency MFA programs in the United States, any school whose numerical ranking is in the top fifty in any of the ranked categories should be considered exceptional in that category."