
The Top Fifty MFA Programs in the United States: A Comprehensive Guide [1]

by Seth Abramson
November/December 2009 [2]
11.1.09

For the latest rankings of the top fifty MFA programs in creative writing, read "2011 MFA Rankings: The Top Fifty." [3] For a ranking of low-residency programs, read "2011 MFA Rankings: The Top Ten Low-Residency Programs." [4]

The following is an excerpt of an article that appeared in the November/December 2009 issue of Poets & Writers Magazine. The print article and its accompanying rankings [5] include eight categories of additional data for each program, including size, duration, cost of living, teaching load, and curriculum focus.

"When U.S. News & World Report last gathered original data about graduate creative writing programs, in 1996, it did so based on two erroneous assumptions. First, it presumed that no part of the writing community was better equipped to assess the relative strengths of the country's then three-score MFA programs than the faculties of the programs themselves. In fact, there was rather more evidence to suggest that no part of the community was less suited to opine on this topic than the one selected. MFA faculties are by definition composed of working writers for whom teaching is an important but often secondary pursuit; likewise, faculty members, because they are primarily focused on writing and teaching within their own programs, have no particular impetus to understand the broader landscape of graduate creative writing programs.

A second major flaw—among many smaller ones—in the USNWR approach was the premise that, unlike every other field of graduate education, graduate study in creative writing was singularly resistant to quantitative analysis, and that therefore the only category of assessment worthy of exploration was faculty opinion on individual programs' "reputations." In fact, every graduate creative writing program has (somewhere) a documented acceptance rate, an annual if changeable funding scheme, and a whole host of less weighty but equally quantifiable data points: student-to-faculty ratio, matriculating-class size, credit-distribution prerequisites, local cost of living, and so on. USNWR ignored all of these.

Irrespective of the approach taken by USNWR, the evils of educational rankings are indeed legion and do urge caution on the part of any prospective analyst of MFA programs. At base it is impossible to quantify or predict the experience any one MFA candidate will have at any one program. By and large, students find that their experiences are circumscribed by entirely unforeseeable circumstances: They befriend a fellow writer; they unexpectedly discover a mentor; they come to live in a town or city that, previously foreign, becomes as dear to them as home. No ranking ought to pretend to establish the absolute truth about program quality, and in keeping with that maxim the rankings that follow have no such pretensions. When I first began compiling data for comprehensive MFA rankings, nearly three years ago, I regularly told the many MFA applicants I corresponded with that educational rankings should only constitute a minor part of their application and matriculation decisions; that's a piece of advice I still routinely give, even as the creative writing MFA rankings I helped promulgate have become the most viewed and most utilized rankings in the field—read online by thousands of prospective MFA applicants every month.

None of the data used for the rankings [5] that follow was subjective, nor were any of the specific categories devised and employed for the rankings based on factors particular to any individual applicant. Location, for instance, cannot be quantified—some applicants prefer warm climates, some cold; some prefer cities, some college towns; and so on—and so it forms no part of the assessment. Other factors traditionally viewed as vital to assessing MFA programs have likewise been excluded. For instance, conventional wisdom has been for many years that a program may be best assessed on the basis of its faculty. The new wisdom holds that applicants are well advised to seek out current and former students of a program to get as much anecdotal information about its faculty as possible, but, in the absence of such information, one must be careful not to confuse a writer's artistic merit with merit as a professor. In the past, too many applicants have staked years of their lives on the fact that the work of this writer or that one appealed to them more than others, only to find that the great writers are not always the great teachers, and vice versa. Likewise, mentoring relationships are difficult to form under even the best of circumstances, particularly because neither faculty member nor incoming student knows the other's personality and temperament in advance. In short, determining whose poetry and fiction and memoir publications you most enjoy yields little information about whose workshops and one-on-one meetings you will find most instructive and inspirational.

One other category of consideration excluded from these rankings is long-term alumni success. In the past, articles have been written observing that, ipso facto, a strong program must graduate strong writers. This may be the case, but it is not necessarily so. Most programs concede in their promotional literature that they cannot teach talent, only (at best) craft; consequently, most programs know better than to take direct credit for graduates' successes that may occur many months or even years after a student has left the program. More important, though, there is no viable method for measuring alumni success. There are simply too many tenure-track teaching positions, spots at writers colonies, book prizes, and miscellaneous writing-world laurels: To poll any appreciable percentage of the institutions offering such accolades for the biographies of their recipients—particularly when such biographical data is scarce online—would be impossible. Even if available, the use of such information would be limited. Does the success, in 2009, of a writer or poet who graduated from an MFA program in 1996 tell us anything about the present quality of that program? Given faculty turnover, and the other hard-to-quantify factors that inform a program's success or failure with respect to any one student, it seems unlikely—that is, if terms like success and failure are even appropriate or relevant at all. Likewise, and beyond the impossibility of linking any one achievement to any one period of instruction, how do we competently control for or weigh such factors as size of graduating class, degrees of achievement, and when an individual's MFA study took place? The only postgraduate assessment considered in this ranking is the determination of which programs have the most success (controlled for program size) in placing graduates in the few highly regarded, short-term post-MFA fellowships that exist. As the most pressing issue for graduating poets and writers is generally setting up a postgraduation employment plan, prospective applicants are likely to seriously consider what fellowship placement statistics say about cohort quality and program reputation.
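
To make the size control concrete: the placement measure amounts to a per-graduate rate rather than a raw count. The Python sketch below is illustrative only, with invented numbers; the article does not publish its exact formula.

```python
# Illustrative sketch of a size-controlled placement measure.
# The numbers are invented; the article does not publish its formula.
def placement_rate(fellowship_placements: int, class_size: int) -> float:
    """Fellowship placements per graduate, controlling for program size."""
    return fellowship_placements / class_size

# Four placements mean more at a small program than at a large one.
print(placement_rate(4, 25))   # 0.16 placements per graduate
print(placement_rate(4, 100))  # 0.04 placements per graduate
```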

So what is measured by these rankings, and how has the data for these measures been compiled? The most important element in the table that follows is a poll taken of more than five hundred current and prospective MFA applicants between October 2008 and April 2009. This survey was conducted on two of the largest online communities for MFA applicants, the Suburban Ecstasies and the MFA Weblog, and it differentiated among applicants on the basis of information they supplied for their individual user accounts for these communities. The data was also subsequently reviewed to remove the rare duplicate entry or multiple response. All poll respondents were asked to list, along with their genre of interest, either the programs to which they planned to apply, or, if they were not yet applicants but expected to be in the future, which programs they believed were the strongest in the nation. Finally, data from the 2008-2009 application season was compared with data from the preceding two application cycles to spot any significant unexplained deviations; fortunately, there were none. While certain programs have ascended in the rankings and certain others have descended over the past three years this poll has been conducted, the most dramatic movements can be linked to, variously, the hiring of new faculty, the creation of new programs at highly regarded universities (currently, an average of six new programs are founded each year), significant amendments to program funding packages, and improvements to the transparency of programs' online promotional materials.
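
As a rough illustration of the tallying just described, here is a minimal Python sketch of a deduplicated, genre-differentiated vote count. The field names and sample responses are hypothetical; the article does not specify how the poll data was stored.

```python
from collections import Counter

def tally_votes(responses):
    """Count one vote per program per respondent, broken out by genre.

    Each response is assumed (hypothetically) to carry a unique user id,
    a genre of interest, and a list of programs the respondent named.
    """
    seen_users = set()
    votes = Counter()  # keyed by (program, genre)
    for r in responses:
        if r["user_id"] in seen_users:  # drop duplicate entries
            continue
        seen_users.add(r["user_id"])
        for program in set(r["programs"]):  # ignore repeated listings
            votes[(program, r["genre"])] += 1
    return votes

responses = [
    {"user_id": "a1", "genre": "poetry", "programs": ["Iowa", "Michigan"]},
    {"user_id": "a1", "genre": "poetry", "programs": ["Iowa"]},  # duplicate
    {"user_id": "b2", "genre": "fiction", "programs": ["Iowa"]},
]
print(tally_votes(responses).most_common())
```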

While the response to this poll from applicants and the MFA programs themselves has been overwhelmingly positive, what few criticisms have emerged generally run along one of two lines: that the poll "merely" measures the popularity of any program among current and prospective applicants, and that such individuals are not, in any case, the best arbiters of program quality, having not yet experienced either the benefits or the shortcomings of any program. These concerns have been addressed in myriad forums online over the past three years, but, generally speaking, the most succinct answer to these charges is that the 2009 poll, as well as the two previous iterations of the poll, does not measure the sort of subjective, highly individualized assessments current and former students of the various MFA programs can supply. Nor does the poll rest on the view, once taken by U.S. News & World Report, that MFA faculties know better than their students or applicants which programs are the most esteemed. Neither MFA faculties nor current or former students of the programs themselves are tasked with determining the current state of affairs in the field of creative writing MFA programs; this is the unique province, and the special task, of current applicants. MFA faculties are not paid to follow the minute, year-to-year details of the scores of full-residency MFA programs in the United States, nor is there any particular reason for them to do so, as they are, first and foremost, working writers. Current and former MFA students likewise are to be considered expert only in their own program's particularities, and with regard to those particularities they are not especially good respondents for polls because of the significant possibility of observer bias. Applicants, in contrast, are far more likely to have no particular horse in the field, and to have acknowledged the importance of the matriculation decision to their own futures by rigorously researching a wide variety of programs.

Some may wonder why these rankings do not address MA programs in English that offer creative writing concentrations, low-residency MFA programs, or creative writing PhD programs. Apart from the fact that the time and resources available for this rankings project were necessarily finite, the applicant pools for these other types of programs are much smaller than the one for full-residency MFAs and therefore are extremely difficult to sample accurately. Moreover, low-residency programs in particular are not amenable to the same type of categorical assessment as full-residency programs: Generally speaking, low-residency programs do not offer much if any financial aid, cannot offer teaching opportunities to students, employ highly tailored Internet-based pedagogies and instructional schemes, are less likely to be gauged on the basis of their locales (as applicants only spend the briefest of periods on campus), and, because their faculties are part-time, are more likely to feature star-studded faculty rosters. It would be unfair to these programs, and to their full-residency counterparts, to attempt a straight comparison between the two groups. These same types of concerns also exist, to a varying extent, with non-MFA creative writing degrees. For instance, MA degrees in creative writing (or in English with a creative writing concentration or creative thesis) are not terminal degrees, and so are structured as much to prepare students for future doctoral study as for immediate immersion in the national creative writing community.

I am on occasion asked why the nation's MFA programs were not simply contacted directly, by mail or telephone, to fill in what few gaps exist in the table of data that follows. Apart from the questions of time and resources involved, the important point to make here about MFA programs is that they have not traditionally been particularly forthcoming about program particulars—indeed, one reason for the rankings' reliance on programs' promotional materials is to encourage programs to view their own, self-generated online documentation as the best venue for informing prospective applicants of program features. And in the past three years the rankings have been extremely effective in this regard. But efficiency and the promotion of transparency are not the only reasons for relying primarily on programs' online materials in collecting data; program Web sites are also, in the Internet age, the great equalizer among the nation's graduate creative writing programs. Every program has one and, just as important, every program has exclusive control over content. Telephone- and mail-based surveys necessarily advantage those programs that respond—and while that may seem like rough but fair justice to some, the purpose of these rankings is not merely to reward those programs that support the rankings through active engagement but also to encourage all programs everywhere to better serve their applicants and students. My belief in this approach has only been confirmed by several fruitless attempts to contact individual programs regarding some omission or ambiguity on their Web sites; while from time to time I encounter a program responsive to such queries, more commonly they are met with either no response or a terse dismissal. The hundreds of e-mails I've gotten from MFA applicants over the past three years reveal that they've had similar experiences in trying to coax information from programs. Only putting something at stake in a program's nonresponse—only letting the programs know that they are being assessed not merely on their quantifiable aspects but also on their transparency—has led to any significant movement on this score. To those among the faculty and staff of the nation's programs who question this methodology, the response, then, is clear enough: Adding missing data to online promotional materials has the triple benefit of being entirely within program control, being likely to benefit the program in the now-annual, highly popular MFA rankings, and being necessary to a better-informed and therefore more satisfied applicant pool and student body.

In the table that follows the individual programs are ranked on the basis of their votes in the online poll; these votes have then been broken down by genre, resulting in a genre-specific ranking for each program. Each program's placement in four feature-specific rankings is also included. Because there are 140 full-residency MFA programs in the United States, any school whose numerical ranking is in the top fifty in any of the ranked categories—the overall rankings; rankings in the poetry, fiction, or nonfiction genres; or the rankings by funding, selectivity, and postgraduate placement—should be considered exceptional in that category. Practically speaking, what this means is that all the programs listed in the table are exceptional programs in one or (frequently) many more than one area.

In rare instances, unavailable data, as well as data that falls beneath the cutoff for inclusion in the table, is denoted by special markings. For instance, fewer than half of all full-residency programs offer the nonfiction genre, and so in that category an asterisk (indicating that a program offers no nonfiction track) appears alongside many programs. In the funding, selectivity, and placement rankings, an asterisk indicates that a program is unranked in that category. The poetry, fiction, and funding categories include a special notation—a plus sign—to indicate programs that received honorable mention (programs ranked between fifty and sixty nationally).

Astute observers may notice some slight disparities between the funding rankings here and those that appeared in the November/December 2008 issue of Poets & Writers Magazine. As the task of gathering data on MFA programs is an ongoing one, and as many programs have responded to their prior ranking by making additional data available online, some movement in the rankings has occurred. There have also been changes to the methodology used for the funding rankings.

The funding ranking that appears here is similarly responsive to applicants' stated values: In short, the programs are ranked by both the annual and overall tuition- and health care-exclusive dollar value of their funding packages, as modified by the cost of living in the program locale (compared against a single, national-average locale, randomly selected as Providence, Rhode Island) and the duration of the program. Only monies that are guaranteed programwide are considered in calculating a program's financial aid package. This is consistent with a growing trend among applicants to generally favor longer rather than shorter programs, and full-funding schemes rather than tiered aid systems. (Full funding is defined here as providing the equivalent of a full-tuition waiver and a cost-of-living-adjusted minimum stipend of $8,500 to every student.)
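
The arithmetic described above might be sketched as follows. Only the Providence baseline, the duration weighting, and the $8,500 full-funding floor come from the text; the index values, function names, and sample figures are assumptions.

```python
# Hedged sketch of the funding adjustment described above. The
# cost-of-living index values and sample figures are invented.
PROVIDENCE_COL_INDEX = 1.00  # national-average baseline per the article

def adjusted_funding(annual_stipend, local_col_index, years):
    """Return (annual, total) stipend value in baseline dollars."""
    annual = annual_stipend * (PROVIDENCE_COL_INDEX / local_col_index)
    return annual, annual * years

def is_fully_funded(annual_stipend, local_col_index, tuition_waived):
    """Full funding: tuition waiver plus a COL-adjusted $8,500 stipend."""
    annual, _ = adjusted_funding(annual_stipend, local_col_index, years=1)
    return tuition_waived and annual >= 8500

# A three-year program paying $14,000 a year in a below-average locale:
print(adjusted_funding(14000, 0.92, 3))     # higher real value
print(is_fully_funded(14000, 0.92, True))   # True
```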

While application strategies will differ greatly from applicant to applicant, and are fundamentally dependent upon individual values—and a careful weighting of those values—it is critical for applicants to have some sense of their chances of admission at a program before adding it to their application list. The selectivity ranking provided here uses available acceptance-rate data, though only programs with more than a hundred annual applicants in fiction and poetry combined are included. (Nonfiction applicants are not included in this data set or in the overall applicant poll.) As most programs receive one nonfiction application for every four poetry applications and six fiction applications, and more than half of all full-residency MFA programs nationally do not offer the nonfiction genre, the absence of nonfiction data from the overall ranking and the selectivity ranking has minimal effect on the final rankings—and what effect it does have is necessary to keep the playing field level for all the programs ranked.
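
In code, the selectivity cutoff and ordering might look like the sketch below. The programs and counts are invented, and acceptance rate is taken here simply as offers divided by combined poetry and fiction applications.

```python
# Invented figures: (program, poetry apps, fiction apps, offers)
programs = [
    ("Program A", 300, 700, 25),
    ("Program B", 40, 55, 12),   # under 100 combined: excluded
    ("Program C", 120, 260, 30),
]

ranked = sorted(
    ((name, offers / (poetry + fiction))
     for name, poetry, fiction, offers in programs
     if poetry + fiction > 100),  # more than 100 annual applicants required
    key=lambda pair: pair[1],     # lower acceptance rate = more selective
)
for name, rate in ranked:
    print(f"{name}: {rate:.1%} acceptance rate")
```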

The furor surrounding educational rankings in major areas of graduate study—law, medicine, engineering, business, and doctoral programs in the social sciences, sciences, and humanities—never really dies down, but there is a tacit presumption that carefully collated educational data, organized and ranked on an annual basis, can, in time, produce a substantially better-informed applicant pool. While no ranking can or should ever absolve applicants and MFA faculty members and administrators from the responsibility of making their own independent judgments, any ranking system that reflects the values of its most important consumers offers at least a chance of becoming, in time, a virtuous circle. To the extent aspiring young writers are seeking well-funded communities of artists where they will receive the precious commodity of time, these rankings reflect those values, and, as a result, programs responsive to such applicant needs are the most likely to find favor in the rankings. In turn, these programs will receive more applications in the coming months and years. As the most-applied-to programs also enjoy the luxury of being the most selective—and therefore the most attractive to young writers seeking the inspiration of a community of talented peers—these rankings offer the promise of nudging programs toward doing more for their students and encouraging all students to be more deliberate about how they make a critical life decision.

*For the full article and additional data for each program, including size, duration, cost of living, teaching load, and curriculum focus, see the November/December 2009 issue.

Seth Abramson is the author of The Suburban Ecstasies (Ghost Road Press, 2009) and a contributing author to The Creative Writing MFA Handbook (Continuum, 2008). His poems have recently appeared in Best New Poets 2008, Poetry, the American Poetry Review, New American Writing, Crazyhorse, Subtropics, and elsewhere. In 2008 he was awarded the J. Howard and Barbara M. J. Wood Prize by Poetry. A graduate of Harvard Law School and the Iowa Writers' Workshop, he is currently a doctoral candidate in English at the University of Wisconsin, Madison.



Links
[1] https://www.pw.org/content/top_fifty_mfa_programs_united_states_comprehensive_guide
[2] https://www.pw.org/content/novemberdecember_2009
[3] http://www.pw.org/content/2011_mfa_rankings_the_top_fifty_0
[4] http://www.pw.org/content/2011_mfa_rankings_the_top_ten_lowresidency_programs
[5] https://www.pw.org/content/2010_mfa_rankings_top_fifty_0