
2012 MFA Rankings: The Methodology

Seth Abramson

The Poets & Writers Magazine rankings do not use the above polling data to create a weighting system for the overall rankings. There is a presumption that applicants' own application lists best reflect the extent to which they take into account funding, location, reputation, selectivity, faculty, curriculum, and other applicant-specific factors in choosing which programs to apply to and attend.

Were the above data used to create a weighting system for the rankings, many of the nation's most prominent and popular programs would drop from the top fifty altogether, and the resulting rankings would poorly reflect the present national consensus on program quality. For instance, under the current methodology a popular but largely unfunded MFA program in a major urban center might still appear in the top fifty because even a low standing in the funding, selectivity, student-faculty, fellowship-placement, and job-placement categories can be counterbalanced by the program's popularity due to location and/or faculty. The popularity of a program's location and faculty is best reflected by privileging applicants' application lists rather than a confluence of those lists and publicly accessible hard data. To redesign the overall rankings as something other than a direct reflection of current applicant mores would ensure that virtually no non-fully-funded and/or big-city programs (with only a handful of exceptions) would appear in the overall top fifty, nor would many (if any) non-fully-funded programs whose appeal lies largely in the composition of their faculty rosters.

While it’s fair to assume that program popularity going forward may be directly affected by a high or low standing in the funding, selectivity, fellowship-placement, and job-placement rankings, the pace of this trend is arrested, rather than hastened, by the current ranking methodology. If the top fifty programs were ordered according to a weighted ranking system that calculated hard data regarding program funding, selectivity, student-faculty ratio, fellowship placement, and job placement, most large urban programs would drop from the top fifty immediately. The present methodology registers the relative decline or stagnation in the popularity of certain such programs while allowing these programs to improve their funding, selectivity, student-faculty ratio, and placement statistics before losing their positions, by applicant consensus, in the top fifty altogether.

Genre of Poll Respondents
Asked in 2010, using a Google-sponsored polling application, “What is your primary genre?”, 701 poll respondents from The Creative Writing MFA Blog provided the following responses:

1. Fiction (53 percent)
2. Poetry (28 percent)
3. Nonfiction (15 percent)
4. Other (2 percent)

Isolating only the 578 poetry and fiction respondents to the above poll question, the results are as follows:

1. Fiction (65 percent)
2. Poetry (35 percent)

This suggests that the potential polling cohort at The Creative Writing MFA Blog is similar in its constitution, in terms of genre affiliation, to the national MFA-applicant cohort. Hard data from twenty MFA programs with available admissions data for both genres (constituting a total of twenty-four data-sets ranging in age from the 2008–2009 admissions cycle to the 2010–2011 admissions cycle) generates a total data-set of 12,368 applicants, 8,730 of these being fiction applicants (70 percent) and 3,638 poetry applicants (30 percent).

This hard data also bears a close statistical resemblance to the actual polling cohort for the 2012 full-residency MFA rankings: during the time period discussed above, 635 MFA applicants with a declared genre were polled for these rankings, including 398 in fiction (63 percent) and 237 in poetry (37 percent). This said, the disparity between the genre demographics of the actual polling cohort and applicants’ genre demographics as recorded by MFA programs suggests that poets were marginally more likely to participate in the present rankings’ program-popularity polling than were fiction writers. Specifically, the actual polling cohort contains 10 percent fewer fiction writers than a fully representative sample would have included. While some substantial portion of this disparity is explained by the margin of error presumptively generated by Google-sponsored polling apparatuses and the relative paucity of two-genre statistical breakouts (only 10 percent of all full-residency MFA programs have such data available from any of the past three admissions cycles), it is nevertheless clear that the present polling under-samples fiction writers by approximately 5 percent. Analysis of existing polling data reveals, however, that such under-sampling has a negligible effect on the final overall rankings. Even those few programs that are substantially more popular among fiction writers than poets could expect to see at most only a one- to three-spot increase in their overall ranking were the actual polling cohort for the rankings fully representative of the national applicant pool in terms of genre affiliation.

Of course, no full-residency graduate creative writing program advertises its fiction, poetry, or nonfiction tracks as being of any more or less importance to its reputation than any other genre track, so an equally valid methodology for the fiction- and poetry-focused overall rankings would be to use a polling demographic equally divided between fiction writers and poets. The present methodology essentially charts a middle course between this option and a fully national-pool-representative polling cohort.

Poll respondents for the 2012 Poets & Writers Magazine rankings were self-selected, and it is the particular and express design of the rankings’ methodology that the polling cohort for these rankings be self-selected. Just as a poll aimed at determining popular car manufacturers might use a self-selecting polling cohort to only compile the responses of the best-researched car buyers—for instance, those who had spent time on websites that allow consumers to compare various available car brands and styles—these rankings do not intend to sample a generic cohort of MFA applicants. Instead, it is the aim of the rankings to primarily if not exclusively catalogue application decisions made by the best-researched MFA applicants (see “Full-Residency Rankings: Polling Locus”).

National Full-Residency Applicant Pool Size
The frequency with which each full-residency MFA program appeared on polled fiction and poetry applicants' application lists was determined by dividing the number of votes for a particular program in both fiction and poetry by the total number of applicants in these two genres polled during the 2010–2011 admissions cycle (640). Because recent applicant-pool hard data is available for thirty-nine full-residency MFA programs, a function of these two data-points was used to estimate the total national full-residency applicant pool in fiction and poetry for the 2010–2011 admissions cycle. While such an extrapolation presumes that the users of The Creative Writing MFA Blog were and are demographically similar to those individuals who did not use The Creative Writing MFA Blog to research programs during the polling period (and that those who cast votes on The Creative Writing MFA Blog were demographically similar to those who patronized the site but did not vote), this sampling was used because (1) demographic data for all full- and low-residency applicants is not known or knowable, and (2) there is no particular reason to suspect dramatic demographic differences between the various subgroups cited above, as The Creative Writing MFA Blog is a public website easily accessible by networked computer. Likewise, because user accounts allow website patrons to manage the amount of personal information they release to the public, there is no particular reason for any subset of applicants to feel chilled from casting a vote for whichever programs they favor.
The general tenor of discourse on The Creative Writing MFA Blog is consistent with the applicant-demographic polling described above: the blog constitutes a community that generally favors more selective programs over less selective ones, higher-ranked programs over lower-ranked ones, programs with better student-faculty ratios over those with worse, programs in cities and towns popular among younger Americans over those in less talked-about locales, funded programs over unfunded ones, and programs of longer duration over shorter ones. These attitudes are consistent with the present conventional wisdom expounded upon at length in most media accounts of the creative writing MFA, as well as with the sort of advice about important program features that college creative writing professors give their MFA-aspirant students every day. There appears to be nothing remarkable about the demographics of those who patronize a free, public, lightly moderated website like The Creative Writing MFA Blog.

In an open letter published online in 2009, the Association of Writers and Writing Programs (AWP) estimated that there are "more than 13,000 applicants to [full- and low-residency] MFA programs each year." Data collected for the 2012 Poets & Writers Magazine rankings indicate that this estimate is almost certainly incorrect. While no one knows for certain the total number of applicants annually to full-residency programs in the United States, based on the available data the present median estimate for this figure is 2,797. The mean estimate is 3,253; dropping the two lowest and two highest outliers from the thirty-nine program-based data-points available results in an "adjusted mean" of 3,042. Similar calculations conducted during the 2009–2010 admissions cycle produced similar results: a median of 3,116; a mean of 3,478; and an adjusted mean (dropping the one lowest and one highest outlier from a data-set including sixty-three program-based data-points) of 3,276. These numbers suggest that the 2011 Poets & Writers Magazine rankings polled more than 15 percent of the annual applicant pool to full-residency MFA programs, and that the 2012 Poets & Writers Magazine rankings have now polled more than 20 percent of this population.

As noted in the introductory article for the 2011 Poets & Writers Magazine rankings ("The MFA Revolution: Ranking the Nation's Best Programs," Poets & Writers Magazine, September/October 2010), while the Poets & Writers Magazine rankings are not scientific, they are probative. Whereas scientific rankings (which require demographic data that is, in this case, unavailable both to independent researchers and national trade organizations) traditionally poll, at the state level, well less than a hundredth of one percent of their target population, and national polls typically sample well less than a thousandth of one percent, the sample size for the 2012 Poets & Writers Magazine rankings, in a nod to the necessarily unscientific nature of the polling, is between 2,000 and 20,000 times larger as a percentage of population.

To arrive at this year’s national applicant-pool estimates, the following equation was used:

(640 divided by the number of fiction and poetry votes received by a program in 2010–2011 applicant polling) multiplied by (the number of fiction and poetry applicants reported by that program during the 2010–2011 admissions cycle)

Using the equation above, it was determined that twenty-six of the thirty-nine program-data-based estimates for the annual full-residency applicant pool (67 percent) fell within approximately 1,000 applicants of the above-cited median of 2,797, and thirty-two estimates (82 percent) fell within 1,500.
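The extrapolation above can be sketched in a few lines of Python. The function mirrors the equation exactly; the vote count and reported applicant total in the example are invented for demonstration and do not correspond to any actual program.

```python
# Sketch of the applicant-pool extrapolation described above. The figure
# 640 is the number of fiction and poetry applicants polled during the
# 2010-2011 admissions cycle; the example inputs are hypothetical.

POLLED_APPLICANTS = 640

def estimate_national_pool(program_votes: int, reported_applicants: int) -> int:
    """Estimate the national fiction-and-poetry applicant pool from one
    program's poll votes and its publicly reported application total."""
    # If program_votes of the 640 polled applicants listed the program,
    # the national pool is assumed to stand in the same proportion to the
    # program's reported applicant count.
    return round((POLLED_APPLICANTS / program_votes) * reported_applicants)

# Hypothetical program: named by 160 of the 640 polled applicants, and
# reporting 700 fiction and poetry applications received.
print(estimate_national_pool(160, 700))  # 2800
```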

The popularity of programs whose extrapolated national-applicant-pool estimates significantly exceed the adjusted mean may well be undertabulated (ranked lower than they would have been had the entire national applicant pool been polled) by the polling done for the 2012 Poets & Writers Magazine rankings; conversely, programs whose extrapolated applicant-pool estimates fall significantly below the adjusted mean may well be overtabulated (ranked higher than they would have been had the entire national applicant pool been polled). These undertabulations and overtabulations are not random; they reflect the fact that applicants less likely to have been exposed to the present conventional wisdom regarding MFA applications on sites like The Creative Writing MFA Blog are consequently more likely to apply to short-duration, poorly funded programs in high cost-of-living urban areas. The current conventional wisdom among the online-researching MFA applicant community is that it is advisable to apply to longer-duration, well-funded programs in lower cost-of-living areas. To the extent the polling conducted at The Creative Writing MFA Blog favors better-funded programs or programs that boast some of the qualities referenced earlier in this section, this statistical phenomenon is a conscious mirror-imaging of the general preferences of the best-researched MFA applicants, not an inadvertent byproduct of the rankings' methodology.

Of the ten programs listed below with the highest upward deviation from the adjusted mean—that is, the programs most likely to have been undertabulated by this ranking—not one is fully funded, and only one is even half fully funded. Only two of the ten are three years in duration; one, in fact, is only a single year in duration. More than half are located in one of four high-cost-of-living locales: Boston, New York City, San Francisco, and Washington, D.C.

Of the twenty programs with the largest downward deviation from the adjusted mean—the programs most likely to have been overtabulated by this ranking—eighteen (90 percent) are fully funded. Of the remaining two programs, one is 70 percent fully funded, and one is an Honorable Mention selection in the funding category and has of late widely advertised its ongoing push to better fund matriculants. Of these twenty programs, nearly two-thirds (65 percent) are three years in duration, and two others (10 percent) offer a substantial percentage of each graduating class third-year funding opportunities. Fifteen of the programs (75 percent) are at public universities.

Below are national applicant-pool estimates, derived from the polling data, for the thirty-nine programs with available annual admissions statistics for 2011, ranked from the lowest estimates for the national full-residency applicant pool in fiction and poetry (programs more likely to be overtabulated) to the highest (programs more likely to be undertabulated). To reiterate, the number in parentheses represents an estimate of the total full-residency MFA applicant pool for 2010–2011 based on the percentage of the 640 MFA applicants polled for the Poets & Writers Magazine rankings who voted for that school and the actual number of applications the school reportedly received.

McNeese State University in Lake Charles, Louisiana (1,176)
Virginia Polytechnic Institute in Blacksburg, Virginia (1,580)
Purdue University in West Lafayette, Indiana (1,697)
University of Minnesota in Minneapolis (1,758) *
University of South Carolina in Columbia (1,768)
University of California in San Diego (1,824)
Louisiana State University in Baton Rouge (1,874)
University of Miami in Florida (1,920)
University of Colorado in Boulder (1,996) *
Indiana University in Bloomington (2,116)
Cornell University in Ithaca, New York (2,201)
Arizona State University in Tempe (2,352)
Johns Hopkins University in Baltimore, Maryland (2,369)
University of Iowa in Iowa City (2,424)
University of Florida in Gainesville (2,600)
Syracuse University in New York (2,601)
University of Nevada in Las Vegas (2,618)
University of Wisconsin in Madison (2,640)
Hollins University in Roanoke, Virginia (2,685) *
University of Mississippi in Oxford (2,797)
Brown University in Providence, Rhode Island (2,800)
University of Virginia in Charlottesville (2,869)                                                                      
Pennsylvania State University in University Park (2,891) *
University of Michigan in Ann Arbor (3,104)
University of Texas in Austin (3,220)
University of Wyoming in Laramie (3,303) *
University of Oregon in Eugene (3,469)
University of California in Irvine (3,491)
Georgia College & State University in Milledgeville (3,576)
New York University in New York City (4,106)
University of Maryland in College Park (4,185)
Colorado State University in Fort Collins (4,221)
University of Montana in Missoula (4,292) *
Hunter College in New York City (4,364) *
George Mason University in Fairfax, Virginia (5,007) *
Brooklyn College in New York City (5,493)
Boston University in Massachusetts (5,847)
North Carolina State University in Raleigh (7,200)                                  
San Francisco State University in California (10,435)

* = Publicly-released applicant-pool data included three genres. A two-genre estimate—using the national-average 6:3:2 distribution of fiction, poetry, and nonfiction applications—has been used to generate this extrapolated figure.
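The two-genre conversion described in the note above is a simple proportion: under a 6:3:2 distribution, fiction and poetry together make up 9 of every 11 applications. A minimal sketch, with an invented three-genre total for illustration:

```python
# Under the national-average 6:3:2 distribution of fiction, poetry, and
# nonfiction applications, fiction and poetry together account for 9 of
# every 11 applications. A three-genre total is therefore scaled by 9/11
# to produce the two-genre estimate used in the starred entries above.

def two_genre_estimate(three_genre_total: int) -> int:
    """Estimate combined fiction and poetry applications from a
    three-genre applicant total."""
    return round(three_genre_total * 9 / 11)

# Hypothetical program reporting 550 applications across all three
# genres: an estimated 450 of them are in fiction or poetry.
print(two_genre_estimate(550))  # 450
```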

The variation in the figures above reflects the differing practices of applicants who conduct substantial research into programs via online MFA-applicant communities and those who do not. The list suggests, for example, that Boston University in Massachusetts is probably more popular among the total national applicant pool than among the 640 applicants polled on The Creative Writing MFA Blog. That the Iowa Writers' Workshop—whose reputation and name recognition in the field of graduate creative writing are the most likely of any program's to be equivalent across all applicant groups—is only 350 or so applicants off the median national applicant-pool estimate of 2,797 suggests that the Writers' Workshop was one of the most "neutrally tabulated" programs in these rankings, as no obvious reason exists for individual groups of applicants to be more or less familiar with the much-lauded seventy-five-year-old program. Other programs within one standard deviation of the median include the well-known, longstanding programs at Brown University (0 percent off the median), University of Virginia (3 percent), Hollins University (4 percent), Syracuse University (7 percent), and University of Michigan (11 percent).

The mean for the above estimates is 3,253, and the “adjusted mean” (calculated by dropping the two lowest and two highest outliers) is 3,042.
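The median, mean, and "adjusted mean" calculations can be sketched as follows. The list of estimates here is a small illustrative subset, not the full set of thirty-nine figures listed above.

```python
# Sketch of the median, mean, and "adjusted mean" calculations used
# above. The adjusted mean drops the two lowest and two highest
# estimates before averaging. The sample list is illustrative only.

from statistics import mean, median

def adjusted_mean(estimates, trim=2):
    """Mean of the estimates after dropping the `trim` lowest and
    `trim` highest values."""
    trimmed = sorted(estimates)[trim:len(estimates) - trim]
    return mean(trimmed)

sample = [1176, 1580, 2424, 2797, 2800, 3104, 5847, 10435]
print(median(sample))         # midpoint of the sorted estimates
print(adjusted_mean(sample))  # mean of the four central estimates
```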

As the annual applicant-pool estimates provided above relate only to fiction and poetry applications, the traditional 6:3:2 genre ratio (see “Genre Rankings: Cohort”; approximately 18.1 percent of all MFA applicants nationally are nonfiction applicants, though among the applicant community at The Creative Writing MFA Blog this figure is 15.3 percent) can be used to estimate the median and mean number of nonfiction applicants per annum: 622 (the median; in 2010 this was 567), 723 (the mean; in 2010 this was 632), and 676 (the adjusted mean; in 2010 this was 596). These figures are derived directly from the median, mean, and adjusted mean calculations for full-residency fiction and poetry programs cited above.
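Under the same 6:3:2 assumption, nonfiction applications amount to 2/9 of the combined fiction and poetry figure, which is how the nonfiction estimates follow from the two-genre median, mean, and adjusted mean. A minimal sketch:

```python
# Under the 6:3:2 genre ratio, nonfiction applications equal 2/9 of the
# combined fiction and poetry figure. Applying that ratio to the
# two-genre median, mean, and adjusted mean reported above reproduces
# the nonfiction estimates in this section.

def nonfiction_estimate(fiction_and_poetry_pool: int) -> int:
    """Estimate annual nonfiction applications from a combined
    fiction-and-poetry applicant-pool figure."""
    return round(fiction_and_poetry_pool * 2 / 9)

for label, pool in [("median", 2797), ("mean", 3253), ("adjusted mean", 3042)]:
    print(label, nonfiction_estimate(pool))  # 622, 723, and 676
```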

The above nonfiction applicant-pool estimates cross-check, broadly speaking, with estimates extrapolated from programs with known recent admissions data for nonfiction: University of Iowa (whose 2011 admissions figures produce an estimate of 278 annual applications in nonfiction nationwide); University of Wyoming in Laramie (2010 figures: 253); Sarah Lawrence College in Bronxville, New York (2006: 561); and Rutgers University in Camden, New Jersey (2010: 842). Further confirmation is provided by programs whose three-genre applicant-pool sizes are known but for which a breakdown by genre is unavailable. The nonfiction applicant pool for these programs can be estimated using the national-average 6:3:2 ratio, and this estimate can then be used to estimate the national nonfiction applicant pool in the manner described above for poetry and fiction programs. The result is a series of national nonfiction applicant-pool estimates from the following programs’ 2011 admissions data: Hollins University in Roanoke, Virginia (127); University of Arizona in Tucson (130); University of North Carolina in Wilmington (208); University of Wyoming in Laramie (217); George Mason University in Fairfax, Virginia (260); Pennsylvania State University in University Park (270); University of Notre Dame in South Bend, Indiana (283); University of Alabama in Tuscaloosa (291); University of Minnesota in Minneapolis (296); American University in Washington, D.C. (314); Hunter College in New York City (441); University of Colorado in Boulder (477); Ohio State University in Columbus (527); Columbia University in New York City (652); and University of Montana in Missoula (676). The median for these estimates is 291, while the mean is 345.

One issue with this second method of extrapolation is that highly ranked nonfiction programs are both more likely to have nonfiction-genre statistical “breakouts” available and more likely to receive a larger share of nonfiction applicants than the usual 6:3:2 ratio would suggest. Some of the national nonfiction applicant-pool estimates above are therefore slightly lower than they should be, which may explain the difference between the two sets of medians and means derived from the two methods of extrapolation described above.

While these estimates cannot fix with certainty the annual nonfiction applicant pool, that every estimate above is between 125 and 725, with a clear majority falling between 250 and 550, suggests that the correct figure is well under a thousand.

Added to the median and mean data for fiction and poetry, these nonfiction figures suggest an annual three-genre applicant pool, across all full-residency programs in the United States, of 3,088 (median) or 3,597 (mean). A more cautious approach would be to propose a range: the annual three-genre full-residency applicant pool is likely between 3,000 and 3,750, a drop of several hundred from last year’s estimate of 3,500 to 4,000.

Data Sources
For those program measures not subject to applicant polling, such as rankings and recitations of admissions, curricular, placement, student-faculty ratio, and funding data, only data publicly released by the programs—either to individual applicants, to groups of applicants, in a program's promotional literature, or via a program website—have been included in the rankings table. All data were updated regularly to reflect programs' most recent public disclosures. As previously stated, programs can alert Poets & Writers Magazine to important website updates by e-mailing the magazine directly.

Many of the nation's full- and low-residency MFA programs decline to publicly release internal data (see “Ranking Questionnaires and Program Response Rates”). Programs unable or unwilling to release data regarding their funding and admissions processes are necessarily disadvantaged by a ranking system that relies on transparency. Yet no program that fails to release this data for applicants' consideration can avoid being judged, by applicants and other observers, through the lens of such nondisclosures. As research for these rankings is based entirely on publicly-available, publicly-verifiable data, the accuracy of the data upon which the rankings are based can be readily confirmed by any party.

Low-Residency Rankings
Low-residency programs were measured in nine categories, seven of which are rankings—five employing unscientific but probative polling of the sort described above, and two based upon publicly-available hard data. Low-residency programs have not been assessed with respect to their funding packages because these programs generally offer little or no financial aid to incoming students, on the presumption that their applicants will continue in their present employment during the course of their studies.

Over the course of four successive application cycles, a total of 230 low-residency applicants were polled as to their program preferences, with these preferences exhibited in the form of application lists. The locus for this polling was the Poets & Writers online discussion board, the Speakeasy Message Forum, widely considered the highest-trafficked low-residency community on the Internet. The relatively small cohort used for this polling is attributable to the following: (1) the annual applicant pool for low-residency programs is approximately one-eighth the size of the full-residency applicant pool (see “Full-Residency Rankings: National Full-Residency Applicant Pool Size” and “National Low-Residency Applicant Pool Size”); (2) low-residency applicants do not congregate online in the same way or in the same numbers that full-residency applicants do; and (3) low-residency programs are subject to a "bunching" phenomenon not evident with full-residency programs, with only ten programs nationally appearing on even 10 percent of poll respondents' application lists, and only four appearing on 20 percent or more. For this reason only the top ten low-residency programs have been included in the rankings (also available in the September/October 2011 print edition of Poets & Writers Magazine); below this level it is difficult to draw distinctions between programs, as none received a significant number of votes over the four years polling was conducted.

One explanation for the bunching phenomenon described above may be that low-residency programs are less susceptible to comparison than full-residency programs: many of the major considerations for full-residency applicants, including location, funding, cohort quality, class size, program duration, student-faculty ratio, job placement, and cost of living, are not major considerations for low-residency applicants, owing to the structure and mission of low-residency programs. Generally speaking, low-residency programs are assessed on the basis of their faculty and pedagogy, neither of which is conducive to quantification and ranking. That three programs have such a clear advantage in the rankings over the other forty-six operating in the United States, Canada, the United Kingdom, and China is a function both of the relatively recent development of the low-residency model (with older programs tending to be more highly regarded, though none dates to before 1976) and of the consensus that appears to have existed for years that three programs in particular are strongest in terms of faculty, selectivity, and placement. It is worth noting, too, that a significant number of the world's forty-nine low-residency MFA programs were founded within the last eight to ten years (see “The Full-Residency Rankings Chart: Additional Program Measures: Dates of Establishment,” below); applicant familiarity with these programs may still be relatively low.

The four-year low-residency polling described above has been further broken down into year-by-year poll results. The cohort for the 2010–2011 annual ranking was thirty-five, for the 2009–2010 annual ranking eighty-eight, for the 2008–2009 ranking fifty-five, and for the 2007–2008 ranking fifty-two. If and when individual Speakeasy account-holders applied to programs in more than one admissions cycle, their application lists from each cycle were treated as separate slates of votes; repeat applicants accounted for less than 10 percent of the polling cohort, however. Full-residency applicants on The Creative Writing MFA Blog who applied to one or more low-residency programs as part of their overall slate of target programs (see "Structure" and "Cohort") were also included in the low-residency voting; due to the exceedingly small number of such votes, these entries were manually compared both to one another and to existing low-residency application lists to ensure duplicate lists were avoided.

While polls with larger cohorts are, all other things being equal, more reliable than those with smaller ones, the fact that the annual applicant pool for low-residency programs is likely between 350 and 400 (see “National Low-Residency Applicant Pool Size”) suggests that the total polling cohort for the 2012 low-residency MFA rankings likely represents approximately 50 percent of a single-year national applicant pool for this sort of degree program. Moreover, as is the case with the full-residency rankings, cross-checking applicant vote totals across a period of four years reveals substantial consistency in the results and quickly unearths any significant anomalies or outliers. Of the ten low-residency programs listed in this year's print rankings, eight (80 percent) ranked in the top ten in all four years of polling, while another was in the top ten for three of the four application cycles studied. All of the programs in the top ten achieved at least an Honorable Mention (a ranking between eleven and fifteen) for all four of the years in which low-residency applicants were polled. In fact, only one current top ten low-residency program has ever ranked lower than twelfth in any individual admissions cycle since 2006 (and even this occurred in only one admissions cycle).

An n.d. notation signifies that a program has not released the requisite data. An em-dash (—) indicates that the program is unranked in that category. Only nine of the nation’s forty-nine low-residency MFA programs achieved a positive score in the national placement rankings, which considered placement data for full- and low-residency programs in a single assessment (and which then, as to the low-residency programs, combined the data for fellowship and job placement into a single table): University of Alaska in Anchorage; Bennington College in Vermont; Goddard College in Plainfield, Vermont; Murray State University in Kentucky; Queens University in Charlotte, North Carolina; University of Southern Maine in Portland; Spalding University in Louisville, Kentucky; Vermont College of Fine Arts in Montpelier; and Warren Wilson College in Swannanoa, North Carolina.

In order to better acknowledge the achievement, in the placement category, of these nine low-residency programs relative to their low-residency peers, and in recognition of the fact that low-residency graduates are substantially less likely to seek postgraduate fellowships or even postgraduate university teaching positions (largely because they do not give up their present employment when they matriculate), the national placement data collected for these rankings have been reconstituted as a low-residency-only ranking.

Due to the still relatively small number of low-residency programs in the United States and abroad, only programs receiving top ten placement in any category of assessment have received a special notation in either the print or online editions of the rankings.
