Note: The following article explains the approach used to compile the 2014 MFA Index, published in the September/October 2013 print edition of Poets & Writers Magazine.
The 2014 MFA Index comprises two tables: one of seventy-eight full-residency MFA programs and one of twenty-six low-residency MFA programs. Full-residency MFA programs are assessed on the basis of twenty measures: ten listings of survey results, and ten listings of other important program features. While the ten surveys detailed in the full-residency table cannot be classified as scientific, all are predicated upon sufficient hard data to be substantially probative. A scientific survey of full- or low-residency creative writing MFA programs is not presently possible, as more than half of the nation’s full- and low-residency graduate creative writing programs do not publicly release the data necessary for such an assessment.
Five of the ten full-residency MFA surveys are based on a survey of a large sample of current MFA applicants. In each of these five surveys, programs are ordered on the basis of the number of times they appeared on applicants’ application lists; the resulting numeric ordering therefore assesses only the popularity of individual programs among a large sample of well-researched aspiring poets and writers, and is not an assessment of overall program quality. Prospective MFA students have a wide range of resources at their disposal in selecting where to apply: not only quantitative data but also programs’ promotional materials; first-hand interviews with program staff, faculty, and current and former students; large online communities of fellow applicants, many of them devoted to MFA research, discussion, and application advising; their own past experiences in undergraduate and non-institutional workshop settings; the literary oeuvres of current faculty members and recent alumni; previous program assessments in various online and print media, including first-hand accounts from, and interviews with, current and former faculty and graduates; and, in some instances, program visits or prior first-hand experience workshopping with selected faculty members. Even so, some information is necessarily impossible for prospective MFA students to access unless and until they become matriculants.
Once students matriculate, however, they may develop subjective attachments to their alma maters, making an objective assessment of their own experiences, as opposed to the provision of definitionally “self-serving” survey responses, difficult or even impossible. For this reason, and for others discussed in more detail below, current MFA students are not asked to self-report on their programs, nor are they considered to have broader, more accurate, or more timely knowledge of programs they do not attend than the unmatriculated applicants currently researching their program options.
In the remaining five surveys in the full-residency table, programs are noted by their relative numeric placement within the data sets compiled. The five hard-data survey columns in the full-residency program table are as follows: selectivity, funding, student-faculty ratio, fellowship placement, and job placement. These categories appear in the table in the chronological order in which applicants to graduate creative writing programs encounter them: a program’s selectivity determines, all things being equal, an individual applicant’s likelihood of securing admission; the funding category indicates, all things being equal, what aid package will be attached to that program admission if and when it comes; student-faculty ratio gestures toward, all things being equal, a student’s ability to access individual faculty members while in-program; fellowship placement generally measures the success of individual programs in placing recent graduates in post-MFA fellowships; job placement generally measures the success of individual programs in placing graduates who have already achieved some measure of fellowship/publishing success in full-time creative writing teaching positions in higher education.
These survey results are scientific to the extent that they order programs on the basis of quantitative data publicly released by the programs themselves, and unscientific to the extent that not every program has released data for every category of assessment. These five columns therefore constitute an ordering of all publicly known data rather than an ordering of all extant data. A full complement of funding and admissions data is available for well over half of the nation’s full-residency MFA programs; the remaining programs are generally smaller, newer, lightly advertised, nondomestic, or regional in terms of their applicant base. As all of these programs have websites, however, and as all enjoy exclusive dominion over their online presence, the absence of any specific funding or selectivity data in these programs’ online promotional materials is taken as one indication that these programs would be unlikely to place within the top half of all programs in the funding and selectivity categories. The full-residency table is based in part on the presumption that it would be counterintuitive for a program providing full funding to a substantial percentage of its student body to not indicate as much in its promotional materials, or that a program among the most selective in the country would fail to designate itself as such. Program websites are regularly reviewed to determine whether a program has added information to its online profile. Program administrators can also e-mail Poets & Writers Magazine directly, at firstname.lastname@example.org, to draw attention to any substantive website, program-policy, or funding/admissions changes.
Based on the data presently available, it is not anticipated that any of those programs without a full complement of funding and admissions data available in some form online would have appeared in the top half of full-residency programs in the funding category. These programs, given the incompleteness of their promotional materials, are also much less likely to attract sufficient applications to place in the top half of the selectivity listing. At present, a program’s yield-exclusive acceptance rate would have to be 10 percent or less to place it in the top half of this category. As to the two placement-related surveys, these do not rely on programs’ promotional materials or on their willingness to release internal data to individual applicants or groups of applicants, so all programs nationally, both full- and low-residency, are equally eligible for a top-half placement in the fellowship and job placement categories. Data sufficient to calculate student-faculty ratios for virtually every MFA program in the United States were also readily available.
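To make the selectivity threshold above concrete: a yield-exclusive acceptance rate simply divides offers of admission by total applications, deliberately ignoring how many admitted applicants ultimately matriculate. A minimal sketch (the program figures below are invented for illustration, not drawn from the Index):

```python
def yield_exclusive_acceptance_rate(offers, applications):
    """Offers of admission divided by total applications received.
    Matriculation (yield) is intentionally left out of the calculation."""
    return offers / applications

# Hypothetical program: 45 offers out of 600 applications is a 7.5 percent
# acceptance rate, which would clear the 10-percent-or-less bar described
# above for a top-half placement in the selectivity category.
rate = yield_exclusive_acceptance_rate(45, 600)
print(f"{rate:.1%}")  # 7.5%
```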
By definition, quantitative surveys of any kind—whether in the field of education or any other field—perform poorly when it comes to assessing unquantifiable program features and/or features that can only be assessed subjectively by an individual experiencing them firsthand. That such features are most assuredly a vital element of every graduate creative writing program does not and should not preclude the possibility of statistics-based assessment measures operating alongside the primary mechanism programs have to introduce applicants to unique curricular features: their own websites and promotional materials. Programs at all times bear the primary responsibility for educating prospective applicants regarding program offerings, and no program assessment or series of program assessments could or should usurp this responsibility—especially as no applicant applies to a program without first researching it online or by other means. Indeed, programs have a captive audience of hundreds if not thousands for their online and hard-copy promotional materials. Some programs may worry that neither the full-residency table nor any other series of surveys or hard-data assessments could adequately capture each graduate creative writing program’s most distinctive elements; these fears are understandable, but misplaced. The applicants surveyed for the full-residency table discussed in this article had access to individual program websites and promotional material at all times before, during, and after their formation of a list of where they intended to apply.
If specific, unquantifiable program features do not lead to programs excelling in the hard-data measurements or the applicant popularity surveys, it is neither because the 2014 MFA Index did not consider such features nor because the applicants surveyed for the Index did not. Interviews with hundreds of MFA applicants conducted as part of this program assessment project indicate that applicants can and do take into account a host of unquantifiable program features in deciding where to apply. What may be the case, instead, is that certain unquantifiable program features are less popular among applicants than among those program faculty and administrators who first brainstormed and implemented them. By surveying current applicants rather than individual program faculties and administrators, the 2014 MFA Index ensures that the class of persons surveyed for the program assessment is one likely to have recently accessed the very program websites which (presumably) prominently advertise those features of which programs are most proud. In contrast, students, faculty, or administrators at one program are highly unlikely to visit the websites of other programs on a regular basis; consequently, they are far less likely to be aware of peer programs’ idiosyncratic, online-advertised features.
The approach to compiling the 2014 MFA Index was devised with the following goals in mind: (1) To better inform applicants of their program options by offering the nation’s first complete listing of graduate creative writing programs; (2) to more widely disseminate hard data regarding objective but non-exhaustive/non-conclusory measures of program quality, which often play a role in applicants’ application and matriculation decisions; (3) to be responsive to, and reflective of, the opinions, interests, concerns, and values of current applicants to graduate creative writing programs; (4) to emphasize the importance of both inputs and outputs in assessing individual graduate creative writing programs; and (5) to enhance the transparency of the admissions process for present and future applicants to graduate creative writing programs.
In the twelve months between April 16, 2012, and April 15, 2013, 304 full-residency MFA applicants were surveyed on The MFA Draft 2013 Group, hosted by Facebook.com. Created by a group of then-current MFA applicants in the early spring of 2012, the Draft at its peak had more than 1,500 members during the survey period, and featured dozens of new MFA-related conversation threads—some garnering up to a thousand individual responses—each day. The Draft was created and is moderated entirely by the applicants themselves; admission of new members was and is determined entirely by applicants; and decisions regarding the production of personal or program information for purposes of media research were likewise made entirely by applicants. The site was promoted via several methods: word of mouth; links and references in the MFA Draft 2012 Group, the predecessor to the MFA Draft 2013 Group (and there is also, now, a well-attended MFA Draft 2014 Group); and links and references on The Creative Writing MFA Blog, founded by novelist Tom Kealey (and described in detail in the methodology article for the 2012 MFA Index). The author of this article was at no time a moderator of the Draft, nor did the author play any role in the creation or promotion of the MFA Draft 2013 Group community. The Draft was the only survey locus used for the one-year applicant popularity survey described above; the five-year survey is a compilation of the data from this one-year survey and the past four years of Poets & Writers Magazine program assessments.
Individual users on the MFA Draft 2013 Group were distinguished by their proper names, as reflected by the full names (first and last) associated with their Facebook accounts. Internet research was conducted to verify applicants’ identities as and when authenticity was in question. The document in which Draft 2013 applicants voluntarily compiled their application decisions was part of an application that tracks all changes to uploaded documents by editors’ full names/Facebook accounts. This application ensured that any and all changes to the document were traceable to specific users. Users were permitted to amend their application lists in real-time; though less than 15 percent of users elected to make amendments to their lists once posted, all changes to applicants’ lists were accounted for by both the one-year and five-year applicant surveys appearing in the program tables. Substantial additional security measures were taken to ensure the authenticity of compiled application lists.
As noted, surveyed applicants voluntarily listed the programs to which they had applied or intended to apply, and were permitted to adjust these lists at any point during the survey period.
Period of Data Collection
Only recently collected data is of use to applicants. The one-year full-residency survey uses no survey data predating April 16, 2012; the five-year full-residency survey uses no survey data predating April 16, 2008; the low-residency survey uses no data predating April 16, 2007. The longer survey periods for low-residency MFA programs were necessitated by the significantly smaller applicant pools for these degrees.
The MRP Index
Eligibility for inclusion in the 2014 MFA Index was determined using the “MRP Index,” available for review at www.mfaresearchproject.wordpress.com. The Index tracks how programs place in seven survey areas: popularity among applicants, selectivity, funding, student-faculty ratio, fellowship placement, job placement, and location. Programs are categorized by the number of survey areas in which they place in the top half of all programs (a) eligible for the category and (b) with data publicly available. The number of programs considered eligible for each category of assessment varies by category: in some instances (fellowship placement and job placement) newer programs cannot yet be added to the pool of eligible programs because they have yet to graduate any poets or writers, whereas in other instances (selectivity, funding, and student-faculty ratio) certain programs may not yet have released the data necessary for assessment in these areas. For the popularity and location surveys, all contemporaneously extant programs were automatically made members of the pool of eligible programs. Otherwise, the pool of eligible programs was 121 for the selectivity survey, 161 for the funding survey, 144 for the student-faculty ratio survey, 94 for the fellowship placement survey, and 94 for the job placement survey. For the fellowship and job placement surveys, only programs known to have graduated at least five classes of poets and writers by the beginning of data collection in these categories (2009) were considered to have had a full opportunity to place; as a result, the number of MFA programs founded prior to fall 2001 was used as the “eligible pool” figure for these measures. (So as not to disadvantage programs founded after that time, however, such programs were nevertheless included in the survey if they successfully placed a graduate in an eligible fellowship or teaching position.)
As 94 full-residency programs were founded prior to fall 2001, the size of the pool for the job placement measure was set at 94. In future years the size of the “eligible pool,” consistent with the above-stated methodology, will increase.
Programs appearing in the top half of three or more of the seven survey areas listed above were included in the 2014 MFA Index.
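The inclusion rule can be restated in a few lines of code. This is an illustrative sketch only (the Index’s actual compilation was an editorial process, not published software); the pool sizes for popularity and location below are invented, while the other five are the figures given in the text.

```python
# Survey areas tracked by the MRP Index.
CATEGORIES = [
    "popularity", "selectivity", "funding", "student_faculty_ratio",
    "fellowship_placement", "job_placement", "location",
]

# Eligible-pool sizes per category. The popularity and location figures
# here are hypothetical; the rest come from the article.
POOL_SIZES = {
    "popularity": 200, "selectivity": 121, "funding": 161,
    "student_faculty_ratio": 144, "fellowship_placement": 94,
    "job_placement": 94, "location": 200,
}

def top_half_count(placements, pool_sizes=POOL_SIZES):
    """Count the categories in which a program's rank (1 = best) falls
    within the top half of that category's eligible pool. A missing or
    None rank means no public data, which never yields a top-half place."""
    return sum(
        1 for cat in CATEGORIES
        if placements.get(cat) is not None
        and placements[cat] <= pool_sizes[cat] / 2
    )

def included_in_index(placements, pool_sizes=POOL_SIZES):
    """A program is listed if it places in the top half of at least
    three of the seven survey areas."""
    return top_half_count(placements, pool_sizes) >= 3

# A hypothetical program ranked 10th in popularity, 50th in selectivity,
# and 40th in fellowship placement clears the three-category bar.
example = {"popularity": 10, "selectivity": 50, "fellowship_placement": 40}
print(included_in_index(example))  # True
```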
Survey Questionnaires and Program Response Rates
Graduate degree programs in creative writing respond to assessment-related inquiries at a lower rate than do programs in almost any other field of study in the United States. This is one of several reasons the MFA Index does not seek to survey the opinions of program faculty and administrators as to the features and efficacy of peer programs.
The following response rates were reported for questionnaires sent pursuant to the 2012 U.S. News & World Report program assessments (in each instance, the field of study is followed by the response rate from all programs surveyed in the field): Engineering (98 percent); Business (91 percent); Criminology (90 percent); Education (90 percent); Medicine (84 percent); Healthcare Management (76 percent); Statistics (67 percent); Law (66 percent); Public Health (61 percent); Audiology (57 percent); Library and Information Studies (56 percent); Pharmacological Sciences (56 percent); Social Work (56 percent); Occupational Therapy (53 percent); Veterinary Medicine (48 percent); Nursing (47 percent); Computer Science (46 percent); Physician Assistance (45 percent); Sociology (43 percent); Speech-Language Pathology (42 percent); Public Affairs (40 percent); Rehabilitation Counseling (40 percent); Fine Arts (39 percent); Political Science (37 percent); Economics (34 percent); Mathematics (34 percent); Physical Therapy (33 percent); English (31 percent); Physics (31 percent); Earth Sciences (29 percent); Clinical Psychology (28 percent); Chemistry (25 percent); Psychology (25 percent); History (23 percent); and Biological Sciences (15 percent). Respondent institutions in each of these academic fields were aware that their questionnaire responses would not be kept confidential, and that their participation in surveys sponsored by U.S. News & World Report would result in publication of a substantial stock of program-specific data regarding each university queried.
Every two years, the Association of Writers & Writing Programs (AWP) sends a questionnaire to programs in the field of creative writing—a field whose administrators and faculty are no longer surveyed (and whose programs are no longer assessed) by U.S. News & World Report—in much the same way U.S. News & World Report does for the thirty-five fields of study listed above. A crucial difference between the two questionnaires, however, is that the AWP questionnaire guarantees anonymity to its respondents; AWP releases no program-specific data or survey results pursuant to its biennial questionnaire. It is worth noting, too, that AWP estimates (as of its 2009 Annual Report) that 34 percent of programs in the field of creative writing have thus far declined to become members of AWP. These programs are not subject to AWP questionnaires.
According to AWP’s publicly released summary of the program questionnaire it distributed in 2007, between 40 and 60 percent of AWP’s member programs declined to answer AWP’s queries regarding internal admissions and funding data. Specifically, 47 percent of programs declined to reveal how many assistantships they offered annually to incoming students, 61 percent declined to reveal the stipend offered to teaching assistants, 56 percent declined to reveal whether they offered a full-tuition waiver to teaching assistants, 49 percent declined to reveal how many scholarships were offered to incoming students, 55 percent declined to reveal their annual number of applicants, and 52 percent declined to reveal the size of their annual matriculating class. AWP did not distinguish between low-residency and full-residency programs on the questionnaire.