Aug 29, 2008, 6:26 PM
Post #686 of 764
If you took a poll of Americans' favorite soda brands, would the resulting data set be a ranking of the most popular brands? I'm afraid the dictionary answers "yes" to that. So, just to be clear: what's on TSE is not a "list" in any sense, because that's simply not what the word means in English. It's a poll and a ranking, and that's how I've presented it.
Each year around 4,000 people apply to MFA programs; since January 1, 2007 (a period of about 18 months), nearly 500 MFA applicants, the best-informed demographic in the field of MFA programs, have participated in TSE polls. That's roughly an eighth of a single year's applicant pool. By way of comparison, in politics a poll of a state with 25 million residents is considered accurate and significant with a sample of just 0.002 percent of the population, i.e., about 500 people. And over that time, the rankings have been shockingly consistent in terms of the top 40 programs in America. Only unfunded programs (SFSU, USF, Columbia, The New School, and a few others) have seen significant fluctuation in their rank, generally downward. The other nearly 200 MFA programs in America seem to have settled into quite specific places in the ranking and stayed in the same general area, which suggests the poll's results are stable.
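For anyone who wants to check the sample-size arithmetic, here's a quick back-of-the-envelope sketch in Python. It uses the standard worst-case margin-of-error formula for a simple random sample, with the usual finite-population correction; the numbers plugged in are the ones from this post. One caveat, loudly labeled: TSE respondents are self-selected, not randomly sampled, so treat this as illustrative arithmetic, not a claim about the poll's actual methodology.

    import math

    def margin_of_error(n, N=None, p=0.5, z=1.96):
        # Worst-case margin of error at 95% confidence for a simple random
        # sample of size n; if a finite population size N is supplied,
        # apply the standard finite-population correction.
        moe = z * math.sqrt(p * (1 - p) / n)
        if N is not None:
            moe *= math.sqrt((N - n) / (N - 1))
        return moe

    # 500 respondents drawn from a state of 25,000,000 people:
    print(round(margin_of_error(500, 25_000_000) * 100, 1))  # ~4.4 points

    # 500 respondents drawn from a ~4,000-person annual applicant pool:
    print(round(margin_of_error(500, 4_000) * 100, 1))       # ~4.1 points

The point being: for a random sample, accuracy depends almost entirely on the sample size, not the population size, and sampling an eighth of a small pool actually tightens the estimate slightly. That's the intuition behind the political-polling comparison above.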
Reaction to the poll from those who are not students at the unfunded programs is now running 99:1 positive. Literally. The complaints are coming from those who attend schools not doing well in the rankings, and let me say now that I can understand that. It's a legitimate personal reaction to try to denigrate something like this when certain programs are slowly losing their luster. And it's tempting, too, to implicitly laud the methodology of the 1996 USNWR rankings instead, despite the fact that no one knows less about whether a recent college graduate will enjoy an MFA program than faculty twice that applicant's age, asked to comment (by and large) on programs they aren't familiar with, and twelve years ago at that. It's MFA applicants who do the legwork: speaking to current students and faculty at individual programs, researching funding packages, locations, cost of living, and so on. If you've ever met a faculty member at an MFA program, you understand that shoving a voluntary questionnaire into their hand and asking them to comment on every MFA program in America is a shockingly unsound and unreliable methodology. Apparently USNWR agrees; that's why they discontinued their rankings.
It's funny, actually, that those most opposed to these rankings are also opposed to ever measuring programs on objective data, like funding packages, even though objective data is exactly what has made the USNWR rankings in other fields credible and sustainable for decades. I'm put in mind of what Jonathan Kozol has said, in the field of education, about conservative activists who believe only better parenting can help failing public schools: it's very, very convenient that such activists, who fundamentally don't want anything to change in the public school system (particularly not increased funding for it), have selected an unmeasurable data point as the hinge on which any government funding of education must turn. After all, if you can't measure it, you can't do anything about it, right? It's the same with those who would rank programs only on criteria that don't lend themselves to any sort of statistical analysis. The TSE polls are valuable precisely because they mix objective and subjective considerations: some fully funded programs are more popular than others because of their location (a subjective factor), while fully funded programs generally are more popular than unfunded ones (an objective matter of funding packages).
I'm totally cool with people saying that a ranking is only so useful to a prospective applicant (I agree), or that some programs will get a raw deal because of bad word of mouth (I agree), or that no ranking system can ever be wholly reliable (I agree). But I'm not really cool with folks misrepresenting a poll and a ranking as a "list," or claiming that 500 people is a small sample in an overall pool of only a few thousand. Nor do I like the romanticizing of the discredited USNWR methodology, especially since the magazine itself has disowned it, and since even at its best it held no promise of being as accurate as the TSE rankings. Of course, if we do want to single out selectivity as a factor, we can do so via the acceptance-rate data, which show that unfunded programs are scandalously easy to get into compared to their better-funded peers (e.g., unfunded, NY-based Ivy League program Columbia has an acceptance rate seven times higher than fully funded, NY-based Ivy League program Cornell). Folks might look at this selectivity issue as a sort of key to why some programs are plummeting in the rankings.
(This post was edited by umass76 on Aug 29, 2008, 6:30 PM)