Oct 20, 2009, 6:48 PM
Post #741 of 764
I think you're using the wrong lens here. The old rankings were indeed "peer-based"--MFA faculties were asked a single question on a single questionnaire, a question that related to a single feature of an MFA program ("reputation"). The questionnaire acknowledged--as it had to--that most MFA faculties had absolutely no clue about the strength of other programs, as it gave faculty members the opportunity not to rank a given program. The methodological questions there are enormous: Can a program really be assessed solely on the basis of "reputation"? How would a poet employed at a school in NYC know anything about the fiction program at a California MFA? How well did a faculty member have to know a school to be willing to assign it a ranking (on a scale of 1 to 5), and did all professors use the same guideline, if indeed such a guideline could even be set? How does it skew the rankings that a program a faculty member had never heard of--which you'd think would count against its "reputation," since never having heard of a program says a great deal about whether it has a "reputation" in the field--was instead treated to what was essentially a mulligan? And on and on and on. In other words, people were being polled on topics they didn't know about (they were not allowed to rate their own programs, so let's not think their expertise in their own program was being mined), the polling was too limited in its scope, and the rules of the polling were destined to measure something entirely different from what the poll claimed to measure. That's why USNWR dumped their rankings.
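To make the mulligan point concrete, here is a minimal sketch (Python, with invented numbers, and assuming--the old survey's exact math isn't spelled out here, so take this as the usual convention--that a program's reputation score is averaged only over the raters who actually submitted a rating for it):

def average_reputation(ratings):
    # Average only the ratings actually submitted (1-5 scale); a skipped
    # program contributes nothing to its own average, not a low score.
    submitted = [r for r in ratings if r is not None]
    return sum(submitted) / len(submitted) if submitted else None

# 100 hypothetical raters, two hypothetical programs.
# Program A: known to everyone, rated by everyone, mixed reviews.
program_a = [3] * 60 + [4] * 40
# Program B: unknown to 95 of 100 raters (they skip it), liked by the 5 who know it.
program_b = [None] * 95 + [5, 5, 4, 4, 4]

print(average_reputation(program_a))  # 3.4
print(average_reputation(program_b))  # 4.4 -- obscurity never counts against B

Under those rules the program almost nobody has heard of outranks the program everybody has heard of, which is exactly the skew described above.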
These rankings take a different and far more logical view: They conclude that the only thing a faculty member is an expert on, or that a current student is an expert on, is their own program. Faculty and students express this expertise through their ability to advertise the program via a medium they control exclusively and absolutely: the program website. Program websites, however incomplete at times, nevertheless offer prospective applicants a wealth of information--and it's not random information, it's information specifically targeted toward the values of applicants, i.e., the way applicants expect to judge a program once they matriculate. Applicants want to know: Will you fund me? Will I have to teach? Who will teach me? How long will I have to write? Where are you located? What courses can I take? And so on. They want to know these things because these are the program features that will matter to them when they are current students in MFA programs. The website is a program's opportunity to translate its expertise on these questions to an unbiased audience: i.e., a receptacle for this information that a) has no horse in the race, and b) has everything to lose by not thinking extremely carefully about the expert information it's getting (incidentally, not merely from program websites, which are just the tip of the iceberg, but also from on-line applicant communities, undergraduate professors, current and former MFA students, blogs, and so on). Applicants, who are by definition self-interested, then make decisions in their own best interest, and their collective valuations of what matters in choosing an MFA program establish (by definition, as they're both the target audience and target community) how we determine what makes a good MFA program. For too long the theory has been, "What makes a good program is what an MFA professor thinks is a good program." The new theory is, "What makes a good program is what an MFA student thinks is a good program--and some, but not all, of the factors that go into making that determination are transparently available to the unbiased target audience/community (applicants) prior to matriculation."
So yes, this ranking does not measure in-school experience, because no ranking could. The only experts in what it's like to be in a program are current students, but the experiences of current students cannot be either quantified or compared: when Billy at CSU says he "loves his program," and Sally at OSU says she "loves her program," these are two totally different people having totally different subjective experiences, so "love" doesn't even mean the same thing to each person. Billy may love his program because he likes the skiing in Fort Collins, or because he met a girl he's fallen in love with; Sally may like OSU for her own reasons. And even if Billy and Sally are asked to speak specifically to the quality of their programs in certain respects, what Billy means by an "8" and what Sally means by an "8" (out of 10) are two different things, and essentially--the point here is--everything's floating in a vacuum, because neither Billy nor Sally is putting anything on the line in their assessments, they're both likely to be biased either for or against their own programs, and they're not in a position to, or being asked to, compare their experience to anyone else's. Not to mention it'd be practically impossible to sample enough Billies and Sallies: 4 students at Cornell are 100% of the annual class, but only 16% of Iowa's annual class, which would let Iowa say (if we used that sample size) that its sample wasn't necessarily representative--certainly not as representative as Cornell's--and Iowa would be right. Yet we couldn't increase Cornell's sample size, because we wouldn't have access to enough current or recent students to match the sample size we'd need from the larger programs. And on and on and on.
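To put rough numbers on that sampling problem (a sketch only; the cohort sizes are the back-of-envelope figures implied by the percentages above--roughly 4 students a year at Cornell, roughly 25 at Iowa, since 4 is 16% of 25):

def sampling_fraction(sample_size, cohort_size):
    # What share of a program's annual class a given sample covers.
    return sample_size / cohort_size

sample = 4            # the same 4-student sample at each program
cornell_cohort = 4    # illustrative annual class size
iowa_cohort = 25      # illustrative annual class size

print(f"Cornell: {sampling_fraction(sample, cornell_cohort):.0%} of the class")  # 100%
print(f"Iowa:    {sampling_fraction(sample, iowa_cohort):.0%} of the class")     # 16%

The same four respondents are a census at one program and a sliver at the other, which is why Iowa could fairly object to the comparison and why Cornell couldn't supply a larger sample.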
Look at it through this sort of comprehensive and contextual lens and you see how, in fact, the new system redresses some of the weaknesses of the old, however imperfectly.