Aug 30, 2008, 1:13 PM
Putting aside the Columbia issue entirely, I think it's important to trace the history of MFA rankings:
* In 1996, a national news magazine distributed a questionnaire to MFA faculties asking them to give every school in America a single reputation score. The programs were thus ranked on one measure only, based on writers filling out a single piece of paper, and the magazine immediately abandoned these rankings as unreliable.
* In 2007, a single reporter for a national news magazine compiled "top ten" lists in various areas of interest. He based his lists (and he insisted they were not rankings, explicitly noting, too, that twenty other schools could easily have made his "top ten" list) on a handful of interviews with select MFA faculty, many of whom he knew personally beforehand. He also placed his alma mater in the top ten of MFA programs, only the second time in its history it had placed that high (previously, it had once landed in a six-way tie for 10th).
* In 2006, 2007, and 2008, hundreds of MFA applicants voluntarily ranked their favorite MFA programs based on their own research into their options, taking into account every factor of interest that applicants consider, and provided this data to a single individual for compilation. That individual is an attorney, a former journalist, a professional writer, and the author or co-author of two books, one of which is the only reference guide on the market for MFA applicants.
These latter rankings will, in 60 days' time, be published in paperback by a major international publisher.
Can someone explain again why--why in the world--anyone would look at the 1996 single-area "rankings" and 2007 "lists" and consider these more reputable than the rankings we have now?
In fact, by no means is funding the only factor considered in the MFA rankings on TSE. If that were the case, someone would have to explain the following to me:
Columbia (largely unfunded): #4 (USNWR96); #14 (2007); #24 (2008); #43 (2009).
Swing: -39; Difference*: -30 (#4 to #34).
* Difference is measured between the USNWR96 rank and the average of the 2008 and 2009 placements.
NYU (largely unfunded): #6 (USNWR96); #17 (2007); #17 (2008); #27 (2009).
Swing: -21; Difference: -16 (#6 to #22).
Arizona (largely unfunded): #9 (USNWR96); #21 (2007); #31 (2008); #27 (2009).
Swing: -18; Difference: -20 (#9 to #29).
Utah (largely unfunded): #16 (USNWR96); #58 (2007); #82 (2008); #90 (2009).
Swing: -74; Difference: -70 (#16 to #86).
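For anyone checking the figures above, here is a minimal sketch of how the Swing and Difference numbers appear to be computed, assuming Swing is the 1996 rank minus the 2009 rank, and Difference is the 1996 rank minus the average of the 2008 and 2009 placements (the footnoted "average placement 2008-9"):

```python
def swing(rank_1996, rank_2009):
    # Swing: net movement from the 1996 USNWR rank to the 2009 rank
    # (negative means the program fell in the rankings)
    return rank_1996 - rank_2009

def difference(rank_1996, rank_2008, rank_2009):
    # Difference: 1996 rank minus the rounded average of the
    # 2008 and 2009 placements; also returns that average rank
    avg = round((rank_2008 + rank_2009) / 2)
    return rank_1996 - avg, avg

# Columbia: #4 (1996), #24 (2008), #43 (2009)
print(swing(4, 43))           # -39
print(difference(4, 24, 43))  # (-30, 34), i.e. #4 to #34

# Utah: #16 (1996), #82 (2008), #90 (2009)
print(swing(16, 90))          # -74
print(difference(16, 82, 90)) # (-70, 86), i.e. #16 to #86
```

The NYU and Arizona figures work out the same way under this assumption.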
So unfunded programs are dropping, yes, but Columbia's drop is nearly twice NYU's. Same city, nearly the same USNWR96 reputation, comparably stellar faculty, comparable funding packages. Meanwhile, Columbia has held its ground substantially better than Utah, though both programs were top 20 in 1996 and both are largely unfunded. Why? Obviously, Columbia's overall reputation as a university significantly outstrips Utah's (e.g., Columbia is #8 among undergraduate institutions, Utah #127, according to USNWR).
So this notion that all unfunded programs are being treated equally by the TSE rankings is wildly inaccurate, as is the idea that all New York City programs are faring equally poorly. They're not. At all.
P.S. The above comparisons are actually excessively generous to Columbia, as right now New York University stands at #7 in the forthcoming 2009 P&W Reader Poll, and trending upward, while Columbia is at #19, and trending downward. Again, why this cross-town discrepancy? And why this consistent discrepancy across three years of rankings?
(This post was edited by umass76 on Aug 30, 2008, 1:14 PM)