Mar 20, 2008, 11:58 PM
Post #197 of 430
Re: [wardis] Columbia, again
Unfortunately I did miss your comment--sorry about that (NB: my in-box gets about fifty messages/day that are just comments from the blog, and sometimes I miss reading a few of them by accident). This is a good question and one I'm actually happy to answer.
The answer is that it's a question of who the rankings are intended for. The concern you've raised reflects, if we break it down, one way of looking at funding rankings: that the rankings are intended for the benefit of the programs, and so being "fair" to individual programs is of paramount importance. Under that rubric, you're absolutely right, we'd have to take into account an endless series of factors, many of which we'd have no way of knowing (largely because the very programs we were trying to be "fair" to have no interest whatsoever in divulging such information to us). For instance, a program-centric measure of funding would require not only information on the size of each program's entering class and the number of financial aid packages available, it would also--because "fairness" would be the byword of the analysis--require that we consider the endowment of the university or college, the efforts made by program administrators to secure additional lines of funding, trends in alumni giving, the availability of on-campus housing, cost-of-living comparisons, and so on.
The problem here, of course, is that it's not the programs that need the rankings, it's the applicants. It's not the programs who are lacking in information about, well, themselves--once again, it's the applicants. In fact one of the main reasons I started analyzing MFA programs using hard data was because I felt MFA programs were waging, purposefully or not, an asymmetrical brand of Information-Age warfare against their applicants--a war in which programs regularly had data they could release to students, but decided not to.
I tend to have less sympathy than others for the programs and their administrators, if only because I've scoured the websites of 200+ programs (literally) and have seen first-hand (and en masse) how intentionally misleading, disingenuous, vague, and sometimes downright confrontational these websites are (NB: Columbia happens to have one of the worst websites in this respect--which is fitting, given the bad news it otherwise would have to, in bold and unambiguous and apologetic terms, disclose to its prospective students). It shouldn't, actually, be hard to find out the stipend associated with a program's TAship offerings, but in fact--at most program websites--it's absolutely impossible (because the information isn't public), even though the programs have that data readily available. Some schools won't even tell you how many students they admit, and only 45 schools (of nearly 300) will reveal how many applications they receive, even though every single undergraduate institution in America does so annually. And that's just the tip of the iceberg: the general trend with MFA programs is to tell prospective applicants absolutely nothing the program doesn't want them to know. And since most programs want the students that are applying to them to know nothing--judging simply from the various programs' skeletal promotional materials--the students are told nothing, and many of them end up knowing nothing. Which, of course, is hardly their fault.
It's in this context, and against this backdrop, that I decided that the funding rankings had to be targeted at prospective students, not program administrators. And so I utilized Rawls' "veil of ignorance" philosophy, which says (in rough-and-tumble paraphrase) that if we're assessing a society's institutions, we want to judge them from the perspective of an unborn child who will be born into that society, but whose place in the society is as yet unknown. That way, we end up with the fairest assessment, because the assessment is unbiased by knowledge of where any particular individual will be situated in the society whose institutions are to be judged.
So how does that apply here? Basically, it means that the funding rankings are targeted at the archetypal, anonymous, median student. In other words, the student who, having been accepted to a program, has the average chance at funding of any random acceptee to that school. And what question would that archetypal, anonymous, median student ask? S/he would want to know the following: all things being equal, what percentage of students get funded at this program, and how much money (taking cost of living into account) does the average student receive? And that's the question the upcoming funding rankings will answer. Because truthfully, it's no consolation to the 85% of Pitt acceptees who don't get funding to say that, if only Pitt were smaller, they would have. Likewise, it doesn't cause even a flutter in the hearts of Cornell acceptees to be told that, hypothetically, if Cornell were larger they'd have half as much funding, or even less--because every one of those acceptees knew ex ante, when they applied to Cornell, that if they got in their funding was guaranteed, and was guaranteed at an exceedingly high level. What the funding rankings will tell applicants, then--and, importantly, will tell them before they apply to any school, so they can factor these calculations into their decisions on where to apply--is what the relative percentage chance is that they will get funding (and how much funding they will get) if they are admitted to the university or college they're considering applying to.
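For anyone who likes to see the arithmetic spelled out, the calculation above can be sketched in a few lines of code. This is only an illustration of the general idea--percentage funded times average stipend, discounted by local cost of living--not the actual formula or data behind the upcoming rankings; the program names, stipends, and cost-of-living figures below are all invented.

```python
# Hypothetical sketch of the "median acceptee" funding metric described above.
# All figures are made up for illustration; none reflect real programs.

def expected_funding(pct_funded, avg_stipend, col_index):
    """Expected annual funding for a random acceptee, cost-of-living adjusted.

    pct_funded  -- fraction of the entering class that receives funding (0..1)
    avg_stipend -- average annual stipend among funded students, in dollars
    col_index   -- local cost-of-living index (1.0 = national average)
    """
    return pct_funded * avg_stipend / col_index

# Invented example programs: (pct_funded, avg_stipend, col_index)
programs = {
    "Program A": (1.00, 18000, 0.95),  # fully funded, inexpensive town
    "Program B": (0.15, 15000, 1.00),  # few acceptees funded
    "Program C": (0.30, 25000, 1.60),  # large stipend, expensive city
}

# Rank programs by what the median/random acceptee can expect.
ranked = sorted(programs.items(),
                key=lambda kv: expected_funding(*kv[1]),
                reverse=True)

for name, params in ranked:
    print(f"{name}: ${expected_funding(*params):,.0f} expected per year")
```

Note how the ranking rewards a small, fully funded program (like the Cornell example) over a large program that funds only a slice of its class (like the Pitt example), which is exactly the median-acceptee perspective described above.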
I'll have to leave it to someone else to compile a funding ranking targeted at programs, rather than students. That sort of ranking just doesn't seem very interesting--or useful, or needed--to me.
Hope this answers your question.