Current MFA Rankings


Woon

Oct 19, 2009, 11:40 PM

Post #726 of 764
Re: [umass76]

UMass is always ranked so high, and yet I'm not drawn to that program. The funding worries me. Also, one current student didn't give particularly high marks to the administration. On another forum, one guy complained about how ugly the campus is. I know two of these three reasons have no bearing on the MFA program itself, but I can't erase them from my mind.


bighark

Oct 20, 2009, 12:41 AM

Post #727 of 764
Re: [Woon]

The effectiveness of a program's pedagogy, faculty, and university administration is among the many things not measured in these rankings.


umass76

Oct 20, 2009, 1:39 AM

Post #728 of 764
Re: [bighark]

Bighark,

Coincidentally, the three program elements you mentioned are also not measured by any ranking of undergraduate or graduate programs I've ever seen. Probably because the idea of measuring the "effectiveness" of a "pedagogy" is so nebulous I have no idea where one would even start; likewise, how do you know when a teacher is being "effective," and how would you measure that except anecdotally? And of what importance is the "effectiveness of the administration" of a program--i.e., a) what does that even mean, and b) is it somehow of more (or equal, or nearly equal) importance than the amount of money that administration gives to students, or the selectivity of that administration's admissions, or the teaching load that administration requires of program admittees, all of which are measured and/or contained within the rankings?

I think it's important we remain realistic about what rankings can and cannot do--not just MFA rankings, but any rankings. I mean, the rankings don't measure the attractiveness of the school mascot at any of these programs, either. Some things can't be quantified, are irrelevant, or pale so badly in importance next to other measurable factors that the inability to work with them should be no impediment whatsoever to compiling and releasing rankings.

Unless someone can propose a realistic and workable way for an applicant to take "pedagogical effectiveness" into account in looking at schools--and incidentally, virtually no MFA programs advertise their specific pedagogical perspectives (which often vary from faculty member to faculty member, anyway) to prospective applicants online or otherwise, making even the source material bighark would partially base his/her rankings on unknowable--I don't see what use or place it has in a discussion of rankings.

And of course, it's also worth pointing out that the rankings--by definition--take into account whatever the applicants who responded to the poll took into account. So, in fact, if the hundreds of applicants who participated in the polling did take into account the three factors cited above (assuming anyone could really unpack what they are or import), then actually the rankings do consider them.

Be well,
S.


(This post was edited by umass76 on Oct 20, 2009, 1:45 AM)


bighark

Oct 20, 2009, 2:19 AM

Post #729 of 764
Re: [umass76]

Seth,

For the record, then, the rankings in question do not account for any of the factors that might improve a student's writing.

The only thing I can tell from the methodology of these rankings is that they present opinions of perceived value from people who have not graduated from any writing programs. Zero percent of those sampled can comment on the efficacy of any given program.

Res ipsa loquitur.


yeahyeahyeah

Oct 20, 2009, 4:17 AM

Post #730 of 764
Re: [bighark]


In Reply To
Seth,

For the record, then, the rankings in question do not account for any of the factors that might improve a student's writing.

The only thing I can tell from the methodology of these rankings is that they present opinions of perceived value from people who have not graduated from any writing programs. Zero percent of those sampled can comment on the efficacy of any given program.

Res ipsa loquitur.



I think cohort is a more important factor when it comes to improving a student's writing. The more applicants a program attracts, the better the chances that program will be able to put together a nice group of high quality writers. So in this way the rankings are pretty helpful. And it also helps when choosing a few "safety" schools that don't have as many applicants.

And I don't see how quality of pedagogy and faculty can be ranked. We don't all have the same aesthetic. Or the same reaction to teaching.


klondike

Oct 20, 2009, 11:13 AM

Post #731 of 764
Re: [bighark]

Excellent point, Bighark.


jamie_mu

Oct 20, 2009, 11:45 AM

Post #732 of 764
Re: [Woon]

I agree with you about UMass Amherst. While it seems like a decent program, the funding situation right now is a mess. Since a TA position is guaranteed once you get it, and since optional third and fourth years are offered to MFA students (and eighth years to PhD students), many students are staying extra years, tying up the already limited positions within the composition program. If no one graduates, then no new positions open up.

I attribute UMass Amherst's position in the rankings to Tom Kealey. We all own his book. He speaks highly of UMass, so many people end up applying there. This goes to show how just about any ranking is moot by the time it is released. When Tom went to UMass the funding wasn't all that bad and the job market was a little better (not much, I know). So people graduated. Now many things have changed, but UMass Amherst's popularity is still on the rise. It's not a bad program, but given that funding seems to be so high on everyone's list when considering where to apply, it seems strange that UMass is still such a popular choice.

I think Seth's rankings are informative but they could never tell us what programs are the "best" (I don't think any rankings could). It's a bit like awarding valedictorian to the most popular person in school. Everyone may like that person but it doesn't mean he or she is the smartest in the class. We now know which programs are the most popular.


umass76

Oct 20, 2009, 12:10 PM

Post #733 of 764
Re: [jamie_mu]

Jamie (and bighark),

I think the article says--admits--that no ranking can ever claim to know the absolute answer to the question of program quality. That is an unknowable: for MFA programs, for any sort of educational program. Even judging, say, elementary schools by their students' standardized test scores is fraught with complications about whether that really measures the quality of teaching or the amount of real learning going on. My point here being: saying that these rankings can't make an absolute determination of program quality is no slight to the rankings, it doesn't lessen their import in any way, it merely points out the impossibility of getting at this particular knowledge through rankings.

That said, bighark is absolutely, 100% wrong. I can't even express how wrong, and I say that firmly but also respectfully. One of the biggest factors in whether one has a good educational experience in a program--whether one improves one's writing--is whether one actually has, in the first instance, time to write. A reasonable teaching load and full funding are the only things that ensure that for a writer. The one semester at the IWW when I taught two classes, my learning curve absolutely collapsed; I didn't have the time, energy, or focus to improve my writing as I wanted.

Likewise, the selectivity of a program determines program cohort quality, and a huge percentage of what one learns in one's program one learns from being immersed in a community of talented, smart writers who challenge your presumptions about writing both in and out of class. And the rankings, with this in mind, do measure selectivity (both through the overall poll, which gauges popularity among the most informed applicants, and through the hard-data selectivity rankings).

Thirdly, to say that applicants to MFA programs don't take into account any factors that would contribute (once they enter a program) to the improvement of their writing is just flatly wrong; to the extent the overall poll is a statistically significant reflection of which schools applicants believe best suit their needs, to agree with bighark's position one would have to believe that applicants didn't take these things into account in forming application lists--which is not just anti-commonsensical, but actually empirically wrong.

Finally, the fact that the rankings are accompanied by a data chart listing several data points relevant to how one improves one's writing--e.g., program size (which goes to the vibrancy of the community as an inspiring force, as well as the student-to-teacher ratio, which goes to mentoring quality) and curriculum focus (some students learn better, and learn more, in a studio system, some in an academics-oriented system)--makes bighark's observation even more bankrupt of sense. I say "bankrupt of sense" because the comment simply in no way reflects what these rankings say, do, or account for. It's almost like you haven't even looked at (let alone considered) the rankings, bighark. I would love to know your response to what I've just said here.

S.


(This post was edited by umass76 on Oct 20, 2009, 12:17 PM)


Junior Maas

Oct 20, 2009, 1:14 PM

Post #734 of 764
Re: [umass76]

The only thing I can tell from the methodology of these rankings is that they present opinions of perceived value from people who have not graduated from any writing programs. Zero percent of those sampled can comment on the efficacy of any given program.

So...is this statement true or false? ^^

And if you base rankings solely on the opinions, the best guess, of those who haven't even applied to or attended a program -- how is this different than asking me to rank the next ten books on my reading list? (I.e. I ask you guys, who haven't read Julie Orringer or Attila Bartis, who is the best, who will help me the most, and then I tell a web site I'm going to read Julie Orringer because that's what most people who haven't read Julie Orringer said -- and then the next round of readers will positively stomp Bartis's scruffy little Hungarian head right into the mud trying to reach Julie Orringer, who is, though no one's actually read her, the best).

Maybe all I'm saying is, Poor Attila Bartis!

But again, I was never good at the maths -- it's just that, from all the current explanations, I still can't tell.

Res ipsa loquitur.

I don't know what this means.


six five four three two one 0 ->

(This post was edited by Junior Maas on Oct 20, 2009, 1:20 PM)


bighark

Oct 20, 2009, 1:14 PM

Post #735 of 764
Re: [umass76]

Cum hoc ergo propter hoc

The argument: The more applicants a program has, the better the program is going to be. Therefore, the programs with the most applicants are the best.

I remain unconvinced.

And by your own admission, "No ranking ought to pretend to establish the absolute truth about program quality, and in keeping with that maxim the rankings that follow have no such pretensions."

See? We both agree that the rankings are limited in value. Now explain to me how I'm so flat-out wrong.


Junior Maas

Oct 20, 2009, 1:23 PM

Post #736 of 764
Re: [umass76]

EDIT!

I forgot to say it should go without saying that (besides the sweet pile of money) this is of course a difficult, thankless job, and even those a little fuzzy on everything probably still appreciate it. (I do).

DOUBLE EDIT!

Also, not to be too bitchy or anything, but Cornell at number nine? How! I think last year's student roster is all publishing novels or appearing in the NYT or Atlantic, etc. Seriously, man. I just found this video of their graduation reading after the outpouring of awe-struckness over one student's novel excerpt, and goddamn. Great, publishable stuff all around. They seem like a really tight bunch, too, very invested in helping each other out. I can't imagine UMass has them beat. (Though more people might now apply to the latter.)


six five four three two one 0 ->

(This post was edited by Junior Maas on Oct 20, 2009, 1:32 PM)


umass76

Oct 20, 2009, 5:38 PM

Post #737 of 764
Re: [Junior Maas]

JM,

If the next ten books on your reading list all had comprehensive websites where you could research everything about them (how long they take to read, the basic plot, reviews by major critics, and so on and so on) and if the elements advertised by these book websites represented the exact bases on which you planned to make your decision about what to read, then that analogy might be apt--as the websites would be giving you a language [with]in which to think about and make decisions about not only what you value but which books would be most likely to meet your needs and reflect your interests and values. We should also assume (for the analogy to be apt) that choosing your next ten books will be a decision that forces you to move across the country, spend tens of thousands of dollars, perhaps determine your future career, and so on--i.e., the level of research you do into your future book-reading in no way compares to the level of research applicants do to determine where they want to go for graduate school and which programs they most admire. There are also a few dozen other ways in which the analogy is not apt; that's just a brief start.

Be well,
S.


umass76

Oct 20, 2009, 5:58 PM

Post #738 of 764
Re: [bighark]

Bighark,

I know you're not unaware that you're engaged in dialogue with an attorney, so I'm not sure why you'd think it appropriate to resort to mere rhetorical devices in this discussion. I'm not likely to fall into that trap, so it's not going to be a paradigm that works well for you here.

Your post made an assertion: "[T]he rankings in question do not account for any of the factors that might improve a student's writing."

Your assertion was refuted: emphatically, comprehensively. After your second (non-responsive) post, it remains utterly refuted.

Your post made a second assertion: "[These rankings] present opinions of perceived value from people who have not graduated from any writing programs. Zero percent of the sampled can comment on the efficacy of any given program."

This second assertion carried with it three implicit presumptions (each of which has sub-presumptions) that you did not acknowledge:

a) There is an absolute, objective "value" that can be assigned to the experience to be had at a given program. This value is "actual" and "real." Any "value" which is not "actual" is worthless. "Perceived value" is therefore worthless; it is not probative.

b) The only important measure of a program is the "efficacy" of its pedagogy.

c) "Efficacy" can be measured.

You then said, blithely, "res ipsa loquitur," a misuse of the term because in fact, due to your unacknowledged presumptions, the thing manifestly does not "speak for itself." In fact, each of your presumptions and sub-presumptions is empirically (indeed absolutely) untrue as a proposition: 1) The actual value of a program is unknowable by any means. 2) Perceived value is probative even where it is not conclusive. 3) Efficacy cannot be measured. 4) Efficacy is therefore not a relevant (or possible) measure of a program; it is a rhetorical device to assign probity to an unknowable--by definition unknowables cannot be probative.

Your argument having been thus obliterated in all particulars, you respond (again) with a misused Latin maxim. You use the maxim as a rhetorical device; surely you're not required to respond to a single thing I've said, or the logical and empirical bankruptcy of all you've said, if you can speak Latin? Sorry, it's not that easy. Particularly as the rankings are not predicated upon the maxim you cite, they're predicated (only in part) on the maxim referenced explicitly in the article: the virtuous circle. The virtuous circle is a different rhetorical figure than "cum hoc ergo propter hoc." The virtuous circle does not say that a program is better because it has more applicants; the virtuous circle says that the more applicants a program has, the more selective it can be and the larger the pool of prospective admittees, leading to a stronger cohort, and a stronger cohort leads to a stronger workshop, and a stronger workshop leads to a more (aha!) efficacious pedagogical experience. It's a causal chain, not a tautology, bighark.

You end your second comment with a correct citation of something I said--"No ranking ought to pretend to establish the absolute truth about program quality, and in keeping with that maxim the rankings that follow have no such pretensions"--followed by a gloss of that quote which neither I nor anyone would accept as necessary and just. Specifically, you use a logical fallacy (what President Obama often refers to as, "rendering the perfect the enemy of the good") to equate one fact (that rankings have certain limitations) with a second non-factual, unsupported assertion (the P&W rankings are useless). Again, that's a fallacy. Yes, we both agree that rankings have certain limitations. No, we absolutely disagree on what those limitations mean for the ultimate value of the rankings.

I don't know that I have much more to say to you, though. I don't care about engaging you "merely" rhetorically (which is the sort of dialogue your responses have attempted to initiate); I'm perfectly willing to argue logic and substance. If you want to do that, let your responses indicate it and I'll be happy to engage with you.

Be well,
Seth



Junior Maas

Oct 20, 2009, 6:11 PM

Post #739 of 764
Re: [umass76]

Not to quibble, but I think you're being too generous toward these applicants. (Who knows how many are the ones sending crayoned notes and cat photos to lit mags -- there must be some overlap...)

Forget the analogy. My point -- which I hoped was obvious (if not, I fault my hasty 'net style) -- is that it looks like there's a huge problem with basing rankings on the opinions and best guesses of people who've never attended these programs. For instance, I wonder why the applicants are needed at all; couldn't you just view the various web sites, and rank the schools that way? What is gained by a cacophony of voices, other than opinion, rumor, and mere conjecture?

If I understand correctly, the old rankings were peer-based -- teachers, administrators, and so forth ranked their own and competing programs -- and the new rankings are based on what's provided by school web sites, as well as the number of people who will apply to those programs. This seems to remove the opinion of any non-school-affiliated entity with actual, personal knowledge of a program.

I can't help but think of one MFA student who applied based primarily on web site information. Sure, the funding numbers matched up (almost), but he was miserable because his professor didn't read student work until the minute it was workshopped, and his classes were essentially run by TAs or an elected classmate. It seems like it's exactly this kind of information that can't be gleaned from the current method... (Filed under 'pedagogy', I guess. Vague, but extremely valuable!)


six five four three two one 0 ->

(This post was edited by Junior Maas on Oct 20, 2009, 6:13 PM)


Woon

Oct 20, 2009, 6:24 PM

Post #740 of 764
Re: [Junior Maas]

To give a real world example of Never-Underestimate-the-Stupidity-of-Applicants, I was a smart geeky type in high school. Got great SAT scores and high GPA. When it came to applying for college, I picked five schools. My safety school was Northwestern, to give you some perspective. Anyway, I wanted to go to MIT so bad that, one afternoon, I sent them a supplemental package containing a self-made booklet of my poems. The self-made booklet was just a bunch of typing paper folded and stapled together. In it, I had various cartoon illustrations colored with cheap color markers. Glued on the heads of my cartoon characters were cut-out photos of my face/head. This was just to show how creative I could be.

I forgot to mention that my major was Physics.

And my poetry was really bad.

Anyway, as soon as I dropped that package in the mail, I uttered a deep, "Oh, my god, what the f### did I just do!?!?!"

Needless to say, I didn't get into MIT.

So, the lesson for everyone is: Thank God for otherwise smart applicants who do moronic things in their applications!


umass76

Oct 20, 2009, 6:48 PM

Post #741 of 764
Re: [Junior Maas]

JM,

I think you're using the wrong lens here. The old rankings were indeed "peer-based"--MFA faculties were asked a single question on a single questionnaire, and that question related to a single feature of an MFA program ("reputation"). The questionnaire acknowledged--as it had to--that most MFA faculties had absolutely no clue about the strength of other programs, as it gave faculty members the opportunity to not rank a given program.

The methodological questions there are enormous: Can a program really be assessed solely on the basis of "reputation"? How would a poet employed at a school in NYC know anything about the fiction program at a California MFA? How well did a faculty member have to know a school to be willing to assign it a ranking (on a scale of 1 to 5), and did all professors use this same guideline, if indeed such a guideline could even be set? How does it skew the rankings that a program a faculty member had never heard of, instead of receiving a "negative" ranking as to its reputation--which you'd think it would have, as never having heard of a program says a great deal about whether it has a "reputation" in the field--was treated to what was essentially a mulligan? And on and on and on.

In other words, people were being polled on topics they didn't know about (they were not allowed to rate their own programs, so let's not think their expertise in their own program was being mined), the polling was too limited in its scope, and the rules of the polling were destined to do something entirely different than measure what the poll claimed to measure. That's why USNWR dumped their rankings.

These rankings take a different and far more logical view: They conclude that the only thing a faculty member, or a current student, is an expert on is their own program. Faculty and students express this expertise through their ability to advertise the program via a medium they control exclusively and absolutely: the program website. Program websites, however incomplete at times, nevertheless offer prospective applicants a wealth of information--and it's not random information, it's information specifically targeted toward the values of applicants, i.e. the way applicants expect to judge a program once they matriculate. Applicants want to know: Will you fund me? Will I have to teach? Who will teach me? How long will I have to write? Where are you located? What courses can I take? And so on. They want to know these things because these are the program features that will matter to them when they are current students in MFA programs.

The website is a program's opportunity to translate its expertise on these questions to an unbiased audience: i.e., a receptacle for this information who a) has no horse in the race, and b) has everything to lose by not thinking extremely carefully about the expert information they're getting (incidentally, not merely from program websites, which are just the tip of the iceberg, but also from online applicant communities, undergraduate professors, current and former MFA students, blogs, and so on). Applicants, who are by definition self-interested, then make decisions in their own best interest, and their collective valuations of what matters in choosing an MFA program establish (by definition, as they're both the target audience and the target community) how we determine what makes a good MFA program.

For too long the theory has been, "What makes a good program is what an MFA professor thinks is a good program." The new theory is, "What makes a good program is what an MFA student thinks is a good program--and some, but not all, of the factors that go into making that determination are transparently available to the unbiased target audience/community (applicants) prior to matriculation."

So yes, this ranking does not measure in-school experience, because no ranking could. The only experts in what it's like to be in a program are current students, but the experiences of current students cannot be either quantified or compared: when Billy at CSU says he "loves his program," and Sally at OSU says she "loves her program," these are two totally different people having totally different subjective experiences, so "love" doesn't even mean the same thing to each person. Billy may love his program because he likes the skiing in Fort Collins, or because he met a girl he's fallen in love with; Sally may like OSU for her own reasons. And even if Billy and Sally are asked to speak specifically to the quality of their program in certain respects, what Billy means by an "8" and what Sally means by an "8" (out of 10) are two different things. Essentially--the point here is--everything's floating in a vacuum, because neither Billy nor Sally is putting anything on the line in their assessments, they're both likely to be biased either for or against their own program, and they're not in a position to, or being asked to, compare their experience to anyone else's.

Not to mention it'd be practically impossible to sample enough Billies and Sallies: 4 students at Cornell is 100% of the annual class, but only 16% of Iowa's annual class, which would cause (if we used that sample size) Iowa to say that their sample wasn't necessarily representative (certainly not as representative as Cornell's), and they'd be right. Yet we couldn't increase Cornell's sample size, because we wouldn't have access to enough current or recent students to match the sample size we'd need from the larger programs. And on and on and on.

Looked at with this sort of comprehensive and contextual lens, you see how, in fact, the new system is redressing some of the weaknesses of the old, however imperfectly.

Be well,
Seth


owenj

Oct 20, 2009, 7:52 PM

Post #742 of 764
Re: [umass76]

Look, the rankings are as good as any ranking can be - the rankings are just a data point in a complicated process, right? They're an attempt to gather as much objective (and some subjective) information about a program as possible, and they offer just one piece of the application puzzle. I hope that nobody really thinks that going to Iowa over, say, Cornell, is actually going to make them a better or more successful writer. No ranking is going to judge that. Rankings can't predict a student's individual experience. I don't think (I'm sure Seth will correct me if I've got this wrong) the rankings include data on student publications or job placement, or acceptance rates of graduates who choose to go on to a PhD, if those are a student's goals. I don't think they claim to. No methodology is going to be perfect. I hope people use the rankings for what they're worth - a data point. Rankings can't predict and don't take into account a student's individual goals as a writer, either.

I think it's flawed to assume that selectivity actually gauges the quality of the students in a program. Good writers can be assholes. Bad writers can be great readers. "Good" and "Bad" are impossible to measure. I've been a student in a couple 'top programs' as an MFA and a PhD student, and even though I have a tremendous amount of respect for my peers, I've still learned the most from a handful of writers. I'm also not sure that the quality of the writing at Iowa is any better or worse than the writing at, say, Johns Hopkins, because they're ranked higher. Keep in mind that the pool of applicants at most of the 'top programs' is going to be a LOT of the same people.

And look, Seth's done a ton of work here, and as difficult as it can be to get through his ridiculously long posts (Seth, you must be an incredible typist), he will defend these rankings to the death. I think Seth often comes off as being a little too entrenched in his own methodology, meaning I'm not sure he's that flexible in how he feels about these things, but hey, this is a business for him, and he's a smart guy (did you know he's a Harvard-educated lawyer?), and he isn't making any claim that these are the be-all-end-all in what is really a complicated decision process.


bighark

Oct 20, 2009, 8:54 PM

Post #743 of 764
Re: [umass76]

Here's my understanding of our exchange:

1) I said your rankings don't measure some of the things that improve student writing.

You responded by saying that these things cannot be measured and therefore don't matter.

2) I said your rankings represent the opinions of people who are unqualified to assess writing program value.

You responded by saying I was wrong, wrong, wrong, that money is important, and you have a chart.

3) I said I thought the chart was based on a logical fallacy. I also quoted the part of the rankings that says that the rankings don't measure program quality.

You responded by saying that you annihilated my first point, that my second point was full of presumptions and sub-presumptions that I neither wrote nor implied, and that I was avoiding you by using Latin as a rhetorical device (I just thought I was speaking your language).

Finally, you told a big fat hairy lie. I never said the rankings were useless, Seth. I said they were limited in value.


umass76

Oct 20, 2009, 9:25 PM

Post #744 of 764
Re: [bighark]

Bighark,

"Big fat hairy lie" is an interesting term. Here's a good example of that phenomenon:

On October 20, 2009, at 2:19 AM you wrote: "For the record, then, the rankings in question do not account for any of the factors that might improve a student's writing."

On October 20, 2009, at 8:54 PM you wrote: "Here's my understanding of our exchange: 1) I said your rankings don't measure some of the things that improve student writing..."

You see, bighark, words do matter. Had you actually said--from the outset--what you now claim you said, I would have agreed with you. What you actually said was BS, so I refuted it.

The rest of your transcription of our conversation contains similar disingenuities. You clearly have no interest in getting responses to your complaints/queries, so I'm thinking we're done here. Anyone who wishes to can read my exhaustive responses, above, to each one of your ill-informed opinings.

Take care,
S.



(This post was edited by umass76 on Oct 20, 2009, 9:27 PM)


bighark

Oct 20, 2009, 10:03 PM

Post #745 of 764
Re: [umass76]

Seth,

If I was imprecise with the language of my summary, I apologize. You were similarly imprecise, but I should not have said you told a lie.

That was out of bounds. I don't think you are a liar.

I suggest we both be more careful with our words and continue this discussion in a civilized manner.





HollinsMFAer
Luke Johnson

Oct 20, 2009, 10:18 PM

Post #746 of 764
Re: [bighark]

Or we could just not continue the discussion. Honestly, it's been had before. It will probably be had again.

I take these rankings as a relatively accurate barometer of what's going on in MFA-land. No more, no less. Unless the writing world is going to turn into the BCS (only graduates of the top 15 programs get books), I don't think any of these rankings do much beyond providing applicants with another valuable resource for making their decisions.

Sure, it would be nice if the program from which I graduated was number one (come to think of it, why the hell isn't it?!?!), but hey, the world needs mid-majors too.


http://www.lukejohnsonpoetry.com

(This post was edited by HollinsMFAer on Oct 20, 2009, 10:18 PM)


kbritten

Oct 20, 2009, 10:38 PM

Post #747 of 764
Re: [HollinsMFAer]

I would like to second HollinsMFAer's motion to end this discussion. I totally get that lawyers like to argue, and if I ever need one, I hope he is as cantankerous and argumentative as Seth ;), but it does get a little old after a while. And HollinsMFAer is my new best friend for criticizing the BCS (um, hello... does anyone remember 2004, when Auburn went undefeated in the SEC and did not get to play in the BCS game?). If these rankings are anything like the BCS, I refuse to read them! Anyway, I had no idea people cared so much about MFA rankings!


(This post was edited by kbritten on Oct 20, 2009, 10:38 PM)


kbritten

Oct 20, 2009, 10:40 PM

Post #748 of 764
Re: [kbritten]

And one more thing - Seth, how are your posts so grammatically and logically correct, angry rants aside? Are you super anal about them? Do you spend ten minutes proofreading them? I say this light-heartedly, but it baffles my mind...


taraberyl

Oct 21, 2009, 1:33 AM

Post #749 of 764
Re: [HollinsMFAer]

I agree. I would rather discuss the rankings and the programs themselves than the value of the rankings. They are the best we have, and I find them useful, particularly the funding and selectivity rankings. Also, I suspect they are making the programs more competitive with each other, so as an applicant I'm not complaining.


pongo

Oct 21, 2009, 7:58 AM

Post #750 of 764
Re: [kbritten]


In Reply To
And one more thing - Seth, how are your posts so grammatically and logically correct, angry rants aside? Are you super anal about them? Do you spend ten minutes proofreading them? I say this light-heartedly, but it baffles my mind...


Some people have trained themselves to speak and think grammatically and logically. Grammar is a pretty good tool for a writer.


The Review Mirror, available at www.unsolicitedpress.com

Difficult Listening, Sundays from ten to noon (Central time), at http://www.radiofreenashville.org/.

http://home.comcast.net/~david.m.harris/site/
