
A complaint from the University of Chicago about the US News rankings

(2010-12-30 11:10:48)

News you can abuse

University of Chicago

 

By Chris Smith, October 2001. The University of Chicago Magazine

U.S. News and World Report has made its name among weeklies as "news you can use," but is its annual college-rankings issue a self-help feature or an academic beauty pageant?

I SPOKE WITH ROBERT MORSE BY PHONE so I don't know what he looks like. I imagine him of medium height, skinny, maybe balding a bit, and a conservative dresser-the kind of man who wears a tie on casual Fridays. He is brilliant (again, I imagine) and well-respected by his coworkers, but they are utterly confused by his work-the same obvious joke about "Morse code" probably winds its way around the office every six months or so. His awkward personal manner and subtle intelligence were so provoking that the first thing I did after hanging up the phone was check to see if he is a Chicago graduate.

He's not, but it would be another feather in our cap if he were. Robert Morse may be one of the most powerful people you've never heard of.

Morse is the director of data research at U.S. News and World Report, the man behind the college rankings that appear every September like Elvis Presley walking onstage at the Vegas Hilton. The special issue has become ingrained in our national consciousness, its cover blaring out from the newsstand with glittering rhinestone phrases: "AMERICA'S BEST COLLEGES"; "ALL NEW EXCLUSIVE RANKINGS"; "#1 BEST SELLER." Add a few stars and some cartoon graphics of a faceless child (could be yours!) accepting a diploma amidst massive Ionian columns and what you have, my friend, is what has become known as the swimsuit issue of news magazines. What parent could resist?

As well known as the yearly issue is the controversy surrounding it. School administrators decry the rankings as a ridiculously inaccurate measure of an institution's quality. Some, like Leon Botstein, AB'67, president of Bard College, are moved to anger-fueled hyperbole. "It is the most successful journalistic scam I have seen in my entire adult lifetime," Botstein told the New York Times recently. "A catastrophic fraud. Corrupt, intellectually bankrupt and revolting."

Yet critics cannot deny its sway over prospective students and their parents. U.S. News sells 2.4 million copies of its college-rankings issue annually-driving newsstand sales up 40 percent-and 700,000 copies of a companion guide to schools. According to a 1997 UCLA study using data from more than 220,000 first-years, 41 percent of students find the rankings to be somewhat or very important in their college choice.

Not only is the special issue consistently one of U.S. News's best-sellers, but it sells to the top students. The UCLA study concluded that the U.S. News and other news magazine school rankings are "a phenomenon of high-socioeconomic status, high-achieving students" for whom a school's academic reputation is a powerful influence, "more powerful than the advice of professional advisors or the influence of families."

Morse is the man primarily responsible for U.S. News's sway over millions of future professionals, academics, and politicians-he is referred to by his boss, special projects editor Peter Cary, as "the brains of the operation, the heart and soul of the engine." In 1989 Morse devised the magazine's first methodology to judge schools on such points as SAT scores, selectivity, and endowment income. He is still waiting to hear the end of it.

Perhaps my assumptions about Morse's physical appearance and personal style are unfair and inaccurate-ridiculous extrapolations on the quality of his life based on tiny snapshots of information. But essentially that is what U.S. News has been doing to colleges and universities for almost two decades. Without setting foot on a campus, its staff of number-crunchers amasses quantitative data-currently in 16 categories-from 1,323 colleges and universities nationwide. The magazine believes that the academic quality of an institution can be measured by punching these numbers into a computer, running an algorithm that assigns a predetermined weight to each category, and watching the printer spit out the results, the colleges stacked one atop another like the New York Times best-seller list.
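
To make the mechanics concrete, here is a minimal sketch of the kind of weighted-sum scoring described above. The category names, weights, and school figures are invented for illustration; they are not the actual U.S. News categories or weights.

```python
# Toy sketch of weighted-sum ranking, the general approach described above.
# Category names, weights, and figures are hypothetical.

CATEGORY_WEIGHTS = {
    "academic_reputation": 0.30,   # peer-survey score
    "graduation_rate": 0.25,
    "faculty_resources": 0.20,
    "selectivity_score": 0.15,     # already oriented so higher is better
    "alumni_giving_rate": 0.10,
}

schools = {
    "Alpha College":   {"academic_reputation": 4.6, "graduation_rate": 0.93,
                        "faculty_resources": 0.80, "selectivity_score": 0.78,
                        "alumni_giving_rate": 0.41},
    "Beta University": {"academic_reputation": 4.2, "graduation_rate": 0.88,
                        "faculty_resources": 0.72, "selectivity_score": 0.65,
                        "alumni_giving_rate": 0.28},
    "Gamma Institute": {"academic_reputation": 4.4, "graduation_rate": 0.90,
                        "faculty_resources": 0.95, "selectivity_score": 0.70,
                        "alumni_giving_rate": 0.33},
}

def normalize(values):
    """Rescale one category so the best school gets 1.0 and the worst 0.0."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    return {name: (v - lo) / span for name, v in values.items()}

def rank(schools, weights):
    """Normalize each category across schools, sum weighted scores, and rank."""
    per_cat = {c: normalize({s: d[c] for s, d in schools.items()}) for c in weights}
    totals = {s: sum(weights[c] * per_cat[c][s] for c in weights) for s in schools}
    top = max(totals.values()) or 1.0
    # Scale so the leader gets 100, like the magazine's published scores.
    return sorted(((s, round(100 * t / top)) for s, t in totals.items()),
                  key=lambda item: -item[1])

print(rank(schools, CATEGORY_WEIGHTS))
```

Because each school's standing depends on the weight vector as much as on its own numbers, small changes to the weights can reorder the list even when nothing at the schools has changed, the dynamic Provost Stone describes later in the piece.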

"You can almost imagine that it is about quality," says Ted O'Neill, AM'70, dean of College admissions at
Chicago,"but it's not. These numbers can be bad or good and not say much about the kind of education offered." O'Neill echoes the criticism most often voiced by school administrators about the rankings: that colleges and universities are not appliances or cars to be evaluated in quantitative terms. Adds John W. Boyer, AM'69, PhD'75, dean of the College, "It's unfortunate that you get this hyper-commercialization and hyper-consumerization of academic life whereby a college education becomes the equivalent of an SUV."

U.S. News makes no bones about viewing education as a product and students as customers-a distasteful metaphor to administrators, but a metaphor not without merit. "The schools won't accept the premise that we're providing a service that the market place believes has value," says Morse, "and that our main market for doing this is the consumer." Indeed, the 1996 issue states, "When consumers invest in simple household appliances, this sort of information is freely available. We think it should be similarly available for an educational investment that can cost up to $120,000."

The editors believe their intentions are pure: to rate the quality of schools as a service to students. The UCLA study, however, reinforces another common complaint: that the rankings underscore how famous a school is rather than how good, reinforcing prestige rather than measuring what is learned. "I am taking the U.S. News people at their word that they are doing this to help inform students," says Boyer. "But it translates within seconds from quality to prestige." Prestige, from the Latin praestigiae, meaning "a conjurer's tricks"-the same root that gives us prestidigitation. A magician flutters his wand, makes wide eyes at the audience, and pulls a Harvard out of his hat.

THE U.S. NEWS RANKINGS began as a small cover story in the November 28, 1983, issue (sandwiched between "A Robust Recovery-How Long It Will Last" and "Bias Against Ugly People-How They Can Fight It"). Two reporters sent surveys to 1,308 four-year-college presidents asking them to name the highest-quality undergraduate schools. About half the presidents responded. The magazine tallied the votes, Stanford and Amherst came out on top, and that was it. The issue sold well, but no one foresaw the phenomenon-or the controversy-the rankings would create. The article about ugly people received more letters of complaint.

Still, the rankings proved popular enough for the editors to repeat the task in 1985 and 1987, again to brisk newsstand sales. In 1988 they got serious, hiring former Newsweek education editor Mel Elfin to develop it as a special issue for 1989. Elfin assigned Robert Morse, a researcher for the magazine since 1976, to develop a formula for measuring quality in an academic institution.

Morse was actually the second statistician hired for the job-the first left after her formula produced what Elfin calls "bizarre anomalies," including a Hellenic seminary as the No. 1 school because of the heavy weight assigned to spending per student. (The seminary's library, Elfin recalls, was full of rare, handwritten ancient texts.)

Morse took over, crunched the numbers his way, and in the National Universities category Yale surfaced as No. 1. Believing that the result justified the methodology-a philosophy from which the rankings issue has yet to escape-the editors ran with it. They have been running ever since.

RANKING THESE THINGS seems to be in our blood, a biological urge to sport the giant foam-rubber hand with the lone index finger stabbing skyward. No step on the rankings ladder means anything more than its proximity to the top spot. Even if a college lies at 27th next to its less-fortunate competitor at 28th, the only important thing is that it is closer to No. 1.

But how do we determine what "No. 1" means? By what methodology do we declare a champion? Billboard ranks albums according to sales. Forbes ranks companies by total revenue. Consumer Reports ranks toasters on common desirable indicators such as price and life span. U.S. News, the criticism goes, also uses such quantitative measurements, but what it purports to measure-an institution's precise overall quality-is simply beyond measurement. "At Chicago we've always thought in terms of how one learns to become an intellectual being," says O'Neill. "Nothing in U.S. News surveys even begins to address this issue-nothing about how people learn, nothing about what goes on in the classroom, nothing about the kind of community that's created. So anything we think is valuable they think is unmeasurable, therefore unimportant."

Criticisms of this kind did not stop the rankings from growing in popularity through the 1990s. Other magazines and newspapers, such as Money, Newsweek, Time, and the Wall Street Journal, caught the bug and began rating colleges, graduate schools, and professional schools. But none has captured as much attention or achieved anywhere near the annual sales of U.S. News. Although U.S. News won't release its figures, the UCLA study estimates that the issue generates $5.2 million in sales each year, approximately one-third of the sales of all news magazine rankings issues-a figure that does not include advertising profits.

Even though criticism of the U.S. News rankings seemed to grow in direct proportion to its sales, it was like the old joke about the weather-everyone talked about it, but nobody did anything. Not until 1995, that is, when Reed College stood up to U.S. News, and U.S. News knocked the school down.

REED IS A SMALL, 1,200-student liberal arts college nestled in a neighborhood five miles from downtown Portland, Oregon. Forty years ago Scientific American described Reed-second only to Caltech in producing future Ph.D.s-as "far and away more productive of future scientists than any other institution in the U.S." In his 1996 book Colleges That Change Lives (Penguin), former New York Times education editor Loren Pope named the school "the most intellectual college in the country." In U.S. News's 1983 rankings, Reed was tied for ninth among Best Liberal Arts Colleges in the nation, and through 1994 it remained a first- or second-tier institution.

In 1995, however, Reed found itself banished to the rankings wasteland-the fourth tier where schools are listed alphabetically, placing it somewhere between 122nd and 161st in the country, bordered on one side by Nebraska Wesleyan University and on the other by Richard Stockton College of New Jersey. Why the nose dive? It opted, quite simply-and quite publicly-to retire from the beauty pageant.

Reed president Steven Koblik decided to withdraw the school from the rankings game after he read an April 5, 1995, Wall Street Journal article noting widespread inaccuracies in college rankings. Reporter Steve Stecklow had compared the information given by schools to the Money and U.S. News rankings with similar statistics colleges report to bond agencies and the NCAA. (While there are no penalties for giving inaccurate data to a magazine, lying to bond agencies violates federal securities laws, and lying to the NCAA can also have serious repercussions.)

Stecklow found numerous discrepancies between figures reported to bond agencies and figures reported to magazines. New College of the University of South Florida, for instance, boasted an impressive average SAT score of 1296 in the 1994 Money college guide. Its actual average score was about 40 points less. In what the school's admissions director called part of a "marketing strategy," it had neglected to include the bottom-scoring 6 percent of its student body when reporting to Money. Boston's Northeastern University excluded international and remedial students-about 20 percent of its freshman class-in the numbers it reported to the news magazines. Monmouth University in New Jersey overreported its average SAT score by a whopping 200 points, a number the school told Stecklow was fabricated by a former employee. Rensselaer Polytechnic Institute in New York reportedly raised its selectivity rate by counting as "rejects" students who were admitted to programs other than those to which they had applied and students who were wait-listed and later admitted.

But it wasn't only the schools scrambling for a little limelight that fudged the numbers. Boston University, hovering just outside the Top 40 like a struggling rock band, reported its international students' (traditionally high) math SAT scores, but not their (traditionally low) verbal scores. New York University-a solid Top 50 school trying to crack the Top 25-excluded about 100 economically disadvantaged students in a state-sponsored program from its SAT figures. Even Harvard, locked in an eternal cat fight with Yale, Stanford, and Princeton (and, depending on the methodology du jour, Caltech, Duke, and MIT), demonstrated a 15-point difference in reported SAT scores, a variance the admissions director called a "mystery."

Several schools that Stecklow accused of juggling data wrote the WSJ defending the discrepancies as "transposed numbers," "typographical errors," and even a "misinterpreted handwritten figure." One dean told the WSJ that such practices were necessary because the heavy emphasis placed on student selectivity was an "abuse" by college guides: "…what you've got to do as an admissions person is to juggle them in such a way so the abuse is minimized."

ALTHOUGH REED WAS NOT ACCUSED of any wrongdoing and had always performed well in the rankings, Koblik-who had unsuccessfully lobbied U.S. News to make changes the previous year-decided the revelation of number-juggling was the last straw. So in a letter to alumni, prospective applicants, and high-school guidance counselors, Koblik explained that Reed would no longer return the forms sent by the magazine because the project was "not credible."

He also requested that the magazine drop Reed from its lists altogether. When editors' pleas to reconsider went unanswered, U.S. News had to obtain the information on its own. Allowing Reed to drop out would open the floodgates for others to decamp as well. Smelling their own blood in the water, the editors decided to use the most scientific method at their disposal: they guessed.

"Guess" is a kind way of putting it. For every category for which Reed did not submit information, U.S. News assigned a point value equal to the worst-performing school. This effectively put Reed dead last in the rankings-even while "academic reputation," an indicator based on other schools' responses, showed Reed's academic reputation as 18, two points higher than the previous year and higher than half of the first-tier schools.

On the same page U.S. News printed an additional list of the Top 20 schools with "an unusually strong commitment to undergraduate teaching," again determined by surveying school administrators. Reed tied for 18th with Middlebury and Bates. Reed's obvious cachet with the academic community did not keep the editors from dropping it to the bottom for the sake of saving the overall rankings. The ends, as Vincent Price once said, justified the meanness.

"There are penalties," says
Chicago's O'Neill of taking a stand against U.S. News. "We watched them make an example out of Reed. No one else was willing to take that chance. It's too bad-you'd think as educators we'd have more courage than that."

Fortunately for Reed, the magazine's plan backfired. The school received wide support in the press and from its students, faculty, alumni, and trustees. Koblik became a hero among other schools where administrators whispered around the water cooler how they wished they could do that. Unable to defend its actions as truly in its readers' best interest, U.S. News issued a silent mea culpa. The 1996 rankings found Reed at No. 37, and it has rested in the first or second tier ever since. In 1997, then-U.S. News editor Al Sanoff told Rolling Stone of the incident, "Let's just say we did not handle it the right way."

But it was too late. Sensing weakness, or perhaps just emboldened by Reed's bravery, Stanford threw a combination punch in 1997 that sent U.S. News reeling. President Gerhard Casper was joined by his provost and his dean of admissions in refusing to submit academic reputation ratings, saying they were "extremely skeptical that the quality of a university-any more than the quality of a magazine-can be measured statistically." Instead Stanford posted its statistical data on its Web site and encouraged other schools to follow suit so prospective students could gather information on their own. Stanford's call was answered by, among others, Clark University in Massachusetts, Georgia Institute of Technology, the University of Minnesota, and Bryn Mawr.

Casper had been critical of the rankings since he served as provost of Chicago from 1989 to 1992, but his decision to make a stand may have been partly influenced by Nicholas Thompson, who as a Stanford junior in 1996 founded FUNC-the Forget U.S. News Coalition-which criticized Stanford for altering its policies to influence the U.S. News rankings. Thompson targeted his criticism on the Stanford Fund, a development program he briefly worked for. He charged that the fund was created not to raise money but to raise the percentage of alumni who donated, so Stanford's alumni giving rate would go up in the rankings. (The alumni giving rate, which counts for 5 percent of the overall score, calculates the percentage of alumni who give rather than the amount of money received.) Although increasing the alumni giving rate may not be a bad thing in itself, Thompson felt his school was altering policies to please an artificial rankings system. FUNC became a national student organization spawning chapters across the country.

U.S. News could survive without a few reputational surveys and could gather its data from the Stanford Web site. But to have a school of Stanford's stature back out of the rankings was a major publicity blow. It didn't help that only two years later U.S. News shot itself in the foot.

IN THE FALL OF 1999, a perennial underdog emerged as champion in the bloody arena. Caltech had teleported from ninth place in 1998 to big, fat Number One. When the survey was published, rating all schools on a 1-100 point system-with 100 being the best school and the others rated accordingly-Caltech was not just No. 1, but No. 1 by a massive seven-point margin over No. 2 Harvard. The same spread had covered the top 16 schools the previous year.

Caltech's leap could be traced to a methodological adjustment that gave more weight to the amount a school spends per student (the same factor devalued in 1989 that gave Yale the top spot). An outside statistician, Amy Graham, had been brought in to evaluate the existing methodology. Graham determined that spending per student was undervalued and made the adjustment. Caltech, with its massive financial resources and small student body, spends $192,000 per student, more than twice any other school.

The editors were stunned by the results. "This just happened to be the week I was given the job," says special projects editor Cary. "The [previous editor] was heading out the door, and he handed me this and said, 'Here, you're in charge.'" With deadlines looming, the staff decided to go with what they had. "There was never any consideration of redoing it," says Cary. "We felt like we make changes every year, and there's always the chance that a school will move up or down the list. We just do it and live with it."

Although Cary insists the methodology was justified, the editors knew Caltech wouldn't fly as No. 1 in the public eye, and critics accurately predicted that the next year the methodology would be readjusted to put the big three back on top. This prejudice can be seen in the rankings themselves-as Caltech was perched at No. 1 overall, it was buried at No. 27 in its graduation and retention rank, and it earned a "value-added" rating of -12. (Retention records how many students graduate in six years, but the value-added rating evaluates the quality of entering students based on their test scores and estimates how many of them should graduate within six years as compared to how many actually do. A positive score means the school performed better than predicted, and a negative score, worse. U.S. News developed the category in 1996 to address complaints that none of its indicators measures what actually happens at an institution.)
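
A rough sketch of the "value added" comparison the parenthetical describes: predict a six-year graduation rate from the entering class's test scores, then subtract the prediction from the actual rate. The toy linear model and its coefficients are invented; only the sign convention comes from the text.

```python
# Toy illustration of value added: actual graduation rate minus the rate
# predicted from entering test scores. The model and numbers are invented;
# only the sign convention (positive = better than predicted) follows the text.

def predicted_grad_rate(avg_sat):
    """Hypothetical model: stronger entering classes are expected to graduate
    at higher rates."""
    return min(1.0, 0.20 + 0.0005 * avg_sat)

def value_added(avg_sat, actual_grad_rate):
    """Positive: more students graduate than the intake predicts; negative:
    fewer, as with the Caltech figure quoted above."""
    return round(100 * (actual_grad_rate - predicted_grad_rate(avg_sat)))

print(value_added(avg_sat=1500, actual_grad_rate=0.83))  # -12: below prediction
print(value_added(avg_sat=1200, actual_grad_rate=0.85))  # +5: above prediction
```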

Two other schools consistently stand out as bad performers in the graduation and retention and value-added categories: MIT and the University of Chicago. In the same year that Caltech was on top, MIT, though No. 3 overall, ranked 11th in graduation and retention and had a value-added rate of -5. Chicago ranked 13th overall, but scored 28th and -7 respectively.

U of C vice president and dean of College enrollment Michael C. Behnke says that Chicago, like Caltech and MIT (where he served as dean of admissions), is punished for having a specialized mission. "We have made a conscious decision not to be all things to all people," he says. "And we get penalized for that." This echoes a letter Gerhard Casper wrote to U.S. News in 1996 condemning the value-added category. "The California Institute of Technology offers a rigorous and demanding curriculum that undeniably adds great value to its students," he wrote. "Did it ever occur to the people who created this 'measure' that many students do not graduate from Caltech precisely because they find Caltech too rigorous and demanding-that is, adding too much value-for them?"

Like Caltech and MIT, Chicago has always done well in the rankings-from a high of No. 5 in 1985 to a low of No. 14 in 1998. It tied in the 2001 rankings at No. 9 with Dartmouth and Columbia. But like their colleagues at peer institutions, U of C administrators are caught between the rock of rankings they believe don't have any value and the hard place of constituents who believe they do. "They are meaningless but people give them a great deal of weight," says Chicago provost Geoffrey R. Stone, JD'71, who tells stories of deans harassed by angry alumni whenever U.S. News changes its methodology and the ratings suffer.

Feeling compelled to participate is especially aggravating if it runs counter to your mission as an educational institution. "We probably feel more injured than some other places because we have tremendous self-regard," says Chicago's O'Neill, "and we understand that other peoples' numbers don't represent what is valuable in education."

Though the overall rankings may be meaningless-even contrary to a school's mission-some of the indicators speak unsavory truths about Chicago. "There are certain things we don't do as well as we should, and U.S. News has taken us to task for that, and not inappropriately," says Stone. In addition to Chicago's poor graduation and retention rate, it has suffered in the rankings for its lack of selectivity, this year ranked at No. 22. Though U of C students are notoriously self-selecting and the acceptance rate has dropped significantly over the past few years (from 71 percent in 1996 to 44 percent in 2001) as the freshman retention rate has steadily risen, the school still lags behind its peers in these categories, which combined count for 40 percent of the total score. "U.S. News has in a very small sense reaffirmed a set of judgments about these being things we should pay attention to," says Stone, "not because of U.S. News but because we should pay attention to them anyway."

Other categories, however, should be cause for celebration. Chicago consistently performs as one of the top schools in small class size, faculty resources, and academic reputation. But these successes are not enough to take the sting out of being measured according to criteria that you don't equate with overall value.

"When these things originally came out, we were near the top-No. 6, I think-and at that point we thought 'How can they undervalue us so much?'" says Ted O'Neill. "It is galling to think that places considered by these rankings as better than us are places we consider as having less value on our own terms. I don't know that anyone can see us in U.S. News and World Report anywhere between 5 and 14 and say, 'Oh, now I understand the
University of Chicago.'"

THE POWERS THAT BE at U.S. News would have us believe that colleges and universities are race cars, flying around the track at death-defying speeds, passing and being passed every lap to the ooohs and aaahs of the crowd who pay their $3.50 to watch Caltech fly by Stanford, only to blow a tire the next lap, or to gasp in voyeuristic horror as Reed slams into the wall at the third turn, its president carried away on a stretcher with a weak "thumbs up" to the applause of the crowd. Eventually the checkered flag drops, the winner bows modestly to receive the Delphic laurel, and the sunburned crowd goes home to talk excitedly for months until the next race is run.

In truth, institutions of higher education move about the speed of the line at the Department of Motor Vehicles. There are no laps (only a mind-numbingly slow process forward), no fans (only critics), and no interest among the participants in soaring through lap 499 at 207 miles per hour (only a concern with the everyday mundanities-keeping dorms safe, filing research grant applications, hiring a new assistant professor to fill a space in the Department of Middle Eastern Languages).

The problem with the DMV scenario is predictability. If schools change so slowly, how slowly will the rankings change? And if the rankings change too slowly, how will U.S. News sell magazines? "The way they sell magazines is by causing different outcomes," says Provost Stone, "and they cause different outcomes by changing the criteria and by changing the weight they give the criteria. U.S. News wouldn't sell magazines if their rankings changed as slowly as the institutions."

U.S. News would also have us believe that they are the people we can trust-they even call themselves education "experts" in one issue. As the sales of the rankings issue increased, so did their limitless knowledge. They are now "experts" on hospitals, graduate schools, professional schools, financial aid, HMOs, and high schools-all of which have received rankings issues of their own. One seven-week period in 1997 saw four separate rankings issues. All the circus needs now is a truck to drive it from town to town.

As ridiculous as that sounds, it may not be far off. The magazine has spent more than a year planning a "National College Tour," a 48-foot tractor trailer that would travel from state to state offering admissions and financial-aid advice to high-school seniors. "Since education is our franchise," says Ty Trippet, a U.S. News spokesman, "it's more a kind of service-slash-promotion project." I can picture it now, the kids poking about on the computers in the back (seeing if the porn sites are blocked), the counselors ignoring questions from teachers about why their alma maters aren't ranked higher, and the parents sitting on benches, looking at the giant "U.S. News National College Tour" banner flailing in the wind and asking each other, "Didn't they used to have a magazine or something?"

OKAY, SO MAYBE I'M BEING CRUEL. Or maybe I'm just feeling guilty-when I left the military to return to school in the mid-'90s I used the U.S. News rankings and found them to be of great help, simply because I didn't know where else to turn. And therein lies the sole justification for not only their existence, but also their popularity: they fill a need. "One of the most important services the rankings provide is a starting point from which students and parents can begin to judge the academic quality and the differences of the schools they are intrigued by," says Cary. "There are criticisms and there is buzz that puts us on the defensive, but we also hear from a lot of people who tell us they think we're doing the right thing."

"They don't get it wrong by order of magnitude," admits Stone. "To the extent that one is trying to pick a college or graduate program and doesn't have enough sophisticated information to be able to make fairly broad judgments about relative reputation and excellence, U.S. News can be a useful form of information."

To the magazine's credit, it has developed a sense of modesty. Tracing the introductory articles from the first issue to the present, one reads in 1983 an authoritarian "The verdict is in"; in 1997 a more genuine "While they are only one factor to use...the rankings themselves are the single best source of information"; and in 2000 a downright humble "...the college experience...can't be reduced to mere numbers." The 2000 issue even includes a full-page article by Edward Tenner, AM'67, PhD'72, on why you shouldn't pay attention to the rankings.

Cary and Morse also take great pains to gather advice. "We meet with between 50 and 100 college deans and presidents who come to visit us every year with their delegations," says Cary. "We meet with about a dozen institutional researchers-data people-from about a dozen colleges every year. Then we have a college advisory board, about 20 people, mainly admissions officials, and we have just started a second advisory board of financial-aid officials. Then we have another ad hoc board of high-school guidance counselors." They also attend conferences, colloquia, and presentations year-round, constantly fielding complaints and suggestions about the rankings. "Honestly, we listen," says Cary, "and from that we take suggestions on how they might be adjusted."

In 1997 the magazine commissioned the National Opinion Research Center (NORC) at the U of C to study the rankings. The NORC report was not flattering. Words like "weaknesses," "disturbing," and "arbitrary" march about like ants on potato salad. The conclusion: "…the weights used to combine the various measures into an overall rating lack any defensible empirical or theoretical basis." The magazine says it has since made four of the five changes suggested by the report, including a survey of school administrators assessing the values of the categories themselves.

Even Nicholas Thompson, the Stanford student who founded FUNC and is now an editor at the Washington Monthly, agrees that U.S. News is making progress. "The rankings have clearly steadily improved over the last 15 years," he says. "The major flaw with them now is they have very little measure of what actually matters in a college education: how much students learn on campus." Or as he put it more vividly in a September 2000 Washington Monthly article, "[T]hey don't measure whether students spend their evenings talking about Jonathan Swift or playing beer pong."

Thompson cowrote a September 2001 follow-up article with Amy Graham, the statistician who adjusted the methodology to put Caltech on top in 1999. In the article they complain that the current rankings system follows the equation, "Good students plus good faculty equals good school," likening it to "measuring the quality of a restaurant by calculating how much it paid for silverware and food." The rankings could be improved, they suggest, by incorporating such measures as alumni satisfaction surveys and data from the National Survey on Student Engagement (NSSE), a survey funded by Pew Charitable Trusts that asks college students how invested they are in their studies.

WHETHER U.S. NEWS WILL TAKE Thompson's and Graham's suggestions remains to be seen. Cary is interested in incorporating the NSSE data into the rankings, but is held back by student privacy issues and the small number of colleges the survey covers.

Since Morse and Cary spend so much time conferring with academics to improve the methodology, they can no longer be accused of ignoring criticism or evaluating academia without proper perspective. Indeed, they deserve credit for at least striving to improve methodology and for guiding students toward other sources of information. Even on the U.S. News Web site-which generates 8 million hits when the rankings are released each year-users can click on the category that matters most to them and watch the schools reorder themselves.

Unfortunately, schools must now turn the finger on themselves when complaining about the rankings. Although the 1995 Wall Street Journal article prompted U.S. News to guard against schools reporting inaccurate data, many institutions have become craftier at making themselves look good, some changing academic policies to improve their standings. Some schools purge alumni databases, removing people unlikely to donate (e.g., students who did not graduate) in order to increase the alumni giving rate. Other schools market to students who have no realistic chance of being admitted simply to drive up the school's applications and thus its selectivity rate.

There are also colleges that discriminate against good applicants who might lower the school's yield by taking an offer from a more prestigious school. "I actually know of institutions-I have been told by the people who make the decisions," says Geof Stone, "that they have adopted a policy of not admitting the strongest applicants, entirely dictated by U.S. News and World Report. This is a practice which could never be defended in any substantive way."

Even the drive to abolish SAT scores from college admissions-a campaign that seems to go hand in hand with a distaste for rankings-has been tainted with accusations that the real purpose of dropping SATs is to raise the school's average SAT score in the U.S. News rankings. If SATs are optional, only high scorers submit them, thus increasing the average score a school can faithfully report. Schools also benefit from an increased number of applications from students who see their SATs as a barrier to other institutions, thus driving up the school's selectivity along with its application rate. Currently about 20 percent of colleges and universities don't require SATs or ACTs.
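
The arithmetic behind that selection effect is easy to see with made-up scores: once submission is optional, the reported average is computed only over the students who chose to submit.

```python
# Made-up admitted-class SAT scores, for illustration only.
admits = [1180, 1240, 1290, 1350, 1420, 1460]

required_avg = sum(admits) / len(admits)        # everyone must report
submitted = [s for s in admits if s >= 1300]    # only high scorers opt in
optional_avg = sum(submitted) / len(submitted)

print(round(required_avg), round(optional_avg)) # 1323 vs. 1410
```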

Someday maybe U.S. News and the colleges it ranks will get on the same track. Maybe the magazine will come up with a methodology that pleases academics and still provides the information that prospective students and their parents want. The larger problem, however, will persist. No matter how the magazine chooses to measure schools, it is still ranking one against another in a near-random hierarchy-constructed to measure quality in quantitative terms-implying that there is an ideal state to which every school should endeavor until all schools are identical, that each school should be all things to all people-a task at which Chicago will always fail.