This is a pivotal moment in American higher education—a crisis, you might say, if the term hadn’t been debased by overuse. The criticisms come from every corner and the bill of particulars is lengthy.
The financial cost gets most of the attention. Since 1980, tuition has more than doubled at private universities and tripled at public institutions. Students have accumulated more than $1.2 trillion in debt—$300 billion more than what Americans owe credit card companies. For-profit schools enroll about an eighth of all college students, many of whom end up saddled with mountainous debts and worthless degrees.
Students from poor families have it especially rough. Half of all 25-year-olds from well-off families, but just a tenth of all 25-year-olds from poor families, have a bachelor’s degree. As Robert Putnam recently observed, “dumb rich kids” go to college at about the same rate as “smart poor kids.” What’s more, students are dropping out in alarming numbers. About two out of five students in four-year institutions, and at least three out of five enrolled in community colleges, don’t graduate. The cost to the economy, in lost productivity, and to these individuals, in stunted futures, is immense.
Undergraduates aren’t studying as much as they used to, and they’re not learning much either. Students are “drifting through college without a clear sense of purpose,” conclude Richard Arum and Josipa Roksa in their much-discussed 2011 book, Academically Adrift, and, while in college, more than 36 percent of them do “not demonstrate any significant improvement in learning.” One reason is that students are poorly taught. Professors persist in relying mainly on lectures to deliver information, rather than adopting demonstrably more effective teaching strategies that engage students in thinking critically. In their new book, Aspiring Adults Adrift, Arum and Roksa track the same group of students after graduation and find that many students remain at sea. While it’s unsurprising that many who matriculated during the recession couldn’t find decent jobs, it’s worrying that they also were unable to form close relationships or take on civic responsibilities.
Women are having an especially hard time. Many have been sexually assaulted while at college. Too often, administrators, fretful about bad publicity, have given a slap on the wrist to the assailants and a pass to fraternities where sexual assaults disproportionately occur.
Meanwhile, star athletes at universities with aspirations for national rankings have literally been handed a pass, receiving credit for nonexistent courses. “Why should we have to go to class if we came here to play FOOTBALL,” tweeted Cardale Jones, who quarterbacked Ohio State to the 2015 national championship. “We ain’t come to play SCHOOL, classes are POINTLESS.” Elsewhere, athletes have rebelled at being exploited by universities that make millions from their prowess on the field, leave them with sports-related medical bills, and take away scholarships when they are injured.
For their part, many professors anguish over the direction of higher education. The culture wars that roiled campuses a generation ago seem like the good old days. At least then, the faculty occupied center stage; now they find themselves pushed to the margins, their voices counting for little in crucial institutional decisions. Winner-take-all has become the norm, rendering quaint the idea of a university as a community of scholars. Successful coaches and university presidents earn seven-figure salaries and star professors carry minuscule teaching loads, while ill-paid lecturers and adjuncts, with no hope of tenure, do the lion’s share of undergraduate teaching. Humanities departments are being dismembered, and vocationally oriented programs expanded, in the name of making colleges more “relevant” and “efficient.”
“Disruption is coming for higher education,” announces the guru of disruption, Harvard Business School professor Clayton Christensen, in his 2011 book The Innovative University, co-written with Henry Eyring. “Do it cheap and simple,” Christensen says, predicting that online providers will deliver much of the course content now prepared by professors. Understandably, such claims invite fears among academics that they will be reduced to the status of glorified teaching assistants.
A word of caution is in order, for some of these “crises” have been overblown. A recent Brookings Institution study concludes that the burden of student debt is actually no greater than a generation ago; the frenzy over massive open online courses, or MOOCs, has died down; the incidence of sexual assault on campus is likely considerably lower than the commonly cited rate of one in five women; and fewer students are enrolling at for-profit schools. Still, there’s no doubting that all is not well in higher education.
A common thread runs through this litany of laments: the norms and forms of the marketplace have come to dominate decision making. A dozen years ago, in Shakespeare, Einstein, and the Bottom Line: The Marketing of Higher Education, I explored how market values were edging out the collegial values of academe. Practices that at the time demarcated the frontier of market-driven behavior, like basing scholarships on merit, not need, have since become commonplace.
Markets and money have always mattered in American higher education, but not nearly as much as they do today. Throughout much of the 20th century, a well-educated citizenry was regarded as the ticket to national prosperity, and that shared belief was the rationale for spending tax dollars to underwrite public universities. The tacit bargain that states made with their flagship universities was simple—the state would subsidize these institutions to keep tuition low if they delivered a world-class education to the best and brightest state residents, and produced research that contributed to the common good. Under the terms of that bargain, enrollment mushroomed and America’s premier universities became the best in the world. In The Race Between Education and Technology, Harvard economists Claudia Goldin and Lawrence Katz demonstrate that this bet on higher education explains why, in the decades after the Second World War, the United States became the world’s richest nation.
Now students, not society, are regarded as higher education’s big winners. This sea change in attitude—higher education understood to be a private, not a public, good—largely explains why the states’ share of university budgets has plummeted, with tuition rising to fill the void. In American Higher Education in Crisis?, an insightful and crisply written survey of the current state of affairs, Goldie Blumenstyk, a veteran reporter for The Chronicle of Higher Education, points out how widespread the shift in financing has become. In 2000, tuition generated more revenue for public universities than state funds in just three states. By 2012, this was true in 24 states.
“We used to call ourselves state universities,” one university president told me. “Then we described ourselves as state-located universities. Now we’re state-molested universities.”
Meanwhile, a bachelor’s degree has become an ever-wiser investment as the “college wage premium,” which has doubled since the late 1970s, continues to increase. “Americans with four-year college degrees made 98 percent more an hour on average in 2013 than people without a degree,” notes New York Times reporter David Leonhardt. That’s up from 64 percent in the early 1980s. Even curmudgeonly William Bennett, in the bluntly titled Is College Worth It?, acknowledges that going to college pays if a student takes the right subjects and attends the right schools.
While undergraduates don’t know these statistics, they get the message. According to a UCLA Higher Education Research Institute survey, which has been conducted annually since the mid-1960s, the percentage of freshmen who say that being “able to get a better job” is a critical reason to attend college reached an all-time high of 87.9 percent in 2012. In the 1960s, “developing a meaningful philosophy of life” was cited as the strongest motivation for going to college, more than double the percentage of students focused on making money. These days, though, it’s all about money. In 2012, nearly three-quarters of freshmen, another record, cited the ability “to make more money” as very important, while fewer than half were eager to cultivate a philosophy of life.
Colleges can read these tea leaves. Students used to be treated as acolytes whose preferences were to be formed by the college experience, but now they are viewed as consumers whose preferences are to be satisfied. Colleges are keenly competitive, and in the struggle for status, as measured by the U.S. News & World Report rankings, they lure the best-credentialed applicants by spending money on country-club amenities and basing financial aid on merit, rather than need.
This tactic makes sense for a university whose ambition is to move up in the rankings. But basing admissions decisions on a market metric—maximizing tuition income and institutional ranking—hurts students from poor families. This is especially true at highly selective schools. As “Left Behind,” a Century Foundation report, points out, “one is twenty-five times as likely to run into a rich student as a poor student at the nation’s top 146 colleges.”
Boosting the number of poor kids who attend top colleges should be a top priority on the policy agenda. “The amount of untapped talent out there is staggering,” says Stanford economist Caroline Hoxby. Among the approximately 35,000 low-income kids with scores and grades in the top 10 percent, more than 80 percent don’t apply to a single selective institution. Hoxby and a team of researchers have conducted ingenious experiments to show that simple strategies—like delivering information packets about colleges’ admission standards and financial aid policies, at a cost of six dollars a student—can substantially boost the number of these students who go to top colleges.
Federal Pell grants are based on need, but as those grants have become more generous, some states have responded by raising tuition. What’s more, at many schools, Pell grants cover just a fraction of the cost. The Obama administration hopes to advance the goal of equal opportunity in higher education by adopting a market-driven rating system designed to give students a clearer picture of colleges’ costs and outcomes. Its latest proposal, published last December, rates universities partly according to the percentage of students receiving Pell grants, the schools’ affordability, and their graduation rates. Troublingly, however, the earnings of a college’s alumni will also be factored into the equation, as if that were a decent measure of educational quality. The message is plain—the dollar-and-cents return on investment is what counts.
Students’ desire to get rich, colleges’ efforts to improve their place in the pecking order, and the Obama administration’s focus on graduates’ earnings all point in the same direction. Market values trump everything else—learning how to think, developing character, cultivating creativity, working out a philosophy of life. Tellingly, the Department of Education ruled out taking into account information about students’ satisfaction with their education or the civic engagement of students and alumni.
Seldom has it been made so plain that, to Washington policymakers, a university is little more than a vocational training ground.
Where might pushback against the dominance of market values come from? Two new books—Larry Gerber’s The Rise and Decline of Faculty Governance and William Bowen and Eugene Tobin’s Locus of Authority—look to reforms in university governance, though they have drastically different ideas about what to do.
Gerber makes the emphasis on efficiency the villain of the piece. “American universities,” he writes, “were the best in the world because they were faculty-directed,” but since the 1960s, power has shifted to administrators. The mushrooming number of part-time instructors, who have no career stake in the university and little if any role in running the institution, further weakens the faculty’s position. Gerber’s answer is a reenergized professoriate that can make the case to the public that retreating from shared governance endangers “the future well-being of American society.” Good luck with that.
In their book Locus of Authority, Bowen and Tobin reject the shared governance model as it’s conventionally understood—with the faculty, administration, and board of trustees each vigilantly protective of its own turf. That division, Bowen and Tobin argue, is a prescription for inertia. They want faculty to abandon their aversion to discussing costs and confront the hard truth that resources are finite even at wealthy institutions. While elite private institutions like Princeton, where Bowen was formerly president, have never had it so good, funding for public higher education is unlikely to approach the levels of a generation or two ago, and that means hard decisions about academic priorities are inevitable. The approach Bowen and Tobin urge requires a delicate balance, making universities more nimble while respecting their core academic mission and values.
Lani Guinier’s The Tyranny of the Meritocracy, a deep dive into how colleges select and educate students, delivers a more thoroughgoing attack on market forces in higher education. Guinier, a Harvard Law School professor, challenges one of higher education’s sacred cows, the necessity of using test scores and grades to determine students’ worth. Borrowing from Michael Young’s futuristic satire, The Rise of the Meritocracy, Guinier assails what she terms “testocratic merit”—the use of metrics to admit students that appear meritocratic but actually advantage the advantaged. Instead, she urges universities to focus on “democratic merit.”
“Once you’re past the first year or two of higher education,” Guinier contends, “success isn’t about being the best test taker. It’s about being able to work with other people who have different strengths than you and who are also prepared to back you up when you make a mistake or when you feel vulnerable.”
Drawing on a rich variety of cases, Guinier describes how schools such as Clark University in Worcester, Massachusetts, working closely with local communities, have brought into the fold a diverse group of students, including minority youth, by promoting a culture of creativity and collaboration. She shows how pedagogical pioneers like Harvard physics professor Eric Mazur have made a classroom culture shift, turning mammoth lecture courses, which many universities use to weed out struggling students, into beehives of peer teaching, in the process narrowing the performance gap for minority and female students.
Universities ought to be doing more of the work that Guinier describes, but the incentives are weighted against it. A third of a century ago, U.S. News & World Report published its first college rankings. The criteria and methodology were suspect, but instead of developing a more respectable metric, universities rushed pell-mell to climb this ladder of success, sometimes fudging their data to gain an advantage.
The draft federal regulations could prompt a national conversation about the value of higher learning. It’s learning, not earnings, that should matter, and while it’s easy enough to criticize widely used measures of accomplishment, such as “value-added” gains in critical thinking and student engagement, these measures are infinitely better than the criterion of higher earnings that the Obama administration has embraced.
The narratives in The Tyranny of the Meritocracy, with their emphasis on collaboration as the key to learning, are instructive for a larger reason. These tales of improbable successes, lives turned around, and problems solved in the context of higher learning offer a different vision of what success means in higher education. Such experiences need to be honored and held up as models if there’s any chance of keeping the universities from becoming just another business, where the bottom line is all that matters.