Reconsidering Universal Basic Income

Recent events have caused me to revise my assessment of universal basic income (UBI), which I’d previously written off as a utopian pipe dream. I’m still skeptical that it would be a good idea, but I’m more convinced than I was that it’s possible.

I had the opportunity to write an article on this. Here’s a quick summary, broken down by subject area:

Politics

  • The salience and popularity of UBI have increased massively in recent years. In 2011, Rasmussen Reports conducted a survey that found just 11% of adults favored a universal basic income. When they asked the same question this past April, that number had jumped to 40%.
  • Democratic presidential candidate Andrew Yang and a slew of tech executives have been evangelizing to the electorate about the imminent obsolescence of human labor. Covid-19 may have strengthened that message: many of the jobs destroyed by the pandemic are not predicted to return.
  • The financial stimulus provided by the CARES Act has been popular, and there are calls from the public and prominent figures for its continuation. And while unemployment has risen sharply since March, poverty has actually fallen. I feel like this bodes well for the policy idea of giving people money.

Economics

  • I used to think any universal basic income program would be prohibitively expensive. But I think I underestimated our capacity for deficit spending, borrowing, and “printing” new reserves. I’m not saying we can do any of this without limit, but I feel less confident about where that limit is now.
  • Persistent low inflation, a feature of the last decade or so, is key to financing this level of public spending. As The Economist put it in July, “The absence of upward pressure on prices means there is no immediate need to slow the growth of central-bank balance-sheets or to raise short-term interest rates from their floor around zero.”

Sociology

  • This may be the one area where the case for UBI looks worse than it did previously. In the wake of summer’s unrest, it’s become more obvious that work plays a valuable role as a method of soft social control. (This is in keeping with the theories of UBI proponents who claim that work prevents people from organizing en masse, just viewed from the other side of the coin.) If I were a national politician, I might be wary of the effects of greatly diminishing the role of work in society.
  • But there’s at least one reason to cautiously infer some sociological benefit too. A survey conducted by the CDC in June found a shocking 10.7% of the general population—and 25.5% of 18–24 year-olds—had seriously considered suicide in the past 30 days. (The benchmarks for these figures are 4.3% and 11%, respectively.) The same survey found “only” 4.7% of unemployed people had experienced suicidal ideation during the same interval. Maybe the UBI-esque conditions brought on by pandemic relief are responsible for the counterintuitive gap?

All told, I still don’t think we’ll see a universal basic income at any large scale for a long time, if ever. But the idea can’t be dismissed out of hand so easily anymore. Who knows how the winds will blow in ten or twenty years?

Read the full piece here.

A quick rant about “American collapse”

Consider this nothing more than a rant. I’ll try to make it short and unspecific. Pardon the interspersed use of the second person; I, of course, am not angry with you, dear reader.

The past four years have seen many once-noble institutions, organizations, and individuals¹ descend into histrionics and set their credibility ablaze for minimal political return. Perhaps sadder, our “unprecedented times” have given rise to a disaster-porn mentality about the state of America that grifters have cynically refined and recast as implements of their base ambitions.

This is annoying for many reasons, not least because the energy being harvested is obviously an outgrowth of poorly sublimated upper-middle-class ennui. If you believe America is a failed state, that you’re bearing witness to civilizational collapse, I invite you to take an extended trip to almost anywhere else on the planet. I have seen families in Brazilian slums living on islands of trash and driftwood lashed together into makeshift docks, walkways above the putrid water that served as their home’s foundation, sewer, and dumpster.

When you see failure, you’ll know.²

But you can’t really blame the managerial class for being bearish on America when they’re subject to a relentless memetic campaign to that effect. Our real ire ought to be directed at the cynics who divide their time evenly between fetishizing abstract hallmarks of America and its governance and propagandizing their illegitimacy.

Listen, if you tell me America is actually an oligarchy one day and then tell me we have to Save Democracy the next, you’re either an idiot or an opportunist with no regard for collateral damage. If you spend your days badgering people to vote and your afternoons diligently watering seeds of doubt about the integrity of the next election (hedging in case your guy loses), you’re an asshole. If you’re going to lament the damage being done, at least stop swinging the sledgehammer.

Perhaps worst of all is the myopia. Do these actors never ask themselves, “What comes next?” Credibility is subject to entropy as much as anything else: it’s difficult to build and easy to let slide into disrepair. They think they’re torching Russia to starve Napoleon, but in reality they’re Pyrrhus, paying for today’s victory with tomorrow’s defeat.

There will be no back to normal for us, brave explorers. They’ve burned the ships.


  1. With the exception of “thought leaders” with significant influence, this post is not meant to apply to individuals. I don’t blame your uncle for his conspiratorial MAGA-posting, nor your aunt for her wine-mom-inflected renditions of CNN chyrons.
  2. For the record, I don’t consider Brazil a failed state. Nor will I pretend America doesn’t have some serious problems (many of which look likely to get worse). I’m just saying the bar is a lot higher than people seem to imagine.

Free speech in the digital age

I finally broke down and wrote a cancel-culture-adjacent piece. It originally appeared at Merion West, an online magazine. Since this essay contains themes I’ve been mulling over but have struggled to articulate for a while, I thought I’d reprint the piece here, with some commentary in the footnotes. Enjoy!


If I had to pick one thing America does better than any other nation, I’d have to go with free speech. The American commitment to free speech is legendary, codified by the First Amendment, which guarantees all Americans the right to worship, peacefully assemble, and otherwise express themselves without fear of government censorship.

As legal protections for freedom of expression go, the First Amendment remains the gold standard worldwide. We often take this for granted, forgetting that most people don’t live under the same conditions. Hold the American stance on freedom of speech in contrast with that of Iran or Saudi Arabia, where blasphemy is punishable by death, or China, where one-to-three million members of an ethno-religious sect are packed into concentration camps for crimes as spurious as abstaining from alcohol.*

If picking on theocracies and dictatorships strikes you as low-hanging fruit, recall that Europeans also live with less freedom of expression. A U.K. man was arrested and fined for posting a YouTube video that showed his girlfriend’s pug performing Nazi salutes, for example. By comparison, the American Civil Liberties Union has used the First Amendment to defend the rights of neo-Nazis and civil rights protestors alike to assemble.

Our commitment to the rights of others to express themselves, even if they hold heinous beliefs, is something uniquely American, perhaps the finest piece of our cultural heritage. Unfortunately, it’s a commitment we seem to be turning our backs on—and the First Amendment is often used as a moral license to do so.

The First Amendment guarantees freedom from government censorship; it doesn’t establish a positive right to speech. This is as it should be, as anything more would require compelling others to either hear or facilitate one’s speech. However, this allows people to take a narrow view of freedom of speech as merely freedom from government censorship. We might call this the “showing you the door” strain of free speech thought. Such a view, while legally coherent, ignores that free speech has a cultural component as well—one that needs constant maintenance if it’s not to fall into disrepair.

That component might be described as a willingness to err on the side of permissiveness when it comes to public discourse—or perhaps an understanding that we generally tend to benefit from living in a culture where people can push boundaries without intolerable social and economic risk.** Its bedrock values are charity, humility, and tolerance. 

When I speak of a threat to free speech culture, I’m talking about the newly enabled impulse to defenestrate and defame people, often for trivial transgressions, sometimes years after the offense—“cancel culture,” if you must. It is distinct from free speech culture in that it doesn’t seek to confront opposing views but rather to erase them, often in ways that are financially or personally ruinous for the offending party. It’s the self-righteous, vindictive justice of the mob.

Because the internet

Any observer of humanity can tell you this is not new behavior. On the contrary, it’s been more the rule than the exception. But it does seem exacerbated and facilitated by modern life, especially the internet.

As more of life moved online, it became easily searchable, permanent, and largely public. This migration—the result of social encouragement to live in full view of your friends, casual acquaintances, and advertisers—has spawned a panoptic social archive that can easily be turned against you.*** These are conditions unique to life in the 21st century that many adults, let alone children, are understandably ill-equipped to navigate.

When paired with the rapid mutation of norms surrounding acceptable speech (also aided by the internet) and the mob mentality incentivized by social media and click-hungry outlets, this creates an environment ripe for reflexive, post-hoc defamation, to which even—or, more accurately, especially—powerful liberal institutions (the very same tasked with guarding free inquiry) are showing little resistance.

In such a hostile environment, the obvious choice becomes to abstain not only from speech that is controversial but also from speech that might someday become controversial. (The exception being those who are financially immune to cancelation and can thus afford public free thought.) This is clearly at odds with a culture of free speech, in which ideas can be freely debated and people can change their minds over time.

We’re already seeing the consequences: authors pulling their own books from publication for excruciatingly trivial offenses; professionals being fired for sharing objective research that supports unhappy findings. But the future consequences will be unseen: the important medical studies that aren’t conducted; the bold art that isn’t created; the policy failures that can’t be named, much less halted. From this vantage point, the future looks bleak, the province of the anodyne and ineffectual.

Censorship has been outsourced to private actors

Much like the social surveillance system under which we live (voluntarily, it must be said), the modern thought police regime is not a product of the state. Censorship has been outsourced to private companies and zealous volunteers, who are themselves often exercising their free speech rights in the course of policing others’ speech. From a legal standpoint, this is of course distinct from government censorship, and therefore not a First Amendment issue. No one has a right to a subreddit or a Twitter handle or a New York Times op-ed.

Yet it would be a mistake to say that these companies and individuals don’t or can’t pose a threat to free speech in the broader, cultural sense. To do so, you would have to ignore the market power of the relatively few actors that control the channels of speech in the modern era. The collapse of local media and the consolidation of firms within the industry, for example, have endowed the remaining actors with the power to filter the coverage of events and viewpoints that millions of Americans are exposed to. Do you trust them not to use it?

Over half of Americans get their news through Facebook, which is known to have manipulated users’ feeds to alter their emotions. Some 80% of search engine traffic flows through Google, home to famously opinionated, activist employees. About a quarter of journalists turn to Twitter—the use of which has been shown by at least one study to affect journalists’ judgment of newsworthiness—as their primary news source. The case of social media platforms and search engines is particularly illustrative: while they are private actors that users engage with of their own volition, network effects are built into their business models, meaning once established, they’re not as vulnerable to competition as other businesses and products are.
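To make the network-effect point concrete, here is a back-of-the-envelope sketch of my own (Metcalfe’s law is my assumed model here; none of the studies above invoke it): if each user’s benefit from a platform scales with the number of other users they can reach, total value grows roughly with the square of the user count:

$$V(n) \propto n^2 \quad\Rightarrow\quad \frac{V(10n)}{V(n)} \approx 100$$

On that model, an incumbent with ten times the users is on the order of a hundred times more valuable to join, which is why a marginally better competitor rarely dislodges it.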

These companies are well within their legal rights to create their own policies and remove content that violates them, to algorithmically promote or suppress content on their properties, or to flag content as misinformation if they deem it so. But to deny that in doing so they might chill, stifle, or otherwise impact free expression is fanciful.

There are no easy fixes

Part of the irony of this problem is that addressing it in the most straightforward way (through policy or regulation) would actually represent a huge step in the wrong direction. Do I worry about the market power of companies that control the modern channels of speech? Yes, especially given the political power dynamics at play in many of our most powerful institutions. Do I think media polarization is dangerous and bad? You bet. But maintaining the independence of private actors, and thus the core of the First Amendment, is more important than the pursuit of an ephemeral unbiased public sphere.

That’s fine, because this isn’t a policy problem. It’s a cultural problem, and it requires a cultural solution: a revival of free speech culture and the virtues upon which it rests. We need to check our instincts to banish things we don’t like, and we need to voice our skepticism of those who lean too heavily on the power of censorship.**** (It would probably also be a good idea for individuals to rethink how they use the internet.)

I know this is a lot to ask, especially under the conditions of the digital age. But I have hope. Cultural free speech is a core American value and a key component of life in a pluralistic society. If anyone is going to defend it, it will be us.

Notes

* When I started writing this piece (about a week ago), Uyghur oppression was the most relevant example of Chinese human rights violations. By the time it was published, that had changed.

** This is one of those American ideals that has certainly never been implemented or enjoyed uniformly. As sociology professor Rod Graham points out, for a long time, you could risk losing your job and destroying your personal life by coming out as gay, for example. So while the tone of this piece is somewhat pessimistic about the state of modern free speech, I think it’s important to note that in a lot of ways, things have improved.

***  I should have also brought up that sometimes, as in many of the “Karen” videos going around, this social surveillance system is quite literally weaponized. There are incentives in place to do so—mainly the promise of money and virality for the poster.

****  There’s always going to be an Overton window; I don’t mean to suggest it could be any other way. That’s just part of living in a society.

Thoughts on Marc Andreessen’s IT’S TIME TO BUILD

Way way back in April of 2020, a venture capitalist named Marc Andreessen wrote an all-caps exhortation to western (particularly American) institutions and individuals: IT’S TIME TO BUILD. It’s a quick read, so I do recommend it. If that’s out of the question, you can get the gist from the opening paragraphs:

Every Western institution was unprepared for the coronavirus pandemic, despite many prior warnings. This monumental failure of institutional effectiveness will reverberate for the rest of the decade, but it’s not too early to ask why, and what we need to do about it.

Many of us would like to pin the cause on one political party or another, on one government or another. But the harsh reality is that it all failed — no Western country, or state, or city was prepared — and despite hard work and often extraordinary sacrifice by many people within these institutions. So the problem runs deeper than your favorite political opponent or your home nation.

Part of the problem is clearly foresight, a failure of imagination. But the other part of the problem is what we didn’t *do* in advance, and what we’re failing to do now. And that is a failure of action, and specifically our widespread inability to *build*.

We see this today with the things we urgently need but don’t have. We don’t have enough coronavirus tests, or test materials — including, amazingly, cotton swabs and common reagents. We don’t have enough ventilators, negative pressure rooms, and ICU beds. And we don’t have enough surgical masks, eye shields, and medical gowns — as I write this, New York City has put out a desperate call for rain ponchos to be used as medical gowns. Rain ponchos! In 2020! In America!

Marc Andreessen, “IT’S TIME TO BUILD”

Andreessen’s blog post is very good, even if it’s mostly an extended rallying cry. I think it was also very timely, as it alludes to a few subtextual themes I’m seeing come up more and more in politics:

  1. The US economy is increasingly concerned with rent extraction and distribution as opposed to genuinely productive economic activity, the latter having been off-shored to a great extent. The dollars-and-cents economic benefits of doing so aren’t really up for debate, but in social and political terms, the trade-off is looking less appealing these days. Prediction: interest in industrial policy is going to (continue to) increase among the right and possibly the left.
  2. Proceeding from a default assumption of capital scarcity is maybe not a smart way to make policy anymore. We are awash in money and not averse to printing more or deficit spending when the mood strikes. Obviously there’s a limit to how long you can get away with stuff like that, but if we can fight endless wars perhaps we can also fix a few roads.
  3. Maybe democracy is the problem? Others responded to Andreessen’s blog post by pointing out that there are political impediments to building as aggressively as Andreessen would like. Vox’s editor in chief, Ezra Klein, writes that American institutions public and private have become “vetocracies,” meaning that they’re biased against action instead of in its favor. Similarly, Steven Buss notes in Exponents Magazine that entrenched interests have captured regulators, making building, in many cases, illegal. Homeowners, for example, are hostile to development and form a powerful local political constituency.

    The thing is… isn’t this basically just policymakers being tuned into the desires of their constituents—or at least those inclined to make their voices heard? The only people who care enough to show up at a zoning meeting are the homeowners who don’t want the high-rise going in across the street. Professions lobby to be licensed so as to increase their income and limit competition, but members of the public generally don’t care enough to show up at the state house with a pitchfork.

    This is just the way it’s going to be, so maybe the answer is a system that doesn’t particularly care what its constituents have to say—or at least cares less in areas prone to regulatory capture.
  4. Finally, America’s ailments extend beyond the realms of economics and technocratic governance. Ours is a crisis of imagination, spirit, and mythology, exacerbated by the collapse of social capital across much of the nation. Consider the following anecdote¹:

    In 1869, a businessman named George Atwater set out to install a network of rails throughout the city of Springfield, MA—from where I write presently—on which horses would pull carriages, a pre-electric trolley system. It seemed like such a ridiculous idea that the board of aldermen laughed as they gave him permission and mocked him with an “initial investment” of eleven cents.

    Atwater built it anyway, and it turned out to be a huge success, expanding throughout the city and surpassing an annual ridership of 1 million by 1883. In 1890, less than a decade after the first electric power stations were built, the Springfield rail system began electrifying routes. By the next summer, all lines had been converted from horse to electric power. By 1904, ridership was 19 million; by 1916 it was 44 million.

    All of this—bold, successful investment in infrastructure, the rapid adoption of new technology, reliable and profitable public transportation—is technically possible today, yet this story could never take place in 2020. The aldermen would have dragged their feet, insisted on handouts to favored constituencies, and requested a handful of impact studies. Atwater would have stuck his investment in the stock market. The story would not have taken place here, because Springfield, like many former manufacturing cities, is in many ways a husk of its formerly productive self. Atwater would have lived in San Francisco, Boston, or New York.

Andreessen is right. It’s time to build. But let’s go broader than that: It’s time for a general return to alacrity in the public and private spheres, particularly for those of us who don’t live in one of the nexuses of the new economy. It’s time to rebuild social capital. It’s time to turn off autopilot.

Let’s fucking go.

###

  1. I came across this story in Lost Springfield, a local history book by Derek Strahan, who blogs at lostnewengland.com. I really enjoyed the book, so if you’re interested in the region’s history, I’d check out Strahan’s work.

Armchair Psych: Why Elizabeth Warren’s Loss Inspires “Fury and Grief”

You may have noticed that Senator Elizabeth Warren has suspended her campaign for president after a disappointing showing in the Democratic primary. You may have also noticed that some people are very upset.

I do believe Warren’s loss is particularly painful for her supporters — and not just those in the media. I’m going to bend my rule about not discussing electoral politics on this blog so I can offer an armchair psychologist take on why Elizabeth Warren’s defeat has inspired such “fury and grief.” The usual disclaimers apply, probably more than usual.

#Goals

The first piece of the puzzle is that Warren’s supporters strongly identify with and admire her. Unlike, say, Bernie Sanders or Joe Biden, who are both personally wealthy and powerful but enjoy substantial support from the middle and lower classes, Elizabeth Warren actually has a lot in common with her supporters: white, highly educated professionals.

She is like them, only more so: a veteran of prestigious institutions from Boston to D.C., impeccably credentialed and accomplished, with grandchildren and a $12 million net worth to boot. She is “having it all” made flesh, an avatar of success. This encourages supporters to project themselves onto Warren. Their parasocial relationship makes her loss harder to deal with because it feels like a personal rejection, and in some ways, it is.

Technocracy and its true believers

Understanding Warren and her supporters as ideological technocrats is essential to making sense of their dismay at her poor performance. A technocrat’s authority is legitimized by displaying expertise, of which Warren did plenty. Her frequent allusions to her competence and preparedness — she has “a plan for that!” — are a straightforward appeal to technocratic ethos.

But raw displays of expertise are not the only route toward technocratic legitimacy, and indeed, few will have the occasion to put forth arcane “plans” to remake society and be taken seriously (though that is the dream). Expertise and the authority it grants can also be obtained through association with prestigious institutions.

Within these places, advancement, evaluation, and remuneration of personnel are typically formulaic matters. (For an example, check out the salary schedule for foreign service officers. Another is how public school teachers’ salaries are calculated.) This is a superficial gesture to the ideals of fairness and objectivity. The impersonality and aversion to qualitative data it necessitates are regarded as features, not bugs, of bureaucracy.
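To illustrate what “formulaic” means here, below is a toy pay grid in the spirit of the schedules linked above. The structure and every number in it are invented for illustration, not taken from any real schedule:

```python
# Hypothetical grade/step pay grid; all figures are invented for illustration.
BASE_BY_GRADE = {1: 50_000, 2: 58_000, 3: 67_000}  # base salary at step 1
STEP_RAISE = 0.03  # assumed 3% raise per step within a grade


def salary(grade: int, step: int) -> float:
    """Pay is a pure function of grade and step; no qualitative judgment."""
    return BASE_BY_GRADE[grade] * (1 + STEP_RAISE) ** (step - 1)


print(f"${salary(grade=2, step=5):,.0f}")  # a grade-2, step-5 employee: $65,280
```

Nothing about the individual enters the calculation except their coordinates in the grid; that impersonality is the point.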

The reason this is important isn’t that Elizabeth Warren spent her career in such places. It’s that her supporters have too. These ideas are not only intuitive to them, they are fundamental, ethical truths. Elizabeth Warren deserves the job. She spent a lifetime earning it.

Whether or not the world should work this way is an open question. But to convince yourself this is the only morally valid way it could work, as Warren supporters seem to have, is an error in judgment. In practical terms, it’s a really poor model for understanding how actual voters make decisions about political leadership in a democracy. Presidential hopefuls wouldn’t be subjecting themselves to the Iowa State Fair if the election could be decided by resume-scanning software.

The technocratic path and its costs

The technocratic path to power is not merely a career plan; it’s a full-blown ideology with ideas about what’s valuable, what constitutes a good life, and who deserves what. But underneath it lies the universal, deeply held human desire for esteem.

Recall that Elizabeth Warren is an aspirational figure for the upper-middle class, which defines itself by its intellect and is preoccupied with the markings thereof. Her path is their path, the Gramscian march that culminates in power and respect. Her decisive failure to attain that path’s platonic form, then, calls into question the legitimacy of the rules they’ve been playing by and the immense sacrifices doing so requires.

If giving years of your life and hundreds of thousands of dollars to the machine doesn’t buy you unquestioned esteem, what’s the point? You could have relaxed more, could have taken a job that actually paid and bought a house. You could have just pulled a Scott Alexander and exorcised your passions in a blog! But time only goes forward, so the present and future have to justify the past.

Status quo bias and sexism

Last item of note. The portrayal of Warren’s sound defeat as sexism is as predictable as it is unfalsifiable. But for the purposes of this blog post, we’re not really interested in whether it’s true so much as in the fact that her supporters want it to be true.

As I see it, sexism provides the least challenging explanation for her failure, not intellectually — it requires some serious mental gymnastics to fit her third-place finish in Massachusetts (among Democratic women!) into that narrative — but personally and philosophically.

If someone is rejected for their immutable qualities, those doing the rejecting can be safely dismissed as bigots, and their opinions need not be taken too seriously. It’s not me; it’s you. The rejection of an ethos is different, because it’s not a repudiation of what you happen to be but rather what you choose to be. It’s harder for Warren supporters to swallow because they share her convictions, one of which is that they possess The Truth (which is why they should be in charge of policy and journalism and academia and human resources and…). Her defeat is the ultimate public repudiation of that.

But most of all, the cure for sexism — no doubt some combination of activism; advocacy; TED Talks; and a ubiquitous, memetic media campaign — requires no change on their part. They’re already doing these things; in fact, there are entire industries and departments, staffed by the Warren demographic, devoted to these endeavors. Insofar as their daily lives are concerned, doubling down on sexism being the problem is activism against change.

“IRL Impressions”

I have, perhaps belatedly, entered the point in life at which I no longer have standing weekend plans to drink with friends. Not coincidentally, I’ve been doing more in the way of the contemplative and outdoorsy. A few weekends ago, my girlfriend and I went for a hike on Monument Mountain in Great Barrington, Massachusetts. (Credit where it’s due: we got the idea from MassLive’s list of the best hikes in Massachusetts. We’ve tepidly declared our intention to hit them all this spring and summer.)

We opted for the mountain’s most direct route, looking for a challenge. But despite being advertised as strenuous, the trail was mostly tame, its steepest segments eased by the installation of stone steps. We had a pleasant, if not effortless, ascent, punctuated by this or that detour to examine our surroundings.

After an hour or so, we reached the summit. Monument Mountain isn’t very tall. At a little over 500 meters, it’s about half the height of Massachusetts’ tallest peak, Mount Greylock. But that bit of relativity is less salient when you’re looking down on soaring hawks and the slow-motion lives of the humans below.

It was a beautiful day, and we weren’t the only ones out. In the background, two young women were taking turns photographing each other for the ‘Gram, the perfect receipt of an afternoon well spent. “I’m gonna do some sunglasses on, then do some sunglasses off,” one said. I immediately wrote down the quote in a text to Megan, who laughed.

Every now and again, something like this makes me think about the growing importance of online media — in business, culture, love, politics, and other areas of life. Social media is a mixed bag, but the advantages of scale it offers are pretty much uncontested. I wonder if we’ll reach a point at which the significance of online life — where 10 million can attend a concert and content can be redistributed in perpetuity at low marginal costs — eclipses that of our offline lives. If awareness is a necessary component of significance, it’s hard to see how it wouldn’t.

A few months ago, my company hired a consultant to help us attract sponsorship for an event. As part of their information-gathering, the consultant asked us what the “IRL impressions” of the event would be — a term that mimics social media analytics and that both parties eventually decided probably meant “attendance.” This struck me as at once amusing and depressing: the internet’s centrality is such that it must now be specified when we’re talking about the real — no, physical — world.

What Colleges Sell (continued)

I’m obviously not one to prioritize quantity when it comes to writing. Counting this one, I’ve written four blog posts this year — not great for a guy whose New Year’s resolution set the pace at two per month. Even less so when you consider that half of them have now been follow-up posts.

However, there was some interesting Facebook discussion on my last post that I felt merited some elucidation here, where those who don’t follow me on social media can digest it. (I won’t ask anyone to follow on social, but to those of you who are here via social media, you should subscribe to get these posts by email.) I’m also working on something else that’s a bit involved, and I thought this would be a good stopgap.

As loyal readers are aware, my last post touched on the college-admissions scandal and the cultural legwork being done by our vision of education as a transformative asset.

Elite colleges sell these ideas back to us by marketing education as a transformative experience, an extrinsic asset to be wielded. In an unequal society, this is a particularly comforting message, because it implies:

  1. The world works on meritocracy. High-status individuals not only are better than most, they became so through efforts the rest of us can replicate.
  2. We can achieve equality of outcomes with sufficient resources. This has the added bonus of perpetuating the demand for high-end education.

An observation I couldn’t figure out how to work in is that getting into elite colleges seems by far the hardest part of graduating from them. Admissions is, after all, the part of the process the accused parents were cheating, and to my knowledge, none of the students involved were in danger of failing out, despite having been let in under false pretenses.

The low bar for good grades at elite colleges, the “Harvard A,”¹ is so widely acknowledged that to call it an open secret would be misleading.² Stuart Rojstaczer, the author of gradeinflation.com, documents two distinct periods of grade inflation in the last 50 years: the Vietnam War era, in which men who flunked out would likely be sent off to fight an unpopular war, and the “Student as a Consumer” era of today.

The transition to the latter has meant a change in teaching philosophy and an increased centrality of the admissions process. On his website, Mr. Rojstaczer quotes a former University of Wisconsin chancellor as saying, “Today, our attitude is we do our screening of students at the time of admission. Once students have been admitted, we have said to them, ‘You have what it takes to succeed.’ Then it’s our job to help them succeed.” (Emphasis mine.)

This is consistent with my not-so-between-the-lines theorizing that the later-in-life achievements of elite college grads are mostly attributable to selection effects, not education. It turns out this was studied by Alan Krueger and Stacy Dale, who found salary differences between elite college graduates and those who applied to elite schools but didn’t attend were “generally indistinguishable from zero.”
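Here is a minimal simulation sketch of that selection story. All parameters are invented, and this captures only the intuition, not Krueger and Dale’s actual methodology: if an elite school admits on ability and ability alone drives salary, its graduates out-earn everyone else even when attending adds exactly nothing.

```python
# Toy model: admission selects on ability; salary depends on ability alone,
# plus an optional causal "school effect." All numbers are invented.
import random

random.seed(0)


def observed_gap(school_effect: float, n: int = 100_000) -> float:
    elite, everyone_else = [], []
    for _ in range(n):
        ability = random.gauss(0, 1)
        attends_elite = ability > 1.0        # selective admission on ability
        pay = 50_000 + 20_000 * ability      # pay driven by pre-existing traits
        if attends_elite:
            pay += school_effect             # any genuine value added by school
        (elite if attends_elite else everyone_else).append(pay)
    return sum(elite) / len(elite) - sum(everyone_else) / len(everyone_else)


# With a school effect of zero, a gap of roughly $36,000 still appears:
print(f"${observed_gap(school_effect=0.0):,.0f}")
```

Comparing attendees against similarly able applicants who went elsewhere, as Krueger and Dale did, is one way to subtract that selection term from the observed gap.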

Of course, this is kind of depressing, because if good schools don’t make “winners,” but rather attract and rebrand them, then it’s a lot easier to attribute their graduates’ success to factors that are not only beyond their control but for which there are likely no or few policy levers — genetics, culture, family structure, and others.

I think this is an unwelcome conclusion to the point that even incontrovertible evidence — whatever that would look like — would be ignored or stigmatized by polite society. Most people probably agree that public policy should keep away from these areas of life.³

Regardless, I think we should be more honest with ourselves about our obsession with elite schools and our expectations of education more generally.

*

Footnotes:

  1. In case you don’t feel like clicking the link: In 2013, Harvard’s dean revealed the median grade awarded at the school to be an A-, while the most common grade given was a straight A.
  2. Though apparently to a lesser degree, this has been the case at four-year colleges across the board, not just top-tier private ones.
  3. Then again, maybe they don’t. A recent survey of over 400 US adults found “nontrivial” levels of support for eugenic policies among the public, increasing with the belief that various traits — intelligence, poverty, and criminality — are heritable and also associated with attitudes held by the respondent about the group in question. The questions in the study were framed as support for policies that would encourage or discourage people with particular traits to have more or fewer children. (If you have 10 minutes, read the study, freely accessible at slatestarcodex. Also good: Scott Alexander’s piece on social censorship, in which the aforementioned paper is linked.)

What Colleges Sell

The recent college-admissions scandal has me, and probably many of you, thinking about the institutional power of elite colleges. It’s remarkable that even those we would consider society’s “winners” aren’t immune to their pull. Take for example Olivia Giannulli, who is from a wealthy family; has nearly 2 million YouTube followers; owns a successful cosmetics line (pre-scandal, anyway); and whose parents, Lori Loughlin and Mossimo Giannulli, allegedly paid $500,000 to get her and her sister accepted to USC.

Why?

The standard line is that the point of college is to learn. Getting into a better school avails one of better information, which translates into more marketable skills—human capital accrual, in economics jargon. The many deficiencies of this view have birthed the somewhat-cynical “signaling theory”: the idea that college degrees serve mainly as signals to employers of positive, pre-existing characteristics like intelligence or attention to detail.

Signaling theory is powerfully convincing, but it doesn’t fully explain the insanity endemic to the elite college scene. There’s more going on at the individual, familial, and societal levels.

First, the individual. If human capital isn’t the point, social capital could be. The student bodies of elite schools are well curated for networking among the intelligent, the wealthy, and what we might call the “legacy crowd”—non-mutually exclusive groups that mutually benefit from this four-year mixer. Who you sit next to in class might matter more than what’s being taught.

Colleges, particularly those of renown, provide a sense of unabashed community that is in short supply elsewhere in American life. If you read universities’ marketing or speak with admissions staff, this is often a selling point. The idea that former classmates and fraternity brothers become a nepotistic social network post-graduation is intuitive, and probably a very compelling reason to attend a particular school.¹

What’s true for the individual is true for the family. Parents want the best for their children, and they know the kinds of doors attending the right school will open. But for parents, there are added elements at stake: self- and peer-appraisal.² That is, as educational attainment has become accepted not only as a means to social mobility but as validation of it, parents have come to define their success by the institutions their children attend. YouGov polling found that thirty-four percent of parents would pay a college prep organization to take a college admissions test on their child’s behalf. One in four would pay college officials to get their child into a good school.


I’d bet this is an understatement caused by social-desirability bias.³

Last up, and most interesting, is society at large. Even though most of us won’t attend a very prestigious university, if we attend one at all, the legitimacy of those institutions still rests on our perception. For us to be bought in, we need a culturally acceptable premise for the power enjoyed by Harvard, Yale, and the like—a role that can’t be filled by the networking and status-driven benefits I’ve described so far. This brings us full circle, back to the idea of higher education as a method of information conveyance.

Though the human capital accrual theory of education is probably bunk, most people’s belief in it feels sincere. In my view, this is the confluence of three phenomena: observed correlations between educational attainment and positive outcomes, our cultural commitments to self-sufficiency and equal opportunity, and a mostly unstated but potent desire to manufacture equality of outcomes.

Elite colleges sell these ideas back to us by marketing education as a transformative experience, an extrinsic asset to be wielded. In an unequal society, this is a particularly comforting message, because it implies:

  1. The world works on meritocracy. High-status individuals not only are better than most, they became so through efforts the rest of us can replicate.
  2. We can achieve equality of outcomes with sufficient resources. This has the added bonus of perpetuating the demand for high-end education.

The meritocratic, knowledge-driven higher education model is a product we’re all happy to buy because we like what it says about us. Its violation is disillusioning on a societal level, hence the disproportionate outrage created by a scandal involving some 50 students.

Perhaps this is an opportunity to reexamine our relationship with and expectations of the upper echelons of higher education. If we find signaling theory compelling, and I personally do, shouldn’t a society committed to equality of opportunity and social mobility seek to marginalize, rather than fetishize, the institutional power of these universities?

Somewhat more darkly, we should ask ourselves if our belief in the transformative power of education might not be the product of our collective willing ignorance—a noble lie we tell ourselves to avoid confronting problems to which we have few or no solutions. If pre-existing traits—innate intelligence, social connections, wealth, and others—most accurately explain one’s success, what of the increasingly selective institutions that facilitate their convergence?

*

Footnotes:

  1. Though I’ve heard plenty of anecdotal claims to this effect (including from an admissions officer during a grad school interview), I don’t have any hard proof. If one of you knows of a such a study, point me in the right direction.
  2. I just wanted to note that this feels very in line with the general trend of wealthier people having fewer children but spending an enormous amount of resources to give them even very marginal advantages.
  3. This is when people respond to polls in the ways they think are more likely to be viewed favorably by others. Basically, people under-report bad behavior (maybe using drugs or committing crimes) and over-report good behavior (like voting).

Business Is Getting Political—and Personal

As anyone reading this blog is undoubtedly aware, Sarah Huckabee Sanders, the current White House Press Secretary, was asked last month to leave a restaurant by its owner, who said she and her staff felt a moral imperative to refuse service to a member of the Trump administration. The incident, and the ensuing turmoil, highlights the extent to which business has become another political battleground—a concept that makes many anxious.

Whether or not businesses should take on political and social responsibilities is a fraught question—but not a new one. Writing for the New York Times in 1970, Milton Friedman famously argued that businesses should avoid the temptation to go out of their way to be socially responsible and instead focus on maximizing profits within the legal and ethical framework erected by government and society. To act otherwise at the expense of profitability, he reasoned, is to spend other people’s money—that of shareholders, employees, or customers—robbing them of their agency.

Though nearing fifty years of age, much of Milton Friedman’s windily and aptly titled essay, The Social Responsibility of Business Is to Increase Profits, feels like it could have been written today. Many of the hypotheticals he cites of corporate social responsibility—“providing employment, eliminating discrimination, avoiding pollution”—are charmingly relevant in the era of automation anxiety, BDS, and one-star campaigns. His solution, that businesses sidestep the whole mess, focus on what they do best, and play by the rules set forth by the public, is elegant and simple—and increasingly untenable.

One reason for this is that businesses and the governments Friedman imagined would rein them in have grown much closer, even as the latter have grown comparatively weaker. In sharp contrast to the get-government-out-of-business attitude that prevailed in the boardrooms of the 1970s, modern industry groups collectively spend hundreds of millions to get the ears of lawmakers, hoping to obtain favorable legislation or stave off laws that would hurt them. Corporate (and other) lobbyists are known to write and edit bills, sometimes word for word.

You could convincingly argue that this is done in pursuit of profit: Boeing, for example, spent $17 million lobbying federal politicians in 2016 and received $20 million in federal subsidies the same year. According to a 2014 report by Good Jobs First, an organization that tracks corporate subsidies, Boeing had received over $13 billion in subsidies and loans from various levels of government. Nevertheless, this is wildly divergent from Friedman’s idea of business as an adherent to, not architect of, policy.

As business has influenced policy, so too have politics made their mark on business. Far more so than in the past, today’s customers expect brands to take stands on social and political issues. A report by Edelman, a global communications firm, finds a whopping 60% of American Millennials (and 30% of consumers worldwide) are “belief-driven” buyers.

This, the report states, is the new normal for businesses—like it or not. Brands that refrain from speaking out on social and political issues now increasingly risk consumer indifference, which, I am assured by the finest minds in marketing, is not good. In an age of growing polarization, every purchase is becoming a political act. Of course, when you take a stand on a controversial issue, you also risk alienating people who think you’re wrong: 57% of consumers now say they will buy or boycott a brand based on its position on an issue.

This isn’t limited to merely how corporations talk. Firms are under increasing social pressure to hire diversity officers, change where they do business, and reduce their environmental impact, among other things. According to a 2017 KPMG survey on corporate social responsibility, 90% of the world’s largest companies now publish reports on their non-business responsibilities. This reporting rate, the survey says, is being driven by pressure from investors and government regulators alike.

It turns out that a well marketed stance on social responsibility can be a powerful recruiting tool. A 2003 study by the Stanford Graduate School of Business found 90% of graduating MBAs in the United States and Europe prioritize working for organizations committed to social responsibility. Often, these social objectives can be met in ways that employees enjoy: for example, cutting a company’s carbon footprint by letting employees work from home.

In light of all this, the choice between social and political responsibility and profitability seems something of a false dichotomy. The stakes are too high now for corporations to sit on the sidelines of policy, politics, and society, and businesses increasingly find themselves taking on such responsibilities in pursuit of profitability. Whether that’s good or bad is up for debate. But as businesses have grown more powerful and felt the need to transcend their formerly transactional relationships with consumers, it seems to be the new way of things.