“IRL Impressions”

I have, perhaps belatedly, entered the point in life at which I no longer have standing weekend plans to drink with friends. Not coincidentally, I’ve been doing more in the way of the contemplative and outdoorsy. A few weekends ago, my girlfriend and I went for a hike on Monument Mountain in Great Barrington, Massachusetts. (Credit where it’s due: we got the idea from MassLive’s list of the best hikes in Massachusetts. We’ve tepidly declared our intention to hit them all this spring and summer.)

We opted for the mountain’s most direct route, looking for a challenge. But despite being advertised as strenuous, the trail was mostly tame, its steepest segments made manageable by the installation of stone steps. We had a pleasant, if not effortless, ascent, punctuated by this or that detour to examine our surroundings.

After an hour or so, we reached the summit. Monument Mountain isn’t very tall. At a little over 500 meters, it’s about half the height of Massachusetts’ tallest peak, Mount Greylock. But that bit of relativity is less salient when you’re looking down on soaring hawks and the slow-motion lives of the humans below.

It was a beautiful day, and we weren’t the only ones out. In the background, two young women were taking turns photographing each other for the ‘Gram, the perfect receipt of an afternoon well spent. “I’m gonna do some sunglasses on, then do some sunglasses off,” one said. I immediately wrote down the quote in a text to Megan, who laughed.

Every now and again, something like this makes me think about the growing importance of online media — in business, culture, love, politics, and other areas of life. Social media is a mixed bag, but the advantages of scale it offers are pretty much uncontested. I wonder if we’ll reach a point at which the significance of online life — where 10 million can attend a concert and content can be redistributed in perpetuity at low marginal costs — eclipses that of our offline lives. If awareness is a necessary component of significance, it’s hard to see how it wouldn’t.

A few months ago, my company hired a consultant to help us attract sponsorship for an event. As part of their information-gathering, the consultant asked us what the “IRL impressions” of the event would be — a term that mimics social media analytics and that both parties eventually decided probably meant “attendance.” This struck me as at once amusing and depressing: the internet’s centrality is such that it must now be specified when we’re talking about the real — no, physical — world.

What Colleges Sell (continued)

I’m obviously not one to prioritize quantity when it comes to writing. Counting this one, I’ve written four blog posts this year — not great for a guy whose New Year’s resolution set the pace at two per month. Even less so when you consider that half of them have now been follow-up posts.

However, there was some interesting Facebook discussion on my last post that I felt merited some elucidation here, where those who don’t follow me on social media can digest it. (I won’t ask anyone to follow on social, but to those of you who are here via social media, you should subscribe to get these posts by email.) I’m also working on something else that’s a bit involved, and I thought this would be a good stopgap.

As loyal readers are aware, my last post touched on the college-admissions scandal and the cultural legwork being done by our vision of education as a transformative asset.

Elite colleges sell these ideas back to us by marketing education as a transformative experience, an extrinsic asset to be wielded. In an unequal society, this is a particularly comforting message, because it implies:

  1. The world works on meritocracy. High-status individuals not only are better than most, they became so through efforts the rest of us can replicate.
  2. We can achieve equality of outcomes with sufficient resources. This has the added bonus of perpetuating the demand for high-end education.

An observation I couldn’t figure out how to work in is that getting into elite colleges seems by far the hardest part of graduating from them. Admissions is, after all, the part of the process the accused parents were cheating, and to my knowledge, none of the students involved were in danger of failing out, despite having been let in under false pretenses.

The low bar for good grades at elite colleges, the “Harvard A,”¹ is so widely acknowledged that to call it an open secret would be misleading.² Stuart Rojstaczer, the author of gradeinflation.com, documents two distinct periods of grade inflation in the last 50 years: the Vietnam War era, in which men who flunked out would likely be sent off to fight an unpopular war, and the “Student as a Consumer” era of today.

The transition to the latter has meant a change in teaching philosophy and an increased centrality of the admissions process. On his website, Mr. Rojstaczer quotes a former University of Wisconsin Chancellor as saying, “Today, our attitude is we do our screening of students at the time of admission. Once students have been admitted, we have said to them, ‘You have what it takes to succeed.’ Then it’s our job to help them succeed.” (Emphasis mine.)

This is consistent with my not-so-between-the-lines theorizing that the later-in-life achievements of elite college grads are mostly attributable to selection effects, not education. It turns out this was studied by Alan Krueger and Stacy Dale, who found that salary differences between elite college graduates and those who applied to elite schools but didn’t attend were “generally indistinguishable from zero.”

Of course, this is kind of depressing, because if good schools don’t make “winners,” but rather attract and rebrand them, then it’s a lot easier to attribute their graduates’ success to factors that are not only beyond their control but for which there are likely no or few policy levers — genetics, culture, family structure, and others.

I think this conclusion is unwelcome to the point that even incontrovertible evidence — whatever that would look like — would be ignored or stigmatized by polite society. Most people probably agree that public policy should keep away from these areas of life.³

Regardless, I think we should be more honest with ourselves about our obsession with elite schools and our expectations of education more generally.

*

Footnotes:

  1. In case you don’t feel like clicking the link: In 2013, Harvard’s dean revealed the median grade awarded at the school to be an A-, while the most common grade given was a straight A.
  2. Though apparently to a lesser degree, this has been the case at four-year colleges across the board, not just top-tier private ones.
  3. Then again, maybe they don’t. A recent survey of over 400 US adults found “nontrivial” levels of support for eugenic policies among the public, increasing with the belief that various traits — intelligence, poverty, and criminality — are heritable and also associated with attitudes held by the respondent about the group in question. The questions in the study were framed as support for policies that would encourage or discourage people with particular traits to have more or fewer children. (If you have 10 minutes, read the study, freely accessible at slatestarcodex. Also good: Scott Alexander’s piece on social censorship, in which the aforementioned paper is linked.)

What Colleges Sell

The recent college-admissions scandal has me, and probably many of you, thinking about the institutional power of elite colleges. It’s remarkable that even those we would consider society’s “winners” aren’t immune to their pull. Take for example Olivia Giannulli, who is from a wealthy family; has nearly 2 million YouTube followers; owns a successful cosmetics line (pre-scandal, anyway); and whose parents, Lori Loughlin and Mossimo Giannulli, allegedly paid $500,000 to get her and her sister accepted to USC.

Why?

The standard line is that the point of college is to learn. Getting into a better school avails one of better information, which translates into more marketable skills—human capital accrual, in economics jargon. The many deficiencies of this view have birthed the somewhat-cynical “signaling theory”: the idea that college degrees serve mainly as signals to employers of positive, pre-existing characteristics like intelligence or attention to detail.

Signaling theory is powerfully convincing, but it doesn’t fully explain the insanity endemic to the elite college scene. There’s more going on at the individual, familial, and societal levels.

First, the individual. If human capital isn’t the point, social capital could be. The student bodies of elite schools are well curated for networking among the intelligent, the wealthy, and what we might call the “legacy crowd”—non-mutually exclusive groups that mutually benefit from this four-year mixer. Who you sit next to in class might matter more than what’s being taught.

Colleges, particularly those of renown, provide a sense of unabashed community that is in short supply elsewhere in American life. If you read universities’ marketing or speak with admissions staff, this is often a selling point. The idea that former classmates and fraternity brothers become a nepotistic social network post-graduation is intuitive, and probably a very compelling reason to attend a particular school.¹

What’s true for the individual is true for the family. Parents want the best for their children, and they know the kinds of doors attending the right school will open. But for parents, there are added elements at stake: self- and peer-appraisal.² That is, as educational attainment has become accepted not only as a means to social mobility but as validation of it, parents have come to define their success by the institutions their children attend. YouGov polling found that thirty-four percent of parents would pay a college prep organization to take a college admittance test on their child’s behalf. One in four would pay college officials to get their child into a good school.


I’d bet this is an understatement caused by social-desirability bias.³

Last up, and most interesting, is society at large. Even though most of us won’t attend a very prestigious university, if we attend one at all, the legitimacy of those institutions still rests on our perception. For us to be bought in, we need a culturally acceptable premise for the power enjoyed by Harvard, Yale, and the like—a role that can’t be filled by the networking and status-driven benefits I’ve described so far. This brings us full circle, back to the idea of higher education as a method of information conveyance.

Though the human capital accrual theory of education is probably bunk, most people’s belief in it feels sincere. In my view, this is the confluence of three phenomena: observed correlations between educational attainment and positive outcomes, our cultural commitments to self-sufficiency and equal opportunity, and a mostly unstated but potent desire to manufacture equality of outcomes.

Elite colleges sell these ideas back to us by marketing education as a transformative experience, an extrinsic asset to be wielded. In an unequal society, this is a particularly comforting message, because it implies:

  1. The world works on meritocracy. High-status individuals not only are better than most, they became so through efforts the rest of us can replicate.
  2. We can achieve equality of outcomes with sufficient resources. This has the added bonus of perpetuating the demand for high-end education.

The meritocratic, knowledge-driven higher education model is a product we’re all happy to buy because we like what it says about us. Its violation is disillusioning on a societal level, hence the disproportionate outrage created by a scandal involving some 50 students.

Perhaps this is an opportunity to reexamine our relationship with and expectations of the upper echelons of higher education. If we find signaling theory compelling, and I personally do, shouldn’t a society committed to equality of opportunity and social mobility seek to marginalize, rather than fetishize, the institutional power of these universities?

Somewhat more darkly, we should ask ourselves if our belief in the transformative power of education might not be the product of our collective willing ignorance—a noble lie we tell ourselves to avoid confronting problems to which we have few or no solutions. If pre-existing traits—innate intelligence, social connections, wealth, and others—most accurately explain one’s success, what of the increasingly selective institutions that facilitate their convergence?

*

Footnotes:

  1. Though I’ve heard plenty of anecdotal claims to this effect (including from an admissions officer during a grad school interview), I don’t have any hard proof. If one of you knows of such a study, point me in the right direction.
  2. I just wanted to note that this feels very in line with the general trend of wealthier people having fewer children but spending an enormous amount of resources to give them even very marginal advantages.
  3. This is when people respond to polls in the ways they think are more likely to be viewed favorably by others. Basically, people under-report bad behavior (maybe using drugs or committing crimes) and over-report good behavior (like voting).

Business Is Getting Political—and Personal

As anyone reading this blog is undoubtedly aware, Sarah Huckabee Sanders, the current White House Press Secretary, was asked last month by the owner of a restaurant to leave the establishment on the basis that the owner and her staff felt a moral imperative to refuse service to a member of the Trump administration. The incident, and the ensuing turmoil, highlights the extent to which business has become another political battleground—a concept that makes many anxious.

Whether or not businesses should take on political and social responsibilities is a fraught question—but not a new one. Writing for the New York Times in 1970, Milton Friedman famously argued that businesses should avoid the temptation to go out of their way to be socially responsible and instead focus on maximizing profits within the legal and ethical framework erected by government and society. To act otherwise at the expense of profitability, he reasoned, is to spend other people’s money—that of shareholders, employees, or customers—robbing them of their agency.

Though it is nearing fifty years of age, much of Milton Friedman’s windily and aptly titled essay, “The Social Responsibility of Business Is to Increase Profits,” feels like it could have been written today. Many of the hypotheticals he cites of corporate social responsibility—“providing employment, eliminating discrimination, avoiding pollution”—are charmingly relevant in the era of automation anxiety, BDS, and one-star campaigns. His solution, that businesses sidestep the whole mess, focus on what they do best, and play by the rules set forth by the public, is elegant and simple—and increasingly untenable.

One reason for this is that businesses and the governments Friedman imagined would rein them in have grown much closer, even as the latter have grown comparatively weaker. In sharp contrast to the get-government-out-of-business attitude that prevailed in the boardrooms of the 1970s, modern industry groups collectively spend hundreds of millions to get the ears of lawmakers, hoping to obtain favorable legislation or stave off laws that would hurt them. Corporate (and other) lobbyists are known to write and edit bills, sometimes word for word.

You could convincingly argue that this is done in pursuit of profit: Boeing, for example, spent $17 million lobbying federal politicians in 2016 and received $20 million in federal subsidies the same year. As of a 2014 report by Good Jobs First, an organization that tracks corporate subsidies, Boeing had received over $13 billion of subsidies and loans from various levels of government. Nevertheless, this is wildly divergent from Friedman’s idea of business as an adherent to, not architect of, policy.

As business has influenced policy, so too have politics made their mark on business. Far more so than in the past, today’s customers expect brands to take stands on social and political issues. A report by Edelman, a global communications firm, finds a whopping 60% of American Millennials (and 30% of consumers worldwide) are “belief-driven” buyers.

This, the report states, is the new normal for businesses—like it or not. Brands that refrain from speaking out on social and political issues now increasingly risk consumer indifference, which, I am assured by the finest minds in marketing, is not good. In an age of growing polarization, every purchase is becoming a political act. Of course, when you take a stand on a controversial issue, you also risk alienating people who think you’re wrong: 57% of consumers now say they will buy or boycott a brand based on its position on an issue.

This isn’t limited to merely how corporations talk. Firms are under increasing social pressure to hire diversity officers, change where they do business, and reduce their environmental impact, among other things. According to a 2017 KPMG survey on corporate social responsibility, 90% of the world’s largest companies now publish reports on their non-business responsibilities. This reporting rate, the survey says, is being driven by pressure from investors and government regulators alike.

It turns out that a well marketed stance on social responsibility can be a powerful recruiting tool. A 2003 study by the Stanford Graduate School of Business found 90% of graduating MBAs in the United States and Europe prioritize working for organizations committed to social responsibility. Often, these social objectives can be met in ways that employees enjoy: for example, cutting a company’s carbon footprint by letting employees work from home.

In light of all this, the choice between social and political responsibility and profitability seems something of a false dichotomy. The stakes are too high now for corporations to sit on the sidelines of policy, politics, and society, and businesses increasingly find themselves taking on such responsibilities in pursuit of profitability. Whether that’s good or bad is up for debate. But as businesses have grown more powerful and felt the need to transcend their formerly transactional relationships with consumers, it seems to be the new way of things.

Is College Worth It?

It’s a query that would have been unthinkable a generation or two ago. College was once – and in fairness, to a large extent, still is – viewed as a path to the middle class and a cultural rite of passage. But those assumptions are, on many fronts, being challenged. Radical changes on the cost and benefit sides of the equation have thrown the once axiomatic value of higher education into question.

Let’s talk about money first. It’s no secret that the price of a degree has climbed rapidly in recent decades. Between 1985 and 2015, the average cost of attending a four-year institution increased by 120 percent, according to data compiled by the National Center for Education Statistics, putting it in the neighborhood of $25,000 per year – a figure pushing 40 percent of the median income.

That increase has left students taking more and bigger loans to pay for their educations. According to ValuePenguin, a company that helps consumers understand financial decisions, between 2004 and 2014, the number of student loan borrowers and their average balance increased by 90 percent and 80 percent, respectively. Among the under-thirty crowd, 53 percent of those with a bachelor’s degree or higher now report carrying student debt.

Then there’s time to consider. Optimistically, a bachelor’s degree can be obtained after four years of study. For the minority of students who manage this increasingly rare feat, that’s still a hefty investment: time spent on campus can’t be spent doing other things, like work, travel, or even just enjoying the twilight of youth.

And for all the money and time students are sinking into their post-secondary educations, it’s not exactly clear they’re getting a good deal – whether gauged by future earnings or the measurable acquisition of knowledge. Consider the former: While there is a well acknowledged “college wage premium,” the forces powering it are up for debate. A Pew Research Center report from 2014 shows the growing disparity to be less a product of the rising value of a college diploma than the cratering value of a high school diploma. The same report notes that while the percentage of degree-holders aged 25-32 has soared since the Silent Generation, median earnings for full-time workers of that cohort have more or less stagnated across the same time period.

Meanwhile, some economists contend that to whatever extent the wage premium exists, it’s impossible to attribute to college education itself. Since the people most likely to be successful are also the most likely to go to college, we can’t know to what extent a diploma is a cause or consequence of what made them successful.

In fact, some believe the real purpose of formal education isn’t so much to learn as to display to employers that a degree-holder possesses the attributes that correlate with success, a process known as signalling. As George Mason Professor of Economics (and noted higher-ed skeptic) Bryan Caplan has pointed out, much of what students learn, when they learn anything, isn’t relevant to the real world. Professor Caplan thinks students are wise to the true value of a degree, which could explain why almost no student ever audits a class, why students spend about 14 hours a week studying, and why two-thirds of students fail to leave university proficient in reading.

Having spent the last 550-ish words bashing graduates and calling into question the financial returns on a degree, I wouldn’t blame you for asking whether I’m saying college really isn’t worth your time and money. While I’d love to end it here and now with a hot take like that, the truth is it’s a really complicated, personal question, and I can’t give a definitive answer. What I can offer are some prompts that might help someone considering college to make that choice for themselves, based on things I wish I’d known before heading off to school.

  • College graduates fare better on average by many metrics. Even if costs of attendance are rising, they still have to be weighed against the potential benefits. Income, unemployment, retirement benefits, and health care: those with a degree really do fare better. Even if we can’t be sure of the direction or extent to which this relationship is causal, one could reasonably conclude the benefits are worth the uncertainty.
  • Credentialism might not be fair, but it’s real. Plenty of employers use education level as a proxy for job performance. If the signalling theory really is accurate, the students who pursue a degree without bogging themselves down with pointless knowledge are acting rationally. As Professor Caplan points out in what seems a protracted, nerdy online feud with Bloomberg View’s Noah Smith, the decision to attend school isn’t made in a cultural vacuum. Sometimes, there are real benefits to conformity – in this case, getting a prospective employer to give you a shot at an interview. Despite my having never worked as a sociologist (alas!), my degree has probably opened more than a few doors for me.
  • What and where you study are important. Some degrees have markedly higher returns than others, and if money is part of the consideration (and I hope it would be), students owe it to themselves to research this stuff beforehand.
  • For the love of god, if you’re taking loans, know how compound interest works. A younger, more ignorant version of myself once thought I could pay my loans off in a few years. How did I reach this improbable conclusion? I conveniently ignored the fact that interest on my loans would compound (see the sketch after this list). Debt can be a real bummer. It can keep you tethered to things you might prefer to change, say a job or location, and it makes saving a challenge.
  • Relatedly, be familiar with the economic concept of opportunity cost. In short, this just means that time and money spent doing one thing can’t be spent doing something else. To calculate the “economic cost” of college, students have to include the money they could have made by working for those four years. If we conservatively put this number at $25,000 per year, that means they should add $100,000 in lost wages to the other costs of attending college (less if they work during the school year and summer).
  • Alternatives to the traditional four-year path are emerging. Online classes, some of which are offering credentials of their own, are gaining popularity. If they’re able to gain enough repute among employers and other institutions, they might be able to provide a cheaper alternative for credentialing the masses. Community colleges are also presenting themselves as a viable option for those looking to save money, an option increasingly popular among middle class families.
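
Since I brought up compound interest and opportunity cost, here is a minimal, back-of-the-envelope Python sketch of both. The loan balance, interest rate, and monthly payment are hypothetical numbers I picked for illustration, not figures cited anywhere above.

    # Illustrative only: a rough amortization loop for a hypothetical student loan.
    # Balance, rate, and payment below are assumptions, not figures from this post.

    def months_to_payoff(balance, annual_rate, monthly_payment):
        """Return (months, total interest paid) for a fixed monthly payment."""
        monthly_rate = annual_rate / 12
        months = 0
        total_interest = 0.0
        while balance > 0:
            interest = balance * monthly_rate
            total_interest += interest
            balance = balance + interest - monthly_payment
            months += 1
            if months > 1200:  # guard: the payment doesn't even cover the interest
                raise ValueError("Payment too small to ever retire the loan.")
        return months, total_interest

    # Hypothetical example: $30,000 at 5% APR, paying $350 a month.
    months, interest = months_to_payoff(30_000, 0.05, 350)
    print(f"Paid off in {months} months ({months / 12:.1f} years), "
          f"about ${interest:,.0f} of it interest.")

    # Opportunity cost, as described above: four years of forgone wages
    # at a conservative $25,000 per year.
    print(f"Forgone wages: ${4 * 25_000:,}")

Try shrinking the payment and watch the timeline stretch; that is the part my younger self conveniently ignored.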

There’s certainly more to consider, but I think the most important thing is that prospective students take time to consider the decision and not simply take it on faith that higher education is the right move for everyone. After all, we’re talking about a huge investment of time and money.

A different version of this article was published on Merion West.

Ben Carson’s Tragically Mundane Scandal

Whatever else it might accomplish, President Donald Trump’s administration has surely earned its place in history for laying to rest the myth of Republican fiscal prudence. Be they the tax dollars of today’s citizens or tomorrow’s, high ranking officials within Mr. Trump’s White House seem to have no qualms about spending them.

The latest in a long series of questionable expenses is, of course, none other than Department of Housing and Urban Development Secretary Ben Carson’s now infamous $31,000 dining set, first reported on by the New York Times.¹ Since the Times broke the story, Mr. Carson has attempted to cancel the order, having come under public scrutiny for what many understandably deem to be an overly lavish expenditure on the public dime.

At first blush, Secretary Carson’s act is egregious. As the head of HUD, he has a proposed $41 billion of taxpayer money at his disposal. Such frivolous and seemingly self-aggrandizing spending undermines public trust in his ability to use taxpayer funds wisely and invites accusations of corruption. It certainly doesn’t help the narrative that, as some liberals have noted with derision, this scandal coincides with the proposal of significant cuts to the department’s budget.

But the more I think about it, the more I’m puzzled as to why people are so worked up about this.

Let me be clear: this certainly isn’t a good look for the Secretary of an anti-poverty department with a shrinking budget, and it’s justifiable that people are irritated. At a little more than half the median annual wage, $31,000 is a sum most of us would consider absurd to spend on dining room furniture. The money that pays for it does indeed come from private citizens who would probably have chosen not to buy Mr. Carson a new dining room with it.

And yet, in the realm of government waste, that amount is practically nothing. Government has a long, and occasionally humorous, history of odd and inefficient spending.

Sometimes, it can fly under the radar simply by virtue of being bizarre. Last year, for example, the federal government spent $30,000 in the form of a National Endowment for the Arts grant to recreate William Shakespeare’s play “Hamlet” – with a cast of dogs. Other times, the purchase at hand is too unfamiliar to the public to spark outrage. In 2016, the federal government spent $1.04 billion expanding trolley service a grand total of 10.92 miles in San Diego: an average cost of $100 million per mile.

Both of those put Mr. Carson’s $31,000 dining set in a bit of perspective. It is neither as ridiculous as the play nor as great in magnitude as the trolley. So why didn’t either of those incidents receive the kind of public ire he is contending with now?

The mundanity of Mr. Carson’s purchase probably hurts him in this regard. Not many of us feel informed enough to opine on the kind of money one should spend building ten miles of trolley track, but most of us have bought a chair or table. That reference point puts things in perspective and allows room for an emotional response. It’s also likely this outrage is more than a little tied to the President’s unpopularity.

Ironically, the relatively small amount of money spent might also contribute to this effect. When amounts get large enough, like a billion dollars, we tend to lose perspective – what’s a couple million here or there? But $31,000 is an amount we can conceptualize.

So it’s possible that we’re blowing this a little out of proportion, driven by forces that are more emotional than logical. But I still think the issue is a legitimate one that deserves more public attention than it usually gets, and it would be interesting if the public were able to apply this kind of pressure to other instances of goofy spending. Here’s hoping, anyway.

A version of this article originally appeared on Merion West.

1. I wrote this article the day before word broke that Secretary of the Interior Ryan Zinke had spent $139,000 upgrading the department’s doors.

A Political Future for Libertarians? Not Likely.

When it was suggested I do a piece about the future of the Libertarian Party, I had to laugh. Though I’ve been voting Libertarian since before Gary Johnson could find Aleppo on a map, I’ve never really had an interest in Libertarian Party politics.

Sure, the idea is appealing on a lot of levels. Being of the libertarian persuasion often leaves you feeling frustrated with politics, especially politicians. It’s tempting to watch the approval ratings of Democrats and Republicans trend downward and convince yourself the revolution is nigh.

But if I had to guess, the party will remain on the periphery of American political life, despite a relatively strong showing in the 2016 Presidential election. A large part of this – no fault of the Libertarian Party – is due to anti-competitive behavior and regulation in the industry of politics. But a substantial amount of blame can be attributed to the simple and sobering fact that the type of government and society envisioned by hardcore Libertarians – the type that join the party – is truly unappealing to most of America.

Unless public opinion radically shifts, it feels like the Libertarian Party will mainly continue to offer voters a symbolic choice. Don’t get me wrong: I’m happy to have that choice, and it really would be nice to see people who genuinely value individual freedom elected to public office. But political realities being what they are, I’m going to hold off on exchanging my dollars for gold and continue paying my income taxes.

So that’s the bad news for libertarians. Here’s the good news: the cause of advancing human liberty isn’t dependent on a niche political party. The goal of libertarianism as a philosophy – the preservation and expansion of individual liberties – has no partisan allegiance. Victory for the Libertarian Party is (thankfully) not requisite for libertarians to get more of what they want.

Advancing their agenda has, for libertarians, proved to be more a question of winning minds than elections. While “capital-L” Libertarians remain on the political margins, aspects of libertarian thought are appreciated by people of all political persuasions and often attract broad support. Although no Libertarian Party member has ever held a seat in Congress, moved into a governor’s mansion, or garnered more than four percent of the national vote, many long-held libertarian ideas have enjoyed incredible success, and others are still gaining momentum.

Same-sex marriage is now the law of the land, as interracial marriage has long been. Support for legalizing marijuana is at an all-time high (pun intended), and ending the larger ‘war on drugs’ is an idea gaining currency, not only in the US but worldwide. The draft is a thing of the past; the public is growing wary and weary of interventionist foreign policy. A plan to liberalize our immigration system, though stuck in legislative limbo, remains a priority for most Americans, and the United States remains a low-tax nation among countries in the developed world, especially when it comes to individual tax rates.

And not all the good news comes from the realm of politics. Americans have maintained and expanded on a culture of philanthropy, per-capita giving having tripled since the mid-1950s. The rise of social media and the internet has made it easier than ever for people to exchange ideas. Technology of all sorts has lowered prices for consumers and helped people live more productive lives. Even space exploration – until recently exclusively the purview of governments – is now within private reach.

None of this was or will be passed with legislation written or signed into law by a Libertarian politician. But that’s not what really matters. What really matters is that people are freer today to live the kinds of lives they want, peacefully, and without fear of persecution. Yes, there is still much that might be improved, at home and certainly abroad. But in a lot of ways, libertarians can rest happily knowing that their ideas are winning, even if their candidates are not.

This article originally appeared on Merion West.

Human Mobility is Key to Fighting Poverty

More than fifty years into the “war on poverty,” government welfare programs remain the subject of much scrutiny. As the Trump administration unveils a new tax plan, fresh off numerous attempts to repeal and replace the Affordable Care Act, perennial questions about whether the government is doing enough to reduce poverty have resurfaced.

This debate often focuses almost exclusively on poor Americans, and solutions mostly center around the redistribution of resources via government transfers. On many levels, this makes sense. For one, non-Americans don’t vote, and politicians tend not to pay much attention to groups that cannot help them win elections. For another, the government’s ability to act on poverty is somewhat limited — it can try to create policies that facilitate wealth, but it cannot actually produce wealth on its own. Spreading around some of the surplus is therefore an attractive option.

But from a utilitarian and humanitarian perspective, this debate represents a missed opportunity. Limiting the conversation to wealth transfers within an already wealthy nation encourages inefficient solutions at the expense of ideas that might do a lot more good for a lot more people: namely, freeing people who are not at their maximum productivity to pursue opportunity elsewhere.

Between the EITC, TANF, SNAP, SSI, Medicaid, and other programs, the United States spent over $700 billion at the federal level in the name of alleviating poverty in 2015. A 2014 Census Bureau report estimates that Social Security payments alone reduced the number of poor Americans by nearly 27 million the previous year. Whatever your stance on the long-run effects of welfare programs, it’s safe to say that in the short term, government transfers provide substantial material benefits to recipients.

Yet if the virtue of welfare programs is their ability to improve living standards for the needy, their value pales in comparison to the potential held by labor relocation.

Political boundaries are funny things. By crossing them, workers moving from poor to rich nations can increase their productivity dramatically. That’s not necessarily because they can make more products or offer better services — although that is sometimes the case as well — but rather because what they produce is more economically valuable. This is what economists refer to as the “place premium,” and it’s partly created by differences in opportunity costs between consumers in each location.

Median wages of foreign-born US workers from 42 developing countries have been shown to be 4.1 times higher than those of their observably identical counterparts in their countries of origin. Some enthusiasts even speculate that the elimination of immigration restrictions alone could double global GDP. The place premium effect can be powerful enough to make low-skilled positions in rich countries economically attractive even to high-skilled workers from poor nations.

We have a lot of inequality in the United States, and that often masks the fact that we have very little absolute poverty. Even someone who is poor by American standards (an annual pre-transfer income of about $12,000 or less for a single-person household) can have an income that exceeds that of the global median household. Even with relatively generous government transfers, we probably would not do more than triple their incomes.

On the other hand, because they start with lower incomes, this same effect allows low-earning immigrants to proportionally increase their standard of living in a way that can’t be matched by redistribution within a relatively wealthy population. For example, the average hourly wage in the US manufacturing sector is slightly over $20; in Mexico, it’s around $2.30. Assuming a manufacturing worker from Mexico could find a similar position in the United States, their hourly earnings would increase roughly ninefold. To provide the same proportional benefit to a severely poor American — defined as a person or household with an income under half the poverty threshold — could cost up to $54,000.
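For the curious, here is a back-of-the-envelope Python sketch of that arithmetic. The wage figures are the rounded ones cited above; the severe-poverty income is my own assumption (roughly half the ~$12,000 poverty threshold), so the outputs land in the same ballpark as, rather than exactly on, the quoted numbers.

    # Rough sketch of the place-premium arithmetic; figures are illustrative.
    us_manufacturing_wage = 20.50      # dollars per hour, "slightly over $20"
    mexico_manufacturing_wage = 2.30   # dollars per hour

    multiple = us_manufacturing_wage / mexico_manufacturing_wage
    print(f"Hourly earnings multiply roughly {multiple:.1f}x by crossing the border.")

    # To hand a severely poor American the same proportional gain via transfers,
    # you'd need to add (multiple - 1) times their current income every year.
    severe_poverty_income = 6_000      # assumed: about half the poverty threshold
    transfer_needed = severe_poverty_income * (multiple - 1)
    print(f"Equivalent transfer on a ${severe_poverty_income:,} income: "
          f"about ${transfer_needed:,.0f} per year.")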

What’s true across national borders is true within them. Americans living in economically desolate locations could improve their prospects by relocating to more prosperous and opportune areas. Indeed, this is exactly what’s been happening for decades. The percentage of Americans living in cities has increased steadily, going from 45% in 1910 to nearly 81% by 2010. Nor is relocation exclusively a long-term solution. During oil rushes in Alaska and North Dakota, populations within the two states exploded as people flocked to economic activity.

Recently, however, rates of migration have been dwindling. Admittedly, there are fewer barriers to intra-national migration than immigration. But there are still things we might do to make it easier for people to move where the money is.

One obvious solution would be to encourage local governments to cut back on zoning regulations that make it more difficult and expensive to build new housing stock. Zoning laws contribute heavily to the rising costs of living in the most expensive cities, leading to the displacement of poorer residents and the sequestration of opportunity. As with immigration, this poses a bit of a political problem — it requires politicians to prioritize the interests of the people who would live in a city over those of the people who currently live there — the ones who vote in local elections.

Relatedly, we might consider revising our approach to the mortgage interest deduction and other incentives for homeownership. While the conventional wisdom is that homeownership is almost always desirable because it allows the buyer to build equity in an appreciating asset, some studies have found a strong positive correlation between levels of homeownership and unemployment. The upshot is that tying up most of one’s money in a home reduces the ability and desire to move for employment, leading to unemployment and downward pressure on wages. Whether or not to buy a home is the buyer’s decision, but these data cast doubt on the idea that the government should subsidize such behavior.

If the goal of policy is to promote human well-being, then increasing mobility should be a priority for policymakers. As a species, as nations, as communities, and as individuals, we should strive for a more productive world. Allowing people the opportunity to relocate in the name of increasing their output is a near-free lunch in this regard.

But while the economic dream of frictionless markets is a beautiful one, we live in a world complicated by politics. It’s unrealistic to expect politicians to set aside the concerns of their constituents for the greater good. I will therefore stop short of asking for open borders, the abolition of zoning laws, or the removal of the mortgage interest deduction. Instead, I offer the humbler suggestion that we exercise restraint in such measures, striving to remove and lessen barriers to mobility whenever possible. The result will be a freer, more equal, and wealthier world.

This article originally appeared on Merion West.

Universal Basic Income is Probably Not the Future of Welfare

If nothing else, universal basic income — that is, the idea of replacing the current means-tested welfare system with regular, unconditional cash payments to every citizen — is remarkable for the eclectic support it receives. The coalition for universal basic income (UBI) includes libertarians, progressives, a growing chorus of Luddites, and others still who believe a scarcity-free world is just around the corner. Based on its popularity and the growing concerns of coming economic upheaval and inequality, it’s tempting to believe the centuries-old idea is a policy whose time has finally come.

Personally, I’m not sold. There are several obstacles to establishing a meaningful universal basic income that would, in my mind, be nearly impossible to overcome as things stand now.

For one, the numbers are pretty tough to reconcile.

According to 2017 federal guidelines, the poverty level for a single-person household is about $12,000 per year. Let’s assume we’re intent on paying each American $1,000 per month in order to bring them to that level of income.

Distributing that much money to all 320 million Americans would cost $3.84 trillion, approximately the entire 2015 federal budget and far greater than the $3.18 trillion of tax revenue the federal government collected in the same year. Even if we immediately eliminated all other entitlement payments, as libertarians tend to imagine, such a program would still require the federal government to increase its income by $1.3 trillion to avoid increasing the debt any further.

Speaking of eliminating those entitlement programs, hopes of doing so are probably far-fetched without a massive increase in taxation. A $1,000 monthly payment to every American — which again, would consume the entire federal budget — would require a lot of people currently benefiting from government transfers to take a painful cut. For example, the average monthly social security check is a little over $1,300. Are we really going to create a program that cuts benefits for the poor and spends a lot of money on the middle class and affluent?

In spite of the overwhelming total cost of such a program, its per capita impact would be pretty small, since all the cash would be disbursed over a much greater population than current entitlements. For this reason, its merit as an anti-poverty program would be questionable at best.

Yes, you can fiddle with the disbursement amounts and exclude segments of the population — dropping minors from the dole would reduce the cost to around $2.96 trillion — to make the numbers work a little better, but the more you do that the less universal and basic it becomes, and the more it starts to look like a modest supplement to our existing welfare programs.
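
To show my work, here is a minimal Python sketch of the cost arithmetic above. The benefit is the $1,000 per month already discussed; the total population and the count of minors are rough assumptions on my part.

    # Quick sanity check on the UBI cost figures discussed above.
    TOTAL_POPULATION = 320_000_000
    MINORS = 73_000_000          # assumed, roughly the under-18 population
    ANNUAL_BENEFIT = 12_000      # $1,000 per month

    def annual_cost(recipients, benefit=ANNUAL_BENEFIT):
        """Total yearly outlay, in dollars, of a flat unconditional payment."""
        return recipients * benefit

    everyone = annual_cost(TOTAL_POPULATION)
    adults_only = annual_cost(TOTAL_POPULATION - MINORS)

    print(f"Everyone:    ${everyone / 1e12:.2f} trillion per year")
    print(f"Adults only: ${adults_only / 1e12:.2f} trillion per year")
    # For scale: 2015 federal tax revenue was about $3.18 trillion, and the
    # entire 2015 federal budget roughly $3.8 trillion.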

*

Universal basic income’s problems go beyond the budget. If a UBI were somehow passed (which would likely require our notoriously tax-averse nation to OK trillions of additional dollars of government spending), it would set us up for a slew of contentious policy battles in the future.

Entitlement reform, already a major preoccupation for many, would become a more pressing concern in the event that a UBI of any significant size were implemented. Mandatory spending would increase as more people draw benefits for more years and continue to live longer. Like the entitlements it may or may not replace, universal basic income would probably be extremely difficult to reform in the future.

Then there’s the matter of immigration. If you think reaching consensus on immigration policy is difficult in the age of President Trump, imagine how it would look once we began offering each American a guaranteed income large enough to provide an alternative to paid work. Bloomberg columnist Megan McArdle estimates that establishing such a program would require the United States to “shut down immigration, or at least immigration from lower-skilled countries,” thereby leading to an increase in global poverty.

There’s also the social aspect to consider. I don’t want to get into it too much because everybody’s view of what makes people tick is different. But it seems to me that collecting money from the government doesn’t make people especially happy or fulfilled.

The point is, part of what makes universal basic income appear realistic is the political coalition backing it. But libertarians, progressives, and the rest of the groups superficially united behind this idea have very different opinions about how it would operate and very different motivations for its implementation. When you press the issue and really think through the consequences, the united front for universal basic income begins to crack.

*

Don’t get me wrong; there’s plenty about universal basic income that appeals to this author’s libertarian sensibilities. I think there’s a strong argument for reforming the welfare system in a way that renders it more similar to a basic income scheme, namely replacing in-kind payments and some subsidies with direct cash transfers. Doing so would, as advocates of UBI claim, promote the utility of the money transferred and reduce government paternalism, both of them goals I find laudable.

I should also note that not all UBI programs are created equal. Universal basic income has become something of a catch-all term used to describe policies that are quite different from each other. The negative income tax plan Sam Bowman describes on the Adam Smith Institute’s website is much more realistic and well-thought-out than a system that gives a flat amount to each citizen. Its two greatest strengths are that it is neither unconditional nor given equally.

However, the issues of cost and dispersion, both consequences of UBI’s defining characteristics, seem to me insurmountable. Unless the United States becomes dramatically wealthier, I don’t see us being able to afford to pay any significant amount of money to all or most people. We would need to replace a huge amount of human labor with automation before this plan can start to look even a little realistic. Even if that does happen, and I’m not sure that it will anytime soon, I think there are better things we could do with the money.

This article originally appeared on Merion West.

Obamacare’s Complicated Relationship with the Opioid Crisis

The opioid epidemic, having evolved into one of the greatest public health crises of our time, is a contentious aspect of the ongoing debate surrounding the Republican healthcare proposal.

Voices on the left worry that the repeal of Obamacare’s Medicaid expansion and essential health benefits would worsen the crisis by cutting off access to treatment for addiction and substance abuse. Mother Jones and Vox have both covered the issue. Former President Obama’s June 22nd Facebook post stated his hope that Senators looking to replace the ACA would ask themselves, “What will happen to the Americans grappling with opioid addiction who suddenly lose their coverage?”

On the other side of things, there are theories that the Affordable Care Act actually helped finance–and perhaps exacerbated–the opioid epidemic. It goes something like this: Expanded insurance coverage was a primary goal of the ACA. Two of its policies that supported that goal–allowing adults up to 26 years of age to remain on their parents’ insurance and expanding Medicaid to cover a greater percentage of the population–unintentionally connected at-risk cohorts (chronically unemployed prime-age men and young, previously uninsured whites with an appetite for drugs, to name two) with the means to obtain highly addictive and liberally prescribed pain medications at a fraction of their street price. (Some knowledge about labor force and other social trends helps paint a clearer picture here.) Once addicted, many moved on to cheaper and more dangerous alternatives, like heroin or synthetic opioids, thus driving the growth in overdose deaths.

This is a really interesting, if tragic, narrative, so I decided to take a look. I focused on state-level analysis by comparing CDC Wonder data on drug-induced deaths with Kaiser Family Foundation data on expansion status and growth in Medicaid enrollment. The graph below plots states based on their rates of growth in Medicaid rolls and overdose deaths from 2010 to 2015, and is color-coded for expansion status: blue for states that expanded coverage before or by 2015, yellow for states that expanded during 2015, and red for states that hadn’t expanded by the end of 2015. (A note: this isn’t an original idea; for a more in-depth analysis, check out this post.)
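For anyone who wants to poke at this themselves, below is a rough sketch of how such a comparison could be assembled in Python with pandas and matplotlib. The file names and column names are placeholders I made up, not the actual CDC Wonder or Kaiser Family Foundation export formats, so treat it as a starting point rather than a reproduction of my graph.

    # Sketch of the state-level comparison described above; inputs are hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    # Placeholder files, one row per state.
    od = pd.read_csv("cdc_wonder_od_rates.csv")    # columns: state, od_rate_2010, od_rate_2015
    medicaid = pd.read_csv("kff_medicaid.csv")     # columns: state, enroll_2010, enroll_2015, expansion_status

    df = od.merge(medicaid, on="state")
    df["od_growth"] = 100 * (df["od_rate_2015"] / df["od_rate_2010"] - 1)
    df["medicaid_growth"] = 100 * (df["enroll_2015"] / df["enroll_2010"] - 1)

    # Color-code by expansion status, mirroring the description above.
    colors = {"expanded_pre_2015": "blue", "expanded_2015": "gold", "not_expanded": "red"}

    fig, ax = plt.subplots()
    for status, group in df.groupby("expansion_status"):
        ax.scatter(group["medicaid_growth"], group["od_growth"],
                   color=colors.get(status, "gray"), label=status)

    ax.set_xlabel("Growth in Medicaid enrollment, 2010-2015 (%)")
    ax.set_ylabel("Growth in drug-induced death rate, 2010-2015 (%)")
    ax.legend()
    plt.show()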

What’s interesting is the places where overdoses have increased the quickest. The fastest growing rates of overdose deaths were mostly in states that had expanded Medicaid by 2015; the only non-expansion state to grow by more than 50% since 2010 was Virginia, which in 2015 still had a relatively low rate of 12.7 fatal overdoses per 100,000 population. For some perspective on how bad things have gotten, that rate would have been the 19th highest among states in 2005; today Virginia ranks 42nd in terms of OD rate.

On the other hand, there isn’t a noticeable correlation between increases in Medicaid coverage and increases in the rate of fatal overdoses. Additionally, the rates of overdose deaths in expansion states were increasing before many of the Affordable Care Act’s key provisions went into effect. Starting around 2010, there was a dramatic divergence between would-be expansion states and the rest. It’s possible that states with accelerating rates were more likely to expand coverage in response to increased frequency of fatal overdoses.

So what’s the deal? Did the Affordable Care Act agitate the opioid epidemic? Obviously I don’t have the answer to that, but here’s my take:

I think it would be difficult to argue it hasn’t been a factor on some level, given the far higher rates of opioid prescription, use, and death among Medicaid patients than in the general population, as well as the state-level trends in OD rates (with acknowledgement that state-level analysis is pretty clunky in this regard; for many reasons West Virginia’s population isn’t really comparable to California’s). I think the fact that state Medicaid programs are adjusting regulations for painkiller prescriptions is an acknowledgement of that.

But if the ACA had a negative effect, I’d think it must register as a drop in the bucket. There are so many pieces to this story: lax prescription practices and the rise of “pill mills,” declining labor force participation, sophisticated distribution networks of Mexican heroin, bogus research and marketing on pain management, stark disparities between expectations and reality. It’s nice to think there’s one thing we can change to solve everything, but I don’t think we’re going to be so lucky.

Here’s another twist: Even if the Affordable Care Act had some negative impact, it could very well be that repeal would make things worse. Scrapping essential benefits coverage could lead to a loss or reduction of access to addiction treatment for millions of Americans. Moreover, gaining insurance has been shown to alleviate feelings of depression and anxiety. How, then, might we guess 20 million Americans will feel after losing their insurance? Given the feedback loop between pain and depression, this question deserves a lot of deliberation.