Is College Worth It?

It’s a question that would have been unthinkable a generation or two ago. College was once – and in fairness, to a large extent, still is – viewed as a path to the middle class and a cultural rite of passage. But those assumptions are being challenged on many fronts. Radical changes on both the cost and benefit sides of the equation have thrown the once-axiomatic value of higher education into question.

Let’s talk about money first. It’s no secret that the price of a degree has climbed rapidly in recent decades. Between 1985 and 2015, the average cost of attending a four-year institution increased by 120 percent, according to data compiled by the National Center for Education Statistics, putting it in the neighborhood of $25,000 per year – a figure pushing 40 percent of the median income.
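For the numerically inclined, that 120 percent figure compounds out to a deceptively modest annual rate. A quick back-of-the-envelope sketch in Python, using only the numbers above:

    # Implied average annual growth in the cost of attendance, 1985-2015,
    # from the 120% total increase cited above.
    total_multiple = 2.20             # a 120% increase means 2.2x the original cost
    years = 2015 - 1985
    annual_rate = total_multiple ** (1 / years) - 1
    print(f"{annual_rate:.1%} per year")   # ~2.7% per year, every year, for 30 years

Compounding quietly at under 3 percent a year is exactly how a price doubles without anyone quite noticing.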

That increase has left students taking more and bigger loans to pay for their educations. According to ValuePenguin, a company that helps consumers understand financial decisions, between 2004 and 2014 the number of student loan borrowers increased by 90 percent and their average balance by 80 percent. Among the under-thirty crowd, 53 percent of those with a bachelor’s degree or higher now report carrying student debt.

Then there’s time to consider. Optimistically, a bachelor’s degree can be obtained after four years of study. For the minority of students who manage this increasingly rare feat, that’s still a hefty investment: time spent on campus can’t be spent doing other things, like work, travel, or even just enjoying the twilight of youth.

And for all the money and time students are sinking into their post-secondary educations, it’s not exactly clear they’re getting a good deal – whether gauged by future earnings or by the measurable acquisition of knowledge. Consider the former: while the “college wage premium” is well documented, the forces powering it are up for debate. A Pew Research Center report from 2014 shows the growing disparity to be less a product of the rising value of a college diploma than of the cratering value of a high school diploma. The same report notes that while the percentage of degree-holders aged 25-32 has soared since the Silent Generation, median earnings for full-time workers of that cohort have more or less stagnated over the same period.

Meanwhile, some economists contend that to whatever extent the wage premium exists, it’s impossible to attribute to college education itself. Since the people most likely to be successful are also the most likely to go to college, we can’t know to what extent a diploma is a cause or consequence of what made them successful.

In fact, some believe the real purpose of formal education isn’t so much to learn as to signal to employers that a degree-holder possesses the attributes that correlate with success, a process known as signaling. As George Mason Professor of Economics (and noted higher-ed skeptic) Bryan Caplan has pointed out, much of what students learn, when they learn anything, isn’t relevant to the real world. Professor Caplan thinks students are wise to the true value of a degree, which could explain why almost no student ever audits a class, why students spend only about 14 hours a week studying, and why two-thirds of students leave university without attaining proficiency in reading.

Having spent the last 550-ish words bashing graduates and calling into question the financial returns on a degree, I should address the obvious question: am I saying college really isn’t worth your time and money? While I’d love to end it here and now with a hot take like that, the truth is that it’s a complicated, personal question, and I can’t give a definitive answer. What I can offer are some prompts that might help someone considering college to make that choice for themselves, based on things I wish I’d known before heading off to school.

  • College graduates fare better on average by many metrics. Even if costs of attendance are rising, they still have to be weighed against the potential benefits. Income, unemployment, retirement benefits, and health care: those with a degree really do fare better. Even if we can’t be sure to what extent, or in which direction, this relationship is causal, one could reasonably conclude the benefits are worth the uncertainty.
  • Credentialism might not be fair, but it’s real. Plenty of employers use education level as a proxy for job performance. If the signaling theory really is accurate, the students who pursue a degree without bogging themselves down in pointless knowledge are acting rationally. As Professor Caplan points out in what seems a protracted, nerdy online feud with Bloomberg View’s Noah Smith, the decision to attend school isn’t made in a cultural vacuum. Sometimes, there are real benefits to conformity – in this case, getting a prospective employer to give you a shot at an interview. Despite my having never worked as a sociologist (alas!), my degree has probably opened more than a few doors for me.
  • What and where you study are important. Some degrees have markedly higher returns than others, and if money is part of the consideration (and I hope it would be), students owe it to themselves to research this stuff beforehand.
  • For the love of god, if you’re taking loans, know how compound interest works. A younger, more ignorant version of myself once thought I could pay my loans off in a few years. How did I reach this improbable conclusion? I conveniently ignored the fact that interest on my loans would compound. Debt can be a real bummer. It can keep you tethered to things you might prefer to change, say a job or location, and it makes saving a challenge.
  • Relatedly, be familiar with the economic concept of opportunity cost. In short, this just means that time and money spent on one thing can’t be spent on another. To calculate the “economic cost” of college, students have to include the money they could have made by working for those four years. If we conservatively put this number at $25,000 per year, they should add $100,000 in lost wages to the other costs of attending college (less if they work during the school year and summers). The sketch after this list puts numbers to this point and the previous one.
  • Alternatives to the traditional four-year path are emerging. Online classes, some of which are offering credentials of their own, are gaining popularity. If they’re able to gain enough repute among employers and other institutions, they might be able to provide a cheaper alternative for credentialing the masses. Community colleges are also presenting themselves as a viable option for those looking to save money, an option increasingly popular among middle class families.
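Since the last two bullets lean on arithmetic, here’s a minimal sketch in Python that puts numbers to them. The $30,000 balance and 6 percent rate are hypothetical round figures; the $25,000 numbers come from earlier in the piece:

    # Compound interest: a hypothetical $30,000 loan at 6%, untouched for 4 years.
    # (Federal loans accrue interest daily and capitalize under specific rules;
    # annual compounding keeps the illustration simple.)
    balance = 30_000.00
    rate = 0.06
    for _ in range(4):
        balance *= 1 + rate
    print(f"Balance after four years without payments: ${balance:,.0f}")  # ~$37,900

    # Opportunity cost: tuition plus the wages not earned while studying.
    cost_of_attendance = 25_000    # per year, the NCES-based figure above
    foregone_wages = 25_000        # per year, the conservative estimate above
    economic_cost = 4 * (cost_of_attendance + foregone_wages)
    print(f"Four-year economic cost: ${economic_cost:,.0f}")  # $200,000

Run honestly, the numbers are sobering, which is rather the point.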

There’s certainly more to consider, but I think the most important thing is that prospective students take time to consider the decision and not simply take it on faith that higher education is the right move for everyone. After all, we’re talking about a huge investment of time and money.

A different version of this article was published on Merion West.

Ben Carson’s Tragically Mundane Scandal

Whatever else it might accomplish, President Donald Trump’s administration has surely earned its place in history for laying to rest the myth of Republican fiscal prudence. Whether they are the tax dollars of today’s citizens or tomorrow’s, high-ranking officials within Mr. Trump’s White House seem to have no qualms about spending them.

The latest in a long series of questionable expenses is, of course, none other than Department of Housing and Urban Development Secretary Ben Carson’s now infamous $31,000 dining set, first reported on by the New York Times.¹ Since the Times broke the story, Mr. Carson has attempted to cancel the order, having come under public scrutiny for what many understandably deem to be an overly lavish expenditure on the public dime.

At first blush, Secretary Carson’s act is egregious. As the head of HUD, he has a proposed budget of $41 billion in taxpayer money at his disposal. Such frivolous and seemingly self-aggrandizing spending undermines public trust in his ability to use taxpayer funds wisely and invites accusations of corruption. It certainly doesn’t help the narrative that, as some liberals have noted with derision, this scandal coincides with the proposal of significant cuts to the department’s budget.

But the more I think about it, the more I’m puzzled as to why people are so worked up about this.

Let me be clear: this certainly isn’t a good look for the Secretary of an anti-poverty department with a shrinking budget, and it’s justifiable that people are irritated. At a little more than half the median annual wage, $31,000 is a sum most of us would consider absurd to spend on dining room furniture. And the money that pays for it does indeed come from private citizens, who would probably have chosen not to buy Mr. Carson a new dining room with it.

And yet, in the realm of government waste, that amount is practically nothing.
Government has a long, and occasionally humorous, history of odd and inefficient spending.

Sometimes, it can fly under the radar simply by virtue of being bizarre. Last year, for example, the federal government spent $30,000 in the form of a National Endowment for the Arts grant to recreate William Shakespeare’s play “Hamlet” – with a cast of dogs. Other times, the purchase at hand is too unfamiliar to the public to spark outrage. In 2016, the federal government spent $1.04 billion expanding trolley service a grand total of 10.92 miles in San Diego: an average cost of nearly $100 million per mile.

Both of those put Mr. Carson’s $31,000 dining set in a bit of perspective. It is neither as ridiculous as the play nor as great in magnitude as the trolley. So why didn’t either of those incidents receive the kind of public ire he is contending with now?

The mundanity of Mr. Carson’s purchase probably hurts him in this regard. Not many of us feel informed enough to opine on the kind of money one should spend building ten miles of trolley track, but most of us have bought a chair or table. That reference point puts things in perspective and allows room for an emotional response. It’s also likely this outrage is more than a little tied to the President’s unpopularity.

Ironically, the relatively small amount of money spent might also contribute to this effect. When amounts get large enough, like a billion dollars, we tend to lose perspective – what’s a couple million here or there? But $31,000 is an amount we can conceptualize.

So it’s possible that we’re blowing this a little out of proportion for reasons that are more emotional than logical. But I still think the issue is a legitimate one that deserves more public attention than it usually gets, and it would be interesting to see the public apply this kind of pressure to other instances of goofy spending. Here’s hoping, anyway.

A version of this article originally appeared on Merion West.

1. I wrote this article the day before word broke that Secretary of the Interior Ryan Zinke had spent $139,000 upgrading the department’s doors.

A Political Future for Libertarians? Not Likely.

When it was suggested I do a piece about the future of the Libertarian Party, I had to laugh. Though I’ve been voting Libertarian since before Gary Johnson could find Aleppo on a map, I’ve never really had an interest in Libertarian Party politics.

Sure, the idea is appealing on a lot of levels. Being of the libertarian persuasion often leaves you feeling frustrated with politics, especially politicians. It’s tempting to watch the approval ratings of Democrats and Republicans trend downward and convince yourself the revolution is nigh.

But if I had to guess, the party will remain on the periphery of American political life, despite a relatively strong showing in the 2016 Presidential election. A large part of this – through no fault of the Libertarian Party – is due to anti-competitive behavior and regulation in the industry of politics. But a substantial amount of blame can be attributed to the simple and sobering fact that the type of government and society envisioned by hardcore Libertarians – the type that join the party – is truly unappealing to most of America.

Unless public opinion radically shifts, it feels like the Libertarian Party will mainly continue to offer voters a symbolic choice. Don’t get me wrong: I’m happy to have that choice, and it really would be nice to see people who genuinely value individual freedom elected to public office. But political realities being what they are, I’m going to hold off on exchanging my dollars for gold and continue paying my income taxes.

So that’s the bad news for libertarians. Here’s the good news: the cause of advancing human liberty isn’t dependent on a niche political party. The goal of libertarianism as a philosophy – the preservation and expansion of individual liberties – has no partisan allegiance. Victory for the Libertarian Party is (thankfully) not requisite for libertarians to get more of what they want.

Advancing their agenda has, for libertarians, proved to be more a question of winning minds than elections. While “capital-L” Libertarians remain on the political margins, aspects of libertarian thought are appreciated by people of all political persuasions and often attract broad support. Although no Libertarian Party member has ever held a seat in Congress, moved into a governor’s mansion, or garnered more than four percent of the national vote, many long-held libertarian ideas have enjoyed incredible success, and others are still gaining momentum.

Same-sex marriage is now the law of the land, as interracial marriage has been for half a century. Support for legalizing marijuana is at an all-time high (pun intended), and ending the larger ‘war on drugs’ is an idea gaining currency, not only in the US but worldwide. The draft is a thing of the past; the public is growing wary and weary of interventionist foreign policy. A plan to liberalize our immigration system, though stuck in legislative limbo, remains a priority for most Americans, and the United States remains a low-tax nation by developed-world standards, especially when it comes to individual tax rates.

And not all the good news comes from the realm of politics. Americans have maintained and expanded on a culture of philanthropy, per-capita giving having tripled since the mid-1950s. The rise of social media and the internet has made it easier than ever for people to exchange ideas. Technology of all sorts has lowered prices for consumers and helped people live more productive lives. Even space exploration – until recently exclusively the purview of governments – is now within private reach.

None of this was achieved through legislation written or signed into law by a Libertarian politician. But that’s not what really matters. What really matters is that people are freer today to live the kinds of lives they want, peacefully, and without fear of persecution. Yes, there is still much that might be improved, at home and certainly abroad. But in a lot of ways, libertarians can rest happily knowing that their ideas are winning, even if their candidates are not.

This article originally appeared on Merion West.

Human Mobility is Key to Fighting Poverty

More than fifty years into the “war on poverty,” government welfare programs remain the subject of much scrutiny. As the Trump administration unveils a new tax plan, fresh off numerous attempts to repeal and replace the Affordable Care Act, perennial questions about whether the government is doing enough to reduce poverty have resurfaced.

This debate often focuses almost exclusively on poor Americans, and solutions mostly center on the redistribution of resources via government transfers. On many levels, this makes sense. On the first count, non-Americans don’t vote, and politicians tend not to pay much attention to groups that cannot help them win elections. On the second, the government’s ability to act on poverty is somewhat limited — it can try to create policies that facilitate wealth, but it cannot actually produce wealth on its own. Spreading around some of the surplus is therefore an attractive option.

But from a utilitarian and humanitarian perspective, this framing represents a missed opportunity. Limiting the conversation to wealth transfers within an already wealthy nation encourages inefficient solutions at the expense of ideas that might do a lot more good for a lot more people: namely, freeing people who are not at their maximum productivity to pursue opportunity wherever it exists.

Between the EITC, TANF, SNAP, SSI, Medicaid, and other programs, the United States spent over $700 billion at the federal level in the name of alleviating poverty in 2015. A 2014 census report estimates that Social Security payments alone reduced the number of poor Americans by nearly 27 million the previous year. Whatever your stance on the long-run effects of welfare programs, it’s safe to say that in the short term, government transfers provide substantial material benefits to recipients.

Yet if the virtue of welfare programs is their ability to improve living standards for the needy, their value pales in comparison to the potential held by labor relocation.

Political boundaries are funny things. By crossing them, workers moving from poor to rich nations can increase their productivity dramatically. That’s not necessarily because they can make more products or offer better services — although that is sometimes the case as well — but rather because what they produce is more economically valuable. This is what economists refer to as the “place premium,” and it’s partly created by differences in opportunity costs between consumers in each location.

Median wages of foreign-born US workers from 42 developing countries have been shown to be 4.1 times higher than those of observably identical counterparts in their countries of origin. Some enthusiasts even speculate that the elimination of immigration restrictions alone could double global GDP. The place premium can be powerful enough to make low-skilled positions in rich countries economically attractive even to high-skilled workers from poor nations.

We have a lot of inequality in the United States, and that often masks the fact that we have very little absolute poverty. Even someone who is poor by American standards (an annual pre-transfer income of about $12,000 or less for a single-person household) can have an income that exceeds that of the global median household. Even relatively generous government transfers would be unlikely to so much as triple that income.

On the other hand, because they start with lower incomes, this same effect allows low-earning immigrants to increase their standard of living by proportions that can’t be matched by redistribution within a relatively wealthy population. For example, the average hourly wage in the US manufacturing sector is slightly over $20; in Mexico, it’s around $2.30. Assuming a manufacturer from Mexico could find a similar position in the United States, their income would increase roughly ninefold. To provide the same proportional benefit to a severely poor American — defined as a person or household with an income under half the poverty threshold — could cost up to $54,000.
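Here’s that arithmetic as a quick sketch, using the wage figures just cited; the severely poor income below is half the roughly $12,000 single-person poverty threshold mentioned earlier:

    # The place premium, using the manufacturing wages cited above.
    us_wage = 20.00    # approximate average hourly US manufacturing wage
    mx_wage = 2.30     # approximate average hourly Mexican manufacturing wage
    multiple = us_wage / mx_wage
    print(f"Income multiple from moving: {multiple:.1f}x")   # ~8.7x

    # Matching that proportional gain for a severely poor American:
    severely_poor_income = 6_000   # half the ~$12,000 single-person threshold
    print(f"Equivalent income: ${severely_poor_income * multiple:,.0f}")  # ~$52,000

No transfer program within the United States comes close to producing proportional gains of that size.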

What’s true across national borders is true within them. Americans living in economically desolate locations could improve their prospects by relocating to more prosperous and opportune areas. Indeed, this is exactly what’s been happening for decades. The percentage of Americans living in cities has increased steadily, going from 45% in 1910 to nearly 81% by 2010. Nor is relocation exclusively a long-term solution. During oil rushes in Alaska and North Dakota, populations within the two states exploded as people flocked to economic activity.

Recently, however, rates of migration have been dwindling. Admittedly, there are fewer barriers to intra-national migration than immigration. But there are still things we might do to make it easier for people to move where the money is.

One obvious solution would be to encourage local governments to cut back on zoning regulations that restrict new housing stock and make it less affordable. Zoning laws contribute heavily to the rising costs of living in the most expensive cities, leading to the displacement of poorer residents and the sequestration of opportunity. As with immigration, this poses a bit of a political problem — it requires politicians to prioritize the interests of the people who would live in a city over those of the people who currently live there — the ones who vote in local elections.

Relatedly, we might consider revising our approach to the mortgage interest deduction and other incentives for homeownership. While the conventional wisdom is that homeownership is almost always desirable because it allows the buyer to build equity in an appreciating asset, some studies have found a strong positive correlation between levels of homeownership and unemployment. The upshot is that tying up most of one’s money in a home reduces the ability and desire to move for employment, leading to unemployment and downward pressure on wages. Whether or not to buy a home is the buyer’s decision, but these data cast doubt on the idea that the government should subsidize such behavior.

If the goal of policy is to promote human well-being, then increasing mobility should be a priority for policymakers. As a species, as nations, as communities, and as individuals, we should strive for a more productive world. Allowing people the opportunity to relocate in the name of increasing their output is a near-free lunch in this regard.

But while the economic dream of frictionless markets is a beautiful one, we live in a world complicated by politics. It’s unrealistic to expect politicians to set aside the concerns of their constituents for the greater good. I will therefore stop short of asking for open borders, the abolition of zoning laws, or the removal of the mortgage interest deduction. Instead, I offer the humbler suggestion that we exercise restraint in such measures, striving to remove and lessen barriers to mobility whenever possible. The result will be a freer, more equal, and wealthier world.

This article originally appeared on Merion West.

Universal Basic Income is Probably Not the Future of Welfare

If for no other reason, universal basic income — that is, the idea of replacing the current means-tested welfare system with regular, unconditional cash payments to every citizen — is remarkable for the eclectic support it receives. The coalition for universal basic income (UBI) includes libertarians, progressives, a growing chorus of Luddites, and others still who believe a scarcity-free world is just around the corner. Given its popularity and growing concerns about coming economic upheaval and inequality, it’s tempting to believe the centuries-old idea is a policy whose time has finally come.

Personally, I’m not sold. There are several obstacles to establishing a meaningful universal basic income that would, in my mind, be nearly impossible to overcome as things stand now.

For one, the numbers are pretty tough to reconcile.

According to 2017 federal guidelines, the poverty level for a single-person household is about $12,000 per year. Let’s assume we’re intent on paying each American $1,000 per month in order to bring them to that level of income.

Distributing that much money to all 320 million Americans would cost $3.84 trillion, approximately the entire 2015 federal budget and far more than the $3.18 trillion of tax revenue the federal government collected that year. Even if we immediately eliminated all other entitlement payments, as libertarians tend to imagine, such a program would still require the federal government to increase its income by $1.3 trillion to avoid adding to the debt.

Speaking of eliminating those entitlement programs, hopes of doing so are probably far-fetched without a massive increase in taxation. A $1,000 monthly payment to every American — which again, would consume the entire federal budget — would require a lot of people currently benefiting from government transfers to take a painful cut. For example, the average monthly social security check is a little over $1,300. Are we really going to create a program that cuts benefits for the poor and spends a lot of money on the middle class and affluent?

In spite of the overwhelming total cost of such a program, its per capita impact would be pretty small, since the cash would be spread across a much larger population than current entitlements serve. For this reason, its merit as an anti-poverty program would be questionable at best.

Yes, you can fiddle with the disbursement amounts and exclude segments of the population — dropping minors from the dole would reduce the cost to around $2.96 trillion — to make the numbers work a little better, but the more you do that the less universal and basic it becomes, and the more it starts to look like a modest supplement to our existing welfare programs.
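For the curious, the arithmetic above condenses to a few lines of Python (the 247 million adult count is my inference from the $2.96 trillion figure, not an official statistic):

    # Cost of a $1,000-per-month basic income, using this piece's round numbers.
    monthly_payment = 1_000
    everyone = 320_000_000     # total US population
    adults = 247_000_000       # adults only (implied by the $2.96T figure above)
    revenue_2015 = 3.18        # 2015 federal tax revenue, in trillions

    universal = monthly_payment * 12 * everyone / 1e12
    adults_only = monthly_payment * 12 * adults / 1e12
    print(f"Universal: ${universal:.2f}T vs revenue ${revenue_2015:.2f}T")  # $3.84T vs $3.18T
    print(f"Adults only: ${adults_only:.2f}T")                              # $2.96T

However you slice it, the program swallows the entire federal budget before paying for anything else.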

*

Universal basic income’s problems go beyond the budget. If a UBI were somehow passed (which would likely require our notoriously tax-averse nation to approve trillions of dollars in additional government spending), it would set us up for a slew of contentious policy battles in the future.

Entitlement reform, already a major preoccupation for many, would become a more pressing concern in the event that a UBI of any significant size were implemented. Mandatory spending would increase as more people draw benefits for more years and continue to live longer. Like the entitlements it may or may not replace, universal basic income would probably be extremely difficult to reform in the future.

Then there’s the matter of immigration. If you think reaching consensus on immigration policy is difficult in the age of President Trump, imagine how it would look once we began offering each American a guaranteed income large enough to offer an alternative to paid work. Bloomberg columnist Megan McArdle estimates that establishing such a program would require the United States to “shut down immigration, or at least immigration from lower-skilled countries,” thereby leading to an increase in global poverty.

There’s also the social aspect to consider. I don’t want to get into it too much because everybody’s view of what makes people tick is different. But it seems to me that collecting money from the government doesn’t make people especially happy or fulfilled.

The point is, part of what makes universal basic income appear realistic is the political coalition backing it. But libertarians, progressives, and the rest of the groups superficially united behind this idea have very different opinions about how it would operate and very different motivations for its implementation. When you press the issue and really think through the consequences, the united front for universal basic income begins to crack.

*

Don’t get me wrong; there’s plenty about universal basic income that appeals to this author’s libertarian sensibilities. I think there’s a strong argument for reforming the welfare system in a way that renders it more similar to a basic income scheme, namely by replacing in-kind payments and some subsidies with direct cash transfers. Doing so would, as advocates of UBI claim, increase the utility of the money transferred and reduce government paternalism, both of them goals I find laudable.

I should also note that not all UBI programs are created equal. Universal basic income has become something of a catch-all term used to describe policies that are quite different from each other. The negative income tax plan Sam Bowman describes on the Adam Smith Institute’s website is much more realistic and better thought out than a system that gives a flat amount to each citizen. Its two greatest strengths are precisely that it is neither unconditional nor given to everyone equally.

However, the issues of cost and dispersion, both consequences of UBI’s defining characteristics, seem to me insurmountable. Unless the United States becomes dramatically wealthier, I don’t see us being able to afford to pay any significant amount of money to all or most people. We would need to replace a huge amount of human labor with automation before this plan could start to look even a little realistic. Even if that does happen, and I’m not sure it will anytime soon, I think there are better things we could do with the money.

This article originally appeared on Merion West.

Obamacare’s Complicated Relationship with the Opioid Crisis

The opioid epidemic, having evolved into one of the greatest public health crises of our time, is a contentious aspect of the ongoing debate surrounding the Republican healthcare proposal.

Voices on the left worry that the repeal of Obamacare’s Medicaid expansion and essential health benefits would worsen the crisis by cutting off access to treatment for addiction and substance abuse. Mother Jones and Vox have both covered the issue. Former President Obama’s June 22nd Facebook post stated his hope that Senators looking to replace the ACA would ask themselves, “What will happen to the Americans grappling with opioid addiction who suddenly lose their coverage?”

On the other side of things, there are theories that the Affordable Care Act actually helped finance–and perhaps exacerbated–the opioid epidemic. The story goes something like this: expanded insurance coverage was a primary goal of the ACA. Two of its policies supporting that goal–allowing adults up to 26 years of age to remain on their parents’ insurance and expanding Medicaid to cover a greater percentage of the population–unintentionally connected at-risk cohorts (chronically unemployed prime-age men and young, previously uninsured whites with an appetite for drugs, to name two) with the means to obtain highly addictive and liberally prescribed pain medications at a fraction of their street price. (Some knowledge of labor force and other social trends helps paint a clearer picture here.) Once addicted, many moved on to cheaper and more dangerous alternatives, like heroin or synthetic opioids, driving the growth in overdose deaths.

This is a really interesting, if tragic, narrative, so I decided to take a look. I focused on state-level analysis by comparing CDC Wonder data on drug-induced deaths with Kaiser Family Foundation data on expansion status and growth in Medicaid enrollment. The graph below plots states based on their rates of growth in Medicaid rolls and overdose deaths from 2010 to 2015, and is color-coded for expansion status: blue for states that expanded coverage before or by 2015, yellow for states that expanded during 2015, and red for states that hadn’t expanded by the end of 2015. (A note: this isn’t an original idea; for a more in-depth analysis, check out this post.)
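For readers who want to poke at the data themselves, a chart like the one described can be assembled along these lines. This is a sketch, not the code behind the original graph; the file name and column names are hypothetical stand-ins for the CDC Wonder and Kaiser Family Foundation extracts:

    # Hypothetical reconstruction of the state-level scatter plot described above.
    import pandas as pd
    import matplotlib.pyplot as plt

    # Assumed columns: medicaid_growth and od_growth (percent change, 2010-2015)
    # plus an expansion status in {"pre2015", "during2015", "none"}.
    df = pd.read_csv("state_medicaid_od_2010_2015.csv")

    colors = {"pre2015": "blue", "during2015": "gold", "none": "red"}
    plt.scatter(df["medicaid_growth"], df["od_growth"],
                c=df["expansion"].map(colors))
    plt.xlabel("Growth in Medicaid enrollment, 2010-2015 (%)")
    plt.ylabel("Growth in drug-induced deaths, 2010-2015 (%)")
    plt.title("Medicaid expansion and fatal overdoses, by state")
    plt.show()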

What’s interesting is the places where overdoses have increased the quickest. The fastest growing rates of overdose deaths were mostly in states that had expanded Medicaid by 2015; the only non-expansion state to grow by more than 50% since 2010 was Virginia, which in 2015 still had a relatively low rate of 12.7 fatal overdoses per 100,000 population. For some perspective on how bad things have gotten, that rate would have been the 19th highest among states in 2005; today Virginia ranks 42nd in terms of OD rate.

On the other hand, there isn’t a noticeable correlation between increases in Medicaid coverage and increases in the rate of fatal overdoses. Additionally, the rates of overdose deaths in expansion states were increasing before many of the Affordable Care Act’s key provisions went into effect. Starting around 2010, there was a dramatic divergence between would-be expansion states and the rest. It’s possible that states with accelerating rates were more likely to expand coverage in response to increased frequency of fatal overdoses.

So what’s the deal? Did the Affordable Care Act agitate the opioid epidemic? Obviously I don’t have the answer to that, but here’s my take:

I think it would be difficult to argue it hasn’t been a factor on some level, given the far higher rates of opioid prescription, use, and death among Medicaid patients than in the general population, as well as the state-level trends in OD rates (with the acknowledgement that state-level analysis is pretty clunky in this regard; for many reasons, West Virginia’s population isn’t really comparable to California’s). The fact that state Medicaid programs are adjusting regulations for painkiller prescriptions is itself an acknowledgement of that.

But if the ACA had a negative effect, I’d think it must register as a drop in the bucket. There are so many pieces to this story: lax prescription practices and the rise of “pill mills,” declining labor force participation, sophisticated distribution networks of Mexican heroin, bogus research and marketing on pain management, stark disparities between expectations and reality. It’s nice to think there’s one thing we can change to solve everything, but I don’t think we’re going to be so lucky.

Here’s another twist: even if the Affordable Care Act had some negative impact, it could very well be that ACA repeal would make things worse. Scrapping essential benefits coverage could lead to a loss or reduction of access to addiction treatment for millions of Americans. Moreover, gaining insurance has been shown to alleviate feelings of depression and anxiety. How, then, might we guess 20 million Americans will feel after losing their insurance? Given the feedback loop between pain and depression, this question deserves a lot of deliberation.

In Defense of the Center

The mushy center never inspires passion like ideological purity. The spectacle of radicalism puts asses in the seats. It’s hard, on the other hand, to imagine rebellious, mask-clad youths taking to the street in the name of fine-tuning marginal tax rates.

Oh sure, you may see a protest here and there, and practically everyone grumbles about this or that issue in which they have an interest. But as the great philosopher Calvin once said: a good compromise leaves everybody mad.


Some more so than others. Opining in the New York Times, Senator Bernie Sanders suggests Democrats can reverse their political fortunes by abandoning their “overly cautious, centrist ideology,” and more closely approximating the policy positions of a Vermont socialist.

I suppose this could be sound political advice. Everyone has an idea of the way they’d like the world to work, and Sanders’ ideas are appealing to a great many people. You could argue–as Sanders does–that Republicans have had some success with a similar strategy following the Obama years. But, as they’re finding out, ideological purity makes for better campaign slogans than successful governing strategy.

Here’s the thing: We live in a big, diverse country. People have very different wants and needs, yet we all live under the same (federal) laws. Our priorities must sometimes compete against each other, which is why we often end up with some of what we want, but not everything. Striking that balance is tough, and by necessity leaves many people unhappy. We don’t always get it right. But when you’re talking about laws that affect 320 million people, some modesty, or if you prefer, “caution,” is in order.

Alas, Bernie is not of a similar mind. In fewer than 1,000 words, he offers no shortage of progressive bromides without mention of the accompanying price tags. It’s one thing to form a platform around Medicare-for-all, higher taxes on the wealthy (their “fair share”), aggressive clean energy commitments, a trillion-dollar infrastructure plan, or free tuition at state universities and lower interest rates on student loans. But all of them? At once?!

Sanders should remember the political and economic lessons of Vermont Governor Peter Shumlin’s foray into single-payer healthcare: Government spending–and thus government activity–is constrained by the population’s tolerance for taxation (And on the other side of things, their tolerance for a deficit of public services. Looking at you, Kansas). Go too far and you risk losing support. And unless you’re willing to rule by force, as extremists often must, that will cost you your ability to shape public policy.

For what it’s worth, I don’t think the Senator’s advice would do the Democrats any favors. The Democrats didn’t move to the center-left because there was widespread and untapped support for endless government programs in America. They did it because they collided with the political and economic reality of governance in our country. Americans are willing to pay for some government programs, but not at the rate Europeans pay to have much more expansive governments. The left, therefore, shouldn’t play an all-or-nothing game, but instead think about what it does well and how it can appeal to, rather than alienate, the rest of the country. That’s going to involve compromise.

Update: Following Jon Ossoff’s narrow defeat in a Georgia special election, there’s been a lot of discussion about whether a more progressive candidate would have fared better. Personally, I find it hard to believe centrism and fiscal conservatism worked against Ossoff in a historically Republican district. Much more believable is Matt Yglesias’ related-but-different take that Ossoff’s reluctance to talk policy left a void for the opposition to exploit, allowing them to cast him as an outsider.

One thing seems certain: the rift within the Democratic party isn’t going away anytime soon.

No, the Interest on Your Student Loan Isn’t Too High. In fact…

It seems like more often than not I’m opening these blog posts with an apology for a multi-week hiatus. Since nobody’s emailed to check on my well-being, I can only infer my readership has gotten on fine without my wonk-Jr. takes on public policy and other matters of high import. Fair enough; but don’t think your demonstrated lack of interest will spare you from a quick update.

Actually, it’s all good news: I’ve been having fun learning R (a statistical language), looking for a new apartment, and testing the limits of a 27-year-old liver. I saw Chance the Rapper and a Pirates game in Pittsburgh, which was awesome. The last article I wrote had some real success and was republished in several places, even earning a shout-out from John Stossel.

The big update is that my stint as a (purely) freelance writer has mercifully drawn to a close; I now write for a non-partisan public policy group. In fact, this very blog was one of my strongest selling points, according to my manager. It just goes to show you, kids: if you toil in anonymity for two years, eventually something will go your way.

*

Okay, enough about me. Let’s talk about a topic close to the heart of many millennials: student loans. More specifically, I want to talk about the interest rates charged on undergraduate student loans.

That interest rates are too high is, unsurprisingly, a common gripe among borrowers. If I had a nickel for every twenty-something I’ve overheard complain that the federal government shouldn’t profit off student loans…well, it still wouldn’t cover one month’s interest. However, this sentiment isn’t limited to overqualified baristas; popular politicians like Elizabeth Warren and Bernie Sanders–and even unpopular politicians–have publicly called for loans to be refinanced at lower rates and decried the “profiteering” of the federal government. From Bernie Sanders’ website:

Over the next decade, it has been estimated that the federal government will make a profit of over $110 billion on student loan programs. This is morally wrong and it is bad economics. Sen. Sanders will fight to prevent the federal government from profiteering on the backs of college students and use this money instead to significantly lower student loan interest rates.

Under the Sanders plan, the formula for setting student loan interest rates would go back to where it was in 2006. If this plan were in effect today, interest rates on undergraduate loans would drop from 4.29% to just 2.37%.

It makes no sense that you can get an auto loan today with an interest rate of 2.5%, but millions of college graduates are forced to pay interest rates of 5-7% or more for decades. Under the Sanders plan, Americans would be able to refinance their student loans at today’s low interest rates.

As one of those debt-saddled graduates, and one of the chumps who took loans at a higher rate of interest, I would obviously be amenable to handing over less of my hard-earned money to the federal government. But as a person concerned with the larger picture, I have to say this is a really bad idea. In fact, rates should be higher, not lower.

First of all, the progressive case for loan refinancing or forgiveness only holds up under the lowest level of scrutiny. Such a policy would overwhelmingly benefit borrowers from wealthy families, who hold the majority of student loan debt. Conversely, most defaulters hold relatively small amounts of debt. Fiddling with interest rates shouldn’t be confused with programs that target low-income students, like the Pell Grant, which are another matter entirely and not the subject of my criticism.

More to the point, the federal government probably isn’t making any money on student loans. Contrary to the claims of Senators Warren and Sanders, which rely on estimates from the Government Accountability Office (GAO) that put federal profit on student loans at $135 billion from 2015-2024, the Congressional Budget Office (CBO), using fair-value estimation, shows student loans costing the federal government $88 billion over the same period.

The discrepancy between the CBO and GAO figures comes from the former’s inclusion of macroeconomic forecasts. Essentially, the CBO thinks the risk of default on student loans is higher than the GAO does, due to forces beyond individuals’ control.

Evidence suggests it’s unwise to underestimate the risk associated with student loans. According to a study by the liberal think tank Demos, nearly 40% of federal student loan borrowers are in default or more than 90 days delinquent. Add to that the fact that student loans are unsecured (not backed by collateral or repossessable assets, like a car or house), and they start to look like an incredibly risky venture for the federal government, and ultimately, taxpayers.

That conclusion is deeply unpleasant, but not really surprising if you think about it. Ever notice how the interest rates on private student loans–approximately 10% of the market–are much higher? That’s not because private lenders are greedy; it’s because they can’t lend at the rate of the federal government without losing money.
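A stylized one-period loan makes the mechanics plain. Every number below is hypothetical, chosen only for illustration:

    # Break-even rate on a risky, unsecured one-period loan.
    risk_free = 0.02    # return available on a safe asset (hypothetical)
    p_default = 0.20    # probability the borrower defaults (hypothetical)
    recovery = 0.50     # fraction of the balance recovered in default (hypothetical)

    # Lend $1: collect (1 + r) with probability (1 - p), or the recovery with p.
    # Break even when the expected payoff matches the safe payoff (1 + risk_free).
    r = ((1 + risk_free) - p_default * recovery) / (1 - p_default) - 1
    print(f"Break-even rate: {r:.1%}")   # 15.0%

The exact inputs don’t matter; the direction does. More default risk and less collateral push the break-even rate up, which is why private lenders can’t match the government’s terms without losing money.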

This is all important because the money that finances student loans has to come from somewhere. Be it infrastructure upgrades, federal support for primary education, or Shrimp Fight Club, the money spent on student loans isn’t available for competing priorities. This matters even more when you consider the loss the federal government is taking on these loans, the cost of which is passed on to future taxpayers in the form of higher taxes or lower spending. Since higher education is only one among infinite human desires, we need to decide how much of our finite resources to devote to it. Properly set interest rates are one way (probably the best way) to figure that out.

The irony, of course, is that doing so would require the government to act more like a private lender–the very thing it’s designed not to do! Our student loan system ensures virtually anyone who wants to study has the money to do so, regardless of the likelihood they’ll be able to repay. One of the nasty side effects of this indiscriminate lending is a large amount of distressed borrowers, who now find themselves in the uncomfortable position of digging out from under a mountain of debt they likely shouldn’t have been able to take on.

More so than other forms of government spending, student loans have specific, discernible beneficiaries: the students who get an expensive education financed by the taxpayer at below-market rates. Sure, you can argue there’s some spillover; society does benefit from having more highly-trained workers. But most of the time, highly skilled labor is rewarded with higher wages. That being the case, is it really too much to ask for borrowers to pay a level of interest that reflects the actual cost of issuing their loans?

Yes, this would be discouraging for some: particularly those who want to pursue non-remunerative fields of study. That’s not such a bad thing; higher interest rates would steer people away from obtaining degrees with low salary expectations, which would–by my reckoning–reduce rates of delinquency and default over the long term. They would also help mitigate some of the pain of defaults when they do happen.

But–you might protest–you can’t run the government like a business! And sure, a lot of the time, you’d be right. However, I really think this is one area where doing so is appropriate–even desirable. Hear me out.

When the government can fund itself through profitable investments rather than zero-sum transfers, it should. If we’re going to have a government of any size (and few suggest that we shouldn’t), then we need to pay for it. Which sounds like the preferable way for that to happen: voluntary, productive, and mutually beneficial investments in society; or the forceful appropriation of private resources? I’m not suggesting the former could entirely replace the latter, but when it can, I think it absolutely should.

Astute readers will realize if the government decides to lend profitably, it will have to compete with private lenders, which would cut into its margins and make its presence in the market redundant. So maybe it’s just a pipe dream. But if profitable lending isn’t possible, the federal government should at least try to minimize losses. One way or another, that means higher interest rates.

Insurance Coverage Numbers Are Important, But Not All-Important

Whether you’re into this sort of thing or not, you’ve probably been hearing a lot about healthcare policy these days. Public debate has roiled as Republican lawmakers attempt to make good on their seven-year promise to repeal and replace the Affordable Care Act (ACA). As the debate rages on, one metric in particular appears to hold outsize importance for the American people: the number of Americans covered by health insurance.

Analysis by the Congressional Budget Office, which showed that 14 million more Americans could be without coverage by 2018 under the Republican replacement, caused intense public outcry and was frequently cited as a rationale for not abandoning the ACA. There is immense political pressure not to take actions that will lead to a large loss of coverage.

But here’s the thing: the relevant metric by which to judge Obamacare isn’t insurance coverage numbers. To do so is to move the goal posts and place undue importance on a number that might not be as significant as we imagine.

The ultimate point of health insurance, and the implied rationale for manipulating insurance markets to cover sicker people, is that people will use insurance as a means by which to improve their health, not just carry a plastic card in their wallets.

Health Insurance ≠ Health

The impulse to use insurance coverage as a proxy for health is misguided but understandable. For one thing, the uninsured rate is a simple, single number that has dropped precipitously since the implementation of the ACA; that makes it a great marketing piece for supporters. For another, health insurance is the mechanism by which most of us pay for most of our healthcare.

And yet in 2015 the uninsured rate fell to 10.5% (down from 16.4% in 2005) while age-adjusted mortality increased for the first time in a decade.

It turns out a nominal increase in the number of insured Americans doesn’t necessarily translate into improved health outcomes for those individuals. A newly released paper from the National Bureau of Economic Research (NBER) finds that while the ACA has improved access to healthcare, “no statistically significant effects on risky behaviors or self-assessed health” can be detected among the population (beyond a slight uptick in self-reported health in patients over 65).

These results are consistent with other studies, like the Oregon Medicaid Experiment, which found no improvement in patients’ blood pressure, cholesterol, or cardiovascular risk after enrolling them in Medicaid, even though they were far more likely to see a doctor. There were, however, some notable-but-mild psychic benefits, such as a reduction in depression and stress among enrollees.

In short, despite gains in coverage, we haven’t much improved the physical health of the average American, which is ostensibly the objective of the ACA.

Why Not?

To be fair, the ACA is relatively young; most of its provisions didn’t go into effect until 2014. It may well be that more time needs to pass before we start to see a positive effect on people’s health. But there are a few reasons to think those health benefits may never materialize–at least, not to a great extent.

A lot of what plagues modern Americans (especially the poorest Americans) has more to do with behavior and environment than access to a doctor. Health insurance can be a lifesaver if you need help paying for antiretroviral medication, but it won’t stop you from living in a neighborhood with a high rate of violent crime. It won’t make you exercise, or change your diet, or stop you from smoking. It won’t force you to take your medicine or stop you from abusing opioids, and it certainly won’t change how you commute to work (that’s a reference to the rapid increase in traffic deaths in 2015).

Here’s something to consider: A lot of the variables that correlate to health–like income and education–also correlate to the likelihood of having health insurance. If we want healthier Americans, there may be more efficient ways to achieve that than expanding insurance coverage, like improving employment and educational opportunities. Maybe something creative, like Oklahoma City’s quest to become more walker-friendly, could yield better results?

Of course, all things being equal, more insurance coverage is better. But nothing comes without cost, and as a society we want to be sure that benefits justify costs. So far, that’s not clear. This poses an existential question about our current pursuit of universal coverage, and, by extension, the relevance of coverage as a metric for the success of healthcare policy: If insurance isn’t the cure, why are we prescribing it with such zeal?

What’s Up with U.S. Public Education?

*I wrote this a while ago, but didn’t publish it. I was on vacation–sue me. I know the internet has the attention span of a five-year-old, and people aren’t really talking about DeVos anymore, but I’m hoping this is still interesting to someone.

The confirmation of Betsy DeVos as Secretary of Education was perhaps the hardest-won victory of President Trump’s nascent administration. Opposition to DeVos ran deep enough to require Vice President Pence to cast a historic tie-breaking vote.

To hear it from those on the Left, DeVos is uniquely unqualified for the position. Her lack of personal experience with the public school system, coupled with her one-sided approach to education and purported ignorance of education policy, makes her unsuited to the role, they argue.

On the Right, the response has been to call into question the political motivations behind opposition to DeVos. Teachers’ unions, after all, are some of the biggest spenders in U.S. politics and their economic interests are threatened by the kind of reforms DeVos’ appointment might foreshadow.

It’s hard to know if either or both sides are being overly cynical. I don’t pretend to have any deep knowledge of DeVos or her new mantle. But one thing seems empirically true: the status quo of public education isn’t above reproach.

More Money, Same (Math, Science, Literacy) Problems

According to data from the National Center for Education Statistics (NCES), per pupil spending on public education has increased roughly 1.7% annually since 1980. Student performance, however, has largely stagnated over the same period by various metrics. To somewhat immodestly quote myself:

The statistics are damning: Literacy rates among 17-year-old Americans peaked in 1971. Standardized testing reveals that math scores peaked in 1986. Test scores show a lack of improvement in math, science, and reading, in which respectively 25%, 22%, and 37% of American students are proficient.

This kind of stagnation isn’t typical among other nations; the United States showed much smaller levels of inter-generational improvement than other OECD nations. Up until about 1975, Americans were scoring significantly higher in math and literacy than Americans born before them. Since 1975, scores have plateaued, even adjusting for race and foreign-born status of students. As [Gallup’s] study states, this implicates the entire US school system.

Test scores aren’t the only indicators of educational dysfunction. Fully 60% of first-year college students need to take remedial courses in either math or English (to be fair, you might attribute this in part to college admission policies). Companies are also reporting longer vacancies for STEM positions and increasingly are forced to delay projects or look outside the U.S. for workers.

To be clear, it’s not that US public schools are producing particularly terrible outcomes (though they’re admittedly middling within the developed world). The real problem is that spending on public education is becoming increasingly inefficient; we’re putting more and more resources into it and receiving little or no additional benefit. This is a long-term trend that should be addressed immediately to avoid throwing good money after bad.
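To put that inefficiency in compounding terms, take the NCES growth figure above at face value (a sketch, not a precise accounting):

    # Cumulative effect of ~1.7% annual growth in per-pupil spending since 1980.
    rate = 0.017
    multiple = (1 + rate) ** (2015 - 1980)
    print(f"Per-pupil spending, 2015 vs 1980: {multiple:.2f}x")   # ~1.8x

Roughly 80 percent more per student, for test scores that peaked decades ago.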

In fairness, I have to point out that speaking of public schools in national terms risks obscuring that some public schools–usually found in high-income neighborhoods–perform incredibly well. However, unequal educational outcomes are often considered a bug, rather than a feature, of the public school system, which charter schools have in some cases been able to address with varying degrees of success (though there are charges that this is only possible because charters are given greater latitude in selecting their students).

The Status Quo Is Hard on Teachers, Too

There is a perception among some that public school teachers are profiting hand over fist as a result of teachers’ unions, at the expense of students. But the truth is a little more complicated.

On one hand, strong teachers’ unions have engendered some policies that arguably favor educators over students. Teacher firing rates, for example, are extremely low. This is especially true for tenured teachers, of which an average of 0.2 are dismissed per district for poor performance annually, according to the National Center for Education Statistics.

This is made possible (at least in part) by what effectively amounts to state-sanctioned local monopolies on education. Constraining families’ choice of schools impedes the normal market mechanisms that would weed out inefficient suppliers (at least, that’s the theory embraced by school choice advocates). This isn’t illogical, and it explains the somewhat rare rift between black parents and the Democratic party line on school choice.

Consider a thought experiment: imagine families were forced to shop for food only in their own neighborhoods. What might we expect to happen to the quality of food consumed by people in poor areas? What if we put limits on the number of new stores that could open?

In this light, it might be accurate to say that policies that require students to attend schools in their district prioritize the school system over the scholars.

On the other hand, a lot of teachers are being harmed by the current system–particularly the young and good ones.

Under current agreements, teacher compensation is determined in large part by longevity, both within the profession and within a given district. Young teachers–especially women teaching young children–are often underpaid relative to other professions.

[Chart omitted. Source: No Recovery, Gallup 2016]

Additionally, collective bargaining agreements have led to pay compression (a narrowing of the pay gap between high and low performers) among teachers, which penalizes high performing teachers and benefits low performing teachers. Correspondingly, there has been a detectable decline in standardized test scores of new teachers since the 1960s.¹

The combination of longevity-driven pay and salary compression has made teaching a less attractive profession for the best candidates, who can earn more in comparable fields. A 2014 survey by the American Federation of Teachers revealed that merely 15% of teachers report high levels of enthusiasm about their profession, despite 89% having felt highly enthusiastic at the beginning of their careers.

*

What might we say about an education system that grows increasingly expensive without improvement for students or teachers? We might say that it needs work and we should be open to new ideas, in whatever form they might come. It might also be wise to proceed with caution; for better or worse, this is the system we have right now.

I don’t know if Mrs. DeVos’ agenda will result in improvements. The divergent problems of climbing spending and poor teacher incentives could prove difficult to address simultaneously, especially in the current political climate. But we should all remember that the true goal of an education system–public, private, or somewhere in between–is to efficiently increase human capital. How that happens should be of secondary concern.

  1. The study I cited found these results to be true only among female teachers. For some reason, scores of incoming male teachers improved slightly over this period. If anyone has any theories as to why this might be, I’d love to hear them.