State of Sin

States are becoming increasingly permissive of various “sinful” economic activities and goods — those understood to be harmful to consumers — lured, at least in part, by the promise of the tax revenue they represent. This has certainly been part of the rationale in my home state of Massachusetts, where within the last year the state’s first full casino — MGM Springfield, located a few blocks from my apartment — and its first recreational marijuana dispensaries opened. Since the fiscal year just ended, now seems like a good time to assess how things are going in that regard.

First, the casino: Before opening its doors, MGM Springfield told state regulators it expected $418 million in gambling revenue over its first full twelve months of operation — $34.8 million per month. According to the Massachusetts Gaming Commission’s June 2019 report, it hasn’t come within $7 million of that mark yet.

[Chart: MGM Springfield gambling revenue]

Since September, its first full month of operation, the casino has generated nearly $223 million in gambling revenue. The state’s take is a quarter of that, about $55.7 million. That’s two-thirds of what was estimated. MGM Springfield’s President attributes its lower-than-expected revenue to a poor projection of the casino’s clientele — fewer “high rollers” from the Boston area and more from up and down the I-91 corridor.
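For the curious, here’s that back-of-envelope math as a quick Python sketch; the inputs are just the figures cited above.

```python
# Quick check of the MGM Springfield figures cited above.
projected_annual = 418e6       # MGM's projected first-year gambling revenue
projected_monthly = projected_annual / 12
actual_to_date = 223e6         # gambling revenue, September through June
months_open = 10

state_take = 0.25 * actual_to_date        # the state's 25% cut
expected_to_date = projected_monthly * months_open

print(f"Projected monthly revenue: ${projected_monthly / 1e6:.1f}M")  # $34.8M
print(f"State's take so far: ${state_take / 1e6:.1f}M")  # prints $55.8M; the $55.7M cited reflects the unrounded sales total
print(f"Actual vs. projected: {actual_to_date / expected_to_date:.0%}")  # ~64%, i.e. about two-thirds
```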

The introduction of new avenues for gambling is well known to cannibalize existing revenue sources. So add to MGM Springfield’s list of woes that the much flashier Wynn Casino recently opened in Everett, MA, a quick trip from Boston, and that neighboring East Windsor, CT is opening another casino next year.


Massachusetts’ venture into marijuana has been slightly more successful. Sales were supposed to begin in July 2018, the start of the fiscal year, but were delayed until November. Still, the State Revenue Commissioner estimated Massachusetts would collect between $44 million and $82 million from the combined 17% tax (Massachusetts’ normal 6.25% sales tax plus a 10.75% excise tax) over fiscal year 2019. If my math is right, that works out to an expected range of about $32 million to $60 million in sales every month for the remaining eight months of the fiscal year, a threshold met for the first time in May, according to sales data from the Massachusetts Cannabis Control Commission.
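Here’s that math spelled out, assuming the straight 17% combined rate and eight selling months:

```python
# Monthly sales implied by the Revenue Commissioner's estimate, assuming
# the combined 17% rate and the eight selling months left in FY2019.
tax_rate = 0.0625 + 0.1075             # sales tax plus excise = 17%
revenue_low, revenue_high = 44e6, 82e6
months_remaining = 8

sales_low = revenue_low / tax_rate / months_remaining     # ~$32M per month
sales_high = revenue_high / tax_rate / months_remaining   # ~$60M per month
print(f"${sales_low / 1e6:.0f}M to ${sales_high / 1e6:.0f}M in sales per month")
```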

[Chart: Marijuana sales revenue]

As of June 26, the last time the data were updated, marijuana sales totaled $176 million, which would put tax revenue somewhere around $30 million this fiscal year. Not bad, but not a great showing either — and a bit surprising to me, given the traffic I’ve had to wade through passing a dispensary on my way to work. Furthermore, the state is probably constrained in its ability to raise the excise tax on marijuana, since that could push buyers back into the informal market. And as more states in the region legalize, there’s a good chance sales will drop off somewhat.
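That revenue figure is just the combined 17% rate applied to sales to date, setting aside any local-option taxes:

```python
# Tax revenue implied by sales to date at the combined 17% rate.
sales_to_date = 176e6
tax_rate = 0.17
print(f"~${sales_to_date * tax_rate / 1e6:.0f}M in combined tax revenue")  # ~$30M
```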

On the other hand, sales of marijuana are clearly ramping up as more stores open, and making projections about a brand new industry can’t be easy. I think people more knowledgeable about the regulatory rollout would also contend that Massachusetts bureaucrats are at least partly responsible for the relatively poor sales. The first shops were concentrated in low-population areas of the state, and the closest one to Boston didn’t open until March. Still, the state was off on this one, too. (I thought I was the only one to notice this, but I guess not.)

*

A few admittedly tangential reflections on this: The positive spin on the commercialization of marijuana and the proliferation of casinos is that the state is growing more respectful of individual autonomy, abandoning harmful and ultimately unsuccessful prohibitive policy and allowing market forces to dictate what forms of entertainment are viable. If the state should make a few bucks in the process, all the better. Right?

Well, maybe. My natural sympathies lie with the above assessment, but the state’s financial incentive complicates the picture — especially insofar as new sin taxes are attractive alternatives to the prospect of raising traditional taxes.

Taxing the consumption of vices is a markedly regressive form of revenue generation. The most salient example is tobacco: its use is more common among those with less education and those below the poverty line, and those same populations suffer greater negative health effects from smoking. But it’s also broadly true that profits from the sale of vices tend to be concentrated among a relatively small group of “power users.” The top tenth of drinkers consume half of all alcohol sold in the United States, for example. I don’t have any data on this at the moment, but if I had to guess, the pathological consumption of vices is probably negatively correlated with the propensity to vote.

The cynical take, therefore, is that the newfound permissiveness of the state is a financially motivated abdication of the state’s most fundamental obligations, a mutually beneficial pact between “limbic capitalists” and politicians.

Ironically, sin taxes have notable limitations as revenue-raisers. For one, unlike other taxes, sin taxes are supposed to accomplish two contradictory goals: curbing consumption and raising revenue. Prioritizing the former usually requires setting rates above their revenue-maximizing points. It can also encourage regulators, as with alcohol and tobacco, to tax on a per-unit basis, tying revenue growth to consumption patterns rather than prices. While they may be tempting stop-gaps, sin taxes are not a long-term budgetary fix, and analysis of their social costs and fiscal benefits should bear that in mind.


Business Is Getting Political—and Personal

As anyone reading this blog is undoubtedly aware, Sarah Huckabee Sanders, the current White House Press Secretary, was asked last month by the owner of a restaurant to leave the establishment, on the grounds that the owner and her staff felt a moral imperative to refuse service to a member of the Trump administration. The incident, and the ensuing turmoil, highlights the extent to which business has become another political battleground—a concept that makes many anxious.

Whether or not businesses should take on political and social responsibilities is a fraught question—but not a new one. Writing for the New York Times in 1970, Milton Friedman famously argued that businesses should avoid the temptation to go out of their way to be socially responsible and instead focus on maximizing profits within the legal and ethical framework erected by government and society. To act otherwise at the expense of profitability, he reasoned, is to spend other people’s money—that of shareholders, employees, or customers—robbing them of their agency.

Though nearing fifty years of age, much of Milton Friedman’s windily and aptly titled essay, The Social Responsibility of Business Is to Increase Profits, feels like it could have been written today. Many of the hypotheticals he cites of corporate social responsibility—“providing employment, eliminating discrimination, avoiding pollution”—are charmingly relevant in the era of automation anxiety, BDS, and one-star campaigns. His solution, that businesses sidestep the whole mess, focus on what they do best, and play by the rules set forth by the public, is elegant and simple—and increasingly untenable.

One reason for this is that businesses and the governments Friedman imagined would rein them in have grown much closer, even as the latter have grown comparatively weaker. In sharp contrast to the get-government-out-of-business attitude that prevailed in the boardrooms of the 1970s, modern industry groups collectively spend hundreds of millions of dollars to get the ears of lawmakers, hoping to obtain favorable legislation or stave off laws that would hurt them. Corporate (and other) lobbyists are known to write and edit bills, sometimes word for word.

You could convincingly argue that this is done in pursuit of profit: Boeing, for example, spent $17 million lobbying federal politicians in 2016 and received $20 million in federal subsidies the same year. As of a 2014 report by Good Jobs First, an organization that tracks corporate subsidies, Boeing had received over $13 billion of subsidies and loans from various levels of government. Nevertheless, this is wildly divergent from Friedman’s idea of business as an adherent to, not architect of, policy.

As business has influenced policy, so too have politics made their mark on business. Far more so than in the past, today’s customers expect brands to take stands on social and political issues. A report by Edelman, a global communications firm, finds a whopping 60% of American Millennials (and 30% of consumers worldwide) are “belief-driven” buyers.

This, the report states, is the new normal for businesses—like it or not. Brands that refrain from speaking out on social and political issues now increasingly risk consumer indifference, which, I am assured by the finest minds in marketing, is not good. In an age of growing polarization, every purchase is becoming a political act. Of course, when you take a stand on a controversial issue, you also risk alienating people who think you’re wrong: 57% of consumers now say they will buy or boycott a brand based on its position on an issue.

This isn’t limited to merely how corporations talk. Firms are under increasing social pressure to hire diversity officers, change where they do business, and reduce their environmental impact, among other things. According to a 2017 KPMG survey on corporate social responsibility, 90% of the world’s largest companies now publish reports on their non-business responsibilities. This reporting rate, the survey says, is being driven by pressure from investors and government regulators alike.

It turns out that a well-marketed stance on social responsibility can be a powerful recruiting tool. A 2003 study by the Stanford Graduate School of Business found 90% of graduating MBAs in the United States and Europe prioritize working for organizations committed to social responsibility. Often, these social objectives can be met in ways that employees enjoy: for example, cutting a company’s carbon footprint by letting employees work from home.

In light of all this, the choice between social and political responsibility and profitability seems something of a false dichotomy. The stakes are too high now for corporations to sit on the sidelines of policy, politics, and society, and businesses increasingly find themselves taking on such responsibilities in pursuit of profitability. Whether that’s good or bad is up for debate. But as businesses have grown more powerful and felt the need to transcend their formerly transactional relationships with consumers, it seems to be the new way of things.

Occupational Licensing Versus the American Dream

Imagine: You’re one of the 6.1 million unemployed Americans. Try as you might, you can’t find a job. But you’ve always been great at something—cutting hair, giving manicures, or maybe hanging drywall—so great, in fact, that you reckon you could actually make some real money doing it. What’s the first thing you do?

If your answer was something other than, “Find out how to obtain the state’s permission,” you’re in for a surprise.

A shocking number of occupations require workers to seek permission from the government before they can legally practice. This includes not just the obvious, like doctors and lawyers, whose services, if rendered inadequately, might do consumers life-threatening harm, but also barbers, auctioneers, locksmiths, and interior designers.

This phenomenon is known as occupational licensing. State governments set up barriers to entry for certain occupations, ostensibly to the benefit and protection of consumers. They range from the onerous—years of education and thousands of dollars in fees—to trivialities like registering in a government database. At their most extreme, such regulations make work without a permit illegal.

As the United States transitioned from a manufacturing to a service-based economy, occupational licensing filled the “rules void” left by the ebb of labor unions. In the past six decades, the share of jobs requiring some form of license has soared, going from five percent in the 1950s to around 30 percent today. Put another way: over a quarter of today’s workforce requires government permission to earn a living.

There’s little proof that licensing does what it’s supposed to. For one, the burden of compliance for a given job seems to bear little relation to its potential impact on public safety. In most states, it takes 12 times as long to become a licensed barber as an EMT. In a 2015 Brookings Institution paper, University of Minnesota Professor Morris Kleiner, who has written extensively on the subject, states: “…economic studies have demonstrated far more cases where occupational licensing has reduced employment and increased prices and wages of licensed workers than where it has improved the quality and safety of services.”

Ironically, the presence of strict licensing regulations also seems to encourage consumers to seek lower-quality services—sometimes at great personal risk. When prices are high or labor is scarce, consumers take a DIY approach or forego services entirely. A 1981 study on the effects of occupational licensing found evidence for this in the form of a negative correlation between electricians per capita and accidental electrocutions.

A less morbid, but perhaps more salient, observation is that licensing often creates burdens that are unequally borne. Licensing requirements make it difficult for immigrants to work. In many states, anyone with a criminal conviction can be denied a license outright, regardless of the conviction’s relevance to their aspirations. These policies, coupled with the costs in money and time, can make it harder for poorer people, in particular, to find work.

But surely, you might say, there must be some benefit to licensing. And technically, you’d be right.

Excessive licensing requirements are a huge boon to licensed workers. They restrict the supply of available labor in an occupation, limiting competition and in some cases raising wages. There’s little doubt that occupational licensing, often the result of industry lobbying, functions mainly as a form of protectionism. A 1975 Department of Labor study found a positive correlation between the rates of unemployment and failures on licensing exams.

Yet even licensed workers can’t escape the insanity unscathed. Because licenses don’t transfer from state to state, workers whose livelihoods depend on having one face limited mobility, which ultimately hurts their earning potential.

Though licensure reform is typically thought of as a libertarian fascination—the libertarian-leaning law firm Institute for Justice literally owns occupationallicensing.com—it also has the attention of more mainstream political thinkers. The Obama Administration released a report in 2015 outlining suggestions on how the states might ease the burden of occupational licensing, and in January of this year, Labor Secretary Alexander Acosta made a similar call for reform.

Thankfully, there seems to be some real momentum on this issue. According to the Institute for Justice, 15 states have reformed licensing laws to “make it easier for ex-offenders to work in state-licensed fields” since 2015. Louisiana and Nebraska both made some big changes this year as well. That’s a great start, but there’s still much work to be done.

This article originally appeared on Merion West

Human Mobility is Key to Fighting Poverty

More than fifty years into the “war on poverty,” government welfare programs remain the subject of much scrutiny. As the Trump administration unveils a new tax plan, fresh off numerous attempts to repeal and replace the Affordable Care Act, perennial questions about whether the government is doing enough to reduce poverty have resurfaced.

This debate often focuses almost exclusively on poor Americans, and solutions mostly center on the redistribution of resources via government transfers. On many levels, this makes sense. First, non-Americans don’t vote, and politicians tend not to pay much attention to groups that cannot help them win elections. Second, the government’s ability to act on poverty is somewhat limited — it can try to create policies that facilitate wealth, but it cannot actually produce wealth on its own. Spreading around some of the surplus is therefore an attractive option.

But from a utilitarian and humanitarian perspective, this debate represents a missed opportunity. Limiting the conversation to wealth transfers within an already wealthy nation encourages inefficient solutions at the expense of ideas that might do a lot more good for a lot more people: namely, freeing people who are not at their maximum productivity to pursue opportunity.

Between the EITC, TANF, SNAP, SSI, Medicaid, and other programs, the United States spent over $700 billion at the federal level in the name of alleviating poverty in 2015. A 2014 Census Bureau report estimates that Social Security payments alone reduced the number of poor Americans by nearly 27 million the previous year. Whatever your stance on the long-run effects of welfare programs, it’s safe to say that in the short term, government transfers provide substantial material benefits to recipients.

Yet if the virtue of welfare programs is their ability to improve living standards for the needy, their value pales in comparison to the potential held by labor relocation.

Political boundaries are funny things. By crossing them, workers moving from poor to rich nations can increase their productivity dramatically. That’s not necessarily because they can make more products or offer better services — although that is sometimes the case as well — but rather because what they produce is more economically valuable. This is what economists refer to as the “place premium,” and it’s partly created by differences in opportunity costs between consumers in each location.

Median wages of foreign-born US workers from 42 developing countries have been shown to be 4.1 times those of their observably identical counterparts in their country of origin. Some enthusiasts even speculate that the elimination of immigration restrictions alone could double global GDP. The place premium effect can be powerful enough to make low-skilled positions in rich countries economically attractive even to high-skilled workers from poor nations.

We have a lot of inequality in the United States, and that often masks the fact that we have very little absolute poverty. Even someone who is poor by American standards (an annual pre-transfer income of about $12,000 or less for a single-person household) can have an income that exceeds that of the global median household. Even relatively generous government transfers would probably do no more than triple that income.

On the other hand, because they start with lower incomes, this same effect allows low-earning immigrants to proportionally increase their standard of living in a way that can’t be matched by redistribution within a relatively wealthy population. For example, the average hourly wage in the US manufacturing sector is slightly over $20; in Mexico, it’s around $2.30. Assuming a manufacturer from Mexico could find a similar position in the United States, their income would increase by around 900%. To provide the same proportional benefit to a severely poor American — defined as a person or household with an income under half the poverty threshold — could cost up to $54,000.
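A quick sketch of that arithmetic, using the wage and poverty figures above (the ~900% gain is the text’s own rounding of the raw wage ratio):

```python
# The proportional-gain arithmetic from the manufacturing example above.
us_wage, mexico_wage = 20.0, 2.30   # average hourly manufacturing wages cited
print(f"Wage ratio: {us_wage / mexico_wage:.1f}x")   # ~8.7x

gain = 9.0                          # the ~900% increase the text rounds to
severe_poverty_income = 12000 / 2   # half the single-person poverty threshold
print(f"Equivalent transfer: ${severe_poverty_income * gain:,.0f}")  # $54,000
```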

What’s true across national borders is true within them. Americans living in economically desolate locations could improve their prospects by relocating to more prosperous and opportune areas. Indeed, this is exactly what’s been happening for decades. The percentage of Americans living in cities has increased steadily, going from 45% in 1910 to nearly 81% by 2010. Nor is relocation exclusively a long-term solution. During oil rushes in Alaska and North Dakota, populations within the two states exploded as people flocked to economic activity.

Recently, however, rates of migration have been dwindling. Admittedly, there are fewer barriers to intra-national migration than immigration. But there are still things we might do to make it easier for people to move where the money is.

One obvious solution would be to encourage local governments to cut back on zoning regulations that make it harder and more expensive to build new housing stock. Zoning laws contribute heavily to the rising costs of living in the most expensive cities, leading to the displacement of poorer residents and the sequestration of opportunity. As with immigration, this poses a bit of a political problem — it requires politicians to prioritize the interests of the people who would live in a city over those of the people who currently live there — the ones who vote in local elections.

Relatedly, we might consider revising our approach to the mortgage interest deduction and other incentives for homeownership. While the conventional wisdom is that homeownership is almost always desirable because it allows the buyer to build equity in an appreciating asset, some studies have found a strong positive correlation between levels of homeownership and unemployment. The upshot is that tying up most of one’s money in a home reduces the ability and desire to move for employment, leading to unemployment and downward pressure on wages. Whether or not to buy a home is the buyer’s decision, but these data cast doubt on the idea that the government should subsidize such behavior.

If the goal of policy is to promote human well-being, then increasing mobility should be a priority for policymakers. As a species, as nations, as communities, and as individuals, we should strive for a more productive world. Allowing people the opportunity to relocate in the name of increasing their output is a near-free lunch in this regard.

But while the economic dream of frictionless markets is a beautiful one, we live in a world complicated by politics. It’s unrealistic to expect politicians to set aside the concerns of their constituents for the greater good. I will therefore stop short of asking for open borders, the abolition of zoning laws, or the removal of the mortgage interest deduction. Instead, I offer the humbler suggestion that we exercise restraint in such measures, striving to remove and lessen barriers to mobility whenever possible. The result will be a freer, more equal, and wealthier world.

This article originally appeared on Merion West

Universal Basic Income is Probably Not the Future of Welfare

If for no other reason, universal basic income — that is, the idea of replacing the current means-tested welfare system with regular, unconditional cash payments to every citizen — is remarkable for the eclectic support it receives. The coalition for universal basic income (UBI) includes libertarians, progressives, a growing chorus of Luddites, and others still who believe a scarcity-free world is just around the corner. Based on its popularity and growing concerns about coming economic upheaval and inequality, it’s tempting to believe the centuries-old idea is a policy whose time has finally come.

Personally, I’m not sold. There are several obstacles to establishing a meaningful universal basic income that would, in my mind, be nearly impossible to overcome as things stand now.

For one, the numbers are pretty tough to reconcile.

According to 2017 federal guidelines, the poverty level for a single-person household is about $12,000 per year. Let’s assume we’re intent on paying each American $1,000 per month in order to bring them to that level of income.

Distributing that much money to all 320 million Americans would cost $3.84 trillion, approximately the entire 2015 federal budget and far greater than the $3.18 trillion of tax revenue the federal government collected in the same year. Even if we immediately eliminated all other entitlement payments, as libertarians tend to imagine, such a program would still require the federal government to increase its income by $1.3 trillion to avoid increasing the debt any further.

Speaking of eliminating those entitlement programs, hopes of doing so are probably far-fetched without a massive increase in taxation. A $1,000 monthly payment to every American — which again, would consume the entire federal budget — would require a lot of people currently benefiting from government transfers to take a painful cut. For example, the average monthly social security check is a little over $1,300. Are we really going to create a program that cuts benefits for the poor and spends a lot of money on the middle class and affluent?

In spite of the overwhelming total cost of such a program, its per capita impact would be pretty small, since all the cash would be spread across a much larger population than current entitlements serve. For this reason, its merit as an anti-poverty program would be questionable at best.

Yes, you can fiddle with the disbursement amounts and exclude segments of the population — dropping minors from the dole would reduce the cost to around $2.96 trillion — to make the numbers work a little better, but the more you do that the less universal and basic it becomes, and the more it starts to look like a modest supplement to our existing welfare programs.
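The cost figures above are easy to check; the adult headcount here is simply the one implied by the $2.96 trillion figure:

```python
# The UBI cost arithmetic from the passage above.
population = 320e6
annual_payment = 1000 * 12

print(f"All Americans: ${population * annual_payment / 1e12:.2f}T")  # $3.84T

# The $2.96T adults-only figure implies roughly 247M adults, i.e. about
# 73M minors dropped from the dole.
implied_adults = 2.96e12 / annual_payment
print(f"Implied adult population: {implied_adults / 1e6:.0f}M")      # ~247M
```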

*

Universal basic income’s problems go beyond the budget. If a UBI were somehow passed (which would likely require our notoriously tax-averse nation to OK trillions of additional dollars of government spending), it would set us up for a slew of contentious policy battles in the future.

Entitlement reform, already a major preoccupation for many, would become a more pressing concern in the event that a UBI of any significant size were implemented. Mandatory spending would increase as more people draw benefits for more years and continue to live longer. Like the entitlements it may or may not replace, universal basic income would probably be extremely difficult to reform in the future.

Then there’s the matter of immigration. If you think reaching consensus on immigration policy is difficult in the age of President Trump, imagine how it would look once we began offering each American a guaranteed income large enough to serve as an alternative to paid work. Bloomberg columnist Megan McArdle estimates that establishing such a program would require the United States to “shut down immigration, or at least immigration from lower-skilled countries,” thereby leading to an increase in global poverty.

There’s also the social aspect to consider. I don’t want to get into it too much because everybody’s view of what makes people tick is different. But it seems to me that collecting money from the government doesn’t make people especially happy or fulfilled.

The point is, part of what makes universal basic income appear realistic is the political coalition backing it. But libertarians, progressives, and the rest of the groups superficially united behind this idea have very different opinions about how it would operate and very different motivations for its implementation. When you press the issue and really think through the consequences, the united front for universal basic income begins to crack.

*

Don’t get me wrong; there’s plenty about universal basic income that appeals to this author’s libertarian sensibilities. I think there’s a strong argument for reforming the welfare system in a way that renders it more similar to a basic income scheme, namely replacing in-kind payments and some subsidies with direct cash transfers. Doing so would, as advocates of UBI claim, increase the utility of the money transferred and reduce government paternalism, both of which are goals I find laudable.

I should also note that not all UBI programs are created equal. Universal basic income has become something of a catch-all term used to describe policies that are quite different from each other. The negative income tax plan Sam Bowman describes on the Adam Smith Institute’s website is much more realistic and well-thought-out than a system that gives a flat amount to each citizen. Its two greatest strengths are that it is neither unconditional nor given to everyone equally.

However, the issues of cost and dispersion, both consequences of UBI’s defining characteristics, seem to me insurmountable. Unless the United States becomes dramatically wealthier, I don’t see us being able to afford to pay any significant amount of money to all or most people. We would need to replace a huge amount of human labor with automation before this plan starts to look even a little realistic. Even if that does happen, and I’m not sure it will anytime soon, I think there are better things we could do with the money.

This article originally appeared on Merion West.

Obamacare’s Complicated Relationship with the Opioid Crisis

The opioid epidemic, having evolved into one of the greatest public health crises of our time, is a contentious aspect of the ongoing debate surrounding the Republican healthcare proposal.

Voices on the left worry that the repeal of Obamacare’s Medicaid expansion and essential health benefits would worsen the crisis by cutting off access to treatment for addiction and substance abuse. Mother Jones and Vox have both covered the issue. Former President Obama’s June 22nd Facebook post stated his hope that Senators looking to replace the ACA would ask themselves, “What will happen to the Americans grappling with opioid addiction who suddenly lose their coverage?”

On the other side of things, there are theories that the Affordable Care Act actually helped finance–and perhaps exacerbated–the opioid epidemic. It goes something like this: Expanded insurance coverage was a primary goal of the ACA. Two of the policies supporting that goal–allowing adults up to 26 years of age to remain on their parents’ insurance and expanding Medicaid to cover a greater percentage of the population–unintentionally connected at-risk cohorts (chronically unemployed prime-age men and young, previously uninsured whites with an appetite for drugs, to name two) with the means to obtain highly addictive and liberally prescribed pain medications at a fraction of their street price. (Some knowledge of labor force and other social trends helps paint a clearer picture here.) Once addicted, many moved on to cheaper and more dangerous alternatives, like heroin or synthetic opioids, thus driving the growth in overdose deaths.

This is a really interesting, if tragic, narrative, so I decided to take a look. I focused on state-level analysis by comparing CDC WONDER data on drug-induced deaths with Kaiser Family Foundation data on expansion status and growth in Medicaid enrollment. The graph below plots states based on their rates of growth in Medicaid rolls and overdose deaths from 2010 to 2015, and is color-coded for expansion status: blue for states that expanded coverage before 2015, yellow for states that expanded during 2015, and red for states that hadn’t expanded by the end of 2015. (A note: this isn’t an original idea; for a more in-depth analysis, check out this post.)

What’s interesting is the places where overdoses have increased the quickest. The fastest growing rates of overdose deaths were mostly in states that had expanded Medicaid by 2015; the only non-expansion state to grow by more than 50% since 2010 was Virginia, which in 2015 still had a relatively low rate of 12.7 fatal overdoses per 100,000 population. For some perspective on how bad things have gotten, that rate would have been the 19th highest among states in 2005; today Virginia ranks 42nd in terms of OD rate.

On the other hand, there isn’t a noticeable correlation between increases in Medicaid coverage and increases in the rate of fatal overdoses. Additionally, the rates of overdose deaths in expansion states were increasing before many of the Affordable Care Act’s key provisions went into effect. Starting around 2010, there was a dramatic divergence between would-be expansion states and the rest. It’s possible that states with accelerating rates were more likely to expand coverage in response to increased frequency of fatal overdoses.

So what’s the deal? Did the Affordable Care Act agitate the opioid epidemic? Obviously I don’t have the answer to that, but here’s my take:

I think it would be difficult to argue it hasn’t been a factor on some level, given the far higher rates of opioid prescription, use, and death among Medicaid patients than in the general population, as well as the state-level trends in OD rates (with the caveat that state-level analysis is pretty clunky in this regard; for many reasons, West Virginia’s population isn’t really comparable to California’s). I think the fact that state Medicaid programs are adjusting regulations for painkiller prescriptions is an acknowledgement of that.

But if the ACA had a negative effect, I’d think it must register as a drop in the bucket. There are so many pieces to this story: lax prescription practices and the rise of “pill mills,” declining labor force participation, sophisticated distribution networks of Mexican heroin, bogus research and marketing on pain management, stark disparities between expectations and reality. It’s nice to think there’s one thing we can change to solve everything, but I don’t think we’re going to be so lucky.

Here’s another twist: Even if the Affordable Care Act had some negative impact, it could very well be that repealing it would make things worse. Scrapping essential benefits coverage could lead to a loss or reduction of access to addiction treatment for millions of Americans. Moreover, gaining insurance has been shown to alleviate feelings of depression and anxiety. How, then, might we guess 20 million Americans will feel after losing their insurance? Given the feedback loop between pain and depression, this question deserves a lot of deliberation.

No, the Interest on Your Student Loan Isn’t Too High. In fact…

It seems like more often than not I’m opening these blog posts with an apology for a multi-week hiatus. Since nobody’s emailed to check on my well-being, I can only infer my readership has gotten on fine without my wonk-Jr. takes on public policy and other matters of high import. Fair enough; but don’t think your demonstrated lack of interest will spare you from a quick update.

Actually, it’s all good news: I’ve been having fun learning R (a statistical language), looking for a new apartment, and testing the limits of a 27-year-old liver. I saw Chance the Rapper and a Pirates game in Pittsburgh, which was awesome. The last article I wrote had some real success and was republished in several places, even earning a shout-out from John Stossel.

The big update is that my stint as a (purely) freelance writer has mercifully drawn to a close; I now write for a non-partisan public policy group. In fact, this very blog was one of my strongest selling points, according to my manager. It just goes to show you, kids: if you toil in anonymity for two years, eventually something will go your way.

*

Okay, enough about me. Let’s talk about a topic close to the heart of many millennials: student loans. More specifically, I want to talk about the interest rates charged on undergraduate student loans.

That interest rates are too high is, unsurprisingly, a common gripe among borrowers. If I had a nickel for every twenty-something I’ve overheard complain that the federal government shouldn’t profit off student loans…well, it still wouldn’t cover one month’s interest. However, this sentiment isn’t limited to overqualified baristas; popular politicians like Elizabeth Warren and Bernie Sanders–and even unpopular politicians–have publicly called for loans to be refinanced at lower rates and decried the “profiteering” of the federal government. From Bernie Sanders’ website:

Over the next decade, it has been estimated that the federal government will make a profit of over $110 billion on student loan programs. This is morally wrong and it is bad economics. Sen. Sanders will fight to prevent the federal government from profiteering on the backs of college students and use this money instead to significantly lower student loan interest rates.

Under the Sanders plan, the formula for setting student loan interest rates would go back to where it was in 2006. If this plan were in effect today, interest rates on undergraduate loans would drop from 4.29% to just 2.37%.

It makes no sense that you can get an auto loan today with an interest rate of 2.5%, but millions of college graduates are forced to pay interest rates of 5-7% or more for decades. Under the Sanders plan, Americans would be able to refinance their student loans at today’s low interest rates.

As one of those debt-saddled graduates, and one of the chumps who took loans at a higher rate of interest, I would obviously be amenable to handing over less of my hard-earned money to the federal government. But as a person concerned with the larger picture, I have to say this is a really bad idea. In fact, rates should be higher, not lower.

First of all, the progressive case for loan refinancing or forgiveness only holds up under the lowest level of scrutiny. Such a policy would overwhelmingly benefit borrowers from wealthy families, who hold the majority of student loan debt. Conversely, most defaulters hold relatively small amounts of debt. Fiddling with interest rates shouldn’t be confused with programs that target low-income students, like the Pell Grant, which are another matter entirely and not the subject of my criticism.

More to the point, the federal government probably isn’t making any money on student loans. Contrary to the claims of Senators Warren and Sanders, which rely on estimates from the Government Accountability Office (GAO) and put federal profit on student loans at $135 billion from 2015-2024, the Congressional Budget Office (CBO), using fair-value estimation, shows student loans costing the federal government $88 billion over the same period.

The discrepancy between the CBO and GAO figures comes from the former’s inclusion of macroeconomic forecasts. Essentially, the CBO thinks the risk of default on student loans is higher than the GAO does, due to forces beyond individuals’ control.

Evidence suggests it’s unwise to underestimate the risk associated with student loans. According to a study by the liberal think tank Demos, nearly 40% of federal student loan borrowers are in default or more than 90 days delinquent. Add to that the fact that student loans are unsecured (not backed by collateral or repossessable assets, like a car or house), and they start to look like an incredibly risky venture for the federal government, and ultimately, taxpayers.

That conclusion is deeply unpleasant, but not really surprising if you think about it. Ever notice how the interest rates on private student loans–approximately 10% of the market–are much higher? That’s not because private lenders are greedy; it’s because they can’t lend at the rate of the federal government without losing money.

This is all important because the money that finances student loans has to come from somewhere. Be it infrastructure upgrades, federal support for primary education, or Shrimp Fight Club, the money spent on student loans isn’t available for competing priorities. This matters even more when you consider the loss the federal government is taking on these loans, the cost of which is passed on to future taxpayers in the form of higher taxes or lower spending. Since higher education is only one among infinite human desires, we need to decide how much of our finite resources to devote to it. Properly set interest rates are one way (probably the best way) to figure that out.

The irony, of course, is that doing so would require the government to act more like a private lender–the very thing it’s designed not to do! Our student loan system ensures virtually anyone who wants to study has the money to do so, regardless of the likelihood they’ll be able to repay. One of the nasty side effects of this indiscriminate lending is a large amount of distressed borrowers, who now find themselves in the uncomfortable position of digging out from under a mountain of debt they likely shouldn’t have been able to take on.

More so than other forms of government spending, student loans have specific, discernible beneficiaries: the students who get an expensive education financed by the taxpayer at below-market rates. Sure, you can argue there’s some spillover; society does benefit from having more highly-trained workers. But most of the time, highly skilled labor is rewarded with higher wages. That being the case, is it really too much to ask for borrowers to pay a level of interest that reflects the actual cost of issuing their loans?

Yes, this would be discouraging for some: particularly those who want to pursue non-remunerative fields of study. That’s not such a bad thing; higher interest rates would steer people away from obtaining degrees with low salary expectations, which would–by my reckoning–reduce rates of delinquency and default over the long term. They would also help mitigate some of the pain of defaults when they do happen.

But–you might protest–you can’t run the government like a business! And sure, a lot of the time, you’d be right. However, I really think this is one area where doing so is appropriate–even desirable. Hear me out.

When the government can fund itself through profitable investments rather than zero-sum transfers, it should. If we’re going to have a government of any size (and few suggest that we shouldn’t), then we need to pay for it. Which sounds like the preferable way for that to happen: voluntary, productive, and mutually beneficial investments in society; or the forceful appropriation of private resources? I’m not suggesting the former could entirely replace the latter, but when it can, I think it absolutely should.

Astute readers will realize that if the government decides to lend profitably, it will have to compete with private lenders, which would cut into its margins and make its presence in the market redundant. So maybe it’s just a pipe dream. But if profitable lending isn’t possible, the federal government should at least try to minimize losses. One way or another, that means higher interest rates.

Insurance Coverage Numbers Are Important, But Not All-Important

Whether you’re into this sort of thing or not, you’ve probably been hearing a lot about healthcare policy these days. Public debate has roiled as Republican lawmakers attempt to make good on their seven-year promise to repeal and replace the Affordable Care Act (ACA). As the debate rages on, one metric in particular appears to hold outsize importance for the American people: the number of Americans covered by health insurance.

Analysis by the Congressional Budget Office, which showed that 14 million more Americans could be uninsured by 2018 under the Republican replacement, caused intense public outcry and was frequently cited as a rationale for not abandoning the ACA. There is immense political pressure not to take actions that would lead to a large loss of coverage.

But here’s the thing: the relevant metric by which to judge Obamacare isn’t insurance coverage numbers. To do so is to move the goal posts and place undue importance on a number that might not be as significant as we imagine.

The ultimate point of health insurance, and the implied rationale for manipulating insurance markets to cover sicker people, is that people will use insurance as a means by which to improve their health, not just carry a plastic card in their wallets.

Health Insurance ≠ Health

The impulse to use insurance coverage as a proxy for health is misguided but understandable. For one thing, the uninsured rate is a simple, single number that has dropped precipitously since the implementation of the ACA; that makes it a great marketing piece for supporters. For another, health insurance is the mechanism by which most of us pay for most of our healthcare.

And yet in 2015 the uninsured rate fell to 10.5% (down from 16.4% in 2005) while age-adjusted mortality increased for the first time in a decade.

It turns out a nominal increase in the number of insured Americans doesn’t necessarily translate into improved health outcomes for those individuals. A newly released paper from the National Bureau of Economic Research (NBER) finds that while the ACA has improved access to healthcare, “no statistically significant effects on risky behaviors or self-assessed health” can be detected among the population (beyond a slight uptick in self-reported health in patients over 65).

These results are consistent with other studies, like the Oregon Medicaid Experiment, which found no improvement in patients’ blood pressure, cholesterol, or cardiovascular risk after enrolling them in Medicaid, even though they were far more likely to see a doctor. There were, however, some notable-but-mild psychic benefits, such as a reduction in depression and stress among enrollees.

In short, despite gains in coverage, we haven’t much improved the physical health of the average American, which is ostensibly the objective of the ACA.

Why Not?

To be fair, the ACA is relatively young; most of its provisions didn’t go into effect until 2014. It may well be that more time needs to pass before we start to see a positive effect on people’s health. But there are a few reasons to think those health benefits may never materialize–at least, not to a great extent.

A lot of what plagues modern Americans (especially the poorest Americans) has more to do with behavior and environment than access to a doctor. Health insurance can be a lifesaver if you need help paying for antiretroviral medication, but it won’t stop you from living in a neighborhood with a high rate of violent crime. It won’t make you exercise, or change your diet, or stop you from smoking. It won’t force you to take your medicine or stop you from abusing opioids, and it certainly won’t change how you commute to work (that’s a reference to the rapid increase in traffic deaths in 2015).

Here’s something to consider: A lot of the variables that correlate to health–like income and education–also correlate to the likelihood of having health insurance. If we want healthier Americans, there may be more efficient ways to achieve that than expanding insurance coverage, like improving employment and educational opportunities. Maybe something creative, like Oklahoma City’s quest to become more walker-friendly, could yield better results?

Of course, all things being equal, more insurance coverage is better. But nothing comes without cost, and as a society we want to be sure that benefits justify costs. So far, that’s not clear. This poses an existential question about our current pursuit of universal coverage, and, by extension, the relevance of coverage as a metric for the success of healthcare policy: If insurance isn’t the cure, why are we prescribing it with such zeal?

The CBO Feels the Love

The Congressional Budget Office (CBO) isn’t known for its awesome marketing or pithy statements. It’s never been recognized by BuzzFeed for its social media use. Nevertheless, the agency is enjoying an unusual amount of love on Twitter.

Here’s how you really know they’ve made it: The title of yesterday’s National Review Morning Jolt was, “The Congressional Box Office is Very ‘In’ Right Now.”

Two nights ago, it tweeted a four-word message with a link to its analysis of the American Health Care Act (AHCA) that received far more attention than is normal for the CBO Twitter account. As of this writing, the tweet in question has racked up 62 responses, 846 retweets, and 542 likes.

That might not sound like a lot; the truth is, it isn’t. Donald Trump’s tweets, for example, often receive tens of thousands of ‘likes.’ But relative to the usual engagement on the CBO’s tweets, it’s absolutely ridiculous.

[Slideshow: CBO tweet engagement over time]

Since January 1, 2016, the CBO has tweeted 120 times. The median numbers of responses, retweets, and ‘likes’ on those tweets were, respectively, 0, 3, and 2. In fact, this latest tweet is responsible for nearly half of all the reactions garnered by the CBO’s account over that period.

So what does this tell us?

The most obvious insight is that people are paying more attention to the CBO since the change of administration. That’s not surprising; the CBO evaluates economic and budget proposals, and there are quite a few shakeups going on in that department right about now. The agency has been firing on all cylinders to keep up with demands from Congress, doubling the frequency of its tweets since Trump took office (0.48 tweets per day, compared with 0.24 during the previous year).

In the final year of Barack Obama’s presidency, the CBO averaged only 9.5 retweets per tweet–and that’s including a January 17th tweet that was responsible for 405 retweets on its own (excluding that post, the account averaged 5 retweets per tweet). Since the beginning of the Trump administration, that average has jumped to 39.6 (7.6 if you don’t count the latest viral tweet).
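To see how one viral tweet drags an average around like that, here’s a toy example; the per-tweet counts are invented to match the two averages just cited, not the CBO’s actual data:

```python
# Toy illustration: one viral tweet skews a small account's average way up.
# These per-tweet counts are made up to reproduce the two averages cited.
retweet_counts = [405] + [5] * 88    # one viral tweet plus 88 typical posts

mean_with = sum(retweet_counts) / len(retweet_counts)
mean_without = sum(retweet_counts[1:]) / len(retweet_counts[1:])
print(f"With the outlier: {mean_with:.1f}")     # ~9.5, as cited
print(f"Without it:       {mean_without:.1f}")  # 5.0, as cited
```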

Another insight: Negative feelings about the AHCA are driving the CBO’s recent popularity surge. The only two tweets with significant activity in the past year (look at the spikes in the graphs above) were about the AHCA and the effects of repealing the Affordable Care Act (ACA). A cursory glance through the responses to both tweets reveals that most of the commenters are detractors of the current administration who oppose changes to the ACA.

It would be a mistake to use this as a proxy for national consensus on the AHCA, however. Twitter often skews liberal.

*

For the record, the CBO writes a killer blog (I use the term loosely, for obvious reasons). It’s a great source of unfiltered information about economic ideas from Washington. You can sign up to receive email updates from it here. And, if the CBO is reading this, don’t forget about us when you get famous.

Science Has a Reproducibility Crisis

If your Facebook feed is anything like mine, you may have recently heard about how Bill Nye–the Science Guy himself–“slammed” Tucker Carlson on the latter’s evening show on Fox. THIS. (If your feed leans the other way, you may have been treated to an equally smug reaction from people claiming that Carlson “won.”)

However you feel about it, the timing, coupled with Nye’s reliance on scientific consensus as a proxy for objective correctness, is somewhat serendipitous. Mounting evidence that the results of scientific studies are often not replicable has caused Nature, a prominent scientific journal, to very publicly tighten its standards for submissions as of its latest issue.

In May of 2016, a survey by Nature revealed that over two-thirds of researchers polled had tried and failed to reproduce the results of another scientist’s study. Over half of them had been unable to reproduce their own results. Fifty-two percent said there was a “significant crisis” of reproducibility.

This is a big deal. The ability to replicate the results of studies is crucial to both scientific integrity and progress. Clinical researchers, for example, depend on reliable results from prior trials to form the building blocks of new drug advancements. In the field of cancer biology, merely 10% of results from published literature were found to be reproducible. Meanwhile, the credibility of scientific literature is understandably compromised by dubious, often sensational findings.

The root of the problem, according to Dame Ottoline Leyser, director of the Sainsbury Laboratory at the University of Cambridge, is today’s scientific culture. As quoted by the BBC, she cites “a culture that promotes impact over substance, flashy findings over the dull, confirmatory work that most of science is about.”

Others blame a pressure to publish. There has also been, in recent years, doubt cast on the integrity of the peer review process, especially with regard to climate science.

Whatever the culprit, plans to combat issues of reproducibility are emerging. Nature has developed a checklist to serve as guidelines for authors submitting writing to the publication. Efforts shouldn’t end there, the journal argues. Reform at all levels of the scientific process could go a long way:

Renewed attention to reporting and transparency is a small step. Much bigger underlying issues contribute to the problem, and are beyond the reach of journals alone. Too few biologists receive adequate training in statistics and other quantitative aspects of their subject. Mentoring of young scientists on matters of rigour and transparency is inconsistent at best. In academia, the ever increasing pressures to publish and chase funds provide little incentive to pursue studies and publish results that contradict or confirm previous papers. Those who document the validity or irreproducibility of a published piece of work seldom get a welcome from journals and funders, even as money and effort are wasted on false assumptions.

Tackling these issues is a long-term endeavour that will require the commitment of funders, institutions, researchers and publishers. It is encouraging that NIH institutes have led community discussions on this topic and are considering their own recommendations. We urge others to take note of these and of our initiatives, and do whatever they can to improve research reproducibility.