Obamacare’s Complicated Relationship with the Opioid Crisis

The opioid epidemic has evolved into one of the greatest public health crises of our time, and it has become a contentious aspect of the ongoing debate surrounding the Republican healthcare proposal.

Voices on the left worry that the repeal of Obamacare’s Medicaid expansion and essential health benefits would worsen the crisis by cutting off access to treatment for addiction and substance abuse. Mother Jones and Vox have both covered the issue. Former President Obama’s June 22nd Facebook post stated his hope that Senators looking to replace the ACA would ask themselves, “What will happen to the Americans grappling with opioid addiction who suddenly lose their coverage?”

On the other side of things, there are theories that the Affordable Care Act actually helped finance–and perhaps exacerbated–the opioid epidemic. The argument goes something like this: Expanding insurance coverage was a primary goal of the ACA. Two of the policies that supported that goal–allowing young adults to remain on their parents’ insurance until age 26 and expanding Medicaid to cover a greater share of the population–unintentionally connected at-risk cohorts (chronically unemployed prime-age men and young, previously uninsured whites with an appetite for drugs, to name two) with the means to obtain highly addictive and liberally prescribed pain medications at a fraction of their street price. (Some knowledge of labor force and other social trends helps paint a clearer picture here.) Once addicted, many moved on to cheaper and more dangerous alternatives, like heroin or synthetic opioids, driving the growth in overdose deaths.

This is a really interesting, if tragic, narrative, so I decided to take a look. I focused on state-level analysis, comparing CDC WONDER data on drug-induced deaths with Kaiser Family Foundation data on expansion status and growth in Medicaid enrollment. The graph below plots states by their rates of growth in Medicaid rolls and overdose deaths from 2010 to 2015, color-coded for expansion status: blue for states that expanded coverage before 2015, yellow for states that expanded during 2015, and red for states that had not expanded by the end of 2015. (A note: this isn’t an original idea; for a more in-depth analysis, check out this post.)
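For anyone curious about the mechanics, here’s a rough sketch of that comparison in Python, assuming the CDC WONDER death rates and the Kaiser Family Foundation enrollment and expansion figures have been exported to CSV files. The file names, column labels, and status codes below are hypothetical placeholders, not the actual export formats.

```python
# Sketch: plot growth in Medicaid enrollment against growth in drug-induced
# death rates, 2010-2015, colored by expansion status. File names and column
# names are placeholders for however the data was actually exported.
import pandas as pd
import matplotlib.pyplot as plt

deaths = pd.read_csv("cdc_wonder_drug_deaths.csv")   # columns: state, year, death_rate
medicaid = pd.read_csv("kff_medicaid.csv")           # columns: state, enrollment_2010, enrollment_2015, expansion_status

# Percent growth in the drug-induced death rate from 2010 to 2015, by state.
wide = deaths.pivot(index="state", columns="year", values="death_rate")
od_growth = (wide[2015] - wide[2010]) / wide[2010] * 100

df = medicaid.set_index("state")
df["od_growth"] = od_growth
df["medicaid_growth"] = (df["enrollment_2015"] - df["enrollment_2010"]) / df["enrollment_2010"] * 100

# Color by expansion status, mirroring the blue/yellow/red scheme described above.
colors = {"expanded_before_2015": "blue", "expanded_during_2015": "yellow", "not_expanded": "red"}
plt.scatter(df["medicaid_growth"], df["od_growth"], c=df["expansion_status"].map(colors).tolist())
plt.xlabel("Growth in Medicaid enrollment, 2010-2015 (%)")
plt.ylabel("Growth in drug-induced death rate, 2010-2015 (%)")
plt.show()
```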

What’s interesting is where overdose deaths have increased the fastest. The fastest-growing rates of overdose deaths were mostly in states that had expanded Medicaid by 2015; the only non-expansion state to grow by more than 50% since 2010 was Virginia, which in 2015 still had a relatively low rate of 12.7 fatal overdoses per 100,000 population. For some perspective on how bad things have gotten, that rate would have been the 19th highest among states in 2005; today Virginia ranks 42nd in terms of OD rate.

On the other hand, there isn’t a noticeable correlation between increases in Medicaid coverage and increases in the rate of fatal overdoses. Additionally, the rates of overdose deaths in expansion states were increasing before many of the Affordable Care Act’s key provisions went into effect. Starting around 2010, there was a dramatic divergence between would-be expansion states and the rest. It’s possible that states with accelerating rates were more likely to expand coverage in response to increased frequency of fatal overdoses.

So what’s the deal? Did the Affordable Care Act agitate the opioid epidemic? Obviously I don’t have the answer to that, but here’s my take:

I think it would be difficult to argue it hasn’t been a factor on some level, given the far higher rates of opioid prescribing, use, and death among Medicaid patients than in the general population, as well as the state-level trends in OD rates (with the acknowledgement that state-level analysis is pretty clunky in this regard; for many reasons West Virginia’s population isn’t really comparable to California’s). The fact that state Medicaid programs are adjusting regulations for painkiller prescriptions is itself an acknowledgement of that.

But if the ACA had a negative effect, I’d think it must register as a drop in the bucket. There are so many pieces to this story: lax prescription practices and the rise of “pill mills,” declining labor force participation, sophisticated distribution networks of Mexican heroin, bogus research and marketing on pain management, stark disparities between expectations and reality. It’s nice to think there’s one thing we can change to solve everything, but I don’t think we’re going to be so lucky.

Here’s another twist: Even if the Affordable Care Act had some negative impact, repeal could very well make things worse. Scrapping essential benefits coverage could lead to a loss or reduction of access to addiction treatment for millions of Americans. Moreover, gaining insurance has been shown to alleviate feelings of depression and anxiety. How, then, might we guess 20 million Americans will feel after losing their insurance? Given the feedback loop between pain and depression, this question deserves a lot of deliberation.

In Defense of the Center

The mushy center never inspires passion like ideological purity. The spectacle of radicalism puts asses in the seats. It’s hard, on the other hand, to imagine rebellious, mask-clad youths taking to the street in the name of fine-tuning marginal tax rates.

Oh sure, you may see a protest here and there, and practically everyone grumbles about this or that issue in which they have an interest. But as the great philosopher Calvin once said: a good compromise leaves everybody mad.

Some more so than others. Opining in the New York Times, Senator Bernie Sanders suggests Democrats can reverse their political fortunes by abandoning their “overly cautious, centrist ideology,” and more closely approximating the policy positions of a Vermont socialist.

I suppose this could be sound political advice. Everyone has an idea of the way they’d like the world to work, and Sanders’ ideas are appealing to a great many people. You could argue–as Sanders does–that Republicans have had some success with a similar strategy following the Obama years. But, as they’re finding out, ideological purity makes for better campaign slogans than successful governing strategy.

Here’s the thing: We live in a big, diverse country. People have very different wants and needs, yet we all live under the same (federal) laws. Our priorities must sometimes compete against each other, which is why we often end up with some of what we want, but not everything. Striking that balance is tough, and by necessity leaves many people unhappy. We don’t always get it right. But when you’re talking about laws that affect 320 million people, some modesty, or if you prefer, “caution,” is in order.

Alas, Bernie is not of a similar mind. In fewer than 1,000 words, he offers no shortage of progressive bromides without mention of the accompanying price tag. It’s one thing to form a platform around Medicare-for-all, higher taxes on the wealthy (their “fair share”), aggressive clean energy commitments, a trillion-dollar infrastructure plan, or free tuition at state universities and lower interest rates on student loans. But all of them? At once?!

Sanders should remember the political and economic lessons of Vermont Governor Peter Shumlin’s foray into single-payer healthcare: Government spending–and thus government activity–is constrained by the population’s tolerance for taxation (and, on the other side of things, by its tolerance for a deficit of public services–looking at you, Kansas). Go too far and you risk losing support. And unless you’re willing to rule by force, as extremists often must, that will cost you your ability to shape public policy.

For what it’s worth, I don’t think the Senator’s advice would do the Democrats any favors. The Democrats didn’t move to the center-left because they failed to notice some widespread, untapped support for endless government programs in America; they moved because they collided with the political and economic reality of governance in our country. Americans are willing to pay for some government programs, but not at the rate Europeans pay to have much more expansive governments. The left, therefore, shouldn’t play an all-or-nothing game, but should instead think about what it does well and how it can appeal to, rather than alienate, the rest of the country. That’s going to involve compromise.

Update: Following Jon Ossoff’s narrow defeat in a Georgia special election, there’s been a lot of discussion about whether a more progressive candidate would have fared better. Personally, I find it hard to believe centrism and fiscal conservatism worked against Ossoff in a historically Republican district. Much more believable is Matt Yglesias’ related-but-different take that Ossoff’s reluctance to talk policy left a void for the opposition to exploit, allowing them to cast him as an outsider.

One thing seems certain: the rift within the Democratic party isn’t going away anytime soon.

Insurance Coverage Numbers Are Important, But Not All-Important

Whether you’re into this sort of thing or not, you’ve probably been hearing a lot about healthcare policy these days. Public debate has roiled as Republican lawmakers attempt to make good on their seven-year promise to repeal and replace the Affordable Care Act (ACA). As the debate rages on, one metric in particular appears to hold outsize importance for the American people: the number of Americans covered by health insurance.

Analysis by the Congressional Budget Office, which estimated that 14 million more Americans could be without coverage by 2018 under the Republican replacement, caused intense public outcry and was frequently cited as a rationale for not abandoning the ACA. There is immense political pressure not to take actions that will lead to a large loss of coverage.

But here’s the thing: the relevant metric by which to judge Obamacare isn’t insurance coverage numbers. To do so is to move the goal posts and place undue importance on a number that might not be as significant as we imagine.

The ultimate point of health insurance, and the implied rationale for manipulating insurance markets to cover sicker people, is that people will use insurance as a means by which to improve their health, not just carry a plastic card in their wallets.

Health Insurance ≠ Health

The impulse to use insurance coverage as a proxy for health is misguided but understandable. For one thing, the uninsured rate is a simple, single number that has dropped precipitously since the implementation of the ACA; that makes it a great marketing piece for supporters. For another, health insurance is the mechanism by which most of us pay for most of our healthcare.

And yet in 2015 the uninsured rate fell to 10.5% (down from 16.4% in 2005) while age-adjusted mortality increased for the first time in a decade.

It turns out a nominal increase in the number of insured Americans doesn’t necessarily translate into improved health outcomes for those individuals. A newly released paper from the National Bureau of Economic Research (NBER) finds that while the ACA has improved access to healthcare, “no statistically significant effects on risky behaviors or self-assessed health” can be detected among the population (beyond a slight uptick in self-reported health in patients over 65).

These results are consistent with other studies, like the Oregon Medicaid Experiment, which found no improvement in patients’ blood pressure, cholesterol, or cardiovascular risk after enrolling them in Medicaid, even though they were far more likely to see a doctor. There were, however, some notable-but-mild psychic benefits, such as a reduction in depression and stress among enrollees.

In short, despite gains in coverage, we haven’t much improved the physical health of the average American, which is ostensibly the objective of the ACA.

Why Not?

To be fair, the ACA is relatively young; most of its provisions didn’t go into effect until 2014. It may well be that more time needs to pass before we start to see a positive effect on people’s health. But there are a few reasons to think those health benefits may never materialize–at least, not to a great extent.

A lot of what plagues modern Americans (especially the poorest Americans) has more to do with behavior and environment than access to a doctor. Health insurance can be a lifesaver if you need help paying for antiretroviral medication, but it won’t stop you from living in a neighborhood with a high rate of violent crime. It won’t make you exercise, or change your diet, or stop you from smoking. It won’t force you to take your medicine or stop you from abusing opioids, and it certainly won’t change how you commute to work (that’s a reference to the rapid increase in traffic deaths in 2015).

Here’s something to consider: A lot of the variables that correlate to health–like income and education–also correlate to the likelihood of having health insurance. If we want healthier Americans, there may be more efficient ways to achieve that than expanding insurance coverage, like improving employment and educational opportunities. Maybe something creative, like Oklahoma City’s quest to become more walker-friendly, could yield better results?

Of course, all things being equal, more insurance coverage is better. But nothing comes without cost, and as a society we want to be sure that benefits justify costs. So far, that’s not clear. This poses an existential question about our current pursuit of universal coverage, and, by extension, the relevance of coverage as a metric for the success of healthcare policy: If insurance isn’t the cure, why are we prescribing it with such zeal?

The Value of Inferior Products

There’s no shortage of lessons for us to learn from the unfolding Mylan scandal. It’s practically a study in the consequences of an uncompetitive market and bureaucratic cynicism. Among all the takeaways from this episode, one stands out as particularly interesting: the value inferior products offer consumers.

That sounds a bit counterintuitive. We typically think of inferiorities as something to be eliminated. But in many cases the presence of inferior goods–economically defined as goods for which demand increases when a consumer’s income drops–is actually beneficial, relative to their absence.

Before I talk about how this applies to the EpiPen, I have a confession to make:

I don’t have the latest iPhone.

I don’t even have the oldest iPhone. I have a Samsung Galaxy Light.

It overheats easily, has abysmal battery life, and sends snapchats that look like flipbooks. It hangs up during calls and will never catch a single Pokémon. I once tried to purge my text archive and it took over 12 hours.

Why do I own a device that’s so pathetic by modern standards? It’s a question of personal priorities and resources. I don’t care enough about the add-ons to spend $400 on a phone: a decision surely influenced by the fact that I don’t make that much money.

And yet, even though my phone is undeniably of poorer quality than the phones of my peers, I am much better off for it. If it were to disappear tomorrow–say, because someone outlawed suboptimal cell phones–I’d be upset. I would probably end up buying an iPhone, but if I didn’t have the money, I might end up without a cell phone altogether.

So what’s the scrappy alternative (the Galaxy Light, if you will) to the EpiPen? Well, we don’t know–it’s not allowed to exist.

The FDA has the power of deciding what products consumers can and can’t access. In the case of EpiPen substitutes, as well as others, it’s imposed onerous hurdles to market entry that have severely limited anyone’s ability to compete with Mylan. The Wall Street Journal writes:

But no company has been able to do so to the FDA’s satisfaction. Last year Sanofi withdrew an EpiPen rival called Auvi-Q that was introduced in 2013, after merely 26 cases in which the device malfunctioned and delivered an inaccurate dose. Though the recall was voluntary and the FDA process is not transparent, such extraordinary actions are never done without agency involvement. This suggests a regulatory motive other than patient safety.

Then in February the FDA rejected Teva’s generic EpiPen application. In June the FDA required a San Diego-based company called Adamis to expand patient trials and reliability studies for still another auto-injector rival.

Let’s be charitable and ignore that Mylan spent over $2 million lobbying in Washington in 2015. FDA risk aversion, noble or otherwise, is still hurting consumers by leaving them with fewer options and higher prices.

While the FDA has a preference (and every political incentive) for extreme vetting, it may be that some consumers of epinephrine prefer a less expensive, if less tested, model of injector.

Assuming that consumers have the same priorities as their regulators is a mistake of arrogance, and a costly one. A study by Tufts University in 2014 put the cost of getting a drug through to market at $2.56 billion–$1.4 billion in out-of-pocket expenses and $1.16 billion in time costs (the forgone ROI of that $1.4 billion over the time the approval process took).
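To make the “time costs” idea concrete, here’s a minimal sketch of how such a figure can arise, assuming the out-of-pocket spending is spread evenly over an eleven-year development window and capitalized at a 10.5% annual cost of capital. Those inputs are illustrative assumptions, not the Tufts study’s actual phase-by-phase figures.

```python
# Sketch: out-of-pocket R&D spending is compounded forward to the date of
# approval at an assumed cost of capital; the excess over the cash outlay is
# the "time cost." The even spending profile, 11-year window, and 10.5% rate
# are illustrative assumptions, not the study's actual inputs.
OUT_OF_POCKET = 1.4e9     # reported out-of-pocket cost, in dollars
YEARS = 11                # assumed length of the development/approval window
COST_OF_CAPITAL = 0.105   # assumed annual return investors forgo

annual_spend = OUT_OF_POCKET / YEARS

# Compound each year's spending (assumed to occur mid-year) forward to approval.
capitalized = sum(
    annual_spend * (1 + COST_OF_CAPITAL) ** (YEARS - t - 0.5)
    for t in range(YEARS)
)
time_cost = capitalized - OUT_OF_POCKET

print(f"Capitalized cost: ${capitalized / 1e9:.2f}B")   # roughly $2.5B under these assumptions
print(f"Implied time cost: ${time_cost / 1e9:.2f}B")    # roughly $1.1B under these assumptions
```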

There are reasons people buy lower-quality products. Sometimes it’s a question of personal priorities, sometimes one of finance. The reluctance to acknowledge that this is as true in healthcare as any other marketplace is wrongheaded, though understandable given human emotion and that people’s health is at stake.

But the rules governing price and availability aren’t swayed by emotion. If we want EpiPens to be less expensive, we need to let more people try to make them, even if that involves some measure of risk. Excessive caution carries risk as well: by implementing such harsh standards, the FDA has ensured that the only product remaining is not only highly effective, but also highly expensive.

Rather than accepting that healthcare products, like everything else, exist on a continuum of quality, regulation forces epinephrine users to buy the best or nothing. Because of the diffuse nature of healthcare expenses, this kind of action raises prices across the board and inhibits the development of more efficient products.

Cell phones, even smartphones, are ubiquitous today and generally affordable. When they first came out, they were strictly the province of the wealthy. The same goes for cars, HD televisions, and most other technology.

Someone like me can afford those things today because lots of different producers were able to compete against each other and figure out how to give people more for less. If, when the iPhone came out, all subsequent smartphones had been held to the same standards, it’s almost certain that fewer people would have made smartphones, there would have been fewer models to choose from, and the ones that did exist would have been markedly more expensive and less innovative, due to reduced competition.

Yes, it makes sense to worry about the quality of medical devices that people are using, and yes, the FDA (or something like it) can be useful in that regard. But it also makes sense to concern ourselves with the availability of those same devices. Climbing healthcare costs are dangerous, too.

Doctor Paidless? Eh, Maybe.

A new study of 24 medical schools across 12 states by Dr. Anupam B. Jena, Andrew R. Olenski, and Daniel M. Blumenthal—all of Harvard Medical School—shows that male and female doctors are often paid disparate salaries, even when accounting for several factors. The average absolute difference was around $51,000 while the adjusted average difference was about $20,000.

The New York Times quickly ran an article that all but declares the wage gap between the sexes to be a result of discrimination. It seems like an easy call to make, but the omission of certain factors gives reason to be cautious about jumping to such a conclusion.

Before I begin, I should acknowledge that the authors are far more credentialed than I am (no large feat) and probably put a lot of hard work into this study. Not that they’ll ever read this critique, but I can imagine it would be pretty annoying to read some nobody undermining your diligent study. My aim here isn’t to discredit their study, but to mention some factors that, if included, could have helped produce a more definitive picture.

*

To put it lightly, the popular discourse on the gender pay gap is littered with misinformation. Practically everyone has heard the standby that women are paid on average seventy-seven cents for every dollar that men are. Such a blunt “statistic” makes a good rallying cry, but is just about useless in every other respect. And yet, it sticks.

Part of the problem is the age-old fallacy of inferring causation where mere correlation is at hand. This is an understandable reflex in this case—across cultures, women share a history of oppression and obstructed paths to the labor force. But this same kind of logical leap would never be tolerated if someone alleged that employers discriminated in favor of Asian men, who earned 117% of what their white counterparts did in 2015.

Another problem is that comparing men and women in the workplace is far more difficult than you might believe, making study results suspect in many instances.

Salaries are influenced by a number of factors. The authors know this. According to the abstract, they measured:

…information on sex, age, years of experience, faculty rank, specialty, scientific authorship, National Institutes of Health funding, clinical trial participation, and Medicare reimbursements (proxy for clinical revenue).

It seems like a pretty complete list, but there are a couple other factors that—if previous studies are any indication—would have a significant effect on the findings. The most important ones that I can think of are marital status, whether or not a doctor has children, shift availability, how many hours, not years, they have worked, and the length of any interruptions to their tenure. The inability or failure to measure and report these factors gives reason to be somewhat skeptical of the findings.
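To illustrate why those omissions matter, here’s a synthetic example (not the study’s data) showing how an “adjusted” pay gap estimated by regression shrinks once hours worked and career interruptions are added as controls. All of the numbers below are made up for the sake of the illustration.

```python
# Synthetic illustration of omitted-variable bias: salaries are generated with
# NO direct sex penalty, yet a regression that omits hours and career gaps
# still reports a sizable "adjusted" gap.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
female = rng.integers(0, 2, n)
experience = rng.uniform(5, 30, n)
# Assumed behavioral differences, for illustration only:
hours = rng.normal(55 - 5 * female, 6)            # fewer average weekly hours
gap_years = rng.exponential(0.5 + 1.0 * female)   # longer career interruptions

salary = (120_000 + 4_000 * experience + 1_500 * (hours - 50)
          - 8_000 * gap_years + rng.normal(0, 20_000, n))

def adjusted_gap(covariates):
    X = sm.add_constant(np.column_stack([female] + covariates))
    return sm.OLS(salary, X).fit().params[1]  # coefficient on `female`

print("Adjusting for experience only:", round(adjusted_gap([experience])))              # large negative "gap"
print("Adding hours and gap years:   ", round(adjusted_gap([experience, hours, gap_years])))  # roughly zero
```

In this toy data there is no direct sex penalty at all, yet the experience-only regression still attributes a five-figure difference to sex; the point is only that what the controls leave out determines what the “adjusted” number means.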

*

Marriage and Children

Marriage and child counts are extremely important to any discussion of salary differences between men and women. Interestingly, they usually correlate to opposite effects on the incomes of men and women, presumably due to the unequal division of domestic labor and child rearing. Married men are known to have higher salaries than comparable unmarried men for this reason–their partners are essentially investing in their earning potential. Consequently, the reverse tends to be true for women, who often divert more of their attention to household labor, thus giving up some of the effort they might otherwise spend on money-earning activities.

Having children tends to enhance this trend. From a division of labor standpoint, most couples are probably deciding that it’s more efficient to have one member carry the bulk of the remunerative workload and the other to handle a larger share of the unpaid labor. Of course, it’s not necessary that these roles be taken by men and women respectively, but that seems to be the way it plays out most of the time.

The Economist unwittingly stumbled onto this hypothesis in February, when their blog noted that lesbians often earn more money than straight women while gay men often earn less than straight men. Unfortunately, they didn’t note that lesbians are about twice as likely as gay men to get married and thus, to some extent, replicate the heterosexual division of labor between spouses (lesbians are also more likely than straight women to be childless and to split domestic work more equitably–all of which would contribute to higher income per capita relative to straight women).

Any discussion purporting to measure income differences as a function solely of sex has to take these effects into account. A married man with 20 years of experience isn’t comparable to an unmarried man with 20 years of experience, let alone to a married mother of three with equal tenure. Because marriage and childbearing tend to have opposite effects on the incomes of men and women, the only clean comparison is between men and women who are childless and have never been married. Some inference can be gleaned from the fact that women in their twenties earn more than similarly aged men on average; that trend reverses around age thirty.

Interruptions to Tenure

Because knowledge and technology are constantly advancing, interruptions to tenure can be especially penalizing in a highly technical field such as medicine. The value of a computer science professional’s skills, for example, is estimated to have a half-life of only three years. A doctor with five years of experience who is coming back into the labor force after a four-year gap isn’t likely to be as valuable as a comparable doctor who has spent the last five years working. We wouldn’t expect to observe the same phenomenon to the same degree in low-skilled occupations.
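As a toy illustration of what a three-year half-life implies (using the cited figure purely for arithmetic, not as a claim about medicine specifically):

```python
# Toy depreciation model: assume skill value halves every 3 years away from
# the field. The half-life figure is the one cited above, used illustratively.
HALF_LIFE_YEARS = 3.0

def value_retained(years_away):
    return 0.5 ** (years_away / HALF_LIFE_YEARS)

print(f"Share of value retained after a 4-year gap: {value_retained(4):.0%}")  # ~40%
```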

Therefore, simply measuring age, years of experience, and faculty rank, as this study does, doesn’t give us a complete picture of the prospects of the doctor in question. Because female employees are more likely to take time off, we can expect that on average–and especially in fields with high rates of obsolescence like medicine–they will suffer harsher penalties for absence from the workplace than male practitioners, likely depressing their average wages.

Hours and Shift Availability

Measuring years gives some idea of an employee’s commitment to a field, but a more complete picture would require the number of hours worked and the shift availability of the physicians in question. Past research has shown that, for numerous reasons, female workers tend to be more willing than male workers to trade pay for flexibility. If that’s true for the doctors in this study as well, it would explain some of the average pay differences that simply counting years would omit.

*

There are many more variables that contribute to one’s salary than I can begin to list here. The problem with studies like this is the propensity of readers to project their own prejudices onto the results.

Assuming that men and women care equally about income, for example, would lead one to believe that women are routinely getting the short end of the stick. But, to quote anyone with quick access to a hacky sack: money isn’t everything. There is a huge disparity in workplace deaths between genders (men are about 13 times more likely to die at work), which suggests that the people willing to trade job safety for income are more likely to be men than women. Neither a coal miner nor a part-time secretary could rightly tell the other they’re making the wrong choice. Individuals value things differently–that they are able to pursue employment that fits their criteria is a wonderful thing.

One could argue that the pay differentials between men and women in a given field are at once a result of outside factors, individual preferences, and sexism, because those preferences are influenced by a society that has different expectations for men and women. What doesn’t really make sense, but is asserted quite often, is widespread employer discrimination against female employees.

There are basic economic reasons to doubt this. The amount of coordination involved among competing entities would be unthinkable. Moreover, since there is no fixed price of a physician that we would deem “correct”, underpaying women could be restated as overpaying men. Why, we might ask ourselves, would hospitals routinely and arbitrarily overpay their male employees? If female doctors were capable of performing the same exact work but could be retained for $20,000 less per year, why would any hospital hire a male doctor who insisted on being paid a premium? That would be some truly expensive bigotry.

While it would be difficult to prove that observed income differentials are the result of discrimination, it is indeed impossible to prove that discrimination doesn’t affect individuals’ income. It could be the case, and I can’t dispute the possibility. However, as the economist Thomas Sowell has taken great pains to point out, any residual difference in income represents the maximum amount attributable to discrimination, since it also bundles in whatever factors haven’t been accounted for. My guess is that the results of this study are limited mostly by the availability of certain data and reflect individual preferences as much as anything else.

The Hidden Cost of Public Health

Starting on January first of next year, the City of Philadelphia plans to impose a “soda tax” of 1.5 cents per ounce. The new law—already set to be challenged in court—has proved highly controversial, even within the political left, where its revenue-raising potential is pitted against concerns over its regressive nature. The political right seems fairly uniformly unenthused.

But that’s boring. What’s really interesting is that Philadelphia’s government is avoiding calling the tax a public health measure, instead choosing to focus on the additional revenue it might generate, despite soda taxes’ endemic appeal to the public health profession.

Public health officials often laud soda taxes as a means of reducing demand for sugary drinks that are linked to obesity, diabetes, tooth decay and other maladies. The underlying economics are relatively straightforward—raise the price of soda and people will consume less of it. The hope is that doing so will reduce the incidence of the aforementioned conditions and curb associated healthcare spending.
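As a back-of-the-envelope illustration, here’s what the tax implies under some assumed numbers; the base price and the price elasticity of demand below are placeholders for illustration, not Philadelphia-specific estimates.

```python
# Rough arithmetic for a 1.5-cent-per-ounce tax. BASE_PRICE_PER_OUNCE and
# ELASTICITY are assumed values for illustration only.
TAX_PER_OUNCE = 0.015        # dollars, the rate named in the ordinance
BASE_PRICE_PER_OUNCE = 0.05  # assumed pre-tax retail price (about $1.00 for a 20 oz bottle)
ELASTICITY = -1.2            # assumed price elasticity of demand for sugary drinks

price_increase = TAX_PER_OUNCE / BASE_PRICE_PER_OUNCE   # proportional price increase if fully passed through
consumption_change = ELASTICITY * price_increase         # implied proportional change in quantity demanded

print(f"Tax on a 20 oz bottle: ${20 * TAX_PER_OUNCE:.2f}")
print(f"Price rises ~{price_increase:.0%}; consumption falls ~{-consumption_change:.0%} under these assumptions")
```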

But despite the wide approval of public health professionals, it’s far from clear that a soda tax is an appropriate solution in this scenario. Not only is there reason to doubt its efficacy, but in a sense, such a policy blurs the line between public health and what we might call ‘private’ health in a way that marks a pernicious slide away from self-determination and seems to me unethical.

Using a tax to “correct” demand is one of the classic methods of solving collective action problems, which tend to involve public goods or open-access resources and often require regulatory oversight. President George H. W. Bush’s cap-and-trade program of 1990, meant to reduce emissions of sulfur dioxide, is a successful example of such an endeavor.

The idea is that if a resource is shared (in this scenario, air quality), then it makes sense to have a centralized agency impose regulations to account for the “social cost” associated with its degradation. If something can be proven to affect others (without requiring an onerous amount of nuance), there’s a compelling case for using coercive public policy to address it.

That’s certainly the case where air quality is concerned. But there are key differences between air pollution and obesity; even though they both affect people’s health, one is far more likely to be incurred privately. We all breathe the same air, but your neighbor drinking a Double Gulp every day doesn’t affect your waistline. Someone else being fat doesn’t harm you. Right?

Actually, depending on how an individual’s healthcare is paid for, that last part is up for debate.

Soda drinkers tend to be poorer (the same is true of tobacco users, whose habit is subject to similar tax-based deterrence) and are therefore more likely to have their healthcare publicly subsidized. In a not-so-tangential sense, that means it’s very much in the interest of the taxpayer that those people be deterred from such actions. After all, any tax dollars not spent on healthcare can be spent on something else or not collected at all.

In my view this poses an ethical challenge—does public financing of healthcare erode beneficiaries’ sovereignty over their health-related decisions? And, if it does, what sort of precedents are we setting should America switch to a universal healthcare system, which would effectively render all health public?

It does seem to be the case that as more resources are poured into social safety nets, there is increased incentive for societies to attempt to engineer the results they want through coercive means. The resulting policies range from ethically dubious taxation to outright illiberalism.

Take, for example, the rather harsh methods by which the Danish government discourages immigration and asylum seekers: seizing assets worth more than $1,450; using policy to force assimilation (in one city mandating that pork be included on municipal menus); cutting benefits to refugees by up to 45%.

A similar situation is unfolding in Sweden, where the extensive social safety net has turned immigration into a tug-of-war between classical and contemporary liberal sentiments. The Economist writes:

The biggest battle is within the Nordic mind. Is it more progressive to open the door to refugees and risk overextending the welfare state, or to close the door and leave them to languish in danger zones?

Closer to home, Canada has recently received some scrutiny for its habit of turning away immigrants with sick children so as to not overburden its single-payer healthcare system.

Some of this might sound cruel or discriminatory. Some of it is. But these are rational responses from systems forced to ration scarce resources. In a sense, it’s the ethical response, given that governments are beholden to their taxpayers.

It’s natural for public health experts, economists, and others whose job is to optimize society to want to promote a healthier nation. Our national health and wealth would clearly be improved if obesity, diabetes, etc. were eradicated. And yes, that could conceivably be achieved by any number of forceful policies—what about a Trump-style deportation of the obese?!

But we must consider the costs as well as the benefits of such policies. Are the potential gains worth ceding dominion of our personal decisions to rooms of “experts?” Is it possible for the conversion of health from a private to public good to coincide with our liberal values?

I don’t think so, at least not in the extreme. If health becoming a public resource means that the government must take an increasingly paternalistic and protectionist role in our society, it’s not worth whatever we might gain—or lose around the midsection. After all, if we can’t be trusted to decide what food to eat, what can we be trusted with? If a soda tax is okay, what about a tax on red meat, sedentary living, or motorcycles? Surely we’d be healthier with less of each.

I do believe there is an appropriate role for government to play in promoting the private health of the masses, but it’s significantly more parochial than the sort of collective action scheme fetishized by academics. To loosely paraphrase the Great Ron Swanson: people deserve the right to make their own peaceful choices, even if those choices aren’t optimal.

Side note: I would also argue that there’s some pretty heavy cognitive dissonance at play here as far as soda taxes go. The federal government hands out generous subsidies—collected from taxpayers—to corn producers, which makes junk food and soda cheaper for consumers. If more expensive soda is the remedy, why not remove those subsidies rather than tax consumers twice?