Recent events have caused me to revise my assessment of universal basic income (UBI), which I’d previously written off as a utopian pipe dream. I’m still skeptical that it would be a good idea, but I’m more convinced than previously that it’s possible.
I had the opportunity to write an article on this. Here’s a quick summary, broken down by subject area:
The salience and popularity of UBI have increased massively in recent years. In 2011, Rasmussen Reports conducted a survey that found just 11% of adults favored a universal basic income. When they asked the same question this past April, that number had jumped to 40%.
Democratic presidential candidate Andrew Yang and a slew of tech executives have been preaching to the electorate about the imminent obsolescence of human labor. Covid-19 may have accelerated that trend: many of the jobs destroyed by the pandemic are not predicted to return.
The financial stimulus provided by the CARES Act has been popular, and there are calls from the public and prominent figures for its continuation. And while unemployment has risen sharply since March, poverty has actually fallen. I feel like this bodes well for the policy idea of giving people money.
I used to think any universal basic income program would be prohibitively expensive. But I think I underestimated our capacity for deficit spending, borrowing, and “printing” new reserves. I’m not saying we can do any of this without limit, but I feel less confident about where that limit is now.
Persistent low inflation, a feature of the last decade or so, is key to financing this level of public spending. As The Economist put it in July, “The absence of upward pressure on prices means there is no immediate need to slow the growth of central-bank balance-sheets or to raise short-term interest rates from their floor around zero.”
This may be the one area where the case for UBI looks worse than it did previously. In the wake of summer’s unrest, it’s become more obvious that work plays a valuable role as a method of soft social control. (This is in keeping with the theories of UBI proponents who claim that work prevents people from mass organizing, though perhaps the other side of the coin.) If I were a national politician, I might be wary of the effects of greatly diminishing the role of work in society.
But there’s at least one reason to cautiously infer some sociological benefit too. A survey conducted by the CDC in June found a shocking 10.7% of the general population—and 25.5% of 18–24 year-olds—had seriously considered suicide in the past 30 days. (The benchmarks for these figures are respectively 4.3% and 11%.) The same survey found “only” 4.7% of unemployed people had experienced suicidal ideation during the same interval. Maybe the UBI-esque conditions brought on by pandemic relief are responsible for the counter-intuitive gap?
All told, I still don’t think we’ll see a universal basic income at any large scale for a long time, if ever. But the idea can’t be dismissed out of hand so easily anymore. Who knows how the winds will blow in ten or twenty years?
The following is a bloggier, graphier version of a long-form piece I wrote for Merion West last month on the case for free vaccines. If you’re so inclined, give it a read there so the editors think I’m cool!
In July, the Trump administration announced it had committed to buying 100 million doses of an unfinished Covid-19 vaccine being developed by Pfizer and BioNTech. The vaccine entered late-stage human trials on July 27, with the hope that it will be approved before the year’s end. If and when it obtains Food and Drug Administration approval, the vaccine will be delivered at no cost to the consumer, regardless of insurance status.
That could be characterized as an interesting play for an administration that made it one of its first orders of business to weaken the Affordable Care Act—not to mention a political party that spent the Obama years railing against “socialized medicine!” And indeed, many were quick to point out that in this instance, the United States is acting very much in the image of a country with a national healthcare system, the implicit critique being that the Coronavirus pandemic exposes weaknesses of “market-based healthcare” that are present under normal conditions.1
There’s some merit to this, of course. But more importantly, I think we’ve actually hit on a rare opportunity for compromise in the national healthcare debate: free vaccines. Whether or not this ends up being the first step in the path to single-payer healthcare, I think it would be a good idea. (Note: not all reform proposals are technically for a “single-payer” healthcare system. I’m using the term as a catch-all because it has a more neutral connotation than anything else I can think of.)
Why not just do it all?
To explain why, I need to take a detour. As you may know, I’m not a proponent of a government takeover of the healthcare industry (though my resolve against it has really subsided lately). My case against full-on single-payer is pretty much utilitarian—costs versus benefits.
The “costs” part of the equation isn’t super interesting. To summarize, they are enormous. The 2016 Sanders campaign offered by far the rosiest projection, putting the cost at $13.8 trillion over ten years. Most other projections are near $30 trillion over the same period—so about $3 trillion a year, or 85% of 2019 federal revenue.
Meanwhile, the benefits—in terms of objective measures of physical health—are really uncertain! I think people take it for granted that healthcare makes people healthier, but that’s actually not super clear. A high-powered experiment on this—often referred to as the “Oregon Medicaid Experiment”—was conducted in 2008. The state used a lottery system to enroll low-income, uninsured adults in Medicaid. It also selected a control group from the same population that was not enrolled so their outcomes could be compared.
The study found that Medicaid enrollment “had no statistically significant effects on blood pressure, cholesterol, or cardiovascular risk.” It did, however, produce a marked drop in depression, about 30% relative to the control group. This is why some have derided Medicaid as a “7-trillion-dollar anti-depression program.”
Medicaid enrollment did increase healthcare utilization substantially. But is this good if it’s not paired with better health outcomes? There is a widespread belief among doctors that Americans are already over-consuming healthcare. A recent survey of over 2,000 physicians found two-thirds believed at least 15 to 30 percent of medical care was unnecessary. Likewise, doctors have observed that most patients seem to be doing “just fine” despite abstaining from visiting the doctor during the current pandemic.
So to summarize my view, single-payer healthcare will cost a lot and might not do too much for us health-wise—it might even have negative effects on healthcare delivery. Its good effects can be replicated in more direct, cheaper ways. (E.g., if people are suffering from depression because they’re stressed due to finances, just give them money.) And while the healthcare system is in obvious need of reform, I’d like to see us at least try a market-oriented approach (such as forcing hospitals to be transparent about pricing) before we do something we definitely won’t be able to undo.
There are other justifications for single-payer healthcare that are harder to refute, moral arguments and the like. But this is the lens through which I personally think about it.
Why are vaccines different?
Vaccines, however, are—forgive me—immune to some of these problems. For starters, vaccines are wildly effective when compared to other types of medicine. Vaccination allowed for the global eradication of smallpox and may soon lead to the elimination of polio. In the case of other diseases like rubella, measles, and diphtheria, the United States has achieved a near 100% reduction in cases (and deaths) following the development of vaccines.
We have much less success with chronic diseases, which also happen to be responsible for the largest chunk of healthcare spending. There are probably lots of reasons for this. One is that patients simply aren’t great about adhering to long-term pharmacotherapy regimens. Approximately 50% of patients do not take their medications as prescribed, to which the Centers for Disease Control and Prevention (CDC) attributes 125,000 annual deaths and 30 to 50 percent of chronic disease treatment failures. Additionally, risky behaviors—sedentarism, poor nutrition, and smoking, for example—contribute to the exacerbation of these illnesses, making treatment a more complicated process that can involve a big commitment on the part of the patient. Obviously, none of this is a problem with vaccines.
Second, vaccines are pretty cheap, especially considering their efficacy. The 100 million doses of Coronavirus vaccine the federal government has ordered will cost $1.95 billion, an average of about $20 per dose.
As advocates for single-payer like to point out, governments are often able to secure more favorable prices for medications than private sector buyers. This is true with vaccines as well, and I assume it could be even more so under a truly monopsonistic regime. (A more rigorous consideration would have to weigh this benefit against the potential drag on innovation, which is worth thinking about. Perhaps the libertarian-socialist idea of issuing government prizes in lieu of patents could obviate this concern!)
Public versus private health consumption
Another good rationale for free-to-consumer vaccines concerns public health. There are efforts from single-payer advocates to construe all health as a public good. Sometimes this is quite a reach, but in the case of vaccines, it makes a lot of sense.
As more people are vaccinated against a disease, it reduces the disease’s ability to spread and reproduce itself. This has the second-order effect of providing some protection for people who can’t get vaccinated: very young infants, people with severe allergies, or those with compromised immune systems. To a much greater extent than other kinds of health care, then, vaccines have public health benefits in a way that doesn’t require contrivance by moral argument.
The United States already has a national vaccine program that’s been quite successful. In 1994, the CDC established the Vaccines For Children program (VFC), which pays to vaccinate children who meet certain criteria. Currently, the program provides vaccines for about half of all Americans under 18. We should just expand it to cover everyone.
For the cohort born between 1994 and 2013, the CDC has estimated routine childhood immunization will prevent 322 million illnesses and 21 million hospitalizations over the course of their lifetimes, and avert 732,000 premature deaths. In financial terms, the returns on the program have been equally impressive. Again, from the CDC: “Vaccination will potentially avert $402 billion in direct costs and $1.5 trillion in societal costs because of illnesses prevented in these birth cohorts. After accounting for $107 billion and $121 billion in direct and societal costs of routine childhood immunization, respectively, the net present values (net savings) of routine childhood immunization from the payers’ and societal perspectives were $295 billion and $1.38 trillion, respectively.”
With the Democrats officially leaving Medicare for All off their 2020 platform, the debate over systemic health care reform may rage on for some time. In the meantime, perhaps, given their impressive track record, free-to-consumer vaccines are something we can all get behind.
Scare quotes because whatever you may call the current healthcare system, it’s hardly a project of market fundamentalism.
Okay, bear with me on this (or skip the math examples and shoot down to the break).
Say you want to add the fractions 2/(3x) + 5/y. Doing so is no great difficulty, as any high school student can attest. Multiply the top and bottom of each fraction by the other’s denominator so that both share a common denominator, then add the numerators. That looks like this:
2/(3x) + 5/y = 2y/(3xy) + 15x/(3xy) = (2y + 15x)/(3xy)
However, if I’d given you the end result and asked you to reverse engineer the original expression, that would be quite a bit harder. (If you don’t believe me, try to rewrite (8x + 7)/(x² + 3x + 2) as the sum of two fractions with constants in the numerator.) To do so, you have to use a process called “partial fraction decomposition,” which I bet few high school students or adults are familiar with.
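Incidentally, if you'd rather not grind through the algebra on that challenge by hand, here's a quick sketch of how to check it with SymPy, a symbolic-math library for Python (assuming you have it installed):

```python
from sympy import symbols, apart, together, simplify

x = symbols('x')
expr = (8*x + 7) / (x**2 + 3*x + 2)

# Going "backward": partial fraction decomposition.
# The denominator factors as (x + 1)(x + 2), and the result is
# two fractions with constant numerators: -1/(x + 1) and 9/(x + 2).
decomposed = apart(expr)

# Going "forward" again (recombining over a common denominator)
# recovers the original expression.
assert simplify(together(decomposed) - expr) == 0
```

The asymmetry shows up even here: `together` is a mechanical recombination, while `apart` has to factor the denominator and solve for the unknown numerators.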
Consider another example. Let’s say we have a function f such that f(x) = 5x² + 3x + 10, and we want to take its derivative. To do so, decrement each exponent by one and multiply the coefficient of each term by the original exponent:

f′(x) = 10x + 3
Once again, however, if I ask you to do the reverse—to find the anti-derivative of 10x + 3—it’s a different story. You have to undo the same process: divide each coefficient by 1 + the current power of each term of x (in this case, respectively 1 and 0) and reintegrate an x into each term. If you do that, you get this:
∫ (10x + 3) dx = (10/2)x¹⁺¹ + (3/1)x⁰⁺¹ = 5x² + 3x + C
But this isn’t what we started with: our original equation was 5x² + 3x + 10! When we took its derivative, we ended up multiplying the 10 by 0 (because 10 can be written in terms of x as 10x⁰). In doing so, we irrevocably destroyed the information that would give us the final term of the quadratic. We know it’s something, so we write it as C per convention.
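You can watch the information loss happen in SymPy, too (a sketch, again assuming SymPy is available): differentiate and then integrate, and you get back the polynomial without its constant term. SymPy doesn't even bother writing the C.

```python
from sympy import symbols, diff, integrate

x = symbols('x')
f = 5*x**2 + 3*x + 10

derivative = diff(f, x)               # 10*x + 3; the constant 10 is gone
recovered = integrate(derivative, x)  # 5*x**2 + 3*x; no constant of integration

# The round trip loses exactly the constant term.
assert f - recovered == 10
```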
This post isn’t actually about math. I think there’s a worthwhile sociological metaphor to be had: in both cases above, it’s easier to go forward than backward, and in the second example, it’s actually impossible to completely return. There is a strain of thought that major public policy decisions should be taken with little hesitation because “if it doesn’t work, we can always try something else.”
I don’t think that’s necessarily true.
In a particularly inspired Slate Star Codex blog post, Scott Alexander offhandedly rejects the common characterization of the body as a well-oiled machine. Instead, he likens it to a careful balancing act that could easily be thrown off kilter.
People always talk about the body as a beautiful well-oiled machine. But sometimes the body communicates with itself by messages written with radioactive ink on asbestos-laced paper, in the hopes that it’s killing itself slightly more slowly than it’s killing anyone who tries to send it fake messages. Honestly it is a miracle anybody manages to stay alive at all.
Scott Alexander, Maybe Your Zoloft Stopped Working Because A Liver Fluke Tried To Turn Your Nth-Great-Grandmother Into A Zombie
I’d argue that what’s true for the organism is true for the super-organism. Complex society is basically a miracle. The stars have to align for it to form, and it can degrade with comparative ease. When we alter it, there’s no guarantee that going back to the status quo is easy or even possible. Sometimes information may be lost permanently in the transition, while unintended consequences can linger for decades.
The decline of marriage rates among low-income households is a great example. The advent of means-tested welfare programs in the 1960s is widely thought to have dissuaded many lower-income women from marriage through the effective imposition of high marginal tax rates.1 (Their benefits would decrease faster than household income would increase if they married, so it made economic sense not to marry.) This has had all kinds of nasty second-order effects,2 which is why welfare reform in the 1990s explicitly attempted to counteract this unintended consequence—but largely to no avail.
Post-industrial America offers another example. The fortunes of former manufacturing towns in the Northeast and Midwest have been almost uniformly bleak since de-industrialization. (If you haven’t yet, seriously, read Sam Quinones’s Dreamland.) There have been and continue to be efforts to reverse this downward trajectory: innumerable economic development programs, paid relocation programs, home-buyer subsidies, corporate tax incentives. But no amount of grant funding or wealth transfers alone can replicate the conditions that created prosperity in those areas. The money was only one part of the equation.
As frustration with the societal status quo increases, people will become more biased toward action. I don’t necessarily think this is bad—it could be great! But it would be a mistake to proceed without caution where sociological elements are concerned. Introducing a universal basic income, for example, could change… a lot. And there’s no guarantee we could ever go back.
The consequences of the decline of marriage have different valence depending on who’s doing the analysis. There are plenty reasonable-enough takes on why the decline of marriage is good. I think the most reasonable analysis is that it’s a mixed bag generally, but for lower social classes, the results have been less ambiguously destructive. Having been raised in a rather unstable cohabitating household, I’m personally rather attuned to the drawbacks, but that’s just me.
I’ve noticed media outlets are reporting that young adults make up a significant share of coronavirus cases with an air of incredulity.
My local paper posted on Facebook that “More than 50% of coronavirus cases in Massachusetts are people under the age of 50.” Very similarly, the Pittsburgh Post-Gazette writes that “more than half of Pennsylvania’s confirmed COVID-19 patients are under 50 years old.” The New York Times, for its part, reports that “nearly 40 percent of patients sick enough to be hospitalized were age 20 to 54.” *
I can’t decide if this is a psyop to get young people to take the epidemic more seriously (as numerous spring-break photos show they should) or genuine surprise. If it’s the latter, I’m not sure if that’s warranted.
In each case, the age ranges in question are massive and not very meaningful without comparison to the age distribution of the general population. For example, in Massachusetts, about 63% of the population is under the age of 50. So if the incidence of coronavirus were age-independent, we might actually expect more cases among people under 50.
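To make the baseline concrete, here's the toy calculation (a sketch using the round numbers cited above, not official case data):

```python
# Round figures from the post above (illustrative, not official data).
share_of_population_under_50 = 0.63  # ~63% of Massachusetts residents
share_of_cases_under_50 = 0.50       # "more than 50%" of confirmed MA cases

# Null hypothesis: infection is age-independent, so the share of cases
# in an age bracket should simply mirror its share of the population.
expected_share = share_of_population_under_50

# Relative to that baseline, under-50s are actually UNDER-represented.
assert share_of_cases_under_50 < expected_share
print(f"observed {share_of_cases_under_50:.0%}, expected {expected_share:.0%}")
```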
I think the issue, then, is that people seem to be assuming prevalence of the virus should be age-dependent to a higher degree than we’re observing. Maybe there’s a good Bayesian case to be made for this null hypothesis; I don’t know. But I feel like laypeople — local papers included — ought to be proceeding from the assumption of age-independence, especially because we still don’t have much information.
Also, what’s going on with the under-20 crowd, which makes up 23% of the population but only 2.2% of MA coronavirus cases? Is Gen Z+ holding out on us?
* This isn’t as egregious as the other two examples. It’s still a huge age range: about 48% of America is between the ages of 20 and 54. But since we’re talking about the severity of symptoms and hospitalizations, it seems much more noteworthy.
States are becoming increasingly permissive of various “sinful” economic activities and goods — those understood to be harmful for consumers — allured, at least in part, by the promise of tax revenue they represent. This has certainly been part of the rationale in my home state of Massachusetts, where within the last year the first full casino — MGM Springfield, located a few blocks from my apartment — and recreational marijuana dispensaries opened. Since the fiscal year just ended, now seems like a good time to assess how things are going in that regard.
First, the casino: Before opening its doors, MGM Springfield told state regulators it expected $418 million in gambling revenue over its first full twelve months of operation — $34.8 million per month. According to the Massachusetts Gaming Commission’s June 2019 report, it hasn’t come within $7 million of that mark yet.
Since September, its first full month of operation, the casino has generated nearly $223 million in gambling revenue. The state’s take is a quarter of that, about $55.7 million. That’s two-thirds of what was estimated. MGM Springfield’s President attributes its lower-than-expected revenue to a poor projection of the casino’s clientele — fewer “high rollers” from the Boston area and more from up and down the I-91 corridor.
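For the curious, here's the back-of-the-envelope arithmetic behind those figures (a sketch; it assumes September through June, i.e. ten full months of operation):

```python
projected_annual = 418e6                   # MGM's pre-opening projection
projected_monthly = projected_annual / 12  # about $34.8 million per month

actual_revenue = 223e6  # gambling revenue, September through June
months = 10

state_take = actual_revenue * 0.25  # the state taxes 25% of gross gaming revenue
share_of_projection = actual_revenue / (projected_monthly * months)

print(f"state take: ${state_take / 1e6:.2f}M")               # roughly $55.7M
print(f"fraction of projection: {share_of_projection:.0%}")  # about two-thirds
```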
The introduction of new avenues for gambling is well known to cannibalize existing revenue sources. So add to MGM Springfield’s list of woes that the much flashier Wynn Casino recently opened in Everett, MA, a quick trip from Boston, and that neighboring East Windsor, CT is opening another casino next year.
Massachusetts’ venture into marijuana has been slightly more successful. Sales were supposed to begin in July 2018, the start of the fiscal year, but were delayed until November. Still, the State Revenue Commissioner estimated Massachusetts would collect between $44 million and $82 million from the combined 17% tax (Massachusetts’ normal 6.25% sales tax plus a 10.75% excise tax) over fiscal year 2019. If my math is right, that works out to an expected range of about $32 million to $60 million in sales every month for the remaining eight months of the fiscal year, a threshold met for the first time in May, according to sales data from the Massachusetts Cannabis Control Commission.
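Here's the "if my math is right" step spelled out (a sketch using the figures above):

```python
tax_rate = 0.0625 + 0.1075      # 6.25% sales tax + 10.75% excise = 17% combined
low_tax, high_tax = 44e6, 82e6  # the Revenue Commissioner's FY2019 range
months_remaining = 8            # sales started in November instead of July

# Implied monthly sales needed to hit each end of the projection:
# divide projected tax revenue by the tax rate, then by the months left.
low_monthly = low_tax / tax_rate / months_remaining
high_monthly = high_tax / tax_rate / months_remaining

print(f"${low_monthly / 1e6:.0f}M to ${high_monthly / 1e6:.0f}M in sales per month")
```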
As of June 26, the last time the data were updated, marijuana sales totaled $176 million, which would put tax revenue somewhere around $22 million this fiscal year. Not bad, but not a great show either — and a bit surprising to me, given the traffic I’ve had to wade through passing a dispensary on my way to work. Furthermore, the state is probably constrained in its ability to raise the excise tax on marijuana, since that could push buyers back into the informal market. And as more states in the region legalize, there’s a good chance sales will drop off somewhat.
On the other hand, sales of marijuana are clearly ramping up as more stores open, and making projections about a brand new industry can’t be easy. I think people more knowledgeable about the regulatory rollout would also contend that Massachusetts bureaucrats are at least partly responsible for the relatively poor sales. The first shops were concentrated in low-population areas of the state, and the closest one to Boston didn’t open until March. Still, the state was off on this one, too. (I thought I was the only one to notice this, but I guess not.)
A few admittedly tangential reflections on this: The positive spin on the commercialization of marijuana and the proliferation of casinos is that the state is growing more respectful of individual autonomy, abandoning harmful and ultimately unsuccessful prohibitive policy and allowing market forces to dictate what forms of entertainment are viable. If the state should make a few bucks in the process, all the better. Right?
Well, maybe. My natural sympathies lie with the above assessment, but the state’s financial incentive complicates the picture — especially insofar as new sin taxes are attractive alternatives to the prospect of raising traditional taxes.
Taxing the consumption of vices is a markedly regressive form of revenue generation. The most salient example of this is tobacco: its use is more common among those with less education and those below the poverty line, and among smokers, those populations suffer greater negative health effects. But it’s also broadly true that the majority of profits derived from the sale of vices tend to be concentrated among a relatively small group of “power users.” The top tenth of drinkers consume half of all alcohol sold in the United States, for example. I don’t have any data on this at the moment, but if I had to guess, the pathological consumption of vices is probably negatively correlated with the propensity to vote.
The cynical take, therefore, is that the newfound permissiveness of the state is a financially motivated abdication of the state’s most fundamental obligations, a mutually beneficial pact between “limbic capitalists” and politicians.
Ironically, sin taxes have notable limitations as revenue-raisers. For one, unlike other taxes, sin taxes are supposed to accomplish two contradictory goals: curbing consumption and raising revenue. Attention to the former usually requires that tax rates be imposed at higher than revenue-maximizing points. This can also encourage regulators, as with alcohol and tobacco, to tax on a per-unit basis, tying revenue growth to consumption patterns. While they may be tempting stop-gaps, sin taxes are not a long-term budgetary fix, and analysis of their social costs and fiscal benefits should bear that in mind.
As anyone reading this blog is undoubtedly aware, Sarah Huckabee Sanders, the current White House Press Secretary, was asked last month by the owner of a restaurant to leave the establishment on the basis that she and her staff felt a moral imperative to refuse service to a member of the Trump administration. The incident, and the ensuing turmoil, highlights the extent to which business has become another political battleground—a concept that makes many anxious.
Whether or not businesses should take on political and social responsibilities is a fraught question—but not a new one. Writing for the New York Times in 1970, Milton Friedman famously argued that businesses should avoid the temptation to go out of their way to be socially responsible and instead focus on maximizing profits within the legal and ethical framework erected by government and society. To act otherwise at the expense of profitability, he reasoned, is to spend other people’s money—that of shareholders, employees, or customers—robbing them of their agency.
Though nearing fifty years of age, much of Milton Friedman’s windily and aptly titled essay, The Social Responsibility of Business Is to Increase Profits, feels like it could have been written today. Many of the hypotheticals he cites of corporate social responsibility—“providing employment, eliminating discrimination, avoiding pollution”—are charmingly relevant in the era of automation anxiety, BDS, and one-star campaigns. His solution, that businesses sidestep the whole mess, focus on what they do best, and play by the rules set forth by the public, is elegant and simple—and increasingly untenable.
One reason for this is that businesses and the governments Friedman imagined would rein them in have grown much closer, even as the latter have grown comparatively weaker. In sharp contrast to the get-government-out-of-business attitude that prevailed in the boardrooms of the 1970s, modern industry groups collectively spend hundreds of millions to get the ears of lawmakers, hoping to obtain favorable legislation or stave off laws that would hurt them. Corporate (and other) lobbyists are known to write and edit bills, sometimes word for word.
You could convincingly argue that this is done in pursuit of profit: Boeing, for example, spent $17 million lobbying federal politicians in 2016 and received $20 million in federal subsidies the same year. As of a 2014 report by Good Jobs First, an organization that tracks corporate subsidies, Boeing had received over $13 billion of subsidies and loans from various levels of government. Nevertheless, this is wildly divergent from Friedman’s idea of business as an adherent to, not architect of, policy.
As business has influenced policy, so too have politics made their mark on business. Far more so than in the past, today’s customers expect brands to take stands on social and political issues. A report by Edelman, a global communications firm, finds a whopping 60% of American Millennials (and 30% of consumers worldwide) are “belief-driven” buyers.
This, the report states, is the new normal for businesses—like it or not. Brands that refrain from speaking out on social and political issues now increasingly risk consumer indifference, which, I am assured by the finest minds in marketing, is not good. In an age of growing polarization, every purchase is becoming a political act. Of course, when you take a stand on a controversial issue, you also risk alienating people who think you’re wrong: 57% of consumers now say they will buy or boycott a brand based on its position on an issue.
This isn’t limited to merely how corporations talk. Firms are under increasing social pressure to hire diversity officers, change where they do business, and reduce their environmental impact, among other things. According to a 2017 KPMG survey on corporate social responsibility, 90% of the world’s largest companies now publish reports on their non-business responsibilities. This reporting rate, the survey says, is being driven by pressure from investors and government regulators alike.
It turns out that a well marketed stance on social responsibility can be a powerful recruiting tool. A 2003 study by the Stanford Graduate School of Business found 90% of graduating MBAs in the United States and Europe prioritize working for organizations committed to social responsibility. Often, these social objectives can be met in ways that employees enjoy: for example, cutting a company’s carbon footprint by letting employees work from home.
In light of all this, the choice between social and political responsibility and profitability seems something of a false dichotomy. The stakes are too high now for corporations to sit on the sidelines of policy, politics, and society, and businesses increasingly find themselves taking on such responsibilities in pursuit of profitability. Whether that’s good or bad is up for debate. But as businesses have grown more powerful and felt the need to transcend their formerly transactional relationships with consumers, it seems to be the new way of things.
Imagine: You’re one of the 6.1 million unemployed Americans. Try as you might, you can’t find a job. But you’ve always been great at something—cutting hair, giving manicures, or maybe hanging drywall—so great, in fact, that you reckon you could actually make some real money doing it. What’s the first thing you do?
If your answer was something other than, “Find out how to obtain the state’s permission,” you’re in for a surprise.
A shocking number of occupations require workers to seek permission from the government before they can legally practice. This includes not just the obvious, like doctors and lawyers, whose services, if rendered inadequately, might do consumers life-threatening harm, but also barbers, auctioneers, locksmiths, and interior designers.
This phenomenon is known as occupational licensing. State governments set up barriers to entry for certain occupations, ostensibly to the benefit and protection of consumers. They range from the onerous—years of education and thousands of dollars in fees—to trivialities like registering in a government database. At their most extreme, such regulations make work without a permit illegal.
As the United States transitioned from a manufacturing to a service-based economy, occupational licensing filled the “rules void” left by the ebb of labor unions. In the past six decades, the share of jobs requiring some form of license has soared, going from five percent in the 1950s to around 30 percent today. Put another way: over a quarter of today’s workforce requires government permission to earn a living.
There’s little proof that licensing does what it’s supposed to. For one, the potential impact to public safety seems wholly incidental to the burden of compliance for a given job. In most states, it takes 12 times as long to become a licensed barber as an EMT. In a 2015 Brookings Institution paper, University of Minnesota Professor Morris Kleiner, who has written extensively on the subject, states: “…economic studies have demonstrated far more cases where occupational licensing has reduced employment and increased prices and wages of licensed workers than where it has improved the quality and safety of services.”
Ironically, the presence of strict licensing regulations also seems to encourage consumers to seek lower-quality services—sometimes at great personal risk. When prices are high or labor is scarce, consumers take a DIY approach or forego services entirely. A 1981 study on the effects of occupational licensing found evidence for this in the form of a negative correlation between electricians per capita and accidental electrocutions.
A less morbid, but perhaps more salient, observation is that licensing often creates burdens that are unequally borne. Licensing requirements make it difficult for immigrants to work. In many states, anyone with a criminal conviction can be denied a license outright, regardless of the conviction’s relevance to the work they hope to do. These policies, coupled with the costs in money and time, can make it especially hard for poorer people to find work.
But surely, you might say, there must be some benefit to licensing. And technically, you’d be right.
Excessive licensing requirements are a huge boon to licensed workers. They restrict the supply of available labor in an occupation, limiting competition and in some cases raising wages. There’s little doubt that occupational licensing, often the result of industry lobbying, functions mainly as a form of protectionism. A 1975 Department of Labor study found a positive correlation between the rates of unemployment and failures on licensing exams.
Yet even licensed workers can’t escape the insanity unscathed. Because licenses don’t transfer from state to state, workers whose livelihoods depend on having one face limited mobility, which ultimately hurts their earning potential.
Though licensure reform is typically thought of as a libertarian fascination—the libertarian-leaning law firm Institute for Justice literally owns occupationallicensing.com—it also has the attention of more mainstream political thinkers. The Obama Administration released a report in 2015 outlining suggestions on how the states might ease the burden of occupational licensing, and in January of this year, Labor Secretary Alexander Acosta made a similar call for reform.
Thankfully, there seems to be some real momentum on this issue. According to the Institute for Justice, 15 states have reformed licensing laws to “make it easier for ex-offenders to work in state-licensed fields” since 2015. Louisiana and Nebraska both made some big changes this year as well. That’s a great start, but there’s still much work to be done.
Some sixty years into the “war on poverty,” government welfare programs remain the subject of much scrutiny. As the Trump administration unveils a new tax plan, fresh off numerous attempts to repeal and replace the Affordable Care Act, perennial questions about whether the government is doing enough to reduce poverty have resurfaced.
This debate often focuses almost exclusively on poor Americans, and solutions mostly center on the redistribution of resources via government transfers. On many levels, this makes sense. First, non-Americans don’t vote, and politicians tend not to pay much attention to groups that cannot help them win elections. Second, the government’s ability to act on poverty is somewhat limited — it can try to create policies that facilitate wealth, but it cannot actually produce wealth on its own. Spreading around some of the surplus is therefore an attractive option.
But from a utilitarian and humanitarian perspective, this debate represents a missed opportunity. Limiting the conversation to wealth transfers within an already wealthy nation encourages inefficient solutions at the expense of ideas that might do a lot more good for a lot more people: namely, freeing people who are working below their productive potential to pursue opportunity elsewhere.
Between the EITC, TANF, SNAP, SSI, Medicaid, and other programs, the United States spent over $700 billion at the federal level in the name of alleviating poverty in 2015. A 2014 census report estimates that Social Security payments alone reduced the number of poor Americans by nearly 27 million the previous year. Whatever your stance on the long-run effects of welfare programs, it’s safe to say that in the short term, government transfers provide substantial material benefits to recipients.
Yet if the virtue of welfare programs is their ability to improve living standards for the needy, their value pales in comparison to the potential held by labor relocation.
Political boundaries are funny things. By crossing them, workers moving from poor to rich nations can increase their productivity dramatically. That’s not necessarily because they can make more products or offer better services — although that is sometimes the case as well — but rather because what they produce is more economically valuable. This is what economists refer to as the “place premium,” and it’s partly created by differences in opportunity costs between consumers in each location.
Median wages of foreign-born US workers from 42 developing countries are estimated to be 4.1 times higher than those of observably identical workers in their countries of origin. Some enthusiasts even speculate that eliminating immigration restrictions alone could double global GDP. The place premium can be powerful enough to make low-skilled positions in rich countries economically attractive even to high-skilled workers from poor nations.
We have a lot of inequality in the United States, which often masks the fact that we have very little absolute poverty. Even someone who is poor by American standards (an annual pre-transfer income of about $12,000 or less for a single-person household) can have an income that exceeds that of the global median household. Even relatively generous government transfers would be unlikely to do much more than triple such an income.
On the other hand, because they start with lower incomes, this same effect allows low-earning immigrants to proportionally increase their standard of living in a way that can’t be matched by redistribution within a relatively wealthy population. For example, the average hourly wage in the US manufacturing sector is slightly over $20; in Mexico, it’s around $2.30. Assuming a manufacturer from Mexico could find a similar position in the United States, their income would increase nearly ninefold. To provide the same proportional benefit to a severely poor American — defined as a person or household with an income under half the poverty threshold — could cost up to $54,000.
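As a rough check on the comparison above, the implied wage multiplier can be computed directly. Both figures are the approximate values cited in the text, so the output is illustrative rather than precise:

```python
# Illustrative check of the cross-border wage comparison.
# Both wages are the approximate values cited in the text, not precise data.
us_mfg_wage = 20.50  # avg hourly US manufacturing wage ("slightly over $20")
mx_mfg_wage = 2.30   # avg hourly Mexican manufacturing wage ("around $2.30")

multiplier = us_mfg_wage / mx_mfg_wage    # how many times higher the US wage is
pct_increase = (multiplier - 1) * 100     # percent increase over the base wage

print(f"multiplier: {multiplier:.1f}x")   # ~8.9x
print(f"increase:   {pct_increase:.0f}%")
```

Depending on the exact wages assumed, the increase lands somewhere in the 800–900% range; the point is the order of magnitude, not the decimal.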
What’s true across national borders is true within them. Americans living in economically desolate locations could improve their prospects by relocating to more prosperous and opportune areas. Indeed, this is exactly what’s been happening for decades. The percentage of Americans living in cities has increased steadily, going from 45% in 1910 to nearly 81% by 2010. Nor is relocation exclusively a long-term solution. During oil rushes in Alaska and North Dakota, populations within the two states exploded as people flocked to economic activity.
Recently, however, rates of migration have been dwindling. Admittedly, there are fewer barriers to intra-national migration than immigration. But there are still things we might do to make it easier for people to move where the money is.
One obvious solution would be to encourage local governments to cut back on zoning regulations that restrict new housing construction and raise its cost. Zoning laws contribute heavily to the rising cost of living in the most expensive cities, leading to the displacement of poorer residents and the sequestration of opportunity. As with immigration, this poses a bit of a political problem — it requires politicians to prioritize the interests of the people who would live in a city over those of the people who currently live there — the ones who vote in local elections.
Relatedly, we might consider revising our approach to the mortgage interest deduction and other incentives for homeownership. While the conventional wisdom is that homeownership is almost always desirable because it allows the buyer to build equity in an appreciating asset, some studies have found a strong positive correlation between levels of homeownership and unemployment. The upshot is that tying up most of one’s money in a home reduces both the ability and the desire to move for employment, leading to unemployment and downward pressure on wages. Whether or not to buy a home is the buyer’s decision, but these data cast doubt on the idea that the government should subsidize such behavior.
If the goal of policy is to promote human well-being, then increasing mobility should be a priority for policymakers. As a species, as nations, as communities, and as individuals, we should strive for a more productive world. Allowing people to relocate in pursuit of greater output is a near-free lunch in this regard.
But while the economic dream of frictionless markets is a beautiful one, we live in a world complicated by politics. It’s unrealistic to expect politicians to set aside the concerns of their constituents for the greater good. I will therefore stop short of asking for open borders, the abolition of zoning laws, or the removal of the mortgage interest deduction. Instead, I offer the humbler suggestion that we exercise restraint in such measures, striving to remove and lessen barriers to mobility whenever possible. The result will be a freer, more equal, and wealthier world.
If for no other reason, universal basic income — that is, the idea of replacing the current means-tested welfare system with regular, unconditional cash payments to every citizen — is remarkable for the eclectic support it receives. The coalition for universal basic income (UBI) includes libertarians, progressives, a growing chorus of Luddites, and others still who believe a scarcity-free world is just around the corner. Based on its popularity and growing concerns about coming economic upheaval and inequality, it’s tempting to believe the centuries-old idea is a policy whose time has finally come.
Personally, I’m not sold. There are several obstacles to establishing a meaningful universal basic income that would, in my mind, be nearly impossible to overcome as things stand now.
For one, the numbers are pretty tough to reconcile.
According to 2017 federal guidelines, the poverty level for a single-person household is about $12,000 per year. Let’s assume we’re intent on paying each American $1,000 per month in order to bring them to that level of income.
Distributing that much money to all 320 million Americans would cost $3.84 trillion per year, approximately the entire 2015 federal budget and far greater than the $3.18 trillion of tax revenue the federal government collected that year. Even if we immediately eliminated all other entitlement payments, as libertarians tend to imagine, such a program would still require the federal government to increase its income by $1.3 trillion to avoid adding to the debt.
Speaking of eliminating those entitlement programs, hopes of doing so are probably far-fetched without a massive increase in taxation. A $1,000 monthly payment to every American — which again, would consume the entire federal budget — would require a lot of people currently benefiting from government transfers to take a painful cut. For example, the average monthly social security check is a little over $1,300. Are we really going to create a program that cuts benefits for the poor and spends a lot of money on the middle class and affluent?
In spite of the overwhelming total cost of such a program, its per capita impact would be pretty small, since the cash would be spread across a far larger population than current entitlements serve. For this reason, its merit as an anti-poverty program would be questionable at best.
Yes, you can fiddle with the disbursement amounts and exclude segments of the population — dropping minors from the dole would reduce the cost to around $2.96 trillion — to make the numbers work a little better, but the more you do that the less universal and basic it becomes, and the more it starts to look like a modest supplement to our existing welfare programs.
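The headline figures in this section come from simple multiplication, which can be sketched as follows. The 247 million adult count is an assumption backed out of the $2.96 trillion figure above; actual 2015 population estimates differ slightly:

```python
# Back-of-the-envelope cost of a flat $1,000/month payment to every recipient.
# Population counts are round numbers; 247M adults is inferred from the text.
def annual_cost(population: int, monthly_payment: int = 1_000) -> int:
    """Total annual outlay for paying `monthly_payment` to everyone counted."""
    return population * monthly_payment * 12

everyone = annual_cost(320_000_000)   # all Americans
adults = annual_cost(247_000_000)     # minors excluded (assumed count)

print(f"all citizens: ${everyone / 1e12:.2f} trillion")  # $3.84 trillion
print(f"adults only:  ${adults / 1e12:.2f} trillion")    # $2.96 trillion
```

Adjusting `monthly_payment` or the eligible population shows how quickly the totals move — which is exactly the trade-off between affordability and universality described above.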
Universal basic income’s problems go beyond the budget. If a UBI were somehow passed (which would likely require our notoriously tax-averse nation to OK trillions of additional dollars of government spending), it would set us up for a slew of contentious policy battles in the future.
Entitlement reform, already a major preoccupation for many, would become a more pressing concern if a UBI of any significant size were implemented. Mandatory spending would swell as more people drew benefits and lived ever longer. Like the entitlements it may or may not replace, universal basic income would probably be extremely difficult to reform once in place.
Then there’s the matter of immigration. If you think reaching consensus on immigration policy is difficult in the age of President Trump, imagine how it would look once we began offering each American a guaranteed income large enough to offer them an alternative to paid work. Bloomberg columnist Megan McArdle estimates that establishing such a program would require the United States to “shut down immigration, or at least immigration from lower-skilled countries,” thereby increasing global poverty.
There’s also the social aspect to consider. I don’t want to get into it too much because everybody’s view of what makes people tick is different. But it seems to me that collecting money from the government doesn’t make people especially happy or fulfilled.
The point is, part of what makes universal basic income appear realistic is the political coalition backing it. But libertarians, progressives, and the rest of the groups superficially united behind this idea have very different opinions about how it would operate and very different motivations for its implementation. When you press the issue and really think through the consequences, the united front for universal basic income begins to crack.
Don’t get me wrong; there’s plenty about universal basic income that appeals to this author’s libertarian sensibilities. I think there’s a strong argument for reforming the welfare system in a way that renders it more similar to a basic income scheme, namely replacing in-kind payments and some subsidies with direct cash transfers. Doing so would, as advocates of UBI claim, promote the utility of the money transferred and reduce government paternalism, both goals which I find laudable.
I should also note that not all UBI programs are created equal. Universal basic income has become something of a catch-all term for policies that are quite different from each other. The negative income tax plan Sam Bowman describes on the Adam Smith Institute’s website is much more realistic and well-thought-out than a system that gives a flat amount to each citizen. Its two greatest strengths are precisely that it is neither unconditional nor paid equally to everyone.
However, the issues of cost and dispersion, both consequences of UBI’s defining characteristics, seem to me insurmountable. Unless the United States becomes dramatically wealthier, I don’t see us being able to afford to pay any significant amount of money to all or most people. We would need to replace a huge amount of human labor with automation before this plan can start to look even a little realistic. Even if that does happen, and I’m not sure that it will anytime soon, I think there are better things we could do with the money.
The opioid epidemic, having evolved into one of the greatest public health crises of our time, is a contentious aspect of the ongoing debate surrounding the Republican healthcare proposal.
Voices on the left worry that the repeal of Obamacare’s Medicaid expansion and essential health benefits would worsen the crisis by cutting off access to treatment for addiction and substance abuse. Mother Jones and Vox have both covered the issue. Former President Obama’s June 22nd Facebook post stated his hope that Senators looking to replace the ACA would ask themselves, “What will happen to the Americans grappling with opioid addiction who suddenly lose their coverage?”
On the other side of things, there are theories that the Affordable Care Act actually helped finance–and perhaps exacerbated–the opioid epidemic. It goes something like this: Expanded insurance coverage was taken as a primary goal of the ACA. Two of its policies that supported that goal–allowing adults up to 26 years of age to remain on their parents’ insurance and expanding Medicaid to cover a greater percentage of the population–unintentionally connected at-risk cohorts (chronically unemployed prime-age men and young, previously uninsured whites with an appetite for drugs, to name two) with the means to obtain highly addictive and liberally-prescribed pain medications at a fraction of their street price. (Some knowledge about labor force and other social trends helps paint a clearer picture on this.) Once addicted, many moved on to cheaper and more dangerous alternatives, like heroin or synthetic opioids, thus driving the growth in overdose deaths.
This is a really interesting, if tragic, narrative, so I decided to take a look. I focused on state-level analysis by comparing CDC Wonder data on drug-induced deaths with Kaiser Family Foundation data on expansion status and growth in Medicaid enrollment. The graph below plots states based on their rates of growth in Medicaid rolls and overdose deaths from 2010 to 2015, and is color-coded for expansion status: blue for states that expanded coverage before or by 2015, yellow for states that expanded during 2015, and red for states that hadn’t expanded by the end of 2015. (A note: this isn’t an original idea; for a more in-depth analysis, check out this post.)
What’s interesting is the places where overdoses have increased the quickest. The fastest growing rates of overdose deaths were mostly in states that had expanded Medicaid by 2015; the only non-expansion state to grow by more than 50% since 2010 was Virginia, which in 2015 still had a relatively low rate of 12.7 fatal overdoses per 100,000 population. For some perspective on how bad things have gotten, that rate would have been the 19th highest among states in 2005; today Virginia ranks 42nd in terms of OD rate.
On the other hand, there isn’t a noticeable correlation between increases in Medicaid coverage and increases in the rate of fatal overdoses. Additionally, the rates of overdose deaths in expansion states were increasing before many of the Affordable Care Act’s key provisions went into effect. Starting around 2010, there was a dramatic divergence between would-be expansion states and the rest. It’s possible that states with accelerating rates were more likely to expand coverage in response to increased frequency of fatal overdoses.
So what’s the deal? Did the Affordable Care Act agitate the opioid epidemic? Obviously I don’t have the answer to that, but here’s my take:
I think it would be difficult to argue it hasn’t been a factor on some level, given the far higher rates of opioid prescription, use, and death among Medicaid patients than in the general population, as well as the state-level trends in OD rates (with the acknowledgement that state-level analysis is pretty clunky in this regard; for many reasons West Virginia’s population isn’t really comparable to California’s). The fact that state Medicaid programs are adjusting regulations for painkiller prescriptions strikes me as an acknowledgement of that.
But if the ACA had a negative effect, I’d think it must register as a drop in the bucket. There are so many pieces to this story: lax prescription practices and the rise of “pill mills,” declining labor force participation, sophisticated distribution networks of Mexican heroin, bogus research and marketing on pain management, stark disparities between expectations and reality. It’s nice to think there’s one thing we can change to solve everything, but I don’t think we’re going to be so lucky.
Here’s another twist: Even if the Affordable Care Act had some negative impact, repealing it could very well make things worse. Scrapping essential benefits coverage could cost millions of Americans their access to addiction treatment. Moreover, gaining insurance has been shown to alleviate feelings of depression and anxiety. How, then, might we guess 20 million Americans will feel after losing their insurance? Given the feedback loop between pain and depression, this question deserves serious deliberation.