Imagine: You’re one of the 6.1 million unemployed Americans. Try as you might, you can’t find a job. But you’ve always been great at something—cutting hair, giving manicures, or maybe hanging drywall—so great, in fact, that you reckon you could actually make some real money doing it. What’s the first thing you do?
If your answer was something other than, “Find out how to obtain the state’s permission,” you’re in for a surprise.
A shocking number of occupations require workers to seek permission from the government before they can legally practice. This includes not just the obvious, like doctors and lawyers, whose services, if rendered inadequately, might do consumers life-threatening harm, but also barbers, auctioneers, locksmiths, and interior designers.
This phenomenon is known as occupational licensing. State governments set up barriers to entry for certain occupations, ostensibly to the benefit and protection of consumers. They range from the onerous—years of education and thousands of dollars in fees—to trivialities like registering in a government database. At their most extreme, such regulations make work without a permit illegal.
As the United States transitioned from a manufacturing to a service-based economy, occupational licensing filled the “rules void” left by the ebb of labor unions. In the past six decades, the share of jobs requiring some form of license has soared, going from five percent in the 1950s to around 30 percent today. Put another way: over a quarter of today’s workforce requires government permission to earn a living.
There’s little proof that licensing does what it’s supposed to. For one, the potential impact on public safety seems wholly incidental to the burden of compliance for a given job. In most states, it takes 12 times as long to become a licensed barber as an EMT. In a 2015 Brookings Institution paper, University of Minnesota Professor Morris Kleiner, who has written extensively on the subject, states: “…economic studies have demonstrated far more cases where occupational licensing has reduced employment and increased prices and wages of licensed workers than where it has improved the quality and safety of services.”
Ironically, the presence of strict licensing regulations also seems to encourage consumers to seek lower-quality services—sometimes at great personal risk. When prices are high or labor is scarce, consumers take a DIY approach or forego services entirely. A 1981 study on the effects of occupational licensing found evidence for this in the form of a negative correlation between electricians per capita and accidental electrocutions.
A less morbid, but perhaps more salient, observation is that licensing often creates burdens that are unequally borne. Licensing requirements make it difficult for immigrants to work. In many states, anyone with a criminal conviction can be denied a license outright, regardless of the conviction’s relevance to their aspirations. These policies, coupled with the potential costs in money and time, can make it harder for poorer people, in particular, to find work.
But surely, you might say, there must be some benefit to licensing. And technically, you’d be right.
Excessive licensing requirements are a huge boon to licensed workers. They restrict the supply of available labor in an occupation, limiting competition and in some cases raising wages. There’s little doubt that occupational licensing, often the result of industry lobbying, functions mainly as a form of protectionism. A 1975 Department of Labor study found a positive correlation between the rates of unemployment and failures on licensing exams.
Yet even licensed workers can’t escape the insanity unscathed. Because licenses don’t transfer from state to state, workers whose livelihoods depend on having a license face limited mobility, which ultimately hurts their earning potential.
Though licensure reform is typically thought of as a libertarian fascination—the libertarian-leaning law firm Institute for Justice literally owns occupationallicensing.com—it also has the attention of more mainstream political thinkers. The Obama Administration released a report in 2015 outlining suggestions on how the states might ease the burden of occupational licensing, and in January of this year, Labor Secretary Alexander Acosta made a similar call for reform.
Thankfully, there seems to be some real momentum on this issue. According to the Institute for Justice, 15 states have reformed licensing laws to “make it easier for ex-offenders to work in state-licensed fields” since 2015. Louisiana and Nebraska both made some big changes this year as well. That’s a great start, but there’s still much work to be done.
Some sixty years into the “war on poverty,” government welfare programs remain the subject of much scrutiny. As the Trump administration unveils a new tax plan, fresh off numerous attempts to repeal and replace the Affordable Care Act, perennial questions about whether the government is doing enough to reduce poverty have resurfaced.
This debate often focuses almost exclusively on poor Americans, and solutions mostly center around the redistribution of resources via government transfers. On many levels, this makes sense; on the first count, non-Americans don’t vote, and politicians tend not to pay much attention to groups that cannot help them win elections. Secondly, the government’s ability to act on poverty is somewhat limited — it can try to create policies that facilitate wealth, but it cannot actually produce wealth on its own. Spreading around some of the surplus is therefore an attractive option.
But from a utilitarian and humanitarian perspective, this debate represents a missed opportunity. Limiting the conversation to wealth transfers within an already wealthy nation encourages inefficient solutions at the expense of ideas that might do a lot more good for a lot more people: namely, freeing people who are not at their maximum productivity to pursue opportunity elsewhere.
Between the EITC, TANF, SNAP, SSI, Medicaid, and other programs, the United States spent over $700 billion at the federal level in the name of alleviating poverty in 2015. A 2014 census report estimates that Social Security payments alone reduced the number of poor Americans by nearly 27 million the previous year. Whatever your stance on the long-run effects of welfare programs, it’s safe to say that in the short term, government transfers provide substantial material benefits to recipients.
Yet if the virtue of welfare programs is their ability to improve living standards for the needy, their value pales in comparison to the potential held by labor relocation.
Political boundaries are funny things. By crossing them, workers moving from poor to rich nations can increase their productivity dramatically. That’s not necessarily because they can make more products or offer better services — although that is sometimes the case as well — but rather because what they produce is more economically valuable. This is what economists refer to as the “place premium,” and it’s partly created by differences in opportunity costs between consumers in each location.
One study finds that median wages of foreign-born US workers from 42 developing countries are 4.1 times higher than those of their observably identical counterparts in their countries of origin. Some enthusiasts even speculate that the elimination of immigration restrictions alone could double global GDP. The place premium effect can be powerful enough to make low-skilled positions in rich countries economically preferable for high-skilled immigrants from poor nations.
We have a lot of inequality in the United States, and that often masks the fact that we have very little absolute poverty. Even someone who is poor by American standards (an annual pre-transfer income of about $12,000 or less for a single-person household) can have an income that exceeds that of the global median household. Even relatively generous government transfers would be unlikely to more than triple that income.
On the other hand, because they start with lower incomes, this same effect allows low-earning immigrants to increase their standard of living proportionally in a way that can’t be matched by redistribution within a relatively wealthy population. For example, the average hourly wage in the US manufacturing sector is slightly over $20; in Mexico, it’s around $2.30. Assuming a manufacturer from Mexico could find a similar position in the United States, their income would increase roughly ninefold. To provide the same proportional benefit to a severely poor American (defined as a person or household with an income under half the poverty threshold) could cost up to $54,000.
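The arithmetic behind that comparison fits in a few lines. The inputs below are the approximate round numbers cited above, not precise data:

```python
# Back-of-the-envelope version of the place-premium comparison.
# All inputs are the rounded figures cited in the text.

US_WAGE = 20.00        # avg hourly US manufacturing wage (approx.)
MX_WAGE = 2.30         # avg hourly Mexican manufacturing wage (approx.)
POVERTY_LINE = 12_000  # single-person household threshold (approx.)

ratio = US_WAGE / MX_WAGE          # ~8.7x wage multiple from moving
severely_poor = POVERTY_LINE / 2   # "severely poor" income cutoff

# Transfer needed to give a severely poor American the same multiple:
equivalent_transfer = severely_poor * (ratio - 1)

print(f"wage multiple: {ratio:.1f}x")
print(f"equivalent transfer: ${equivalent_transfer:,.0f} per year")
```

On these rounded inputs the transfer comes out near $46,000 per year, in the same ballpark as (and below) the “up to $54,000” ceiling; the exact figure depends on which wage estimates you plug in.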
What’s true across national borders is true within them. Americans living in economically desolate locations could improve their prospects by relocating to more prosperous and opportune areas. Indeed, this is exactly what’s been happening for decades. The percentage of Americans living in cities has increased steadily, going from 45% in 1910 to nearly 81% by 2010. Nor is relocation exclusively a long-term solution. During oil rushes in Alaska and North Dakota, populations within the two states exploded as people flocked to economic activity.
Recently, however, rates of migration have been dwindling. Admittedly, there are fewer barriers to intra-national migration than immigration. But there are still things we might do to make it easier for people to move where the money is.
One obvious solution would be to encourage local governments to cut back on zoning regulations that make building new housing more difficult and expensive. Zoning laws contribute heavily to the rising costs of living in the most expensive cities, leading to the displacement of poorer residents and the sequestration of opportunity. As with immigration, this poses a bit of a political problem — it requires politicians to prioritize the interests of the people who would live in a city over those of the people who currently live there — the ones who vote in local elections.
Relatedly, we might consider revising our approach to the mortgage interest deduction and other incentives for homeownership. While the conventional wisdom is that homeownership is almost always desirable because it allows the buyer to build equity in an appreciating asset, some studies have found a strong positive correlation between levels of homeownership and unemployment. The upshot is that tying up most of one’s money in a home reduces the ability and desire to move for employment, leading to unemployment and downward pressure on wages. Whether or not to buy a home is the buyer’s decision, but these data cast doubt on the idea that the government should subsidize such behavior.
If the goal of policy is to promote human well-being, then increasing mobility should be a priority for policymakers. As a species, as nations, as communities, and as individuals, we should strive for a more productive world. Allowing people the opportunity to relocate in the name of increasing their output is a near-free lunch in this regard.
But while the economic dream of frictionless markets is a beautiful one, we live in a world complicated by politics. It’s unrealistic to expect politicians to set aside the concerns of their constituents for the greater good. I will therefore stop short of asking for open borders, the abolition of zoning laws, or the removal of the mortgage interest deduction. Instead, I offer the humbler suggestion that we exercise restraint in such measures, striving to remove and lessen barriers to mobility whenever possible. The result will be a freer, more equal, and wealthier world.
If for no other reason, universal basic income — that is, the idea of replacing the current means-tested welfare system with regular, unconditional cash payments to every citizen — is remarkable for the eclectic support it receives. The coalition for universal basic income (UBI) includes libertarians, progressives, a growing chorus of Luddites, and others still who believe a scarcity-free world is just around the corner. Based on its popularity and the growing concerns of coming economic upheaval and inequality, it’s tempting to believe the centuries-old idea is a policy whose time has finally come.
Personally, I’m not sold. There are several obstacles to establishing a meaningful universal basic income that would, in my mind, be nearly impossible to overcome as things stand now.
For one, the numbers are pretty tough to reconcile.
According to 2017 federal guidelines, the poverty level for a single-person household is about $12,000 per year. Let’s assume we’re intent on paying each American $1,000 per month in order to bring them to that level of income.
Distributing that much money to all 320 million Americans would cost $3.84 trillion, approximately the entire 2015 federal budget and far greater than the $3.18 trillion of tax revenue the federal government collected in the same year. Even if we immediately eliminated all other entitlement payments, as libertarians tend to imagine, such a program would still require the federal government to increase its income by $1.3 trillion to avoid adding to the debt.
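For anyone who wants to check the arithmetic, here it is in a few lines, using the same round figures (the adults-only headcount of roughly 247 million is my own approximation):

```python
# The UBI budget arithmetic in one place. All inputs are round figures,
# not precise estimates.

POPULATION = 320_000_000    # total US population (approx.)
ADULTS = 247_000_000        # excluding minors (my approximation)
MONTHLY_PAYMENT = 1_000
FED_REVENUE_2015 = 3.18e12  # federal tax revenue, 2015 (approx.)

def annual_cost(recipients: int, monthly: float) -> float:
    """Total yearly outlay of an unconditional monthly payment."""
    return recipients * monthly * 12

universal = annual_cost(POPULATION, MONTHLY_PAYMENT)  # $3.84 trillion
adults_only = annual_cost(ADULTS, MONTHLY_PAYMENT)    # ~$2.96 trillion

# Even before counting any non-entitlement spending, the universal
# version exceeds total tax revenue:
gap_vs_revenue = universal - FED_REVENUE_2015         # ~$660 billion

print(f"universal: ${universal / 1e12:.2f}T")
print(f"adults only: ${adults_only / 1e12:.2f}T")
```

Note that the $660 billion here is just the gap versus tax revenue; the $1.3 trillion figure in the text also accounts for the rest of the federal budget that would still need funding.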
Speaking of eliminating those entitlement programs, hopes of doing so are probably far-fetched without a massive increase in taxation. A $1,000 monthly payment to every American — which again, would consume the entire federal budget — would require a lot of people currently benefiting from government transfers to take a painful cut. For example, the average monthly social security check is a little over $1,300. Are we really going to create a program that cuts benefits for the poor and spends a lot of money on the middle class and affluent?
In spite of the overwhelming total cost of such a program, its per capita impact would be pretty small, since all the cash would be disbursed over a much greater population than current entitlements. For this reason, its merit as an anti-poverty program would be questionable at best.
Yes, you can fiddle with the disbursement amounts and exclude segments of the population — dropping minors from the dole would reduce the cost to around $2.96 trillion — to make the numbers work a little better, but the more you do that the less universal and basic it becomes, and the more it starts to look like a modest supplement to our existing welfare programs.
Universal basic income’s problems go beyond the budget. If a UBI were somehow passed (which would likely require our notoriously tax-averse nation to OK trillions of additional dollars of government spending), it would set us up for a slew of contentious policy battles in the future.
Entitlement reform, already a major preoccupation for many, would become a more pressing concern if a UBI of any significant size were implemented. Mandatory spending would grow as people live longer and draw benefits for more years. Like the entitlements it may or may not replace, universal basic income would probably be extremely difficult to reform in the future.
Then there’s the matter of immigration. If you think reaching consensus on immigration policy is difficult in the age of President Trump, imagine how it would look once we began offering each American a guaranteed income large enough to offer them an alternative to paid work. Bloomberg columnist Megan McArdle estimates that establishing such a program would require the United States to “shut down immigration, or at least immigration from lower-skilled countries,” thereby leading to an increase in global poverty.
There’s also the social aspect to consider. I don’t want to get into it too much because everybody’s view of what makes people tick is different. But it seems to me that collecting money from the government doesn’t make people especially happy or fulfilled.
The point is, part of what makes universal basic income appear realistic is the political coalition backing it. But libertarians, progressives, and the rest of the groups superficially united behind this idea have very different opinions about how it would operate and very different motivations for its implementation. When you press the issue and really think through the consequences, the united front for universal basic income begins to crack.
Don’t get me wrong; there’s plenty about universal basic income that appeals to this author’s libertarian sensibilities. I think there’s a strong argument for reforming the welfare system in a way that renders it more similar to a basic income scheme, namely replacing in-kind payments and some subsidies with direct cash transfers. Doing so would, as advocates of UBI claim, promote the utility of the money transferred and reduce government paternalism, both goals which I find laudable.
I should also note that not all UBI programs are created equal. Universal basic income has become something of a catch-all term used to describe policies that are quite different from each other. The negative income tax plan Sam Bowman describes on the Adam Smith Institute’s website is much more realistic and better thought out than a system that gives a flat amount to each citizen. Its two greatest strengths are that it is neither unconditional nor given equally.
However, the issues of cost and dispersion, both consequences of UBI’s defining characteristics, seem to me insurmountable. Unless the United States becomes dramatically wealthier, I don’t see us being able to afford to pay any significant amount of money to all or most people. We would need to replace a huge amount of human labor with automation before this plan can start to look even a little realistic. Even if that does happen, and I’m not sure that it will anytime soon, I think there are better things we could do with the money.
It seems like more often than not I’m opening these blog posts with an apology for a multi-week hiatus. Since nobody’s emailed to check on my well-being, I can only infer my readership has gotten on fine without my wonk-Jr. takes on public policy and other matters of high import. Fair enough; but don’t think your demonstrated lack of interest will spare you from a quick update.
Actually, it’s all good news: I’ve been having fun learning R (a statistical language), looking for a new apartment, and testing the limits of a 27-year-old liver. I saw Chance the Rapper and a Pirates game in Pittsburgh, which was awesome. The last article I wrote had some real success and was republished in several places, even earning a shout-out from John Stossel.
The big update is that my stint as a (purely) freelance writer has mercifully drawn to a close; I now write for a non-partisan public policy group. In fact, this very blog was one of my strongest selling points, according to my manager. It just goes to show you, kids: if you toil in anonymity for two years, eventually something will go your way.
Okay, enough about me. Let’s talk about a topic close to the heart of many millennials: student loans. More specifically, I want to talk about the interest rates charged on undergraduate student loans.
That interest rates are too high is, unsurprisingly, a common gripe among borrowers. If I had a nickel for every twenty-something I’ve overheard complain that the federal government shouldn’t profit off student loans…well, it still wouldn’t cover one month’s interest. However, this sentiment isn’t limited to overqualified baristas; popular politicians like Elizabeth Warren and Bernie Sanders–and even unpopular politicians–have publicly called for loans to be refinanced at lower rates and decried the “profiteering” of the federal government. From Bernie Sanders’ website:
Over the next decade, it has been estimated that the federal government will make a profit of over $110 billion on student loan programs. This is morally wrong and it is bad economics. Sen. Sanders will fight to prevent the federal government from profiteering on the backs of college students and use this money instead to significantly lower student loan interest rates.
Under the Sanders plan, the formula for setting student loan interest rates would go back to where it was in 2006. If this plan were in effect today, interest rates on undergraduate loans would drop from 4.29% to just 2.37%.
It makes no sense that you can get an auto loan today with an interest rate of 2.5%, but millions of college graduates are forced to pay interest rates of 5-7% or more for decades. Under the Sanders plan, Americans would be able to refinance their student loans at today’s low interest rates.
As one of those debt-saddled graduates, and one of the chumps who took loans at a higher rate of interest, I would obviously be amenable to handing over less of my hard-earned money to the federal government. But as a person concerned with the larger picture, I have to say this is a really bad idea. In fact, rates should be higher, not lower.
First of all, the progressive case for loan refinancing or forgiveness only holds up under the lowest level of scrutiny. Such a policy would overwhelmingly benefit borrowers from wealthy families, who hold the majority of student loan debt. Conversely, most defaulters hold relatively small amounts of debt. Fiddling with interest rates shouldn’t be confused with programs that target low-income students, like the Pell Grant, which are another matter entirely and not the subject of my criticism.
More to the point, the federal government probably isn’t making any money on student loans. Contrary to the claims of Senators Warren and Sanders, which rely on estimates from the Government Accountability Office (GAO) and put federal profit on student loans at $135 billion from 2015-2024, the Congressional Budget Office (CBO), using fair-value estimation, shows student loans costing the federal government $88 billion over the same period.
The discrepancy between the CBO and GAO figures comes from the former’s inclusion of macroeconomic forecasts. Essentially, the CBO thinks the risk of default on student loans is higher than the GAO does, due to forces beyond individuals’ control.
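A toy example makes the accounting difference concrete. The loan terms and discount rates below are hypothetical, chosen only to mirror the GAO-vs-CBO disagreement in miniature: the same cash flows look profitable when discounted near the Treasury’s borrowing rate and unprofitable at a higher, risk-adjusted rate.

```python
# Toy illustration (not either agency's actual model) of how the choice
# of discount rate flips a loan program from "profit" to "loss".
# All loan terms and rates here are hypothetical.

def annuity_payment(principal: float, rate: float, years: int) -> float:
    """Fixed annual payment that fully amortizes the loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

def present_value(payment: float, rate: float, years: int) -> float:
    """Value today of `years` equal annual payments, discounted at `rate`."""
    return payment * (1 - (1 + rate) ** -years) / rate

principal = 10_000
payment = annuity_payment(principal, 0.0429, 10)  # a 4.29%, 10-year loan

# Discounted near Treasury borrowing costs (GAO-style view): a gain.
pv_treasury = present_value(payment, 0.02, 10)
# Discounted at a higher, risk-adjusted rate (fair-value view): a loss.
pv_fair = present_value(payment, 0.06, 10)

print(f"Treasury-rate view: {pv_treasury - principal:+,.0f}")
print(f"Fair-value view:    {pv_fair - principal:+,.0f}")
```

The cash flows never change; only the rate at which future repayments are discounted does, and that single assumption determines whether the program scores as a moneymaker or a subsidy.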
Evidence suggests it’s unwise to underestimate the risk associated with student loans. According to a study by the liberal think tank Demos, nearly 40% of federal student loan borrowers are in default or more than 90 days delinquent. Add to that the fact that student loans are unsecured (not backed by collateral or repossessable assets, like a car or house), and they start to look like an incredibly risky venture for the federal government, and ultimately, taxpayers.
That conclusion is deeply unpleasant, but not really surprising if you think about it. Ever notice how the interest rates on private student loans–approximately 10% of the market–are much higher? That’s not because private lenders are greedy; it’s because they can’t lend at the rate of the federal government without losing money.
This is all important because the money that finances student loans has to come from somewhere. Be it infrastructure upgrades, federal support for primary education, or Shrimp Fight Club, the money spent on student loans isn’t available for competing priorities. This is even more important when you consider the loss the federal government is taking on these loans, the cost of which is passed on to future taxpayers in the form of higher taxes or lower spending. Since higher education is only one among infinite human desires, we need to decide how much of our finite resources to devote to it. Properly set interest rates are one way (probably the best way) to figure that out.
The irony, of course, is that doing so would require the government to act more like a private lender–the very thing it’s designed not to do! Our student loan system ensures virtually anyone who wants to study has the money to do so, regardless of the likelihood they’ll be able to repay. One of the nasty side effects of this indiscriminate lending is a large number of distressed borrowers, who now find themselves in the uncomfortable position of digging out from under a mountain of debt they likely shouldn’t have been able to take on.
More so than other forms of government spending, student loans have specific, discernible beneficiaries: the students who get an expensive education financed by the taxpayer at below-market rates. Sure, you can argue there’s some spillover; society does benefit from having more highly-trained workers. But most of the time, highly skilled labor is rewarded with higher wages. That being the case, is it really too much to ask for borrowers to pay a level of interest that reflects the actual cost of issuing their loans?
Yes, this would be discouraging for some: particularly those who want to pursue non-remunerative fields of study. That’s not such a bad thing; higher interest rates would steer people away from obtaining degrees with low salary expectations, which would–by my reckoning–reduce rates of delinquency and default over the long term. They would also help mitigate some of the pain of defaults when they do happen.
But–you might protest–you can’t run the government like a business! And sure, a lot of the time, you’d be right. However, I really think this is one area where doing so is appropriate–even desirable. Hear me out.
When the government can fund itself through profitable investments rather than zero-sum transfers, it should. If we’re going to have a government of any size (and few suggest that we shouldn’t), then we need to pay for it. Which sounds like the preferable way for that to happen: voluntary, productive, and mutually beneficial investments in society; or the forceful appropriation of private resources? I’m not suggesting the former could entirely replace the latter, but when it can, I think it absolutely should.
Astute readers will realize that if the government decides to lend profitably, it will have to compete with private lenders, which would cut into its margins and make its presence in the market redundant. So maybe it’s just a pipe dream. But if profitable lending isn’t possible, the federal government should at least try to minimize losses. One way or another, that means higher interest rates.
First of all, a happy belated New Year to all 16 people who read this blog. Sorry for the lapse in posts; I’ve been busy basking in the relative success of my last article, looking for a new job, and freezing my ass off in the midst of this cruel phenomenon called New England winter.
Whilst taking shelter from the subfreezing temperatures–emerging only to go on job interviews and buy scotch–I’ve done my best to keep up with the world beyond the attic in which I reside. I know, for example, that the Federal Reserve raised rates for the first time in 2016, signalling that inflation may be returning and the US economy might finally be moving towards normalcy.
How you feel about inflation depends on where and when you live. In the developed world, runaway inflation, once a real problem, has largely been tamed by central banks, which use monetary policy to constrain the money supply when inflation starts to get out of hand. In fact, since the financial crisis inflation has been so weak that some central banks have pursued negative interest rates in the hope of staving off deflation.
In other parts of the world, hyperinflation persists. Venezuela is slated to experience 1,600% inflation in 2017, per the IMF.
The combination of cold weather and thoughts of inflation brought to mind some famous pictures of residents of the Weimar Republic using increasingly useless money for purposes other than exchange.
A woman using money for fuel because it’s cheaper than wood!
Money as wallpaper…pretty hip.
Anyway, that got me wondering: what kind of inflation would we need to see to make dollar bills an efficient heating source? To find out, I compared dollar bills to other heat sources on a cost-per-megajoule basis. While the nominal cost of other heat sources would increase with inflation, the cost of dollar bills would remain constant, because a bill’s heating value is independent of its monetary value.
Difficulties, Assumptions, and Deficiencies
In order to turn this into an answerable question I had to build some assumptions and omissions into my calculations.
First of all, all prices listed reflect only supply costs associated with different fuel types. Energy costs are typically broken down into charges for supply (rate times quantity) and delivery, which, depending on the energy source in question, is either a fixed cost, a function of supply, or a function of “user profile.” For example, delivery charges for electricity are often higher during hours of “peak demand.”
Second, all calculations are based on prices from my area. I also didn’t go crazy trying to find the best deals on things like pine firewood, but instead relied on the most available source (in the case of pine wood, the Stop and Shop down the street). Different prices would obviously mean different equilibrium points.
The third assumption, that the price of these energy sources would be influenced by nothing other than inflation, is highly unrealistic. You can bet that if the price of natural gas doubles for 10 years in a row, people will start coming up with new heating sources or moving south for the winter. But again, for simplicity’s sake we’ll assume nothing will change in response to increasing prices.
One last assumption: that no one would be foolish enough to burn dollar bills in any denomination other than $1.
Far and away the most challenging part of this was doing the data conversions; as you can imagine, kerosene isn’t commonly purchased by the kilogram. I haven’t taken a math or science class in about 4 years. Add to that my stubborn insistence that I do my own calculations by hand and you’ve got a recipe for a very humbled author.
Nevertheless, I persisted. The second column in the table above shows lower heating values–the net amount of energy released by burning–for various fuel sources, measured in megajoules per kilogram (A friend explained this concept to me–thank you, Zane). The third lists the cost per kilogram, and the fourth displays the cost per megajoule of energy.
With the exception of dollar bills, finding heating values for these different substances was easy. I found a couple reliable websites (like this one and this one) that listed heating values. Wikipedia also lists heating values for common fuels. There were a few discrepancies when I looked to confirm values, but nothing too major.
Dollars are made of a blend of cotton and linen fibers (about 75% and 25%, respectively), so I wanted to take a weighted average of the two heating values. Unfortunately, even that information proved too difficult to find. I thought it would be easy to find similar pieces online, but I turned out to be wrong about that. I only found one article that was trying to get at the same thing, and in the end I just decided to go with the value it suggested: about 4 BTU per bill, or 4.22 MJ/kg.
If you want to cut to the moral of the story, here it is: Don’t burn dollars for heat. It’s a terrible, terrible idea.
Barring some kind of extreme and sustained change to US monetary policy, it will probably never make sense to burn dollar bills for heat. Here’s a chart showing how many years of 100% annual inflation (prices doubling every year) it would take for dollars to become an efficient fuel source:
The price of natural gas would have to double every year for 16 years before you could produce more heat by burning singles than buying natural gas. However, as Venezuelans know, prices can increase faster than that. Here’s the same graph with the Venezuelan inflation rate of 1600%:
So maybe we’ll see some Venezuelans burning currency in the near future, but hopefully not–a lot of human misery comes along with that kind of inflation. A regime change and some monetary restraint would be a much better outcome.
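For the curious, the break-even math behind those charts is straightforward. The fuel price below is illustrative (your local cost per megajoule will differ), but the bill’s numbers follow from the 4.22 MJ/kg value above and the fact that a US bill weighs roughly one gram:

```python
import math

# Break-even sketch: years of constant inflation until a fuel's nominal
# cost per MJ exceeds the (fixed) cost per MJ of burning $1 bills.

BILL_MASS_KG = 0.001                       # a US bill weighs ~1 gram
BILL_HEATING_MJ = 4.22 * BILL_MASS_KG      # 4.22 MJ/kg of bill
DOLLAR_COST_PER_MJ = 1 / BILL_HEATING_MJ   # ~$237/MJ, inflation-proof

def years_to_break_even(fuel_cost_per_mj: float, annual_inflation: float) -> int:
    """Smallest whole number of years of constant inflation after which
    the fuel costs more per megajoule than burning $1 bills."""
    ratio = DOLLAR_COST_PER_MJ / fuel_cost_per_mj
    return math.ceil(math.log(ratio) / math.log(1 + annual_inflation))

# With a hypothetical fuel at $0.004/MJ (natural-gas territory):
print(years_to_break_even(0.004, 1.0))   # prices doubling every year
print(years_to_break_even(0.004, 16.0))  # Venezuela-style 1,600% inflation
```

At those illustrative inputs the answers come out to 16 years of doubling prices, but only about 4 years at Venezuelan rates, which matches the shape of the charts above.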
If anyone out there has some other heating source they’d like to see added to the graphs, leave a comment below and I’ll gladly include it.
Last week, Gallup (in cooperation with the US Council of Competitiveness) released an incredibly detailed thirty-year study on the decline of American productivity. While the report is certainly worth reading (you can download the full version here), it’s about 120 pages long–who has the time or patience?
Luckily for you, and for reasons to be touched on herein, I do. It seems unnecessary to say, but nearly everything that follows is extracted from Gallup’s report. Without further ado, here’s a quick summary of No Recovery: An Analysis of Long-Term US Productivity Decline.
First, a note: Quality-to-cost ratio
Gallup helpfully identifies the engine of economic growth as an increase in the ratio of quality to cost. In other words, growth is evidenced by increased efficiency: falling real costs or increasing quality.
To the economically inclined, this may seem obvious. But it’s inconsistent with lots of contemporary policy and thus a worthwhile observation. Indeed, Gallup puts the cost of increased federal regulation at $250 billion annually since 1981.
GDP Growth Slowdown
It’s no secret that America’s recovery from the Great Recession has been anemic. Between 2007 and 2015, GDP growth was about 1% annually. If that rate continues, Gallup estimates that GDP per capita (currently $56,000) will only reach $79,000 by 2050. For some perspective on how much the rate of growth matters, Gallup offers that a growth rate of 1.7% over the same period would result in a GDP per capita of $101,000.
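The compounding behind those projections is easy to verify:

```python
# Checking Gallup's compounding arithmetic: $56,000 per capita in 2015,
# projected to 2050 at 1% vs. 1.7% annual growth.

def project(value, rate, years):
    """Project a value forward at a constant annual growth rate."""
    return value * (1 + rate) ** years

years = 2050 - 2015  # 35 years

print(round(project(56_000, 0.010, years)))  # about $79,000
print(round(project(56_000, 0.017, years)))  # about $101,000
```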
Not everyone is a fan of studying GDP; its omission of unpaid labor caused feminist economist Marilyn Waring to claim it was invented by men to “keep women in their place.” However, Gallup argues, it’s rather reliable as a proxy for human well-being as it correlates with various quality of life metrics.
Similarly, GDP growth is important in that it indicates the degree to which an economy is becoming more efficient and effective at creating value. Lack of growth indicates an economy that is not becoming more efficient, whether through failure to increase the quality of goods and services available or through expanding costs without corresponding increases in quality.
Gallup attributes the slowdown in growth to dropping quality-to-cost ratios in three key sectors of the American economy–healthcare, housing, and education–which account for over 50% of inflation over the past 30 years. Without the inflation incurred in these sectors, real GDP growth would have been 3.9% between 1980 and 2015.
The United States devotes “more resources to healthcare than any other country” but receives worse outcomes than most OECD nations, the report details. By Gallup’s measurements, health outcomes have stagnated for Americans, particularly those of working age, since 1980.
Gallup estimates that 24% of inflation since 1980 has been caused by rising healthcare costs. The cost of healthcare itself has increased 4.8 times over that period, while the cost of health insurance has increased 8.7 times.
For all our increased spending, Americans are seeing practically no returns. Maternal mortality increased from 12 to 28 deaths per 100,000 births between 1990 and 2014. The age-adjusted rate of obesity, which causes an additional $170 billion in health spending, increased from 15% to 35% between 1976 and 2011. Age-adjusted rates of diabetes increased from 3.5% to 6.6% from 1980 to 2014.
Not only are Americans getting sicker, but their illnesses seem to be getting more severe. Illness or disability is now the leading reason Americans are out of the labor force. Those reporting a disability were more likely to have been out of the labor force within the past two years than in the past (78% in 1988-1990 compared to 84.5% in 2014-2015). Those out of the labor force report pain more often than in the past and are more likely to be taking opioids.
Poor health outcomes and rising costs are having direct and indirect effects on the labor market.
The rising cost of healthcare, which is increasingly obtained through employers, has created a drag on employment, Gallup argues. Healthcare costs are acting as barriers to entry, preventing companies from opening, hiring, or expanding and holding down wages. Healthcare now makes up 8.1% of employee compensation, up from 4.5% in 1980.
The study identifies some of the primary causes of healthcare inflation over the past four decades.
First, idiosyncratic private practices consume a lot of money and time. Americans spend 4.6 times as much on healthcare administration costs as the OECD average.
Federal and state regulations play a huge role in cost inflation as well. In many cases, state laws prohibit nurses from carrying out the functions of general physicians, even though evidence suggests that patients treated by nurses have equal or better outcomes than patients seen by general physicians. According to Gallup, allowing nurses full practice would save hundreds of billions of dollars. Intense regulations depress nursing hours and inflate physicians’ salaries to levels far beyond what is found in other OECD countries.
State-sanctioned hospital monopolies are also listed as a cause of healthcare inflation. According to the study, lack of competition in healthcare wastes $55 billion annually.
The study suggested a lack of business training on the part of physicians as a potential cause for weak productivity gains in healthcare. Perversely, Gallup reports that 68% of healthcare innovations cost more than previous methods of treatment–even accounting for health outcomes. (As an aside, I would speculate that a fee-for-service model does little in the way of incentivizing innovative and cost-saving practices.)
What not to blame: Access to care, illegal drug abuse, changing demographics, and diet and exercise patterns do not significantly account for the decline in healthcare productivity or outcomes, according to the report. However, prescription opioid use is indicated as a factor.
Gallup finds that housing has become 3.5 times more expensive since 1980. These costs have affected renters and owners, though to different degrees. In 2014, rent made up 28% of the average family’s income, compared with 19% in 1980. Home-owning costs have also increased over the same period: up to 16% of income from 12%.
Despite the increase in costs, quality has lagged. Americans are living in older homes that are farther from work and smaller in size than they were in 1980. Home ownership is at its lowest rate since 1967.
The causes of housing inflation are mostly attributable to local land-use regulations.
In relatively competitive markets, supply will normally increase to meet demand, causing prices to stabilize. What is frequently happening in the United States is that as housing and rent prices increase, developers find themselves “regulated out of the market.” This leads to a strange correlation across counties: high prices paired with low growth in supply.
The report found that from 2000 to 2010 housing density actually fell in major metropolitan areas. Meanwhile, prices have soared; the median home value in Palo Alto, where only 27% of land is zoned for residential use, is $1 million. The problem is compounded by regulatory distaste for multi-family developments. This kind of market restriction is common, according to Gallup.
The main takeaway from this is that local political forces are chiefly responsible for housing inflation through the use of zoning policies. Such policy is a form of rent seeking that increases housing values for some at the expense of those who would hypothetically live in aggressively zoned areas without actually improving housing quality.
Per pupil public spending has increased from $6,200 in 1980 to $10,800 in 2013, adjusting for inflation. Yet, like healthcare and housing, increases of per unit costs in education have greatly outpaced productivity gains.
The statistics are damning: Literacy rates among 17-year-old Americans peaked in 1971. Standardized testing reveals that math scores peaked in 1986. Test scores show a lack of improvement in math, science, and reading, in which respectively 25%, 22%, and 37% of American students are proficient.
This kind of stagnation isn’t typical among other nations; the United States showed much smaller levels of inter-generational improvement than other OECD nations. Up until about 1975, each cohort of Americans scored significantly higher in math and literacy than the cohorts born before it. Since 1975, scores have plateaued, even adjusting for race and foreign-born status of students. As the study states, this implicates the entire US school system.
Higher education seems no more productive. While the cost of higher education has increased by a factor of 11 since 1980, literacy and math scores for bachelor’s degree holders peaked among those born in the 1970s. United States college graduates rank 23rd among OECD countries in numeracy.
Dipping quality-to-cost ratio in education drags down employment, incomes, and GDP. Companies in America are forced to delay projects or turn to foreign workers to meet growing demand for high-skill employment. This is particularly pronounced in the sciences; 48% of scientists with a graduate degree are foreign-born. More indirectly, low educational attainment is shown to correlate with low health outcomes.
A primary cause of our inefficient education system is that teaching has become an unattractive profession.
Ironically, a lot of this has to do with the standardized tests that let us know how poorly our education system is doing. Gallup cites surveys indicating 83% of teachers believe students spend too much time on testing, and The New York Times reports that many schools spend an average of 60 to 80 days of the school year preparing for tests. The study suggests there may be an unnecessary amount of testing driven by special interests; between 2009 and 2014, four standardized testing companies spent $20 million lobbying policymakers.
Teachers also cite a lack of autonomy and incentive. Salaries start and stay low, with no accountability system to reward effective teachers. Similarly, teachers’ unions have tied pay to seniority, rather than performance.
Another explanation provided by the study: teachers in the United States are less likely to have been good students than teachers in countries that do better on testing.
Where higher education is concerned, federal subsidies are more often flowing to under-performing schools where students are less likely to graduate and more likely to become delinquent borrowers. This lack of discrimination ends up increasing national student debt and funneling resources to inefficient educators.
Unsurprisingly, the explosion of non-teaching staff at colleges is also implicated by the study. Between 1988 and 2012, the ratio of staff to students grew from 23:100 to 31:100 without any measurable increase in quality.
United States policy is in serious need of reform, argues the report. The writers state that improving market functionality and competitiveness should take precedence in forthcoming economic debates. More specific policy recommendations from Gallup are on the way.
In the meantime, anyone interested should really take a look at the full publication. It’s important and absolutely first class, plus it contains lots of awesome graphs that I didn’t include here.
One of the few issues upon which Clinton and Trump seemed capable of agreement in the second debate was that cheap steel from China was hurting America. Given how alarming Sunday’s exhibition was, it might have been a nice respite. That is, if they had not both been so wrong.
China produces about as much steel as the rest of the world combined. This is due partly to cheap labor and strong domestic demand, but mostly to heavy government subsidies. Now that China’s economic growth has slowed, markets are awash with cheap Chinese steel. This has led China’s trading partners to accuse China of “dumping” steel.
Dumping, for those not familiar with the term, refers to the act of selling a good in a foreign market for less than the cost of production. It’s against WTO rules and is penalized by tariffs implemented by importing nations. The United States recently levied a 522% tariff on Chinese cold-rolled steel, which is used for construction and to make shipping containers and cars.
The general consensus, dutifully embraced by both candidates, is that dumping is bad for the importing country and an act of aggression by the exporter. But if you think about it, this is pretty absurd.
First of all, countries don’t trade with each other. The United States doesn’t buy wine from Portugal; American companies buy wine from Portuguese companies. We’re not “getting killed” on bad trade deals as Donald Trump fears; there isn’t even a “we” in the sense that he suggests. There are only people, and people don’t habitually engage in voluntary exchanges at a loss. It should be obvious that importers (American companies, in this instance) are the ones benefiting from cheap steel from China. That’s why they prefer to buy it over more expensive steel made domestically.
It’s true that China’s not a market economy in the same way that America is; their government owns and subsidizes far more than ours. That might sound like an advantage for the Chinese, but it’s really not.
Chinese producers are able to sell steel for less because of large subsidies from their government. The people who benefit from this are the people buying and selling steel–importers and Chinese steel companies, respectively. The people who lose are non-competitive firms and those paying for the subsidies…which would be the Chinese taxpayers.
Subsidized exports are really a transfer of wealth from within a country to without. Importing parties are able to be more profitable and productive, which is precisely why Donald Trump builds with Chinese steel and why we’re all better for it. Yes, it hurts American steel companies, but whatever resources are devoted to domestic steel production can be diverted to other areas with better returns.
Conversely, import tariffs are paid by the importer, and ultimately the consumer. In other words, in order to protect us (read: domestic steel companies) from what amounts to discounted steel, our government taxes the hell out of it so that we end up paying more. Saying that this helps our economy is like claiming that rolling up your sleeves makes your arms warmer. Remember that any jobs or income generated by such tariffs comes directly at the expense of American consumers who are being forced to forgo savings or purchases they would have made with the money they saved on steel.
If millions of tons of steel fell from the sky would we draft legislation to tax the heavens? No, we’d take the free steel and build things with it. If China wants to take money out of its citizens’ pockets and use it to make steel for the rest of the world, Chinese citizens should be outraged. But why should the rest of us complain? When someone gives you a gift, the correct response is: “thank you.”
For those of you with better things to do than scroll through Paul Krugman’s twitter feed, I have news: last Tuesday the Census Bureau released its annual report on Income and Poverty, and people are stoked.
Here’s the upshot: Median household income increased 5.4% from last year after nine years of general decline. It’s now only 1.6% lower than it was in 2007, the year before the recession, and 2.4% lower than its historic peak in 1999.
While Asian households didn’t see a significant increase, black, white, and Hispanic households did. Median household incomes increased in all regions of the country and, for the first time since the recession, real income gains are distributed beyond the top earners.
Sounds like great news! It might well be, but before you celebrate there are some things to note about these statistics. The following isn’t a refutation of the conclusion that the economy is improving. Rather, it’s an indictment of the statistics that lead us to such conclusions. Here are three things to consider:
Household income data aren’t all they’re cracked up to be
All statistics have limits, but median household income is particularly misleading in the wrong hands. For years now, economists and politicians have cited median household income data to paint grim pictures of the American economic landscape. While the story is nicer this year, the logic behind the choice to measure households, rather than individuals, is still suspect.
A positive or negative change in median household income doesn’t imply a similar change in individuals. That’s because the characteristics of households vary across time and population.
Average household size has decreased from 3.6 to 2.5 people since 1940. Demographic shifts can also affect household incomes, because average household sizes differ between races.
Another limitation of household income data is that individuals aren’t equally distributed among households of different income levels. There are far more individuals–let alone workers–in the top quintile of income-earning households than the bottom. People who have vested interests in portraying an economically lopsided America tend to cite household data for this reason, without noting this.
Households expand and contract as more people are able to afford their own places. This can strangely cause median household income to rise while people are making less money. For example, if I were demoted and had to move in with my mom because I was now making half as much money, the median household income would increase as our two households merged, despite less aggregate income for the individuals involved.
The same works in reverse. When I started making enough money, I moved out of my mom’s house. Even though our combined income was greater than before, median household income fell.
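Here's a toy version of that scenario with made-up numbers, showing the paradox in action:

```python
# A toy version of the scenario above: merging two households can raise
# the median household income even as total individual income falls.
# All incomes are made up for illustration.
from statistics import median

# Before: I earn $60k in my own place, mom earns $40k, a neighbor earns $50k.
before = [60_000, 40_000, 50_000]

# After a demotion to $30k, I move back in with mom; our households merge.
after = [30_000 + 40_000, 50_000]

print(median(before))           # 50000
print(median(after))            # 60000.0 -- the median rose...
print(sum(before), sum(after))  # ...while total income fell: 150000 120000
```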
Speaking of which…
Millennials are living at home longer and in greater numbers than previous generations
Fully 32% of 18-34 year-old Americans live with their parents, making it the most common living arrangement for that group. There are a few reasons for this: higher unemployment among young adults; an accompanying delay in or aversion to marriage; and the changing ethnic makeup of America, among others.
While Millennials are more likely to live with mom and dad, we’ve also become the largest generation in the workforce. A larger part of the workforce consolidating in fewer households could explain part of the rise in household income.
This probably isn’t too big of a factor, but since we’re measuring households, it’s worth keeping in mind.
“Low-income households” and poor people aren’t necessarily the same
This is a big one. Part of the elation about the Census data comes from the fact that lower-earning households have seen more of a bump in income than they have in recent years.
The problem is income isn’t the same as wealth. It’s closer to a derivative of wealth, like a still frame is to a film. It’s a simplistic method of gauging standard of living, hobbled by the fact that it doesn’t consider government transfers of money, assets, or liabilities. Economists would probably argue that consumption data are more informative indicators of standard of living.
A wealthy elderly couple and a part-time minimum-wage earner might both be in the lowest income quintile in a given year. That doesn’t mean their standards of living are similar.
Rising incomes of the lowest earners might indicate lots of things: for example, that people are being forced back into the labor market after retiring. As I’ve noted here before, most poor households have no income earners, according to data from the Federal Reserve Bank of San Francisco. Unless the rate of employment among the poor grew at the same time, there could be reason to believe that the increase in low-earning households is due to something other than increased income of “the poor.”
Another common assumption is that the households’ positions within income brackets are stagnant, as if we lived in a world without job churn. The households in the bottom 10% of income earners this year aren’t necessarily the same ones that were there in 2008.
We’re used to seeing data based on groups of income earners, not individuals. That’s how the Census reports. However, studying individuals tells a more relevant story.
The United States Treasury tracked individuals’ tax returns from 1990 to 2005. They found that over half of people in the bottom quintile as of 1990 had moved to a higher quintile by 2005.
The Census statistics measure exactly what they measure: nothing more. That doesn’t mean the information is useless; it just means we shouldn’t lose our heads over it. Extrapolating a verdict about America’s economic health from median household income data invites mistakes built on a deceptively simple figure.
Don’t mistake my skepticism of stats for pessimism about the American economy. Where long-term trends in the American economy are concerned, optimism is never a bad idea.
A new study of 24 medical schools across 12 states by Dr. Anupam B. Jena, Andrew R. Olenski, and Daniel M. Blumenthal—all of Harvard Medical School—shows that male and female doctors are often paid disparate salaries, even when accounting for several factors. The average absolute difference was around $51,000 while the adjusted average difference was about $20,000.
The New York Times quickly ran an article that all but declares the wage gap between the sexes to be a result of discrimination. It seems like an easy call to make, but the omission of certain factors gives reason to be cautious about jumping to such a conclusion.
Before I begin, I should acknowledge that the authors are far more credentialed than I am (no large feat) and probably put a lot of hard work into this study. Not that they’ll ever read this critique, but I can imagine it would be pretty annoying to read some nobody undermining your diligent study. My aim here isn’t to discredit their study, but to mention some factors that, if included, could have helped produce a more definitive picture.
To put it lightly, the popular discourse on the gender pay gap is littered with misinformation. Practically everyone has heard the standby that women are paid on average seventy-seven cents for every dollar that men are. Such a blunt “statistic” makes a good rallying cry, but is just about useless in every other respect. And yet, it sticks.
Part of the problem is the age-old fallacy of inferring causation where mere correlation is at hand. This is an understandable reflex in this case—across cultures, women share a history of oppression and obstructed paths to the labor force. But this same kind of logical leap would never be tolerated if someone alleged that employers discriminated in favor of Asian men, who earned 117% of what their white counterparts made in 2015.
Another problem is that comparing men and women in the workplace is far more difficult than you might believe, making study results suspect in many instances.
Salaries are influenced by a number of factors. The authors know this. According to the abstract, they measured:
…information on sex, age, years of experience, faculty rank, specialty, scientific authorship, National Institutes of Health funding, clinical trial participation, and Medicare reimbursements (proxy for clinical revenue).
It seems like a pretty complete list, but there are a couple of other factors that—if previous studies are any indication—would have a significant effect on the findings. The most important ones I can think of are: marital status; whether or not a doctor has children; shift availability; how many hours (not years) they have worked; and the length of any interruptions to their tenure. The inability or failure to measure and report these factors gives reason to be somewhat skeptical of the findings.
Marriage and Children
Marriage and child counts are extremely important to any discussion of salary differences between men and women. Interestingly, they usually correlate to opposite effects on the incomes of men and women, presumably due to the unequal division of domestic labor and child rearing. Married men are known to have higher salaries than comparable unmarried men for this reason–their partners are essentially investing in their earning potential. Consequently, the reverse tends to be true for women, who often divert more of their attention to household labor, thus giving up some of the effort they might otherwise spend on money-earning activities.
Having children tends to enhance this trend. From a division of labor standpoint, most couples are probably deciding that it’s more efficient to have one member carry the bulk of the remunerative workload and the other to handle a larger share of the unpaid labor. Of course, it’s not necessary that these roles be taken by men and women respectively, but that seems to be the way it plays out most of the time.
The Economist unwittingly stumbled onto this hypothesis in February, when their blog noted that lesbians often earned more money than straight women while gay men often earn less than straight men. Unfortunately, they didn’t note that lesbians are about twice as likely as gay men to get married and thus replicate the heterosexual division of labor between spouses, to some extent (lesbians are also more likely than straight women to be childless and split domestic work more equitably–all of which would contribute to higher income per capita relative to straight women).
Any discussion purporting to measure income differences as a factor solely of sex has to take these effects into account. A married man with 20 years of experience isn’t comparable to an unmarried man with 20 years of experience, let alone a married mother of three with equal tenure. Because marriage and childbearing tend to have opposite effects on the incomes of men and women, the only accurate comparison to make is between childless men and women who have never been married. Some inference can be gleaned from the fact that women in their twenties earn more than similarly-aged men on average. This trend reverses around age thirty.
Interruptions to Tenure
Because knowledge and technology are constantly advancing, interruptions to tenure can be especially penalizing in a highly technical field such as medicine. The value of a computer science professional’s skills, for example, is estimated to have a half-life of only three years. A doctor with five years of experience who is coming back into the labor force after a four-year gap isn’t likely to be as valuable as a comparable doctor who has spent the last five years working. We wouldn’t expect to observe the same phenomenon to the same degree in low-skilled occupations.
Therefore, simply measuring age, years of experience, and faculty rank, as this study does, doesn’t give us a complete picture of the prospects of the doctor in question. Because female employees are more likely to take time off, we can expect that on average–and especially in fields with high rates of obsolescence like medicine–they will suffer harsher penalties for absence from the workplace than male practitioners, likely depressing their average wages.
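If you take the half-life framing literally, the depreciation is easy to model. This is purely illustrative, borrowing the three-year computer-science figure mentioned above:

```python
# The "half-life of skills" idea, taken literally: with a three-year
# half-life, how much of a skill set's market value survives a career
# gap? The exponential-decay model here is purely illustrative.

def remaining_value(gap_years, half_life_years=3.0):
    """Fraction of skill value left after a gap, assuming exponential decay."""
    return 0.5 ** (gap_years / half_life_years)

print(round(remaining_value(4), 3))  # a four-year gap leaves ~40% of the value
```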
Hours and Shift Availability
Measuring years gives some idea of an employee’s commitment to a field, but a more complete picture would require the amount of hours worked and the shift availability of the physicians in question. Past research has shown that for numerous reasons, female workers tend to be more willing to trade pay for flexibility than males. If that’s true for the doctors in this study as well, it would explain some of the average pay differences that simply counting years would omit.
There are many more variables that contribute to one’s salary than I can begin to list here. The problem with studies like this is the propensity of readers to project their own prejudices onto the results.
Assuming that men and women care equally about income, for example, would lead one to believe that women are routinely getting the short end of the stick. But, to quote anyone with quick access to a hacky sack: money isn’t everything. There is a huge disparity in workplace deaths between genders (men are about 13 times more likely to die at work), which suggests that the people willing to trade job safety for income are more likely to be men than women. Neither a coal miner nor a part-time secretary could rightly tell the other they’re making the wrong choice. Individuals value things differently–that they are able to pursue employment that fits their criteria is a wonderful thing.
One could argue that the pay differentials between men and women in a given field are a result of outside factors, individual preferences, and sexism all at once, because those preferences are influenced by a society that has different expectations for men and women. What doesn’t really make sense, but is asserted quite often, is widespread employer discrimination against female employees.
There are basic economic reasons to doubt this. The amount of coordination involved among competing entities would be unthinkable. Moreover, since there is no fixed price of a physician that we would deem “correct”, underpaying women could be restated as overpaying men. Why, we might ask ourselves, would hospitals routinely and arbitrarily overpay their male employees? If female doctors were capable of performing the same exact work but could be retained for $20,000 less per year, why would any hospital hire a male doctor who insisted on being paid a premium? That would be some truly expensive bigotry.
While it would be difficult to prove that observed income differentials are the result of discrimination, it is indeed impossible to prove that discrimination doesn’t affect individuals’ income. It could be the case, and I can’t dispute the possibility. However, as the economist Thomas Sowell has taken great pains to point out, that difference in income would represent the maximum amount attributable to discrimination plus other factors that haven’t been accounted for. My guess is that the results of this study are realistically limited by the availability of certain data and individual preferences.
The debate over minimum wage is one of the most confused arguments in American public policy. Although on its face minimum wage appears to be a promising and simple idea, it is, in fact, a very bad policy that has surely hurt the very people it aimed to help. Proponents of minimum wage (many of them well intentioned) often advocate for increases as a means to improve the personal welfare of workers earning the minimum. This is often accompanied by the argument that no one working full time should live in poverty.
The debate they’re having is: can we provide a minimum income and standard of living for American workers? The debate relevant to minimum wage law is, as Charles Blahous of 21st Century Economics points out: whether government should establish a price barrier to employment, and if so, how high it should be.
The answer to the first question is: yes, but it should be handled differently. The answer to the latter is simple: no.
The welfare of the poor and the prevailing minimum wage are not inextricably linked. Despite minimum wage’s self-evident virtue among certain ideological factions, there’s actually little reason to think this sledgehammer-style policy would help many people, let alone society as a whole. Before we talk about what would work better, I want to highlight some of the more egregious failings of the minimum wage.
Minimum wage forces people out of work
Because most of us grew up with the idea, it takes effort to even begin considering the minimum wage for what it really is: a price floor. Like other price floors, it has consequences beyond those desired.
One negative effect of a minimum wage is a loss of employment. This isn’t limited to people losing their jobs or having their hours cut, but also includes the destruction of future jobs that are casualties of foregone economic growth.
Artificially changing the price of something doesn’t change how much it’s worth to people; economics is tasked with grimly reminding us that prices emerge as a function of supply and demand. As long as employment remains a voluntary transaction between employer and employee, it’s hard to believe a price floor won’t compromise the ability of some workers to sell their labor.
Tragically, this usually affects the workers with the lowest skills: traditionally the young, the poor, and the undereducated. By eliminating their ability to charge less for their services, minimum wage laws eliminate their competitive advantage. This forces them onto the public dole and renders them a net drain on society.
2. Loss of societal surplus, deadweight loss
This concept is a bit nebulous, but bear with me.
One of the reasons people like me (handsome, rugged) are fans of free markets (a commonly maligned and misunderstood term) is their ability to maximize surplus—the excess benefits enjoyed by producers and consumers in a transaction. (That is, when we’re talking about privately consumed goods.)
Surplus captures the idea that even though a consumer would be willing to pay more, and a supplier would be willing to accept less, for a given good (in this case, labor), the free-market equilibrium price leaves both parties better off than the worst deal they would have accepted. In the graph below, it's represented by the triangle formed by the crossing of the supply and demand curves.
Forcing a price above or below the equilibrium diminishes the amount of surplus enjoyed by society as a whole; economists refer to this as “deadweight loss” (the green triangle in the graph on the right). It's true that implementing a price floor above the equilibrium point can (but won't necessarily) increase the surplus of suppliers (laborers), but this is a bad idea for two reasons:
It reduces economic growth and efficiency. The added supplier surplus comes at a direct expense to the rest of the economy. This puts undue pressure on consumers of labor, and thus demand for labor.
Consumers of labor aren’t exclusively employers; they’re also everyday customers—many of whom are the very laborers we meant to aid with the minimum wage.
In other words, though there is tendency to focus on people as either consumers or suppliers of labor, most are both. While they may benefit from minimum wage increases as an employee, they may lose in many other instances when they find themselves on the other side of the proverbial counter. Which dovetails nicely with my next point…
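Before that, the surplus and deadweight-loss story can be made concrete with a toy model: I invent linear demand and supply curves for labor and compare total surplus at the free-market equilibrium with surplus under a wage floor set above it. All numbers here are hypothetical, chosen only to make the arithmetic easy to follow.

```python
# Toy linear labor market (all numbers invented for illustration):
#   demand(q): the wage employers will pay for the q-th unit of labor
#   supply(q): the wage the q-th worker will accept
def demand(q):
    return 20 - q

def supply(q):
    return q

def total_surplus(q_traded, step=0.001):
    """Area between the demand and supply curves up to q_traded."""
    surplus, q = 0.0, 0.0
    while q < q_traded:
        surplus += (demand(q) - supply(q)) * step
        q += step
    return surplus

# Equilibrium where demand(q) == supply(q): q* = 10, wage = $10.
s_free = total_surplus(10)

# A $12 wage floor: employers only hire up to the quantity where
# demand(q) == 12, i.e. q = 8. Trades between q = 8 and q = 10 vanish.
s_floor = total_surplus(8)

print(round(s_free, 1))             # total surplus at equilibrium (~100.0)
print(round(s_floor, 1))            # surplus under the floor (~96.0)
print(round(s_free - s_floor, 1))   # deadweight loss (~4.0)
```

Note that the floor may shift some of the remaining surplus from employers to workers, but the lost triangle between q = 8 and q = 10 is gone for everyone.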
3. Poor people consume lots of low-wage labor
We all buy food. We all buy clothes. But we don't all shop at the same places. Poor people are more likely to shop at places with lower prices and, you guessed it, lower costs of labor.
Consider that the average Whole Foods employee earns about $18 per hour while the average Walmart employee makes about $13. The shoppers of the corresponding stores have similar disparities in disposable income, which are reflected in the prices they pay.
If the minimum wage were raised to $15 per hour, it might have a negligible effect on prices at Whole Foods. The same is not certain for Walmart. Even if prices were to increase by the same amount in both stores, the impact would be greater on the lower-income shoppers, since it would make up a larger percentage of their income.
The problem is that the money that pays for the higher price of labor doesn’t come from nowhere; too often, it comes from exactly those we’re trying to help.
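To put rough numbers on this, here's a toy calculation. The incomes and the price change are invented for illustration; the point is only that an identical dollar increase weighs more heavily on the shopper with less to spend.

```python
# Hypothetical weekly budgets (invented numbers) showing why the same
# dollar increase in prices hits a poorer shopper harder.
walmart_income    = 500.0    # weekly income, hypothetical Walmart shopper
wholefoods_income = 1500.0   # weekly income, hypothetical Whole Foods shopper
price_increase    = 10.0     # identical rise in the weekly grocery bill

walmart_burden    = price_increase / walmart_income
wholefoods_burden = price_increase / wholefoods_income

# The poorer shopper gives up three times as large a share of income:
print(f"{walmart_burden:.1%} vs {wholefoods_burden:.1%}")  # 2.0% vs 0.7%
```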
4. Minimum Wage Has Sloppy Aim
A central challenge to minimum wage's credibility as a form of poverty relief is that it only affects people with wages. It's easy to assume that poor people are the ones working low-wage jobs, but the two groups overlap far less than one might think.
First of all, to be considered poor, you must live in a poor household, and 57% of poor households have no income earners at all (Federal Reserve of San Francisco, pg. 2). The idea that we would help them by making things cost more is ludicrous.
In reality, about 22% of minimum wage earners live below the poverty line. Their median age is 25; three-fifths of them are enrolled in school; 47% of them live in the South (where costs of labor and living are lower); and 64% of them work part-time.
Fully three-quarters of minimum-wage-earning adults live above the poverty line.
It’s clear that we’re largely talking about two different groups of people when we discuss minimum wage earners and the poor. Given that the majority of minimum wage workers aren’t poor and that the majority of the poor are unemployed, we should consider another strategy for fighting poverty: one that doesn’t reduce employment opportunities for the unskilled.
Okay, okay…so if minimum wage isn’t a good solution, what is?
Phenomenal question! The many problems with minimum wage policies share a common root: the minimum wage intervenes in transactions before they occur. This passes the cost on to employers or customers and suppresses demand for labor. The evident solution, then, is a policy that takes effect post-market. My answer is a wage subsidy.
We lose more than we gain by interfering with labor markets. Instead, we should eliminate the minimum wage and—very carefully—create targeted wage subsidies for people that aren’t making enough money from their jobs to survive.
This has to be done carefully to avoid creating disincentives to work. Welfare programs can perversely discourage people from earning more money by stripping away benefits faster than wages rise (and a wage subsidy really is more welfare program than wage regulation). To give a simple example: if everyone earning under $10,000 were given an extra $5,000, it would discourage people from earning between $10,000 and $14,999, thus encouraging economic stagnation.
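The cliff in that simple example is easy to sketch. The numbers come straight from the example above; the flat-grant policy itself is hypothetical.

```python
# The benefit cliff from the example: a flat $5,000 grant to anyone
# earning under $10,000 (a hypothetical policy, for illustration only).
def take_home(earnings):
    grant = 5000 if earnings < 10000 else 0
    return earnings + grant

print(take_home(9999))   # 14999 -- just under the cutoff, grant included
print(take_home(10000))  # 10000 -- one more dollar earned, $4,999 poorer
print(take_home(14999))  # 14999 -- must earn $5,000 more just to break even
```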
We want to encourage people to be as productive as possible. When we design a welfare system, we have to make sure the total benefit enjoyed by the recipient is greater for every dollar earned than the one before it. In order to accomplish this, we need to design our wage subsidy as a function of market wages (the price that employers pay) that increases at a decreasing rate until it hits a wage that we as a society find acceptable.
I chose to have the subsidized curve cross with the market wage (y=x) at $13/hour, beyond which point it will cease to be applied. Of course, we could write any equation and phase it out at any point. This subsidy curve is a concept, not a strict recommendation.
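One curve with these properties, purely as an illustration: the square-root form below is arbitrary (any concave function crossing y = x at $13 would do), but it shows the shape I have in mind.

```python
import math

# Illustrative subsidy curve: take-home pay rises with the market wage
# at a decreasing rate and meets it at the $13/hour cutoff, beyond which
# no subsidy applies. The square-root form is one arbitrary choice.
CUTOFF = 13.0

def take_home_wage(market_wage):
    if market_wage >= CUTOFF:
        return market_wage            # no subsidy above the cutoff
    return math.sqrt(market_wage * CUTOFF)

def subsidy(market_wage):
    return take_home_wage(market_wage) - market_wage

# Take-home pay always rises with the market wage, so workers keep an
# incentive to seek higher-paying jobs:
for w in (4.0, 8.0, 12.0, 13.0):
    print(f"${w:5.2f} market -> ${take_home_wage(w):5.2f} take-home"
          f" (subsidy ${subsidy(w):4.2f})")
```

With this shape, each extra market dollar always raises total take-home pay (by less than a dollar while the subsidy phases out), so there is no cliff like the one in the $10,000 example above.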
There are some profound advantages to this “after-market” approach:
The cost is borne by society instead of individual employers
I’ve spoken before about how the cost of consumption should be borne by the consumer, so you can be forgiven for feeling confused about why I feel a subsidy funded by taxes is appropriate here. However, the true price of a dishwasher’s labor (for example) is not $15 per hour. We know this because there is currently an abundance of dishwashers willing to work for far less than that. If we as a society want them to take home more money for their work, we should pay the difference.
Because of the way this subsidy curve is designed, employees will still have an incentive to search for the highest-paying jobs available to them. By tying subsidy receipt to work, we encourage workers to maximize their productivity. As long as these conditions are met, our subsidy won’t unnecessarily burden society with the cost of inefficient labor allocation.
No one is locked out of the labor market
Young people’s employment opportunities are eroded by high minimum wages. Keeping them out of the labor market has negative repercussions for their futures. From the Center for American Progress:
Not only is unemployment bad for young people now, but the negative effects of being unemployed have also been shown to follow a person throughout his or her career. A young person who has been unemployed for six months can expect to earn about $22,000 less over the next 10 years than they could have expected to earn had they not experienced a lengthy period of unemployment. In April 2010 the number of people ages 20–24 who were unemployed for more than six months had reached an all-time high of 967,000 people. We estimate that these young Americans will lose a total of $21.4 billion in earnings over the next 10 years.
Everyone, even the White House, recognizes that the larger implications of a “first job” for our young labor force extend far beyond the pay it provides. Absurdly, the White House has crafted a program that calls for $5.5 billion in grant funds to help young people get the very jobs they have been priced out of by their own government.
Markets will function better
Advocates of raising the minimum wage are effectively claiming that making a market inefficient will improve outcomes. Here this fallacy is presented as: if the cost of labor is higher, workers will have more money to spend and demand will increase.
This is tempting logic, but it doesn’t hold up to scrutiny. To see why, we can replace “workers” with something more specific, like carpenters. Yes, if we passed a law saying that carpenters had to be paid more, it would be great for some carpenters. But any additional money spent on carpenters can’t be spent on something else, and society loses whatever benefits that forgone surplus would have provided.
If the reverse were true, it would make sense to ban power tools and all sorts of technology, thereby increasing demand for and price of human labor.
An efficient market creates more surplus, and is less burdened by the cost of those who must rely on public welfare. Additionally, the cost of supporting those people will be defrayed by their renewed ability to provide (in some part, at least) for themselves.
It’s way more targeted than a minimum wage, and could absorb other welfare programs
We could write different equations for different people who might require larger or smaller subsidies to meet their basic needs. For example, a single mom of four kids in Long Beach could receive a steeper subsidy than a childless teen living in rural Alabama, who might not need one at all.
This approach could theoretically absorb other welfare programs. Instead of receiving a SNAP card, a Section 8 voucher, and WIC benefits, a person could have the cash needed to cover those expenses folded into the subsidy calculation. This cuts down on expensive bureaucratic systems, increases the utility of the money given through welfare, and incentivizes work.
The United States is a rich country. If we spend our money wisely, there’s no reason we can’t afford some minimum standard of living for workers. Helping our poor citizens is one of the best uses for taxes and far better than a lot of the things we spend public money on.
But rather than mess with markets, we should simply give more money to the people we want to help by redistributing income after markets are allowed to produce as much wealth as they’re able. Additionally, if we’re going to combat poverty with public money, we should do it in a way that stands a chance of eventually readying people to support themselves and without sacrificing economic efficiency. Minimum wage fails both of these tasks.