States are becoming increasingly permissive of various “sinful” economic activities and goods — those understood to be harmful for consumers — allured, at least in part, by the promise of tax revenue they represent. This has certainly been part of the rationale in my home state of Massachusetts, where within the last year the first full casino — MGM Springfield, located a few blocks from my apartment — and recreational marijuana dispensaries opened. Since the fiscal year just ended, now seems like a good time to assess how things are going in that regard.
First, the casino: Before opening its doors, MGM Springfield told state regulators it expected $418 million in gambling revenue over its first full twelve months of operation — $34.8 million per month. According to the Massachusetts Gaming Commission’s June 2019 report, monthly revenue hasn’t yet come within $7 million of that mark.
Since September, its first full month of operation, the casino has generated nearly $223 million in gambling revenue. The state’s take is a quarter of that, about $55.7 million. That’s two-thirds of what was estimated. MGM Springfield’s President attributes its lower-than-expected revenue to a poor projection of the casino’s clientele — fewer “high rollers” from the Boston area and more from up and down the I-91 corridor.
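For the curious, the arithmetic behind those figures can be checked in a few lines. The 25% state take and the projection are from the text above; the ten-month September-through-June window is my assumption:

```python
# Back-of-the-envelope check of the MGM Springfield figures above.
projected_monthly = 418e6 / 12     # MGM's projection: $418M over twelve months
actual_total = 223e6               # gambling revenue, September through June
months = 10                        # ten full months of operation (assumed)

state_take = actual_total * 0.25   # the state taxes 25% of gambling revenue
share_of_projection = actual_total / (projected_monthly * months)

print(f"State's take: ${state_take / 1e6:.2f}M")          # $55.75M
print(f"Share of projection: {share_of_projection:.0%}")  # 64%, about two-thirds
```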
The introduction of new avenues for gambling is well known to cannibalize existing revenue sources. So add to MGM Springfield’s list of woes that the much flashier Wynn Casino recently opened in Everett, MA, a quick trip from Boston, and that neighboring East Windsor, CT is opening another casino next year.
Massachusetts’ venture into marijuana has been slightly more successful. Sales were supposed to begin in July 2018, the start of the fiscal year, but were delayed until November. Still, the State Revenue Commissioner estimated Massachusetts would collect between $44 million and $82 million from the combined 17% tax (Massachusetts’ normal 6.25% sales tax plus a 10.75% excise tax) over fiscal year 2019. If my math is right, that works out to an expected range of about $32 million to $60 million in sales every month for the remaining eight months of the fiscal year, a threshold met for the first time in May, according to sales data from the Massachusetts Cannabis Control Commission.
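Here is that arithmetic spelled out, for anyone who wants to check it. The eight-month November-through-June window follows from the delayed start described above:

```python
# Implied monthly sales behind the Revenue Commissioner's estimate.
low_revenue, high_revenue = 44e6, 82e6   # projected FY2019 marijuana tax revenue
tax_rate = 0.0625 + 0.1075               # 6.25% sales tax + 10.75% excise = 17%
months = 8                               # November through June, after the delay

low_monthly_sales = low_revenue / tax_rate / months
high_monthly_sales = high_revenue / tax_rate / months

print(f"${low_monthly_sales / 1e6:.1f}M to ${high_monthly_sales / 1e6:.1f}M per month")
# -> $32.4M to $60.3M per month
```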
As of June 26, the last time the data were updated, marijuana sales totaled $176 million, which would put tax revenue somewhere around $22 million this fiscal year. Not bad, but not a great show either — and a bit surprising to me, given the traffic I’ve had to wade through passing a dispensary on my way to work. Furthermore, the state is probably constrained in its ability to raise the excise tax on marijuana, since that could push buyers back into the informal market. And as more states in the region legalize, there’s a good chance sales will drop off somewhat.
On the other hand, sales of marijuana are clearly ramping up as more stores open, and making projections about a brand new industry can’t be easy. I think people more knowledgeable about the regulatory rollout would also contend that Massachusetts bureaucrats are at least partly responsible for the relatively poor sales. The first shops were concentrated in low-population areas of the state, and the closest one to Boston didn’t open until March. Still, the state was off on this one, too. (I thought I was the only one to notice this, but I guess not.)
A few admittedly tangential reflections on this: The positive spin on the commercialization of marijuana and the proliferation of casinos is that the state is growing more respectful of individual autonomy, abandoning harmful and ultimately unsuccessful prohibitive policy and allowing market forces to dictate what forms of entertainment are viable. If the state should make a few bucks in the process, all the better. Right?
Well, maybe. My natural sympathies lie with the above assessment, but the state’s financial incentive complicates the picture — especially insofar as new sin taxes are attractive alternatives to the prospect of raising traditional taxes.
Taxing the consumption of vices is a markedly regressive form of revenue generation. The most salient example of this is tobacco: its use is more common among those with less education and those below the poverty line, and among smokers, those populations suffer greater negative health effects. But it’s also broadly true that the majority of revenue from the sale of vices tends to come from a relatively small group of “power users.” The top tenth of drinkers consume half of all alcohol sold in the United States, for example. I don’t have any data on this at the moment, but if I had to guess, the pathological consumption of vices is probably negatively correlated with the propensity to vote.
The cynical take, therefore, is that the newfound permissiveness of the state is a financially motivated abdication of the state’s most fundamental obligations, a mutually beneficial pact between “limbic capitalists” and politicians.
Ironically, sin taxes have notable limitations as revenue-raisers. For one, unlike other taxes, sin taxes are supposed to accomplish two contradictory goals: curbing consumption and raising revenue. Attention to the former usually requires that tax rates be imposed at higher than revenue-maximizing points. This can also encourage regulators, as with alcohol and tobacco, to tax on a per-unit basis, tying revenue growth to consumption patterns. While they may be tempting stop-gaps, sin taxes are not a long-term budgetary fix, and analysis of their social costs and fiscal benefits should bear that in mind.
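The tension between those two goals can be illustrated with a toy model. This is purely a stylized sketch: the linear demand curve and every number in it are made up, and real demand for alcohol or tobacco is far less elastic:

```python
# Toy illustration: with linear demand, a per-unit sin tax has a
# revenue-maximizing rate; pushing the rate past that point to curb
# consumption further costs the state revenue. All numbers are hypothetical.
def quantity(tax, base=100.0, slope=5.0):
    """Units sold at a given per-unit tax (linear demand, floored at zero)."""
    return max(base - slope * tax, 0.0)

def revenue(tax):
    return tax * quantity(tax)

taxes = [t / 2 for t in range(0, 41)]  # candidate rates: $0 to $20 per unit
best = max(taxes, key=revenue)

print(best, revenue(best))             # revenue peaks at a $10-per-unit tax
print(revenue(15) < revenue(best))     # a higher, consumption-curbing rate raises less
```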
Whatever else it might accomplish, President Donald Trump’s administration has surely earned its place in history for laying to rest the myth of Republican fiscal prudence. Be they the tax dollars of today’s citizens or tomorrow’s, high ranking officials within Mr. Trump’s White House seem to have no qualms about spending them.
The latest in a long series of questionable expenses is, of course, none other than Department of Housing and Urban Development Secretary Ben Carson’s now infamous $31,000 dining set, first reported on by the New York Times. Since the Times broke the story, Mr. Carson has attempted to cancel the order, having come under public scrutiny for what many understandably deem to be an overly lavish expenditure on the public dime.
At first blush, Secretary Carson’s act is egregious. As the head of HUD, he has a proposed $41 billion of taxpayer money at his disposal. Such frivolous and seemingly self-aggrandizing spending undermines public trust in his ability to use taxpayer funds wisely and invites accusations of corruption. It certainly doesn’t help the narrative that, as some liberals have noted with derision, this scandal coincides with the proposal of significant cuts to the department’s budget.
But the more I think about it, the more I’m puzzled as to why people are so worked up about this.
Let me be clear: this certainly isn’t a good look for the Secretary of an anti-poverty department with a shrinking budget, and it’s justifiable that people are irritated. Most of us would consider $31,000 (a bit more than half the median annual wage) an absurd sum to spend on dining room furniture. The money that pays for it does indeed come from private citizens who would probably have chosen not to buy Mr. Carson a new dining room with it.
Questionable government spending sometimes flies under the radar simply by virtue of being bizarre. Last year, for example, the federal government spent $30,000 in the form of a National Endowment for the Arts grant to recreate William Shakespeare’s play “Hamlet” with a cast of dogs. Other times, the purchase at hand is too unfamiliar to the public to spark outrage. In 2016, the federal government spent $1.04 billion expanding trolley service a grand total of 10.92 miles in San Diego: an average cost of nearly $100 million per mile.
Both of those put Mr. Carson’s $31,000 dining set in a bit of perspective. It is neither as ridiculous as the play nor as great in magnitude as the trolley. So why didn’t either of those incidents receive the kind of public ire he is contending with now?
The mundanity of Mr. Carson’s purchase probably hurts him in this regard. Not many of us feel informed enough to opine on the kind of money one should spend building ten miles of trolley track, but most of us have bought a chair or table. That reference point puts things in perspective and allows room for an emotional response. It’s also likely this outrage is more than a little tied to the President’s unpopularity.
Ironically, the relatively small amount of money spent might also contribute to this effect. When amounts get large enough, like a billion dollars, we tend to lose perspective – what’s a couple million here or there? But $31,000 is an amount we can conceptualize.
So it’s possible that we’re blowing this a little out of proportion for reasons that are more emotional than logical. But I still think the issue is a legitimate one that deserves more public attention than it usually gets, and it would be interesting if the public were able to apply this kind of pressure to other instances of goofy spending. Here’s hoping, anyway.
A version of this article originally appeared on Merion West
More than fifty years into the “war on poverty,” government welfare programs remain the subject of much scrutiny. As the Trump administration unveils a new tax plan, fresh off numerous attempts to repeal and replace the Affordable Care Act, perennial questions about whether the government is doing enough to reduce poverty have resurfaced.
This debate often focuses almost exclusively on poor Americans, and solutions mostly center around the redistribution of resources via government transfers. On many levels, this makes sense; on the first count, non-Americans don’t vote, and politicians tend not to pay much attention to groups that cannot help them win elections. Secondly, the government’s ability to act on poverty is somewhat limited — it can try to create policies that facilitate wealth, but it cannot actually produce wealth on its own. Spreading around some of the surplus is therefore an attractive option.
But from a utilitarian and humanitarian perspective, this debate represents a missed opportunity. Limiting the conversation to wealth transfers within an already wealthy nation encourages inefficient solutions at the expense of ideas that might do a lot more good for a lot more people: namely, freeing people who are not at their maximum productivity to pursue opportunity elsewhere.
Between the EITC, TANF, SNAP, SSI, Medicaid, and other programs, the United States spent over $700 billion at the federal level in the name of alleviating poverty in 2015. A 2014 Census Bureau report estimates that Social Security payments alone reduced the number of poor Americans by nearly 27 million the previous year. Whatever your stance on the long-run effects of welfare programs, it’s safe to say that in the short term, government transfers provide substantial material benefits to recipients.
Yet if the virtue of welfare programs is their ability to improve living standards for the needy, their value pales in comparison to the potential held by labor relocation.
Political boundaries are funny things. By crossing them, workers moving from poor to rich nations can increase their productivity dramatically. That’s not necessarily because they can make more products or offer better services — although that is sometimes the case as well — but rather because what they produce is more economically valuable. This is what economists refer to as the “place premium,” and it’s partly created by differences in opportunity costs between consumers in each location.
Median wages of foreign-born US workers from 42 developing countries are estimated to be 4.1 times higher than those of their observably identical counterparts in their countries of origin. Some enthusiasts even speculate that the elimination of immigration restrictions alone could double global GDP. The place premium can be powerful enough to make low-skilled positions in rich countries economically attractive even to high-skilled workers from poor nations.
We have a lot of inequality in the United States, and that often masks the fact that we have very little absolute poverty. Even someone who is poor by American standards (an annual pre-transfer income of about $12,000 or less for a single-person household) can have an income that exceeds that of the global median household. Even with relatively generous government transfers, we probably would not increase their incomes more than threefold.
On the other hand, because they start with lower incomes, this same effect allows low-earning immigrants to proportionally increase their standard of living in a way that can’t be matched by redistribution within a relatively wealthy population. For example, the average hourly wage in the US manufacturing sector is slightly over $20; in Mexico, it’s around $2.30. Assuming a manufacturer from Mexico could find a similar position in the United States, their wage would increase roughly ninefold. To provide the same proportional benefit to a severely poor American — defined as a person or household with an income under half the poverty threshold — could cost up to $54,000.
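A quick sanity check of that arithmetic. The wage figures are from the text; the $6,000 severe-poverty income (half of a roughly $12,000 threshold) is my own assumption for illustration:

```python
# Rough check of the place-premium arithmetic above. The $6,000 income
# (half of a roughly $12,000 poverty threshold) is an assumed illustration.
us_wage = 20.00    # average hourly manufacturing wage, United States
mx_wage = 2.30     # average hourly manufacturing wage, Mexico

multiple = us_wage / mx_wage
print(f"Wage multiple: {multiple:.1f}x")   # 8.7x, i.e. close to ninefold

severe_poverty_income = 6_000.0
matched_income = severe_poverty_income * multiple
print(f"Income after the same proportional gain: ${matched_income:,.0f}")
# roughly $52,000, in the ballpark of the $54,000 figure
```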
What’s true across national borders is true within them. Americans living in economically desolate locations could improve their prospects by relocating to more prosperous and opportune areas. Indeed, this is exactly what’s been happening for decades. The percentage of Americans living in cities has increased steadily, going from 45% in 1910 to nearly 81% by 2010. Nor is relocation exclusively a long-term solution. During oil rushes in Alaska and North Dakota, populations within the two states exploded as people flocked to economic activity.
Recently, however, rates of migration have been dwindling. Admittedly, there are fewer barriers to intra-national migration than immigration. But there are still things we might do to make it easier for people to move where the money is.
One obvious solution would be to encourage local governments to cut back on zoning regulations that make building new housing stock less affordable. Zoning laws contribute heavily to the rising costs of living in the most expensive cities, leading to the displacement of poorer residents and the sequestration of opportunity. As with immigration, this poses a bit of a political problem — it requires politicians to prioritize the interests of the people who would live in a city over those of the people who currently live there — the ones who vote in local elections.
Relatedly, we might consider revising our approach to the mortgage interest deduction and other incentives for homeownership. While the conventional wisdom is that homeownership is almost always desirable because it allows the buyer to build equity on an appreciating asset, some studies have found a strong positive correlation between levels of homeownership and unemployment. The upshot is that tying up most of one’s money in a home reduces the ability and desire to move for employment, leading to unemployment and downward pressure on wages. Whether or not to buy a home is the buyer’s decision, but these data cast doubt on the idea that the government should subsidize such behavior.
If the goal of policy is to promote human well-being, then increasing mobility should be a priority for policy makers. As a species, as nations, as communities, and as individuals, we should strive for a more productive world. Allowing people the opportunity to relocate in the name of increasing their output is a near-free lunch in this regard.
But while the economic dream of frictionless markets is a beautiful one, we live in a world complicated by politics. It’s unrealistic to expect politicians to set aside the concerns of their constituents for the greater good. I will therefore stop short of asking for open borders, the abolition of zoning laws, or the removal of the mortgage interest deduction. Instead, I offer the humbler suggestion that we exercise restraint in such measures, striving to remove and lessen barriers to mobility whenever possible. The result will be a freer, more equal, and wealthier world.
The mushy center never inspires passion like ideological purity. The spectacle of radicalism puts asses in the seats. It’s hard, on the other hand, to imagine rebellious, mask-clad youths taking to the street in the name of fine-tuning marginal tax rates.
Oh sure, you may see a protest here and there, and practically everyone grumbles about this or that issue in which they have an interest. But as the great philosopher Calvin once said: a good compromise leaves everybody mad.
Some more so than others. Opining in the New York Times, Senator Bernie Sanders suggests Democrats can reverse their political fortunes by abandoning their “overly cautious, centrist ideology,” and more closely approximating the policy positions of a Vermont socialist.
I suppose this could be sound political advice. Everyone has an idea of the way they’d like the world to work, and Sanders’ ideas are appealing to a great many people. You could argue–as Sanders does–that Republicans have had some success with a similar strategy following the Obama years. But, as they’re finding out, ideological purity makes for better campaign slogans than successful governing strategy.
Here’s the thing: We live in a big, diverse country. People have very different wants and needs, yet we all live under the same (federal) laws. Our priorities must sometimes compete against each other, which is why we often end up with some of what we want, but not everything. Striking that balance is tough, and by necessity leaves many people unhappy. We don’t always get it right. But when you’re talking about laws that affect 320 million people, some modesty, or if you prefer, “caution,” is in order.
Alas, Bernie is not of a similar mind. In fewer than 1,000 words, he offers no shortage of progressive bromides without mention of the accompanying price tag. It’s one thing to form a platform around Medicare-for-all, higher taxes on the wealthy (their “fair share”), aggressive clean energy commitments, a trillion-dollar infrastructure plan, or free tuition at state universities and lower interest rates on student loans. But all of them? At once?!
Sanders should remember the political and economic lessons of Vermont Governor Peter Shumlin’s foray into single-payer healthcare: Government spending–and thus government activity–is constrained by the population’s tolerance for taxation (And on the other side of things, their tolerance for a deficit of public services. Looking at you, Kansas). Go too far and you risk losing support. And unless you’re willing to rule by force, as extremists often must, that will cost you your ability to shape public policy.
For what it’s worth, I don’t think the Senator’s advice would do the Democrats any favors. The Democrats didn’t move to the center-left because there was widespread and untapped support for endless government programs in America. They did it because they collided with the political and economic reality of governance in our country. Americans are willing to pay for some government programs, but not at the rate Europeans pay to have much more expansive governments. The left, therefore, shouldn’t play an all-or-nothing game, but instead think about what it does well and how it can appeal to, rather than alienate, the rest of the country. That’s going to involve compromise.
Update: Following Jon Ossoff’s narrow defeat in a Georgia special election, there’s been a lot of discussion about whether a more progressive candidate would have fared better. Personally, I find it hard to believe centrism and fiscal conservatism worked against Ossoff in a historically Republican district. Much more believable is Matt Yglesias’ related-but-different take that Ossoff’s reluctance to talk policy left a void for the opposition to exploit, allowing them to cast him as an outsider.
One thing seems certain: the rift within the Democratic party isn’t going away anytime soon.
It seems like more often than not I’m opening these blog posts with an apology for a multi-week hiatus. Since nobody’s emailed to check on my well-being, I can only infer my readership has gotten on fine without my wonk-Jr. takes on public policy and other matters of high import. Fair enough; but don’t think your demonstrated lack of interest will spare you from a quick update.
Actually, it’s all good news: I’ve been having fun learning R (a statistical language), looking for a new apartment, and testing the limits of a 27-year-old liver. I saw Chance the Rapper and a Pirates game in Pittsburgh, which was awesome. The last article I wrote had some real success and was republished in several places, even earning a shout-out from John Stossel:
The big update is that my stint as a (purely) freelance writer has mercifully drawn to a close; I now write for a non-partisan public policy group. In fact, this very blog was one of my strongest selling points, according to my manager. It just goes to show you, kids: if you toil in anonymity for two years, eventually something will go your way.
Okay, enough about me. Let’s talk about a topic close to the heart of many millennials: student loans. More specifically, I want to talk about the interest rates charged on undergraduate student loans.
That interest rates are too high is, unsurprisingly, a common gripe among borrowers. If I had a nickel for every twenty-something I’ve overheard complain that the federal government shouldn’t profit off student loans…well, it still wouldn’t cover one month’s interest. However, this sentiment isn’t limited to overqualified baristas; popular politicians like Elizabeth Warren and Bernie Sanders–and even unpopular politicians–have publicly called for loans to be refinanced at lower rates and decried the “profiteering” of the federal government. From Bernie Sanders’ website:
Over the next decade, it has been estimated that the federal government will make a profit of over $110 billion on student loan programs. This is morally wrong and it is bad economics. Sen. Sanders will fight to prevent the federal government from profiteering on the backs of college students and use this money instead to significantly lower student loan interest rates.
Under the Sanders plan, the formula for setting student loan interest rates would go back to where it was in 2006. If this plan were in effect today, interest rates on undergraduate loans would drop from 4.29% to just 2.37%.
It makes no sense that you can get an auto loan today with an interest rate of 2.5%, but millions of college graduates are forced to pay interest rates of 5-7% or more for decades. Under the Sanders plan, Americans would be able to refinance their student loans at today’s low interest rates.
As one of those debt-saddled graduates, and one of the chumps who took loans at a higher rate of interest, I would obviously be amenable to handing over less of my hard-earned money to the federal government. But as a person concerned with the larger picture, I have to say this is a really bad idea. In fact, rates should be higher, not lower.
First of all, the progressive case for loan refinancing or forgiveness only holds up under the lowest level of scrutiny. Such a policy would overwhelmingly benefit borrowers from wealthy families, who hold the majority of student loan debt. Conversely, most defaulters hold relatively small amounts of debt. Fiddling with interest rates shouldn’t be confused with programs that target low-income students, like the Pell Grant, which are another matter entirely and not the subject of my criticism.
More to the point, the federal government probably isn’t making any money on student loans. The claims of Senators Warren and Sanders rely on estimates from the Government Accountability Office (GAO), which put federal profit on student loans at $135 billion from 2015 to 2024. The Congressional Budget Office (CBO), using fair-value estimation, instead shows student loans costing the federal government $88 billion over the same period.
The discrepancy between the CBO and GAO figures comes from the CBO’s use of fair-value accounting, which prices in market risk. Essentially, the CBO treats the risk of default on student loans as higher than the GAO does, because defaults cluster in downturns driven by forces beyond individual borrowers’ control.
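The mechanics of that disagreement can be sketched with a toy loan. Both methods discount the same expected repayments; fair-value accounting simply applies a higher, risk-adjusted discount rate, which can flip an apparent profit into a loss. Every number below is made up for illustration and drawn from neither the CBO nor the GAO models:

```python
# Toy sketch of FCRA-style vs fair-value accounting for a single loan.
# Identical expected cash flows; only the discount rate differs.
def present_value(cashflows, rate):
    """Discount a list of annual cash flows back to today."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cashflows))

principal = 10_000.0
expected_repayments = [1_200.0] * 10          # expected annual repayments (made up)

fcra_value = present_value(expected_repayments, 0.02)  # near-risk-free rate
fair_value = present_value(expected_repayments, 0.06)  # risk-adjusted rate

print(f"FCRA-style: {fcra_value - principal:+,.0f}")   # positive: looks like a profit
print(f"Fair value: {fair_value - principal:+,.0f}")   # negative: looks like a loss
```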
Evidence suggests it’s unwise to underestimate the risk associated with student loans. According to a study by the liberal think tank Demos, nearly 40% of federal student loan borrowers are in default or more than 90 days delinquent. Add to that the fact that student loans are unsecured (not backed by collateral or repossessable assets, like a car or house), and they start to look like an incredibly risky venture for the federal government, and ultimately, taxpayers.
That conclusion is deeply unpleasant, but not really surprising if you think about it. Ever notice how the interest rates on private student loans–approximately 10% of the market–are much higher? That’s not because private lenders are greedy; it’s because they can’t lend at the rate of the federal government without losing money.
This is all important because the money that finances student loans has to come from somewhere. Be it infrastructure upgrades, federal support for primary education, or Shrimp Fight Club, the money spent on student loans isn’t available for competing priorities. This is even more important when you consider the loss the federal government is taking on these loans, the cost of which is passed on to future taxpayers in the form of higher taxes or lower spending. Since higher education is only one among infinite human desires, we need to decide how much of our finite resources to devote to it. Properly set interest rates are one way (probably the best way) to figure that out.
The irony, of course, is that doing so would require the government to act more like a private lender–the very thing it’s designed not to do! Our student loan system ensures virtually anyone who wants to study has the money to do so, regardless of the likelihood they’ll be able to repay. One of the nasty side effects of this indiscriminate lending is a large amount of distressed borrowers, who now find themselves in the uncomfortable position of digging out from under a mountain of debt they likely shouldn’t have been able to take on.
More so than other forms of government spending, student loans have specific, discernible beneficiaries: the students who get an expensive education financed by the taxpayer at below-market rates. Sure, you can argue there’s some spillover; society does benefit from having more highly-trained workers. But most of the time, highly skilled labor is rewarded with higher wages. That being the case, is it really too much to ask for borrowers to pay a level of interest that reflects the actual cost of issuing their loans?
Yes, this would be discouraging for some: particularly those who want to pursue non-remunerative fields of study. That’s not such a bad thing; higher interest rates would steer people away from obtaining degrees with low salary expectations, which would–by my reckoning–reduce rates of delinquency and default over the long term. They would also help mitigate some of the pain of defaults when they do happen.
But–you might protest–you can’t run the government like a business! And sure, a lot of the time, you’d be right. However, I really think this is one area where doing so is appropriate–even desirable. Hear me out.
When the government can fund itself through profitable investments rather than zero-sum transfers, it should. If we’re going to have a government of any size (and few suggest that we shouldn’t), then we need to pay for it. Which sounds like the preferable way for that to happen: voluntary, productive, and mutually beneficial investments in society; or the forceful appropriation of private resources? I’m not suggesting the former could entirely replace the latter, but when it can, I think it absolutely should.
Astute readers will realize if the government decides to lend profitably, it will have to compete with private lenders, which would cut into its margins and make its presence in the market redundant. So maybe it’s just a pipe dream. But if profitable lending isn’t possible, the federal government should at least try to minimize losses. One way or another, that means higher interest rates.
We’re coming off one hell of a Friday. Newly released material on both the Democratic and Republican nominees confirms the fears of their respective less-than-fervent supporters: namely, that Clinton is an opportunistic liar and that Trump exhibits a moral deficiency that should and will render him unelectable.
A third, fourth, or fifth voice in tomorrow’s debate would be pretty nice right about now.
Unfortunately for We the People, that’s not really up to us. The decision falls solely within the purview of the Commission on Presidential Debates: a non-profit run by the Democratic and Republican parties–because why not? The CPD requires that a prospective candidate average 15% across five national polls, which are conducted in part by heavily partisan organizations and may not include third-party candidates’ names at all.
Somehow, in the same country where we have more types of shoes and deodorant than Bernie Sanders can shake a stick at, we’re left with a binary choice (in practical terms) when it comes to electing a national leader.
For those of us who equate choice with wealth, it’s no shock that severe barriers to entry have left us politically poorer than we should be. Most Americans, after all, are not members of either major party and probably hold an eclectic set of views. Neither candidate is viewed favorably by the public.
But alas, this is life under political duopoly where the players are also the referees. It’s understandable that the two major parties wouldn’t be excited about the prospect of ceding some of their influence. What’s less understandable is the willingness of American “intelligentsia” to play along.
The New York Times has gone into full attack mode to dissuade voters from seeking alternative presidential options. From the opinion section, Paul Krugman and Charles Blow submit missives that malign voters whose opinions diverge from their own (infuriatingly, Krugman’s column, entitled “Vote as if It Matters”, tells voters that “nobody cares” if they use their votes in protest).
Less forgivably, reporters writing in the Politics section can be found using discredited scare tactics to frighten voters away from making their own choices:
And, in what is one of the most difficult barriers for Mrs. Clinton to break through, young people often display little understanding of how a protest vote for a third-party candidate, or not voting at all, can alter the outcome of a close election.
The vast majority of millennials were not old enough to vote in 2000, when Ralph Nader ran as the Green Party nominee and, with the strong backing of young voters, helped cost Vice President Al Gore the presidency.
Hypothesis easily turns to axiom in a feedback loop. Instead of looking inward (300,000 registered Democrats voted for Bush in Florida in 2000), partisans choose to punch down at political minorities (Nader had 90,000 votes in Florida, only 24,000 of which were from registered Democrats) because that absolves them of the responsibility to produce better candidates.
Like any cartel, the political establishment excels at serving itself while being unresponsive to clients (voters). As long as the electorate is willing to swallow the idea that they must choose between the options laid out for them by Democrats and Republicans, that isn’t going to change. The truth is we do have a choice; it’s just a matter of exercising it.
The debate over minimum wage is one of the most confused arguments in American public policy. Although on its face minimum wage appears to be a promising and simple idea, it is, in fact, a very bad policy that has surely hurt the very people it aimed to help. Proponents of minimum wage (many of them well intentioned) often advocate for increases as a means to improve the personal welfare of workers earning the minimum. This is often accompanied by the argument that no one working full time should live in poverty.
The debate they think they’re having is whether we can provide a minimum income and standard of living for American workers. The debate actually relevant to minimum wage law is, as Charles Blahous of 21st Century Economics points out, whether government should establish a price barrier to employment and, if so, how high it should be.
The answer to the first question is: yes, but it should be handled differently. The answer to the latter is simple: no.
The welfare of the poor and the prevailing minimum wage are not inextricably linked. Despite minimum wage’s self-evident virtue among certain ideological factions, there’s actually little reason to think this sledgehammer-style policy would help many people, let alone society as a whole. Before we talk about what would work better, I want to highlight some of the more egregious failings of the minimum wage.
1. Minimum wage forces people out of work
Because most of us grew up with the idea, it takes effort to even begin considering the minimum wage for what it really is: a price floor. Like other price floors, it has consequences beyond those desired.
One negative effect of a minimum wage is a loss of employment. This isn’t limited to people losing their jobs or having their hours cut, but also includes the destruction of future jobs that are casualties of foregone economic growth.
Artificially changing the price of something doesn’t change how much it’s worth to people; economics is tasked with grimly reminding us that prices emerge as a function of supply and demand. As long as employment remains a voluntary transaction between employer and employee, it’s hard to believe a price floor won’t compromise the ability of some workers to sell their labor.
Tragically, this usually affects the workers with the fewest skills: traditionally the young, the poor, and the undereducated. By eliminating their ability to charge less for their services, minimum wage laws eliminate their competitive advantage. This forces them onto the public dole and renders them a net drain on society.
2. Loss of societal surplus (deadweight loss)
This concept is a bit nebulous, but bear with me.
One of the reasons people like me (handsome, rugged) are fans of free markets (a commonly maligned and misunderstood term) is their ability to maximize surplus—the excess benefits enjoyed by producers and consumers in a transaction. (That is, when we’re talking about privately consumed goods.)
Surplus captures the idea that even though a buyer would be willing to pay more, and a seller to accept less, for a good (in this case, labor), the free-market equilibrium price leaves both parties better off than their walk-away price. In the graph below, it’s represented by the triangle formed by the crossing of the supply and demand curves.
Forcing a price above or below the equilibrium diminishes the total surplus enjoyed by society as a whole; economists refer to this as “deadweight loss” (the green triangle in the graph on the right). It’s true that a price floor set above the equilibrium point can (but won’t necessarily) increase the surplus of suppliers (laborers), but this is a bad idea for two reasons:
First, it reduces economic growth and efficiency. The added supplier surplus comes at a direct expense to the rest of the economy, which puts undue pressure on consumers of labor, and thus on the demand for labor.
Second, consumers of labor aren’t exclusively employers; they’re also everyday customers, many of whom are the very laborers we meant to aid with the minimum wage.
In other words, though there is tendency to focus on people as either consumers or suppliers of labor, most are both. While they may benefit from minimum wage increases as an employee, they may lose in many other instances when they find themselves on the other side of the proverbial counter. Which dovetails nicely with my next point…
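The surplus argument above can be made concrete with a toy model. The linear curves below use invented numbers, not real labor-market data; they exist only to show how a binding price floor can raise supplier surplus while shrinking the total.

```python
# Toy model: inverse demand P = 20 - Q, inverse supply P = Q (invented units).
# The market clears where the curves cross: P = Q = 10.

def surpluses(price_floor=None):
    """Return (consumer, producer, total) surplus, with an optional price floor."""
    if price_floor is None or price_floor <= 10:
        price, q = 10.0, 10.0                       # floor doesn't bind; equilibrium
    else:
        price, q = price_floor, 20.0 - price_floor  # quantity rationed by demand
    consumer = 20 * q - q * q / 2 - price * q       # area under demand, above price
    producer = price * q - q * q / 2                # area above supply, below price
    return consumer, producer, consumer + producer

print(surpluses())    # free market: (50.0, 50.0, 100.0)
print(surpluses(12))  # $12 floor:   (32.0, 64.0, 96.0)
# Suppliers gain (50 -> 64), but total surplus falls by 4: the deadweight loss.
```

Note that the floor really does raise producer surplus here, which is exactly why the policy is tempting; the 4 units of surplus that vanish belong to no one in particular, which is why they’re easy to ignore.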
3. Poor people consume lots of low-wage labor
We all buy food. We all buy clothes. But we don’t all shop at the same places. Poor people are more likely to shop at places with lower prices and, you guessed it, lower labor costs.
Consider that the average Whole Foods employee earns about $18 per hour while the average Walmart employee makes about $13. The shoppers at the corresponding stores show similar disparities in disposable income, which are reflected in the prices they pay.
If the minimum wage were raised to $15 per hour, it might have a negligible effect on prices at Whole Foods. The same is not certain for Walmart. Even if prices were to increase by the same amount in both stores, the impact would be greater on the lower-income shoppers, since it would make up a larger percentage of their income.
The problem is that the money that pays for the higher price of labor doesn’t come from nowhere; too often, it comes from exactly those we’re trying to help.
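To put rough, entirely hypothetical numbers on that: suppose higher labor costs pass the same few dollars through to a weekly grocery bill at both stores.

```python
# Hypothetical: the same $5 weekly price increase hits budgets unevenly.
increase = 5.00  # assumed weekly pass-through of higher labor costs

for weekly_income in (400.0, 1200.0):  # invented take-home figures
    share = increase / weekly_income
    print(f"${weekly_income:.0f}/week shopper pays {share:.2%} of income")
# The lower-income shopper gives up three times as large a share of income.
```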
4. Minimum wage has sloppy aim
A central challenge to minimum wage’s credibility as a form of poverty relief is that it only affects people with wages. It’s easy to make the assumption that poor people are the ones working low-wage jobs, but the two groups aren’t as synonymous as one might think.
First of all, to be considered poor you must live in a poor household, and 57% of poor households have no income earners at all (Federal Reserve Bank of San Francisco, p. 2). The idea that we would help them by making things cost more is ludicrous.
In reality, about 22% of minimum wage earners live below the poverty line. Their median age is 25; three-fifths are enrolled in school; 47% live in the South (where labor and living costs are lower); and 64% work part-time.
Fully ¾ of minimum wage-earning adults live above the poverty line.
It’s clear that we’re largely talking about two different groups of people when we discuss minimum wage earners and the poor. Given that the majority of minimum wage workers aren’t poor and that the majority of the poor are unemployed, we should consider another strategy for fighting poverty: one that doesn’t reduce employment opportunities for the unskilled.
Okay, okay…so if minimum wage isn’t a good solution, what is?
Phenomenal question! The many problems with minimum wage policies share a common root: minimum wage affects transactions before they occur, passing the cost on to employers or customers and distorting demand. The evident solution, then, is a policy that takes effect post-market. My answer is a wage subsidy.
We lose more than we gain by interfering with labor markets. Instead, we should eliminate the minimum wage and—very carefully—create targeted wage subsidies for people who aren’t making enough money from their jobs to survive.
This has to be done precisely to avoid creating disincentives to work. Welfare programs can perversely discourage people from earning more money by stripping away benefits faster than wages rise (and this proposal is closer to a welfare program than the minimum wage is). To give a simple example: if everyone earning under $10,000 were given an extra $5,000, anyone earning between $10,000 and $14,999 would take home less than someone who stopped at $9,999, thus encouraging economic stagnation.
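The cliff in that example is easy to see in code (the numbers are the text’s own hypothetical):

```python
# A flat $5,000 grant for anyone earning under $10,000 creates a cliff:
# gross earnings between $10,000 and $14,999 leave you worse off than
# stopping at $9,999.

def total_income(earnings):
    grant = 5_000 if earnings < 10_000 else 0
    return earnings + grant

print(total_income(9_999))   # 14999
print(total_income(12_000))  # 12000 -- earned $2,001 more, took home $2,999 less
```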
We want to encourage people to be as productive as possible. When we design a welfare system, we have to make sure the recipient’s total take-home (wages plus subsidy) rises with every additional dollar earned. To accomplish this, we can design our wage subsidy as a function of the market wage (the price that employers actually pay) that increases at a decreasing rate until it reaches a wage we as a society find acceptable.
I chose to have the subsidized curve cross with the market wage (y=x) at $13/hour, beyond which point it will cease to be applied. Of course, we could write any equation and phase it out at any point. This subsidy curve is a concept, not a strict recommendation.
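Since the original graph isn’t reproduced here, a square-root curve can serve as one stand-in with the properties described: increasing, concave (rising at a decreasing rate), and crossing the market-wage line y = x at $13/hour. The specific functional form is my own assumption, not the article’s.

```python
import math

TARGET = 13.0  # hourly wage where the subsidy phases out (from the text)

def take_home(market_wage):
    """Subsidized hourly take-home under a hypothetical sqrt-shaped curve."""
    if market_wage >= TARGET:
        return float(market_wage)            # no subsidy at or above $13/hr
    return math.sqrt(TARGET * market_wage)   # concave; equals y = x at $13

def subsidy(market_wage):
    return take_home(market_wage) - market_wage

for w in (4, 7, 10, 13):
    print(f"market ${w:>2}/hr -> take-home ${take_home(w):5.2f}/hr "
          f"(subsidy ${subsidy(w):.2f})")
```

Because take_home() is strictly increasing, every extra dollar of market wage still raises total pay, preserving the incentive to seek better-paying work; and because the subsidy shrinks as the market wage rises, it phases out smoothly rather than falling off a cliff.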
There are some profound advantages to this “after-market” approach:
The cost is borne by society instead of individual employers
I’ve spoken before about how the cost of consumption should be borne by the consumer, so you can be forgiven for feeling confused about why I feel a subsidy funded by taxes is appropriate here. However, the true price of a dishwasher (for example) is not $15 per hour. We know this because there is currently an abundance of dishwashers willing to work for far less than that. If we as a society want them to take home more money for their work, we should pay the difference.
Because of the way this subsidy curve is designed, employees will still have an incentive to search for the highest-paying jobs available to them. By tying receipt of the subsidy to work, we encourage workers to maximize their productivity. As long as these conditions are met, our subsidy won’t unnecessarily burden society with the cost of inefficient labor allocation.
No one is locked out of the labor market
Young people’s employment opportunities are eroded by high minimum wages. Keeping them out of the labor market has negative repercussions for their futures. From the Center for American Progress:
Not only is unemployment bad for young people now, but the negative effects of being unemployed have also been shown to follow a person throughout his or her career. A young person who has been unemployed for six months can expect to earn about $22,000 less over the next 10 years than they could have expected to earn had they not experienced a lengthy period of unemployment. In April 2010 the number of people ages 20–24 who were unemployed for more than six months had reached an all-time high of 967,000 people. We estimate that these young Americans will lose a total of $21.4 billion in earnings over the next 10 years.
Everyone, even the White House, recognizes that the larger implications of a “first job” for our young labor force extend far beyond the pay they receive. Absurdly, they have crafted a program that calls for $5.5 billion in grant funds to help young people get the jobs they have been priced out of by their own government.
Markets will function better
Advocates of raising the minimum wage are effectively claiming that making a market less efficient will improve outcomes. Here the fallacy takes this form: if the cost of labor is higher, workers will have more money to spend and demand will increase.
This is tempting logic, but it doesn’t hold up to scrutiny. To see why, replace “workers” with something more specific, like carpenters. Yes, if we passed a law saying that carpenters had to be paid more, it would be great for some carpenters. But any additional money spent on carpenters can’t be spent on something else, so society loses whatever benefits it might have gained from that forgone surplus.
If that logic held, it would also make sense to ban power tools and all sorts of technology, thereby increasing the demand for, and price of, human labor.
An efficient market creates more surplus, and is less burdened by the cost of those who must rely on public welfare. Additionally, the cost of supporting those people will be defrayed by their renewed ability to provide (in some part, at least) for themselves.
It’s way more targeted than a minimum wage, and could absorb other welfare programs
We could write different equations for different people who might require larger or smaller subsidies to meet their basic needs. For example, a single mom of four kids in Long Beach could receive a steeper subsidy than a childless teen living in rural Alabama, who might not need one at all.
This could theoretically absorb other welfare programs. Instead of receiving a SNAP card, a Section 8 voucher, and WIC benefits, a recipient could have the cash needed to cover those expenses folded into the subsidy calculation. This has the benefit of cutting down on expensive bureaucratic systems and increasing the utility of the money given through welfare, all while incentivizing work.
The United States is a rich country. If we spend our money wisely, there’s no reason we can’t afford some minimum standard of living for workers. Helping our poor citizens is one of the best uses for taxes and far better than a lot of the things we spend public money on.
But rather than mess with markets, we should simply give more money to the people we want to help by redistributing income after markets are allowed to produce as much wealth as they’re able. Additionally, if we’re going to combat poverty with public money, we should do it in a way that stands a chance of eventually readying people to support themselves and without sacrificing economic efficiency. Minimum wage fails both of these tasks.