As anyone reading this blog is undoubtedly aware, Sarah Huckabee Sanders, the current White House Press Secretary, was asked last month by the owner of a restaurant to leave the establishment on the basis that she and her staff felt a moral imperative to refuse service to a member of the Trump administration. The incident, and the ensuing turmoil, highlights the extent to which business has become another political battleground—a concept that makes many anxious.
Whether or not businesses should take on political and social responsibilities is a fraught question—but not a new one. Writing for the New York Times in 1970, Milton Friedman famously argued that businesses should avoid the temptation to go out of their way to be socially responsible and instead focus on maximizing profits within the legal and ethical framework erected by government and society. To act otherwise at the expense of profitability, he reasoned, is to spend other people’s money—that of shareholders, employees, or customers—robbing them of their agency.
Though nearing fifty years of age, much of Milton Friedman’s windily and aptly titled essay, The Social Responsibility of Business Is to Increase Profits, feels like it could have been written today. Many of the hypotheticals he cites of corporate social responsibility—“providing employment, eliminating discrimination, avoiding pollution”—are charmingly relevant in the era of automation anxiety, BDS, and one-star campaigns. His solution, that businesses sidestep the whole mess, focus on what they do best, and play by the rules set forth by the public, is elegant and simple—and increasingly untenable.
One reason for this is that businesses and the governments Friedman imagined would rein them in have grown much closer, even as the latter have grown comparatively weaker. In sharp contrast to the get-government-out-of-business attitude that prevailed in the boardrooms of the 1970s, modern industry groups collectively spend hundreds of millions to get the ears of lawmakers, hoping to obtain favorable legislation or stave off laws that would hurt them. Corporate (and other) lobbyists are known to write and edit bills, sometimes word for word.
You could convincingly argue that this is done in pursuit of profit: Boeing, for example, spent $17 million lobbying federal politicians in 2016 and received $20 million in federal subsidies the same year. As of a 2014 report by Good Jobs First, an organization that tracks corporate subsidies, Boeing had received over $13 billion of subsidies and loans from various levels of government. Nevertheless, this is wildly divergent from Friedman’s idea of business as an adherent to, not architect of, policy.
As business has influenced policy, so too have politics made their mark on business. Far more so than in the past, today’s customers expect brands to take stands on social and political issues. A report by Edelman, a global communications firm, finds a whopping 60% of American Millennials (and 30% of consumers worldwide) are “belief-driven” buyers.
This, the report states, is the new normal for businesses—like it or not. Brands that refrain from speaking out on social and political issues now increasingly risk consumer indifference, which, I am assured by the finest minds in marketing, is not good. In an age of growing polarization, every purchase is becoming a political act. Of course, when you take a stand on a controversial issue, you also risk alienating people who think you’re wrong: 57% of consumers now say they will buy or boycott a brand based on its position on an issue.
This isn’t limited to merely how corporations talk. Firms are under increasing social pressure to hire diversity officers, change where they do business, and reduce their environmental impact, among other things. According to a 2017 KPMG survey on corporate social responsibility, 90% of the world’s largest companies now publish reports on their non-business responsibilities. This reporting rate, the survey says, is being driven by pressure from investors and government regulators alike.
It turns out that a well marketed stance on social responsibility can be a powerful recruiting tool. A 2003 study by the Stanford Graduate School of Business found 90% of graduating MBAs in the United States and Europe prioritize working for organizations committed to social responsibility. Often, these social objectives can be met in ways that employees enjoy: for example, cutting a company’s carbon footprint by letting employees work from home.
In light of all this, the choice between social and political responsibility and profitability seems something of a false dichotomy. The stakes are too high now for corporations to sit on the sidelines of policy, politics, and society, and businesses increasingly find themselves taking on such responsibilities in pursuit of profitability. Whether that’s good or bad is up for debate. But as businesses have grown more powerful and felt the need to transcend their formerly transactional relationships with consumers, it seems to be the new way of things.
*I wrote this a while ago, but didn’t publish. I was on vacation–sue me. I know the internet has the attention span of a five-year-old, and people aren’t really talking about DeVos anymore, but I’m hoping this is still interesting to someone.
The confirmation of Betsy DeVos as Secretary of Education was perhaps the hardest-won victory of President Trump’s nascent administration. Opposition to DeVos ran deep enough to require Vice President Pence to cast a historic tie-breaking vote.
To hear it from those on the Left, DeVos is uniquely unqualified for the position. Her lack of personal experience with the public school system, coupled with her one-sided approach to education and purported ignorance of education policy make her unsuited to the position, they argue.
On the Right, the response has been to call into question the political motivations behind opposition to DeVos. Teachers’ unions, after all, are some of the biggest spenders in U.S. politics and their economic interests are threatened by the kind of reforms DeVos’ appointment might foreshadow.
It’s hard to know if either or both sides are being overly cynical. I don’t pretend to have any deep knowledge of DeVos or her new mantle. But one thing seems empirically true: the status quo of public education isn’t above reproach.
More Money, Same (Math, Science, Literacy) Problems
The statistics are damning: Literacy rates among 17-year-old Americans peaked in 1971. Standardized testing reveals that math scores peaked in 1986. Test scores show a lack of improvement in math, science, and reading, subjects in which only 25%, 22%, and 37% of American students, respectively, are proficient.
This kind of stagnation isn’t typical among other nations; the United States showed much smaller levels of inter-generational improvement than other OECD nations. Up until about 1975, Americans were scoring significantly higher in math and literacy than Americans born before them. Since 1975, scores have plateaued, even adjusting for race and foreign-born status of students. As [Gallup’s] study states, this implicates the entire US school system.
Test scores aren’t the only indicators of educational dysfunction. Fully 60% of first-year college students need to take remedial courses in either math or English (to be fair, you might attribute this in part to college admission policies). Companies are also reporting longer vacancies for STEM positions and increasingly are forced to delay projects or look outside the U.S. for workers.
To be clear, it’s not that US public schools are producing particularly terrible outcomes (though they’re admittedly middling among the developed world). The real problem is spending on public education is becoming increasingly inefficient; we’re putting more and more resources into it and receiving little or no additional benefit. This is a long-term trend that should be addressed immediately to avoid throwing good money after bad.
In fairness, I have to point out that speaking of public schools in national terms risks obscuring that some public schools–usually found in high-income neighborhoods–perform incredibly well. However, unequal educational outcomes are often considered a bug, rather than a feature, of the public school system, which charter schools have in some cases been able to address with varying degrees of success (though there are charges that this is only possible because charters are given greater latitude in selecting their students).
The Status Quo Is Hard on Teachers, Too
There is a perception among some that public school teachers are profiting hand over fist as a result of teachers’ unions, to the expense of students. But the truth is a little more complicated.
On one hand, strong teachers’ unions have engendered some policies that arguably favor educators over students. Teacher firing rates, for example, are extremely low. This is especially true for tenured teachers, of whom an average of just 0.2 per district are dismissed for poor performance annually, according to the National Center for Education Statistics.
This is made possible (at least in part) by what effectively amounts to state-sanctioned local monopolies on education. Constraints on demand impede normal market mechanisms from weeding out inefficient suppliers (at least, that’s the theory embraced by school choice advocates). This isn’t illogical, and it explains the somewhat rare rift between black parents and the Democratic party line on school choice.
Consider a thought experiment: Imagine families were forced to shop for food only in their own neighborhoods. What might we expect to happen to the quality of food consumed by people in poor areas? What if we put limits on the number of new stores that could open?
In this light, it might be accurate to say that policies that require students to attend schools in their district prioritize the school system over the scholars.
On the other hand, a lot of teachers are being harmed by the current system–particularly the young and good ones.
Under current agreements, teacher compensation rates are in large part determined by longevity, both within the profession and teaching district. Young teachers–especially women teaching young children–are often underpaid relative to other professions.
Additionally, collective bargaining agreements have led to pay compression (a narrowing of the pay gap between high and low performers) among teachers, which penalizes high performing teachers and benefits low performing teachers. Correspondingly, there has been a detectable decline in standardized test scores of new teachers since the 1960s.¹
The combination of longevity-driven pay and salary compression has made teaching a less attractive profession for the best candidates, who can earn more in other comparable fields. A 2014 survey by the American Federation of Teachers revealed merely 15% of teachers report high levels of enthusiasm about their profession, despite 89% feeling highly enthusiastic at the beginning of their careers.
What might we say about an education system that grows increasingly expensive without improvement for students or teachers? We might say that it needs work and we should be open to new ideas, in whatever form they might come. It might also be wise to proceed with caution; for better or worse, this is the system we have right now.
I don’t know if Mrs. DeVos’ agenda will result in improvements. The divergent problems of climbing spending and poor teacher incentive could prove difficult to address simultaneously, especially in the current political climate. But we should all remember the true goal of an education system–public, private, or somewhere in between–is to efficiently increase human capital. How that happens should be of secondary concern.
¹ The study I cited found these results to be true only among female teachers. For some reason, scores of incoming male teachers improved slightly over this period. If anyone has any theories as to why this might be, I’d love to hear them.
There’s no shortage of lessons for us to learn from the unfolding Mylan scandal. It’s practically a study in the consequences of an uncompetitive market and bureaucratic cynicism. Among all the takeaways from this episode, there’s one thing that stands out as particularly interesting: the value inferior products offer consumers.
That sounds a bit counterintuitive. We typically think of inferiorities as something to be eliminated. But in many cases the presence of inferior goods–economically defined as goods for which demand increases when a consumer’s income drops–is actually beneficial, relative to their absence.
Before I talk about how this applies to the EpiPen, I have a confession to make:
I don’t have the latest iPhone.
I don’t even have the oldest iPhone. I have a Samsung Galaxy Light.
It overheats easily, has abysmal battery life, and sends snapchats that look like flipbooks. It hangs up during calls and will never catch a single Pokémon. I once tried to purge my text archive and it took over 12 hours.
Why do I own a device that’s so pathetic by modern standards? It’s a question of personal priorities and resources. I don’t care enough about the add-ons to spend $400 on a phone: a decision surely influenced by the fact that I don’t make that much money.
And yet, even though my phone is undeniably of poorer quality than the phones of my peers, I am much better off for it. If it were to disappear tomorrow, say, because someone outlawed suboptimal cell phones, I’d be upset. I would probably end up buying an iPhone, but if I didn’t have the money, I might end up without a cell phone altogether.
So what’s the scrappy alternative (the Galaxy Light, if you will) to the EpiPen? Well, we don’t know–it’s not allowed to exist.
The FDA has the power to decide what products consumers can and can’t access. In the case of EpiPen substitutes, among others, it’s imposed onerous hurdles to market entry that have severely limited anyone’s ability to compete with Mylan. The Wall Street Journal writes:
But no company has been able to do so to the FDA’s satisfaction. Last year Sanofi withdrew an EpiPen rival called Auvi-Q that was introduced in 2013, after merely 26 cases in which the device malfunctioned and delivered an inaccurate dose. Though the recall was voluntary and the FDA process is not transparent, such extraordinary actions are never done without agency involvement. This suggests a regulatory motive other than patient safety.
Then in February the FDA rejected Teva’s generic EpiPen application. In June the FDA required a San Diego-based company called Adamis to expand patient trials and reliability studies for still another auto-injector rival.
Let’s be charitable and ignore that Mylan spent over $2 million lobbying in Washington in 2015. FDA risk aversion, noble or otherwise, is still hurting consumers by leaving them with fewer options and higher prices.
While the FDA has a preference (and every political incentive) for extreme vetting, it may be that some consumers of epinephrine prefer a less expensive, if less tested, model of injector.
Assuming that consumers have the same priorities as their legislators is a mistake of arrogance, and a costly one. A study by Tufts University in 2014 put the cost of getting a drug through to market at $2.56 billion–$1.4 billion in out-of-pocket expenses and $1.16 billion in time costs (the forgone ROI of that $1.4 billion over the time the approval process took).
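To make the “time cost” component concrete, here is a rough sketch of how out-of-pocket spending capitalizes into a larger total. The 10.5% cost of capital and six-year horizon are my assumptions, chosen only to roughly reproduce the study’s totals; they are not parameters reported here.

```python
# Hypothetical reconstruction of the "time cost" idea: capitalize
# out-of-pocket spending at an assumed cost of capital over an assumed
# development period. Rate and duration below are illustrative guesses.

def capitalized_cost(out_of_pocket, annual_rate, years):
    """Out-of-pocket spending grown at the cost of capital."""
    return out_of_pocket * (1 + annual_rate) ** years

oop = 1.4e9                                   # out-of-pocket expenses
total = capitalized_cost(oop, 0.105, 6.0)     # assumed 10.5% over 6 years
print(round(total / 1e9, 2))                  # ~2.55, near the study's $2.56B
print(round((total - oop) / 1e9, 2))          # ~1.15, the forgone-return piece
```

The point isn’t the exact numbers; it’s that money tied up during a long approval process has a real cost, so lengthening the process raises the price of entry.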
There are reasons people buy lower-quality products. Sometimes it’s a question of personal priorities, sometimes one of finance. The reluctance to acknowledge that this is as true in healthcare as any other marketplace is wrongheaded, though understandable given human emotion and that people’s health is at stake.
But the rules governing price and availability aren’t swayed by emotion. If we want EpiPens to be less expensive, we need to let more people try to make them, even if that involves some measure of risk. Excessive caution carries risk as well: by implementing such harsh standards, the FDA has ensured that the only product remaining is not only highly effective, but also highly expensive.
Rather than accept that there exists a continuum of quality in healthcare products, as with everything, epinephrine users are forced by regulation to use the best or nothing. Because of the diffuse nature of healthcare expenses, this kind of action raises prices across the board and inhibits the development of more efficient products.
Cell phones, even smartphones, are ubiquitous today and generally affordable. When they first came out, they were strictly the province of the wealthy. The same goes for cars, HD televisions, and most other technology.
Someone like me can afford those things today because lots of different producers were able to compete against each other and figure out how to give people more for less. If, when the iPhone came out, all following smartphones had been held to the same standards, there would almost certainly have been fewer people making smartphones, fewer models to choose from, and the models that did exist would have been markedly more expensive and less innovative, due to reduced competition.
Yes, it makes sense to worry about the quality of medical devices that people are using, and yes, the FDA (or something like it) can be useful in that regard. But it also makes sense to concern ourselves with the availability of those same devices. Climbing healthcare costs are dangerous, too.
Starting on January first of next year, the City of Philadelphia plans to impose a “soda tax” of 1.5 cents per ounce. The new law—already set to be challenged in court—has proved highly controversial, even within the political left where its revenue-raising potential is pitted against concerns over its regressive nature. The political right seems fairly uniformly unenthused.
But that’s boring. What’s really interesting is that Philadelphia’s government is avoiding calling the tax a public health measure, instead choosing to focus on the additional revenue it might generate, despite soda taxes’ endemic appeal to the public health profession.
Public health officials often laud soda taxes as a means of reducing demand for sugary drinks that are linked to obesity, diabetes, tooth decay and other maladies. The underlying economics are relatively straightforward—raise the price of soda and people will consume less of it. The hope is that doing so will reduce the incidence of the aforementioned conditions and curb associated healthcare spending.
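That chain of reasoning can be sketched with a constant-elasticity approximation. The −1.2 elasticity and the 20% price increase below are hypothetical numbers for illustration, not empirical estimates for soda.

```python
# Back-of-envelope demand response under an assumed constant price elasticity.

def quantity_change(pct_price_change, elasticity=-1.2):
    """Approximate percent change in quantity demanded for a percent price change."""
    return elasticity * pct_price_change

# If the 1.5-cent-per-ounce tax works out to roughly a 20% price increase:
print(quantity_change(20.0))   # -24.0: consumption falls about 24%
```

How much consumption actually falls hinges entirely on that elasticity, which is why the tax’s public health impact is harder to predict than its revenue.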
But despite the wide approval of public health professionals, it’s far from clear that a soda tax is an appropriate solution in this scenario. Not only is there reason to doubt its efficacy, but in a sense, such a policy blurs the line between public and what we might call ‘private’ health in a way that marks a pernicious slide away from self-determination and seems to me unethical.
Using a tax to “correct” demand is one of the classic methods of solving collective action problems, which tend to involve public goods or open-access resources and often require regulatory oversight. President George H. W. Bush’s cap-and-trade program for sulfur dioxide emissions, established under the 1990 Clean Air Act Amendments, is a successful example of such an endeavor.
The idea is that if a resource is shared (in this scenario, air quality), then it makes sense to have a centralized agency impose regulations to account for the “social cost” associated with its degradation. If something can be proven to affect others (without requiring an onerous amount of nuance), there’s a compelling case for using coercive public policy to address it.
That’s certainly the case when air quality is concerned. But there are key differences between air pollution and obesity; even though they both affect people’s health, one is far more likely to be incurred privately. We all breathe the same air, but your neighbor drinking a Double Gulp every day doesn’t affect your waistline. Someone else being fat doesn’t harm you. Right?
Actually, depending on how an individual’s healthcare is paid for, that last part is up for debate.
Soda drinkers tend to be poorer (the same is true of tobacco users, whose purchases are subject to similar tax-based deterrence), and therefore more likely to have their healthcare publicly subsidized. In a not-so-tangential sense, that means it’s very much in the interest of the taxpayer that those people be deterred from such actions. After all, any tax dollars not spent on healthcare can be spent on something else or not collected.
In my view this poses an ethical challenge—does public financing of healthcare erode beneficiaries’ sovereignty over their health-related decisions? And, if it does, what sort of precedents are we setting should America switch to a universal healthcare system, which would effectively render all health public?
It does seem to be the case that as more resources are poured into social safety nets, there is increased incentive for societies to attempt to engineer the results they want through coercive means. The resulting policies range from ethically dubious taxation to outright illiberalism.
Take, for example, the rather harsh methods by which the Danish government discourages immigration and asylum seekers: seizing assets worth more than $1,450; using policy to force assimilation (in one city mandating that pork be included on municipal menus); cutting benefits to refugees by up to 45%.
A similar situation is unfolding in Sweden, where the extensive social safety net has turned immigration into a tug-of-war between classical and contemporary liberal sentiments. The Economist writes:
The biggest battle is within the Nordic mind. Is it more progressive to open the door to refugees and risk overextending the welfare state, or to close the door and leave them to languish in danger zones?
Closer to home, Canada has recently received some scrutiny for its habit of turning away immigrants with sick children so as to not overburden its single-payer healthcare system.
Some of this might sound cruel or discriminatory. Some of it is. But these are rational responses from systems forced to ration scarce resources. In a sense, it’s the ethical response, given that governments are beholden to their taxpayers.
It’s a natural goal for public health experts, economists, and others whose jobs are to optimize society to try to promote a healthier nation. Our national health and wealth would clearly be improved if obesity, diabetes, etc. were eradicated. And yes, that could conceivably be achieved by any number of forceful policies—what about a Trump-style deportation of the obese?!
But we must consider the costs as well as the benefits of such policies. Are the potential gains worth ceding dominion of our personal decisions to rooms of “experts?” Is it possible for the conversion of health from a private to public good to coincide with our liberal values?
I don’t think so, at least not in the extreme. If health becoming a public resource means that the government must take an increasingly paternalistic and protectionist role in our society, it’s not worth whatever we might gain—or lose around the midsection. After all, if people can’t be trusted to decide what food to eat, what can we be trusted with? If a soda tax is okay, what about a tax on red meat, sedentarism, or motorcycles? Surely we’d be healthier if we did less of each.
I do believe there is an appropriate role for government to play in promoting the private health of the masses, but it’s significantly more parochial than the sort of collective action scheme fetishized by academics. To loosely paraphrase the Great Ron Swanson: people deserve the right to make their own peaceful choices, even if those choices aren’t optimal.
Side note: I would also argue that there’s some pretty heavy cognitive dissonance at play here as far as soda taxes go. The federal government hands out generous subsidies—collected from taxpayers—to corn producers that make junk food and soda cheaper to consumers. If more expensive soda is the remedy, why not remove those subsidies rather than tax consumers twice?
The debate over minimum wage is one of the most confused arguments in American public policy. Although on its face minimum wage appears to be a promising and simple idea, it is, in fact, a very bad policy that has surely hurt the very people it aimed to help. Proponents of minimum wage (many of them well intentioned) often advocate for increases as a means to improve the personal welfare of workers earning the minimum. This is often accompanied by the argument that no one working full time should live in poverty.
The debate they’re having is: can we provide a minimum income/standard of living in America for workers? The debate relative to minimum wage law is, as Charles Blahous of 21st Century Economics points out: Whether government should establish a price barrier to employment, and if so how high it should be.
The answer to the first question is: yes, but it should be handled differently. The answer to the latter is simple: no.
The welfare of the poor and the prevailing minimum wage are not inextricably linked. Despite minimum wage’s self-evident virtue among certain ideological factions, there’s actually little reason to think this sledgehammer-style policy would help many people, let alone society as a whole. Before we talk about what would work better, I want to highlight some of the more egregious failings of the minimum wage.
1. Minimum wage forces people out of work
Because most of us grew up with the idea, it takes effort to even begin considering the minimum wage for what it really is: a price floor. Like other price floors, it has consequences beyond those desired.
One negative effect of a minimum wage is a loss of employment. This isn’t limited to people losing their jobs or having their hours cut, but also includes the destruction of future jobs that are casualties of foregone economic growth.
Artificially changing the price of something doesn’t change how much it’s worth to people; economics is tasked with grimly reminding us that prices emerge as a function of supply and demand. As long as employment remains a voluntary transaction between employer and employee, it’s hard to believe a price floor won’t compromise the ability of some workers to sell their labor.
Tragically, this usually affects workers with the lowest skills—traditionally the young, poor, undereducated etc. By eliminating their ability to charge less for their services, minimum wage laws eliminate their competitive advantage. This forces them onto the public dole and renders them a net drain on society.
2. Loss of societal surplus, deadweight loss
This concept is a bit nebulous, but bear with me.
One of the reasons people like me (handsome, rugged) are fans of free markets (a commonly maligned and misunderstood term) is their ability to maximize surplus—the excess benefits enjoyed by producers and consumers in a transaction. (That is, when we’re talking about privately consumed goods.)
Surplus is the idea that even though someone would be willing to pay more or be paid less to consume or supply a good (in this case, labor), the free-market equilibrium price ensures that both parties enjoy a better price. In the graph below, it’s represented by the triangle formed by the crossing of the supply and demand curves.
Forcing a price above or below the equilibrium diminishes the amount of surplus enjoyed by society as a whole; economists refer to this as “deadweight loss” (the green triangle in the graph on the right). It’s true that implementing a price floor above the equilibrium point can (but won’t necessarily) increase the surplus of suppliers (laborers), but this is a bad idea for two reasons:
It reduces economic growth and efficiency. The added supplier surplus comes at a direct expense to the rest of the economy. This puts undue pressure on consumers of labor, and thus demand for labor.
Consumers of labor aren’t exclusively employers; they’re also everyday customers—many of whom are the very laborers we meant to aid with the minimum wage.
In other words, though there is a tendency to focus on people as either consumers or suppliers of labor, most are both. While they may benefit from minimum wage increases as an employee, they may lose in many other instances when they find themselves on the other side of the proverbial counter. Which dovetails nicely with my next point…
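Before moving on, the deadweight-loss arithmetic can be checked with a toy linear market. The demand curve P = 20 − Q and supply curve P = Q are stylized for illustration, not estimates of any real labor market.

```python
# Toy market: demand P = 20 - Q, supply P = Q, equilibrium at P = Q = 10.

def surplus_with_floor(floor=None):
    """Return (consumer surplus, producer surplus) given an optional price floor."""
    p_eq = 10.0                        # where 20 - Q = Q
    price = p_eq if floor is None or floor <= p_eq else floor
    q = 20.0 - price                   # quantity traded is demand-limited
    cs = 0.5 * (20.0 - price) * q      # area under demand, above price
    ps = price * q - 0.5 * q * q       # area above supply, below price
    return cs, ps

cs0, ps0 = surplus_with_floor()        # free market: 50 + 50 = 100
cs1, ps1 = surplus_with_floor(12.0)    # binding floor: 32 + 64 = 96
print((cs0 + ps0) - (cs1 + ps1))       # 4.0 of surplus simply vanishes
print(ps1 > ps0, cs1 < cs0)            # True True: suppliers gain, consumers lose
```

Note what happens: suppliers who still sell do gain, but total surplus shrinks, and the lost trades are exactly the ones priced out by the floor.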
3. Poor people consume lots of low-wage labor
We all buy food. We all buy clothes. But we don’t all shop at the same places. Poor people are more likely to shop at places with lower prices and–you guessed it–lower costs of labor.
Consider that the average Whole Foods employee earns about $18 per hour while the average Walmart employee makes about $13. The shoppers of the corresponding stores have similar disparities in disposable income that are reflected in the prices they pay.
If the minimum wage were raised to $15 per hour, it might have a negligible effect on prices at Whole Foods. The same is not certain for Walmart. Even if prices were to increase by the same amount in both stores, the impact would be greater on the lower-income shoppers, since it would make up a larger percentage of their income.
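The arithmetic is worth making explicit; the weekly incomes below are hypothetical stand-ins for the two groups of shoppers.

```python
# Illustrative only: the same dollar price increase is a larger share
# of a smaller budget.

def share_of_income(price_increase, weekly_income):
    """Percent of weekly income eaten by a price increase."""
    return price_increase / weekly_income * 100

print(round(share_of_income(5.0, 1500.0), 2))  # 0.33 (% for the higher earner)
print(round(share_of_income(5.0, 500.0), 2))   # 1.0  (% for the lower earner)
```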
The problem is that the money that pays for the higher price of labor doesn’t come from nowhere; too often, it comes from exactly those we’re trying to help.
4. Minimum wage has sloppy aim
A central challenge to minimum wage’s credibility as a form of poverty relief is that it only affects people with wages. It’s easy to make the assumption that poor people are the ones working low-wage jobs, but the two groups aren’t as synonymous as one might think.
First of all, in order to be considered poor, you must be from a poor household, 57% of which have no income earners (Federal Reserve of San Francisco, pg 2). The idea that we would help them by making things cost more is ludicrous.
In reality, about 22% of minimum wage earners live below the poverty line. Their median age is 25; 3/5 of them are enrolled in school; 47% of them are in the south (where costs of labor and living are lower); and 64% of them work part-time.
Fully ¾ of minimum wage-earning adults live above the poverty line.
It’s clear that we’re largely talking about two different groups of people when we discuss minimum wage earners and the poor. Given that the majority of minimum wage workers aren’t poor and that the majority of the poor are unemployed, we should consider another strategy for fighting poverty: one that doesn’t reduce employment opportunities for the unskilled.
Okay, okay…so if minimum wage isn’t a good solution, what is?
Phenomenal question! The many problems with minimum wage policies share a common root: minimum wage affects transactions before they occur. This passes the cost on to employers or customers and impacts demand. The evident solution, then, is a policy that goes into effect post-market. My answer to this is a wage subsidy.
We lose more than we gain by interfering with labor markets. Instead, we should eliminate the minimum wage and—very carefully—create targeted wage subsidies for people that aren’t making enough money from their jobs to survive.
This has to be done precisely to avoid creating disincentives to work. Welfare programs can perversely discourage people from earning more money by stripping away benefits faster than wages rise (and this really is more like a welfare program than minimum wage). To give a simple example: if everyone earning under $10,000 were given an extra $5,000, it would discourage people from earning between $10,000 and $14,999, thus encouraging economic stagnation.
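The cliff in that simple example is easy to see in code:

```python
# Sketch of the benefit cliff above: a flat $5,000 grant to anyone earning
# under $10,000 makes total income fall when earnings cross the threshold.

def total_income_with_cliff(earnings):
    grant = 5000 if earnings < 10000 else 0
    return earnings + grant

print(total_income_with_cliff(9999))    # 14999: just under the threshold
print(total_income_with_cliff(10000))   # 10000: earning $1 more costs $4,999
```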
We want to encourage people to be as productive as possible. When we design a welfare system, we have to make sure the total benefit enjoyed by the recipient is greater for every dollar earned than the one before it. In order to accomplish this, we need to design our wage subsidy as a function of market wages (the price that employers pay) that increases at a decreasing rate until it hits a wage that we as a society find acceptable.
I chose to have the subsidized curve cross the market-wage line (y = x) at $13/hour, beyond which point the subsidy ceases to apply. Of course, we could write any equation and phase it out at any point; this subsidy curve is a concept, not a strict recommendation.
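To make the shape concrete, here is one hypothetical curve with the properties described above. The square-root form is my own illustrative choice (any concave function crossing y = x at the phase-out wage would do), and the $13 crossover is the figure from the text; none of this is a policy proposal:

```python
import math

CROSSOVER = 13.0  # $/hour: where the subsidized curve meets y = x (from the text)

def take_home(market_wage: float) -> float:
    """Hourly take-home pay under one hypothetical subsidy curve.

    sqrt(CROSSOVER * w) rises at a decreasing rate (it is concave) and
    equals w exactly at w = CROSSOVER, so the subsidy phases out smoothly.
    """
    if market_wage >= CROSSOVER:
        return market_wage                      # no subsidy above the crossover
    return math.sqrt(CROSSOVER * market_wage)

def subsidy(market_wage: float) -> float:
    """The gap society pays: take-home pay minus what the employer pays."""
    return take_home(market_wage) - market_wage

# Every extra market dollar still raises take-home pay, so there is no
# welfare cliff: workers always gain by seeking higher-paying jobs.
wages = [4, 7, 10, 13, 15]
assert all(take_home(a) < take_home(b) for a, b in zip(wages, wages[1:]))
```

Because the curve grows less than one-for-one in the subsidized range, the subsidy itself shrinks as market wages rise and vanishes entirely at the crossover, which is exactly the incentive structure described above.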
There are some profound advantages to this “after-market” approach:
The cost is borne by society instead of individual employers
I’ve spoken before about how the cost of consumption should be borne by the consumer, so you can be forgiven for feeling confused about why I feel a subsidy funded by taxes is appropriate here. However, the true price of a dishwasher (for example) is not $15 per hour. We know this because there are currently an abundance of dishwashers willing to work for far less than that. If we as a society want them to take home more money for their work, we should pay the difference.
Because of the way this subsidy curve is designed, employees will still have an incentive to search for the highest-paying jobs available to them. By tying subsidy receipt to work, we encourage workers to maximize their productivity. As long as these conditions are met, our subsidy won’t unnecessarily burden society with the cost of inefficient labor allocation.
No one is locked out of the labor market
Young people’s employment opportunities are eroded by high minimum wages. Keeping them out of the labor market has negative repercussions for their futures. From the Center for American Progress:
Not only is unemployment bad for young people now, but the negative effects of being unemployed have also been shown to follow a person throughout his or her career. A young person who has been unemployed for six months can expect to earn about $22,000 less over the next 10 years than they could have expected to earn had they not experienced a lengthy period of unemployment. In April 2010 the number of people ages 20–24 who were unemployed for more than six months had reached an all-time high of 967,000 people. We estimate that these young Americans will lose a total of $21.4 billion in earnings over the next 10 years.
Everyone, even the White House, recognizes that the larger implications of a “first job” for our young labor force extend far beyond the pay received. Absurdly, that same government has crafted a program calling for $5.5 billion in grant funds to help young people get the very jobs its wage floors have priced them out of.
Markets will function better
Advocates of raising the minimum wage are effectively claiming that making a market less efficient will improve outcomes. Here the fallacy runs: if the cost of labor is higher, workers will have more money to spend, and demand will increase.
This is tempting logic, but it doesn’t hold up to scrutiny. To see why, replace “workers” with something more specific, like carpenters. Yes, a law requiring that carpenters be paid more would be great for some carpenters. But any additional money spent on carpenters can’t be spent on something else, and society loses whatever benefit it might have gained from that forgone surplus.
If the reverse were true, it would make sense to ban power tools and all sorts of technology, thereby increasing demand for and price of human labor.
An efficient market creates more surplus, and is less burdened by the cost of those who must rely on public welfare. Additionally, the cost of supporting those people will be defrayed by their renewed ability to provide (in some part, at least) for themselves.
It’s way more targeted than a minimum wage, and could absorb other welfare programs
We could write different equations for different people who might require larger or smaller subsidies to meet their basic needs. For example, a single mom of four kids in Long Beach could receive a steeper subsidy than a childless teen living in rural Alabama, who might not need one at all.
This approach could theoretically absorb other welfare programs. Instead of receiving a SNAP card, a Section 8 voucher, and WIC benefits, a recipient could have the cash needed to cover those expenses calculated into the subsidy. This would cut down on expensive bureaucratic systems and increase the utility of every welfare dollar, all while incentivizing work.
The United States is a rich country. If we spend our money wisely, there’s no reason we can’t afford some minimum standard of living for workers. Helping our poor citizens is one of the best uses for taxes and far better than a lot of the things we spend public money on.
But rather than mess with markets, we should simply give more money to the people we want to help by redistributing income after markets are allowed to produce as much wealth as they’re able. Additionally, if we’re going to combat poverty with public money, we should do it in a way that stands a chance of eventually readying people to support themselves and without sacrificing economic efficiency. Minimum wage fails both of these tasks.
In his opus, Economics in One Lesson, Henry Hazlitt devotes an entire chapter to minimum wage laws. He’s quick to identify a semantic problem that lies at the heart of the debate on minimum wage.
“…for a wage is, in fact, a price. It is unfortunate for the clarity of economic thinking that the price of labor’s services should have received an entirely different name from other prices. This has prevented most people from realizing that the same principles govern both.
Thinking has become so emotional and so politically biased on the subject of wages that in most discussions of them the plainest principles are ignored.”
Today Hazlitt’s gripe still rings true.
Presidential candidates Clinton and Sanders are calling for huge increases in the federal minimum wage (Clinton recently echoed Sanders’ call for a $15 federal wage floor). California and New York have scheduled incremental increases in their state minimum wages to $15/hour by 2022 and 2021, respectively (with the timing of New York’s increases stratified by county). All of this is sold to the public as a means of helping poor workers, with rarely a mention of the costs of such policy, or of who would bear them.
Despite a wealth of study on the subject and a broad consensus about the effects of price floors, economists aren’t speaking out against such an aggressive price-fixing scheme as loudly as one might expect.
Twenty-four percent of economists surveyed by the University of Chicago disagreed that advancing the federal minimum wage to $15/hour by 2020 would reduce employment. That is, a quarter of economists disagreed that forcing employers to pay twice as much for labor would reduce their ability or desire to employ people. Fully 38% of economists surveyed responded that they were “uncertain.”
It’s hard to imagine economists making such a statement about anything else—for example, that doubling the price of laptops would have no effect on the number of laptops purchased. Since labor is purchased just like anything else, we can expect that making it more expensive will cause people to consume less of it.
Consider that when governments want to cut down on behaviors they deem harmful, one of their go-to tools is taxation aimed at increasing the price paid by consumers. Sanders understands that making people pay more for producing carbon means we will produce less carbon. Other politicians have proposed or implemented taxes on soda, tobacco, alcohol, and more activities in order to suppress demand for them. Yet apparently even economists fail to see the parallels between this and minimum wage.
As Hazlitt states, labor is best thought of as another good. Raising its price by mandate will have the same effects as any other minimum price: some labor will be purchased at a rate higher than the free-market equilibrium, but a portion of the previously available supply will not be purchased at all. In other words, while some workers will get a raise, others will work fewer hours, be fired, or never be hired in the first place, and employers will get less productivity for their money.
No one—least of all economists—should be surprised to hear that setting the price of labor higher than people are willing to pay and accept will lead to less efficiency and productivity, nor that this would lead to slower job growth and less employment. We can even observe this happening during past increases of the minimum wage.
Minimum wage is rationalized as an intervention to alleviate poverty and give a leg up to the most vulnerable workers. However, raising the minimum price of labor not only prevents consumers (employers) from buying labor beneath the floor; it also prevents producers (employees) from selling their labor below that price. Since some people don’t have skills worth $15/hour to employers, they are going to have a much harder time finding employment under such a policy.
When we consider the people who most likely fit this description, the cynicism of minimum wage laws becomes clear. Those least able to command a premium for their labor (the young, the poor, the under-educated, the inexperienced) are the very people we purport to be helping! It’s no coincidence that minimum wage laws all over the world have roots in racism and ethnic nationalism. In many cases, their goal was to create unemployment among marginalized groups by eliminating their comparative advantage over native workers.
As for employers, a wage floor actually advantages bigger businesses and puts undue pressure on marginal producers (think mom-and-pop stores, rural and inner-city employers) whose profit margins are smaller and who must operate more efficiently. Quite bizarre for an election cycle marked by consternation over income inequality and skepticism of big business.
The ability to sell your labor competitively is important when you don’t have a lot to offer. We seem to understand the value of this for the affluent. No one thinks twice when a college kid takes an unpaid internship or starts volunteering to gain experience. If it’s fine to work for $0/hour, why not $1, $5, or $7?
The scale of federal minimum wage is what truly makes it a bad idea. It’s one thing to try to fix the price of a specific item in a given location (though it’s still a bad idea). But to impose a national price floor on all incarnations of labor should be unthinkable. To suggest that this won’t lead to any reduction in employment (especially in poorer places) is ridiculous.
Some proponents of minimum wage hikes seem to understand this, yet proceed regardless. Upon signing California’s minimum wage increase into effect, Governor Jerry Brown stated:
Economically, minimum wages may not make sense. But morally, socially, and politically they make every sense because it binds the community together to make sure parents can take care of their kids.
To be honest, I don’t understand the morality of pricing people out of work or making consumers spend more than they have to. Given that “57% of poor families with heads of households 18-64 have no workers”, I don’t think making them harder to employ is going to be beneficial to anyone.
It’s good to care about the poor and try to implement policies that help them, and to be clear, I’m not advocating that nothing be done. But economic policies should make economic sense, rather than being rooted in feel-good or politically expedient gestures. Minimum wages help some (often the wrong people) at the expense of others, who, now unemployable, are unable to gain experience that might lead them to prosperity or at least self-sufficiency. At the same time, the rest of society is robbed of the potential productivity of those victims of the wage floor.
After-market transactions (which I’ll get into next essay) are a much better method of helping the poor, precisely because they don’t distort labor markets or reduce demand for labor. Hopefully, our economists will soon get back to the dismal science and stop playing politics.
Last Thursday, Nicholas Kristof penned an article for the New York Times entitled “Drugs, Greed and a Dead Boy.” The piece paints a dismal picture of an industry rife with predatory marketing schemes and ineffective treatments, captained by covetous sociopaths who care more about making money than about public health and are prepared to circumvent FDA regulations to do so. Whatever your convictions, Kristof makes a compelling case for regulation based on historical evidence. It’s not until the last paragraph that he writes something that gives me pause:
So if you agree with today’s politicians thundering against regulation, or if you think that pharmaceutical companies should enjoy a free speech right to peddle drugs, then talk to a family fighting opiate addiction. Or a parent of a thalidomide child. Or consult the grieving family of Andrew Francesco.
I certainly have no problem admitting that the pharmaceutical industry, much like any industry, doesn’t always act in the best interest of the public. Nor do I have a problem accepting that parents are probably over-medicating their children (and likely themselves) in today’s hypersensitive world. What I take issue with is his thoughtless, implicit dismissal of regulation reform advocates who are seeking to improve a poorly designed system.
I make a point of not agreeing with any thundering politicians. However, as someone who finds fault with the regulatory structure of the FDA, I feel the argument for reform warrants some defense. The following argument isn’t exactly original (I’ve heard it made by others before, probably better than I’m about to make it), but it is a concept worth defending.
The first thing to realize is that nobody is arguing against regulation (well, some might be, but I’m not) or a vetting process for new drugs. I would find it hard to believe that anyone seriously believes drugs should be less safe for consumers and would fight to craft policies that reflect such a notion. The real point of contention here is who should be doing the regulating, and how.
Kristof appears to be falling victim to a false syllogism: The FDA is a regulator; people want to get rid of the FDA; therefore, people want to get rid of regulation. Not so. What those who challenge the FDA process are protesting is a monopoly on regulation that invariably leads to an inefficient process by which drugs are taken to market, and thus eliminates less human suffering than would otherwise be possible.
To understand why this happens, you have to understand the unique predicament of an organization like the FDA and what kinds of incentives that predicament creates.
There are basically two ways in which the FDA can be said to be performing its function optimally (granted, this is probably an oversimplification). Scenario one: it takes a good drug to market quickly and efficiently after ensuring that the product is safe for consumption. Scenario two: it stops a bad drug from reaching the market after determining that it is not fit for human consumption. When either of these scenarios is realized, that’s great, and we are all better off for it.
Similarly, there are two ways in which the FDA can be said to be underperforming. In the first, it takes a bad drug to market and risks the lives of consumers. In the second, it engages in an unduly lengthy regulatory process that delays the emergence of new drugs, also risking consumers’ lives. This is where things get tricky. While both outcomes result in human loss and are undesirable, one of these is far less appealing to the FDA.
Moving bad drugs to market is a sure way to shake faith in the regulatory body and create a panic. More importantly, the negative effects of such a screw-up are overt and blame is easily assigned to the organization responsible.
Delaying good drugs out of overly cautious behavior, although potentially just as deadly, is not nearly as conspicuous. The patients who hypothetically would not have died had the regulatory process been more efficient are much harder to count than the patients who did die from bad drugs, but that doesn’t make the effect any less real. For example, if drug X saves 1,000 lives per year and spends four years tied up in a regulatory process it could have cleared in two, one could very easily argue that 2,000 lives were lost to this inefficiency.
Given the choice between the two, it’s not hard to understand why the FDA would take the path that obscures the negative consequences of its decisions. At least in some cases, this must lead to an overly stringent approval process for good drugs. And because the FDA has a monopoly on the regulatory process, there is no one to pressure the system to be more efficient.
A little competition might go a long way toward solving these problems. Private regulatory companies might sound strange, but the idea is actually quite logical. It might work something like this:
Drug companies develop drugs and submit them to a regulatory company for quality assurance: a service for which they pay.
The regulatory company tests the drug against its own standards, by its own processes. If the drug doesn’t pass, the story ends here.
If it does pass, however, the regulatory company approves it and affixes their logo to the medication, much like the FDA does now.
The drug goes to market where consumers use the logos as signals of quality. The more they trust the logo, the more they trust the drug. Logos gain trust by screening accurately and lose clout by rushing drugs to market with bad consequences.
The most trusted logos are in the highest demand among people making (and consuming) good drugs, because makers want consumers to trust that their drugs are safe. This in turn reinforces the incentive to provide strict testing.
Furthermore, drug companies would be willing to pay a premium to have their testing done quickly as well as accurately. This remedies the inherent flaw in the FDA’s design: the zero-sum game between accuracy and timeliness. From the consumer side of things, that means more lives saved/improved.
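As a toy illustration of the trust dynamic just described, one might model a regulator’s reputation as creeping up with each safe approval and collapsing when a bad drug slips through. The asymmetric update rule and every number below are invented purely for illustration:

```python
# Toy model of the trust dynamic described above: a regulator's "logo value"
# rises slowly with safe approvals and collapses when a bad drug reaches
# market. The update rule and all numbers are invented for illustration.

def update_trust(trust: float, drug_was_safe: bool) -> float:
    """Asymmetric reputation update: one scandal outweighs many successes."""
    if drug_was_safe:
        return min(1.0, trust + 0.01)   # slow accumulation of credibility
    return max(0.0, trust * 0.5)        # a rushed bad drug halves the brand

trust = 0.5                             # a middling reputation to start
for _ in range(20):                     # twenty clean approvals in a row
    trust = update_trust(trust, True)   # trust climbs to about 0.70
trust = update_trust(trust, False)      # one bad drug: trust falls to ~0.35,
                                        # below where the regulator began
```

The asymmetry is the point: because one scandal erases years of accumulated credibility, a private regulator has a strong incentive to screen carefully, while the premium paid for speed pulls against needless delay.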
This solution would do a lot to align incentives in the process of getting safe drugs onto shelves. If the point of the FDA is ultimately to save lives and improve the quality of medicine (and we should presume it is), then why not create a system that could save and improve more lives? If that can be done by private companies, so much the better. None of this is to say the FDA should be dismantled; but if we can admit that a competitive space improves services to consumers, what is the benefit of leaving the FDA in its current position?
Perhaps it could serve in some overseer capacity–ensuring that no fraud or collusion occurs. Again, the point is not to eliminate the FDA or regulation; the point is to improve the regulatory process and save lives.
There is a semantic problem with Kristof’s (and many others’) understanding of regulation: he takes it to be inherently and exclusively the purview of government. Evidently, he has never paused while using Yelp, Uber, or Rotten Tomatoes to consider that these, too, are forms of regulation that tell us when something is fit for consumption.
Kristof and other “pro-regulators” usually understand the negative impacts of monopolies (inefficiency, unresponsiveness to consumers) and call for regulations to break them up. Competition is how things get better; no industry is immune to this reality. Ironically, people who want to reform drug regulation for the same reason are met with staunch criticism by regulatory enthusiasts.
Once again, semantics obscures the fact that a single body governing an industry is a monopoly, even when that body is a government entity. The unstated assumption seems to be that monopolies are for private corporations and regulations are for governments. Again, not so.
About a month ago, I was talking with some friends on the beach and the topic of environmental regulation came up. When I mentioned that I disagreed with strict environmental regulations and subsidies, it became less of a conversation and more of a melee. I found myself frustratingly inarticulate (3 hours of sleep and a bottle of wine) and was unable to give my argument the explanation I thought it deserved. This essay fulfills my promise to clarify some of my ramblings and, more importantly, details what I believe to be the best strategy for addressing environmental concerns. For argument’s sake, let’s set aside any disagreement over the disputed realities of climate change and its respective causes.
As I see it, there are two types of motives for conservation: one is economic, the other political (though we might also consider it emotional). The former is aimed at establishing that natural resources have utility beyond that which can be obtained by harvesting them. For example, a specific fish population is valuable to us not only as food, but also in the ocean, since it plays a part in the larger ecosystem upon which we depend. The latter might be described as conservation for conservation’s sake. It is not only common, but expected for world leaders to address climate issues by wielding political power (the UN recently met for its 21st conference on climate change since 1995). This has nothing to do with promoting efficiency and everything to do with satisfying an agenda. I feel comfortable saying this because there is a very clear, simple solution to environmental degradation—at least on the national level.
A sound environment is valuable to all of us, and there is no doubt that rapacious consumption of natural resources would lead to adverse and possibly catastrophic consequences. The difficulty lies in the fact that the costs of environmental degradation are not always apparent, nor are they isolated to specific locations or populations. When people consume natural resources or conduct polluting activity without compensating society for the full cost, they are externalizing part of that cost. This externalization is tantamount to a subsidy, and a particularly difficult one to measure. For this reason, we cannot rely strictly on a free-market system to sort out environmental issues. Here government can be a useful tool, because it has the unique ability to tax.
I am of the mind that environmental protection is the most worthwhile capacity in which the government can be involved in the economy. There is, however, a right and a wrong way to go about achieving such goals. The wrong ways are through subsidies and quota-based regulations that encourage market inefficiencies and prioritize certain industries unfairly. The right way is through corrective taxation: a process by which environmental costs can be accounted for and passed on to the consumer. This can be achieved by traditional pricing methods and non-market valuation.
It would be disingenuous of me to present this as a novel idea. Most of us have heard of corrective taxation, though perhaps not by that name. The most notable examples are probably carbon tax proposals, whereby producers would be held accountable for the cost of carbon output during their production, ultimately passing that cost on to the consumers (it is also used for non-environmental purposes, such as in the case of so-called “sin taxes” applied to curb consumption of alcohol, tobacco, etc.).
A bleeding heart may object that no price can be put on clean drinking water, the rainforests, or atmospheric carbon content. They are, of course, incorrect. One straightforward approach is to calculate the cost of correcting the damage caused. For example, let us stipulate that consuming one gallon of gasoline produces negative environmental effects that would cost $0.50 to fix. Adding a $0.50/gallon corrective tax to the final price of gasoline would compensate for these impacts. This would accomplish four goals:
Internalizing costs that were previously externalized
Procuring funds to alleviate the impacts of environment-harming consumption
Curbing demand for environmentally harmful products
Promoting the most efficient products—thus increasing quality of life
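The arithmetic behind the gasoline example is simple enough to sketch. The $0.50 external cost comes from the example above; the base price and the demand elasticity are hypothetical numbers chosen only to show the mechanics:

```python
# Sketch of the corrective-tax arithmetic from the gasoline example.
# The external cost ($0.50/gal) is the figure from the text; the base
# price and the demand elasticity are made-up numbers for illustration.
BASE_PRICE = 2.50        # $/gallon, hypothetical market price
EXTERNAL_COST = 0.50     # $/gallon of environmental damage (from the text)
ELASTICITY = -0.3        # hypothetical short-run price elasticity of demand

taxed_price = BASE_PRICE + EXTERNAL_COST                     # $3.00/gal
pct_price_change = (taxed_price - BASE_PRICE) / BASE_PRICE   # +20%
pct_demand_change = ELASTICITY * pct_price_change            # -6%

baseline_gallons = 1_000_000
new_gallons = baseline_gallons * (1 + pct_demand_change)     # demand falls
revenue = new_gallons * EXTERNAL_COST    # funds earmarked for remediation

print(f"price: ${taxed_price:.2f}/gal, demand change: {pct_demand_change:.1%}")
print(f"remediation fund: ${revenue:,.0f}")
```

Even in this toy version, all four goals appear at once: the external cost is folded into the price, demand falls in response, the most efficient products gain relative ground, and the tax raises a fund proportional to the remaining damage.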
The market is the most valuable tool we have in the effort for conservation. It incorporates the best rationing mechanism humanity has yet devised: price. The answer is not to try to circumvent this process, but rather to help it more accurately reflect reality and then use it as a tool to determine how resources should be consumed. Unfortunately, much of the current rhetoric surrounding environmental protection pays little heed to the dismal science. Rather than address these issues through the price system, some prefer arbitrary quotas and subsidies. The inferiorities of such policies are legion, but I’ll have to content myself with addressing their main deficiencies.
The purpose of economics is to promote the most efficient use of resources, natural or otherwise. Creating a quota or cap that cannot be exceeded distorts this process, because circumstances may change an object’s value from one time to another. Quotas do not (and cannot) take this into account; they aren’t responsive to demand. No matter how well calculated a quota is, it can’t respond to shifts in the economy or ecosystem.
As a thought experiment, imagine that, in the name of preserving forests, a stand of 500 trees is placed under a strict no-harvest rule. No one would deny that those trees have utility in the forest: they shelter animals, filter water and carbon, and much more.
However, when something changes, those same trees could be better utilized in another capacity. If, for example, a nearby railroad track that was the only means of reaching a city were destroyed, it may very well be that some of the trees are more valuable as railroad ties, and thus should be extracted from the forest and put to that purpose.
How can we know whether this is the case? Under a strict prohibition on harvesting, it’s a non-starter: no amount of demand can warrant removing the trees from the forest. However, if we have a complete accounting of the trees’ value in the forest and can incorporate that into the price of the wood (passing that cost on to consumers as an internalization of the environmental loss), we can form a clearer picture. All we would need to do is weigh the value of the trees in the forest against their value as railroad ties: ultimately, a means of getting goods to people who may depend on them.
What is the forest’s integrity worth? What if there is urgently needed medicine on board? What if a store is waiting on a shipment of laptops? Where is our tipping point in this decision? These are questions we can answer with non-market valuation (the process by which we will determine the value of the trees) and price, if given the opportunity.
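A toy decision rule makes the comparison concrete. The 500-tree forest comes from the thought experiment above; the standing value and the tie values below are invented numbers, and the point is only that harvesting becomes justified once the demand-side value clears the forest’s non-market value:

```python
# Toy version of the railroad-tie decision: cut a tree only when its value
# as a tie exceeds its (non-market) value standing in the forest.
# All dollar figures here are invented for illustration.

STANDING_VALUE = 120.0   # per tree: habitat, carbon, watershed services,
                         # as estimated by non-market valuation

def trees_to_harvest(value_as_tie: float, ties_needed: int,
                     forest_size: int = 500) -> int:
    """Harvest only if each tie is worth more than the standing tree."""
    if value_as_tie <= STANDING_VALUE:
        return 0                      # demand never clears the hurdle
    return min(ties_needed, forest_size)

# Ordinary lumber demand doesn't justify touching the stand...
assert trees_to_harvest(value_as_tie=80.0, ties_needed=100) == 0
# ...but with the rail link down and urgent goods stranded, the value of
# a tie spikes, and some trees are worth more out of the forest than in it.
assert trees_to_harvest(value_as_tie=300.0, ties_needed=100) == 100
```

A prohibition hard-codes the first branch forever; pricing the standing value instead lets the answer change when circumstances do.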
It may sound implausible or perhaps even unethical; I assure you that it is neither. This is the way that we have been deciding how resources are consumed and procured for centuries. There is a huge network of cooperation in which materials (or anything) are directed by the price people are willing to pay for them. This system is one of the greatest advantages of a market economy, and it is so far unmatched by any centralized model.
The purpose of this thought experiment is to underline that decisions are made at a margin: there is a point at which even a very large cost becomes the more attractive of two options. The same scenario can be extrapolated to Arctic drilling or coal mining. These aren’t things that people (or corporations) do for no reason. To deride them for being “profit-hungry” is to miss the point: they can make money doing those things because those things are means to ends that people value, everything from turning on the lights to staying warm in the winter. We should simply make sure that the full cost of such activity is passed on to the consumer.
The point I’m trying to make here is that strict regulations that implement quotas or prohibitions take whole options off the table—and that’s not a good thing. The concept of utility is utterly absent from such approaches. They lead to inefficient allocation of resources, which can mean anything from waste to starvation to loss of life. Our conservation efforts should be aimed at promoting the greatest quality of life for people, while acknowledging that a sound environment plays a part in that equation.
Subsidies are equally villainous, even when used for industries that we consider “good”. Instead of propping up inefficient industries (if they were the huge successes people claim, they wouldn’t be reliant on subsidies) while, aggravatingly, sometimes continuing to subsidize the “bad” alternative, we should put effective environmental taxes in place and then let the market do its job. The government should not be picking winners and losers; doing so precludes the development of innovative technologies and reduces competition within industries.
Think about it. If the cost of oil plus a corrective tax to produce a certain amount of energy is still exceeded by the cost of producing that amount of energy through solar panels, why not take the first option? The point of technology like solar panels is to be more efficient and ultimately lower costs. If it isn’t, it’s not doing its job and it needs to become more competitive. Human labor and capital, represented in this exchange by money, are also valuable resources and there is no sense in wasting them. After all, we want our lives to improve, not worsen.
This is politics, though. Instead of a rational approach to curbing emissions, we are treated (read: subjected) to the bureaucratic tendency to measure inputs instead of outputs when crafting policy. Indeed, as Ira Stoll points out, Clinton’s plan to set up half a billion solar panels across the country conflates a means with an end. Having a lot of solar panels is a ridiculous goal; lowering emissions is a smart one, and its solution can rely partly on solar development as well as on other industries, some of which may not even exist yet. There is no reason to divert resources to a politically favored product that isn’t yet competitive. As Stoll points out, that would be a great way to prematurely litter our country with inefficient infrastructure.
Subsidies are additionally invidious because they often involve perverse transfers of wealth from the poor to the wealthy. If the government is going to be involved in redistribution, it should be to support the poor. When users of advanced, expensive technologies enjoy tax credits and direct subsidies, they do so at the expense of less wealthy taxpayers and ratepayers. Admittedly, the kind of corrective tax I am proposing would also be regressive, as is any tax on consumption. To offset its effects on poorer taxpayers, tax credits could be issued to cover some of the accrued costs; or perhaps our progressive tax system would be enough to offset any adverse effects of this consumption tax.
Perhaps the greatest advantage of such a program would be its simplicity. It’s a safe assumption that a program like this could eliminate much of the overhead cost associated with environmental overhaul. Most importantly, it abandons the top-down central model that ignores basic economic realities, and puts progress in the hands of individuals. History has a lot to say about the benefits of markets. Who knows how much time, effort, and money could be saved by circumventing the regulatory web woven by ever-expanding branches of federal and state governments?