No, Voting Third Party Isn’t a Waste

Given the historically unpopular candidates presented to us, 2016 should be the year we Americans are encouraged to expand our political horizons.

Instead, people interested in a non-binary choice this election face a litany of derision and insistence that their political preferences should take a back seat to the greater mission of keeping either Clinton or Trump out of the White House.

Underneath it all is the accusation—outright or implied—that voting for a third party candidate is a waste of time: a selfish, brazen gesture best left for a less pivotal year.

This is a terrible argument. There’s no such thing as wasting your vote if you’re doing what you want with it: it’s yours! Vote for candidate A, B, or C; don’t vote; write in your uncle’s name. It doesn’t matter. The only real way to waste your vote is to let someone else tell you how to use it.

Once you’ve been dutifully informed by some clairvoyant pundit that you’re wasting your vote by using it the way you want to, the dismissal of third party candidates based on their remote chances of victory is never far behind. This is profoundly confused; a vote isn’t a bet. There’s no prize for picking the candidate that ends up winning.

The point of a representative democracy, beyond electing leaders, is to convey national preferences to politicians (does anyone believe Ralph Nader’s relative success as a Green Party candidate had no impact on the Democrats’ current environmental stances?). The best way to do this is for everyone to vote for the candidates and ideas that appeal to them. The worst thing voters can do is reward unresponsive parties with loyalty: that only begets more unresponsiveness.

Increased interest in third parties lets Democrats and Republicans know they’re off track. In that sense, and especially to anyone interested in the integrity and evolution of our political discourse, third parties have an important role as barometers of American political attitudes, if not yet heavyweight contenders for the presidency.

Remember: only a very small minority of our country has actually voted for Clinton or Trump at this point! There’s no reason for the rest of us, who have actively or passively declared our disinterest in both, to feel pressured to line up behind either of them. Some will, and that’s fine if it’s what they want at the end of the day; in fact, I’m happy they’ve found something they can believe in. But it’s not unreasonable for the rest of us to pursue options that we find more personally appealing.

Pluralism and diversity are, at least ostensibly, integral to the American political experience. I can think of nothing worse for our nation than a fear-driven dichotomy whereby we are encouraged to re-imagine second worst as synonymous with best. If we want to be happy with the results of our electoral process, we should start by being more honest about what we want from politicians. The best way to do that is in the voting booth.

Brexit Doesn’t Have to Spell Disaster

The votes are counted and Britain has officially decided to leave the EU. Experts and elites on both sides of the Atlantic are reeling from the decision and predicting a sort of Valyrian Doom unfolding. Before we lose our heads, we should at least entertain the possibility that things won’t completely deteriorate. Let’s talk about how things could go right.

Leaving the EU doesn’t have to mean a rejection of internationalism. In the best case scenario, British policymakers will take the opportunity to free themselves from cumbersome EU regulations while working to strengthen their international presence and keep their markets open. Britain will remain a member of NATO and can maintain strong international ties irrespective of its EU membership status.

From an economic perspective, the best parts of the EU are increased mobility of capital and labor and open trade. In theory, both Britain and the remaining 27 members of the EU would realize that these arrangements are mutually beneficial and that erecting economic barriers would hurt both parties. London is the world’s top financial center—one of only two European cities on the top ten list. The EU would be doing its members a great disservice by cutting them off from such a resource.

An arrangement like NAFTA, where countries forgo many economic barriers but remain politically independent, might make sense. Something that allows for personal mobility throughout the continent and Britain would be ideal, but is probably wishful thinking at this point. One of the main motivations for Britain to leave the EU was to regain control over its border (motivated in part by perceptions of welfare spending on immigrants and refugees, which has me feeling a bit prescient). EU countries will surely be tempted to reciprocate.

At the end of the day, the repercussions of Brexit will hinge on politics, not economics. The economic benefits of open trade are widely acknowledged, but if EU membership is to continue to mean something beyond onerous dues and ceded sovereignty, it might be hard for members to resist punishing Britain, and themselves, for its departure.

We’ll see what happens. Britain has two years to make its departure. Until then, don’t take too many cues from financial markets: they’re nervous at the moment, but that could easily change over time.

The Hidden Cost of Public Health

Starting on January first of next year, the City of Philadelphia plans to impose a “soda tax” of 1.5 cents per ounce. The new law—already set to be challenged in court—has proved highly controversial, even on the political left, where its revenue-raising potential is pitted against concerns over its regressive nature. The political right seems fairly uniformly unenthused.

But that’s boring. What’s really interesting is that Philadelphia’s government is avoiding calling the tax a public health measure, instead choosing to focus on the additional revenue it might generate, despite soda taxes’ perennial appeal to the public health profession.

Public health officials often laud soda taxes as a means of reducing demand for sugary drinks that are linked to obesity, diabetes, tooth decay and other maladies. The underlying economics are relatively straightforward—raise the price of soda and people will consume less of it. The hope is that doing so will reduce the incidence of the aforementioned conditions and curb associated healthcare spending.
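For a sense of scale, here’s a back-of-the-envelope sketch using the standard elasticity approximation. The $1.00 base price and the elasticity of −1.2 below are my own illustrative assumptions, not figures from Philadelphia’s analysis:

\[
\underbrace{12\ \text{oz} \times \$0.015/\text{oz}}_{\text{tax on a can}} = \$0.18
\quad\Rightarrow\quad
\%\Delta P = \frac{\$0.18}{\$1.00} = 18\%
\]

\[
\%\Delta Q \;\approx\; \varepsilon \times \%\Delta P = (-1.2)(18\%) \approx -22\%
\]

Under those assumptions, the tax would be expected to cut consumption by roughly a fifth; with a less elastic demand curve, the drop would be smaller and the revenue larger.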

But despite the wide approval of public health professionals, it’s far from clear that a soda tax is an appropriate solution in this scenario. Not only is there reason to doubt its efficacy, but in a sense, such a policy blurs the line between public and what we might call ‘private’ health in a way that marks a pernicious slide away from self-determination and seems to me unethical.

Using a tax to “correct” demand is one of the classic methods of solving collective action problems, which tend to involve public goods or open-access resources and often require regulatory oversight. President George H. W. Bush’s cap-and-trade program, created under the 1990 Clean Air Act Amendments to reduce emissions of sulfur dioxide, is a successful example of such an endeavor.

The idea is that if a resource is shared (in this scenario, air quality), then it makes sense to have a centralized agency impose regulations to account for the “social cost” associated with its degradation. If something can be proven to affect others (without requiring an onerous amount of nuance), there’s a compelling case for using coercive public policy to address it.
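In textbook terms, this is the standard Pigovian prescription (a generic sketch, not anything specific to Philadelphia’s ordinance): set the corrective tax equal to the marginal external cost, so that the price people face reflects the full social cost of their activity:

\[
MSC(q) = MPC(q) + MEC(q), \qquad t^{*} = MEC(q^{*})
\]

Facing a price of \(P + t^{*}\) rather than \(P\), private actors internalize the harm their consumption or production imposes on everyone else.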

That’s certainly the case where air quality is concerned. But there are key differences between air pollution and obesity; even though they both affect people’s health, one is far more likely to be incurred privately. We all breathe the same air, but your neighbor drinking a Double Gulp every day doesn’t affect your waistline. Someone else being fat doesn’t harm you. Right?

Actually, depending on how an individual’s healthcare is paid for, that last part is up for debate.

Soda drinkers tend to be poorer (the same is true for users of tobacco, which is subject to similar tax-based deterrence) and therefore more likely to have their healthcare publicly subsidized. In a not-so-tangential sense, that means it’s very much in the interest of the taxpayer that those people be deterred from such actions. After all, any tax dollars not spent on healthcare can be spent on something else or not collected at all.

In my view this poses an ethical challenge—does public financing of healthcare erode beneficiaries’ sovereignty over their health-related decisions? And, if it does, what sort of precedents are we setting should America switch to a universal healthcare system, which would effectively render all health public?

It does seem to be the case that as more resources are poured into social safety nets, there is increased incentive for societies to attempt to engineer the results they want through coercive means. The resulting policies range from ethically dubious taxation to outright illiberalism.

Take, for example, the rather harsh methods by which the Danish government discourages immigrants and asylum seekers: seizing assets worth more than $1,450; using policy to force assimilation (in one city, mandating that pork be included on municipal menus); and cutting benefits to refugees by up to 45%.

A similar situation is unfolding in Sweden, where the extensive social safety net has turned immigration into a tug-of-war between classical and contemporary liberal sentiments. The Economist writes:

The biggest battle is within the Nordic mind. Is it more progressive to open the door to refugees and risk overextending the welfare state, or to close the door and leave them to languish in danger zones?

Closer to home, Canada has recently received some scrutiny for its habit of turning away immigrants with sick children so as not to overburden its single-payer healthcare system.

Some of this might sound cruel or discriminatory. Some of it is. But these are rational responses from systems forced to ration scarce resources. In a sense, it’s the ethical response, given that governments are beholden to their taxpayers.

It’s natural for public health experts, economists, and others whose job is to optimize society to want to promote a healthier nation. Our national health and wealth would clearly be improved if obesity, diabetes, etc. were eradicated. And yes, that could conceivably be achieved by any number of forceful policies—what about a Trump-style deportation of the obese?!

But we must consider the costs as well as the benefits of such policies. Are the potential gains worth ceding dominion over our personal decisions to rooms of “experts”? Is it possible for the conversion of health from a private to a public good to coincide with our liberal values?

I don’t think so, at least not in the extreme. If health becoming a public resource means that the government must take an increasingly paternalistic and protectionist role in our society, it’s not worth whatever we might gain—or lose around the midsection. After all, if people can’t be trusted to decide what food to eat, what can we be trusted with? If a soda tax is okay, what about a tax on red meat, sedentarism, or motorcycles? Surely we’d be healthier if we did less of each.

I do believe there is an appropriate role for government to play in promoting the private health of the masses, but it’s significantly more parochial than the sort of collective action scheme fetishized by academics. To loosely paraphrase the Great Ron Swanson: people deserve the right to make their own peaceful choices, even if those choices aren’t optimal.

Side note: I would also argue that there’s some pretty heavy cognitive dissonance at play here as far as soda taxes go. The federal government hands out generous subsidies—collected from taxpayers—to corn producers that make junk food and soda cheaper to consumers. If more expensive soda is the remedy, why not remove those subsidies rather than tax consumers twice?

Sanders Supporters: Why Fall in Line?

On June 6, 2016, the New York Times ran this article claiming that Clinton had clinched the nomination, one day before California and five other states headed to the polls to vote in the primaries. The article, based on a poll by the Associated Press, claimed that Clinton had secured enough superdelegate votes to effectively guarantee her the ticket, regardless of how those contests turned out. The timing was…serendipitous, shall we say. All in all, a fitting end to the Democratic primaries.

Anyone following the election will be familiar with the growing sentiment that our political process has been hijacked by elites. To paraphrase candidates Trump and Sanders: the system is rigged. This surprise announcement—that voters in six states had been rendered irrelevant by the markedly undemocratic superdelegate system—will surely do nothing to alleviate such disquiet.

I haven’t been shy about critiquing Sanders’ ideas from my little soapbox. He made the economy a cornerstone of his campaign and then displayed approximately zero economic acumen (in my opinion, at least; plenty of people find him compelling). But for all of the eye-roll-inducing statements he made over the past year, his campaign has been a breath of fresh air. It brought to light the extent to which establishment Democrats are perceived to have failed the working class (Trump did the same for the Republicans) and underscored that there are big ideological divisions within the Democratic Party.

It also brought a troublesome revelation for many longtime Democratic voters: some of those “Washington insiders” against whom they rallied to the beat of Sanders’ war drum have a “D” prefixed to their state. That disillusionment is sure to haunt the Party as it charges into November under the banner of a candidate under federal investigation for at least the fourth time.

Now Sanders and his supporters will be told (in truth, continue to be told) that it’s time to turn back into a pumpkin and fall in line. My advice to them: don’t.

I won’t go on a diatribe here—Clinton has plenty of merit as a candidate and is certainly “qualified” to be president, to whatever extent one can be qualified for a unique position. But she and her awkward, halting coronation represent everything wrong with American politics: the presumptuous attitude of entitlement; the ethos of a benevolent dictator; the impunity of the well-connected; the fallacy that less terrible is synonymous with good.

In 1964, Malcolm X observed that while Democrats were getting into office on the black vote, black political support was being taken for granted. I’d say the same point applies to any demographic or individual. If a voter is really into Sanders’ ideas, most of which are rooted in some spirit of protectionism, how do they rationalize supporting a pronounced neoliberal like Clinton?

Vote (or don’t vote) for whomever, for whatever reason you find compelling. But Bernie supporters shouldn’t feel obliged, in the name of “party unity,” to reward a political party that persistently refused to take their candidate seriously.

Of Course Minimum Wage Reduces Employment

In his opus, Economics in One Lesson, Henry Hazlitt devotes an entire chapter to minimum wage laws. He’s quick to identify a semantic problem that lies at the heart of the debate on minimum wage.

“…for a wage is, in fact, a price. It is unfortunate for the clarity of economic thinking that the price of labor’s services should have received an entirely different name from other prices. This has prevented most people from realizing that the same principles govern both.

Thinking has become so emotional and so politically biased on the subject of wages that in most discussions of them the plainest principles are ignored.”

Today Hazlitt’s gripe still rings true.

Presidential candidates Clinton and Sanders are calling for huge increases in the federal minimum wage (Clinton recently echoed Sanders’ call for a $15 federal wage floor). California and New York have scheduled incremental increases in their state minimum wages, reaching $15/hour by 2022 and 2021, respectively (with New York’s timetable stratified by county). All this is sold to the public as a means of helping poor workers, with rarely a mention of the costs of such policy, or of who would bear those costs.

Despite a wealth of study on the subject and a broad consensus about the effects of price floors, economists aren’t speaking out against such an aggressive price-fixing scheme as loudly as one might think.

Twenty-four percent of economists surveyed by the University of Chicago disagreed that raising the federal minimum wage to $15/hour by 2020 would reduce employment. That is, a quarter of economists disagreed that forcing employers to pay twice as much for labor would reduce their ability or desire to employ people. Fully 38% of economists surveyed responded that they were “uncertain.”

It’s hard to imagine economists making such a statement about anything else: that doubling the price of laptops, say, would have no effect on the number of laptops purchased. Since labor is purchased just like anything else, we can expect that making it more expensive will cause people to consume less of it.

Consider that when governments want to cut down on behaviors they deem harmful, one of their go-to tools is taxation aimed at increasing the price paid by consumers. Sanders understands that making people pay more for producing carbon means we will produce less carbon. Other politicians have proposed or implemented taxes on soda, tobacco, alcohol, and more activities in order to suppress demand for them. Yet apparently even economists fail to see the parallels between this and minimum wage.

As Hazlitt states, labor is best thought of as another good. Raising its price by mandate will yield the same effects as any other minimum price: some labor will be purchased at a rate higher than the free-market equilibrium, but a portion of the previously available supply will go unsold. In other words, while some workers will get a raise, others will work fewer hours, be fired, or never be hired to begin with, and employers will enjoy less productivity from their workers.
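To make the mechanics concrete, here is a minimal toy model of a binding price floor. The linear curves and every number in it are invented for illustration; this is a sketch of the textbook logic, not an empirical model of any real labor market:

```python
# Toy linear labor market. All parameters are invented for illustration.

def labor_demand(wage: float) -> float:
    """Hours of labor employers want to buy at a given wage."""
    return max(0.0, 1000 - 40 * wage)

def labor_supply(wage: float) -> float:
    """Hours of labor workers offer to sell at a given wage."""
    return max(0.0, 200 + 60 * wage)

# Free-market equilibrium: 1000 - 40w = 200 + 60w  =>  w = 8
equilibrium_wage = 8.0
equilibrium_hours = labor_demand(equilibrium_wage)  # 680 hours

# Impose a wage floor above the equilibrium wage.
wage_floor = 15.0
hours_demanded = labor_demand(wage_floor)  # 400 hours
hours_supplied = labor_supply(wage_floor)  # 1100 hours

# Trade is voluntary, so employment settles on the short side of the market.
employment = min(hours_demanded, hours_supplied)  # 400 hours
unsold_labor = hours_supplied - hours_demanded    # 700 hours with no buyer

print(f"Equilibrium: ${equilibrium_wage:.2f}/hour, {equilibrium_hours:.0f} hours worked")
print(f"With a ${wage_floor:.2f} floor: {employment:.0f} hours worked, "
      f"{unsold_labor:.0f} hours offered but not bought")
```

In this toy market, the workers who keep their hours earn $15 instead of $8, while total hours worked fall from 680 to 400. Real-world magnitudes are an empirical question; the sketch only illustrates the direction of the effect that price-floor logic predicts.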

No one—least of all economists—should be surprised to hear that setting the price of labor higher than people are willing to pay and accept will lead to less efficiency and productivity, nor that this would lead to slower job growth and less employment. We can even observe this happening during past increases of the minimum wage.

Minimum wage is rationalized as an intervention to alleviate poverty and give a leg up to the most vulnerable workers. However, raising the minimum price of labor not only prevents consumers (employers) from buying labor beneath that floor, but also prevents producers (employees) from selling labor below that price. Since some people don’t have skills that are worth at least $15/hour to employers, they are going to have a much harder time finding employment under such a policy.

When we consider the people who most likely fit this description, the cynicism of minimum wage laws becomes clear. Those least able to command a premium for their labor (the young, the poor, the under-educated, and the inexperienced) are the very people we purport to be helping! It’s no coincidence that minimum wage laws all over the world have roots in racism and ethnic nationalism. In many cases, their goal was to create unemployment among marginalized groups by eliminating their comparative advantage over native workers.

As for employers, a wage floor actually gives an advantage to bigger businesses and puts undue pressure on marginal producers (think mom-and-pop stores, rural and inner-city employers, etc.) who have smaller profit margins and must operate more efficiently. Quite bizarre for an election cycle marked by consternation over income inequality and skepticism of big business.

The ability to sell your labor competitively is important when you don’t have a lot to offer. We seem to understand the value of this for the affluent. No one thinks twice when a college kid takes an unpaid internship or starts volunteering to gain experience. If it’s fine to work for $0/hour, why not $1, $5, or $7?

The scale of the federal minimum wage is what truly makes it a bad idea. It’s one thing to try to fix the price of a specific item in a given location (though that’s still a bad idea). But imposing a national price floor on all incarnations of labor should be unthinkable. To suggest that this won’t lead to any reduction in employment (especially in poorer places) is ridiculous.

Some proponents of minimum wage hikes seem to understand this, yet proceed regardless. Upon signing California’s minimum wage increase into law, Governor Jerry Brown stated:

Economically, minimum wages may not make sense. But morally, socially, and politically they make every sense because it binds the community together to make sure parents can take care of their kids.

To be honest, I don’t understand the morality of pricing people out of work or making consumers spend more than they have to. Given that “57% of poor families with heads of households 18-64 have no workers,” I don’t think making those would-be workers harder to employ is going to be beneficial to anyone.

It’s good to care about the poor and try to implement policies that help them, and to be clear, I’m not advocating that nothing be done. But economic policies should make economic sense, rather than being rooted in feel-good or politically expedient gestures. Minimum wages help some (often the wrong people) at the expense of others, who, now unemployable, are unable to gain experience that might lead them to prosperity or at least self-sufficiency. At the same time, the rest of society is robbed of the potential productivity of those victims of the wage floor.

After-market transactions (which I’ll get into next essay) are a much better method of helping the poor, precisely because they don’t distort labor markets or reduce demand for labor. Hopefully, our economists will soon get back to the dismal science and stop playing politics.

 

Science and Politics: An Abusive Relationship

Decades before Louis Pasteur fostered scientific consensus on germ theory, Ignaz Semmelweis was imploring obstetricians to wash their hands after handling corpses. His work did little to inspire his fellow medical practitioners. On the contrary, he was met with indignation and disbelief at almost every turn. Though his increasingly erratic behavior and political inelegance didn’t help, there is no doubt that his alienation from the medical community was due in part to his then-heretical proposals.

We’ve come a long way since the Roman Inquisition put Galileo under house arrest for advancing heliocentrism. Yet skepticism remains a trait that can inspire zealous culture warriors to brand the skeptical “deniers” or deride them as “anti-science.”

Of course, there’s nothing more scientific than scrutinizing an accepted norm. The scientific process is dependent on constant refinement by people attempting to prove each other wrong. Indeed, science needs skepticism to sharpen its ham-fisted hypotheses into acute theories.

Our devotion to explaining the universe through rational observation and rigorous testing has catapulted us from a species-wide state of destitution to one of unimaginable wealth. That’s largely due to thousands of years of continued knowledge expansion and the pursuit of logical explanation. If science is the vehicle that brought us this far, then the fuel is undoubtedly…well, doubt.

This unique feature stands in sharp contrast to another primary way humans have explained the world: religion, which asks us to accept without questioning. Doubt may have been bad for Thomas, but Copernicus did wonderful things with it. There are few things as amusing as the rabid atheist who has not so much embraced doubt as become a cynic. Remember that uncertainty, regardless of its target, is the very heart of science.

A cursory glance at the past is all one needs to find examples of misplaced faith in the science of the day. As the story of Dr. Semmelweis illustrates, there was a time when nearly all doctors were pretty damn sure they didn’t need to wash their hands after handling dead bodies. In fact, they were offended by the notion.

More recently, Brian Nosek of the University of Virginia led an effort to replicate 100 studies published in top psychology journals; he and his team were unable to replicate about two thirds of them.

Treating scientific consensus as axiomatic is a step in the wrong direction. We need to keep gathering information, and that information has to include research by iconoclasts in order to be well rounded. Remember that many widely held beliefs started out as heresies. Behind each of them was someone willing to come out against conventional wisdom, sometimes at great personal or professional risk.

The greatest minds of humanity used to believe in a static universe, phrenology, and many more things that we might find ridiculous today. So if skepticism is so demonstrably useful and deserved, why do people demonize each other for failure to follow the herd?

It’s politics, stupid.

Like basically anything today, science often finds itself mired in the ostentatious game of political signaling. Opinions and interpretations of scientific research are as much a part of political identity as a bumper sticker or a lawn sign. This is hugely unfortunate because it leads people to adopt dogmatic approaches to a process that should be objective.

Politics ruin science (and pretty much everything else) because everything is reduced to a zero-sum game: an us-versus-them scenario where concession is likened to defeat. Politics also reduce diversity of opinion and promote groupthink.

If you think I’m exaggerating, consider this: as people’s scientific literacy increases, their opinions on climate change polarize depending on their political affiliation. But that’s not all. According to the same study, conservatives who are more scientifically literate are also more likely to believe that there is a scientific consensus on global warming. Dan Kahan writes:

Accordingly, as relatively “right-leaning” individuals become progressively more proficient in making sense of scientific information (a facility reflected in their scores on the Ordinary Science Intelligence assessment, which puts a heavy emphasis on critical reasoning skills), they become simultaneously more likely to believe there is “scientific consensus” on human-caused climate change but less likely to “believe” in it themselves! 

While skepticism of climate change science is a markedly right-wing prejudice, those on the left are more likely to display similarly rock-ribbed opinions on fracking, GMO safety, and other areas where their views conflict with scientific consensus.

Politics are an inevitable part of living in a republic, but scientific debate loses integrity when we let our politics decide how we feel about science instead of the other way around. That habit is divisive, but worse: it’s lazy and positively unscientific.

In an increasingly polarized country, we would do well to remember the humanity of our detractors. We also might make a conscious effort to both admit and overcome our biases, even as we argue with conviction.

Perhaps most importantly, we should stop acting like morality and argumentative position are inextricably linked. Doing so makes it that much easier to demonize people with differing opinions (if my opinion is moral and yours is different, yours is less moral; therefore, since you are putting forth an immoral opinion, you are evil) and makes us far less capable of changing our own.

Leave the crusades in the 15th century.

The Opportunity Cost of Sensitivity

Outrage as an argumentative deterrent has become so commonplace that it is in many instances left unquestioned. Across campuses in the United States, speech and course material are being constrained in the name of sensitivity. There seems to be mounting pressure on intellectuals to conform to conventional wisdom, even in the face of data. This, of course, comes at a loss to anyone who would like to pursue rational thought and who values integrity over pleasantry, and to society as a whole. What we are losing is not immediately visible, which is why I have dubbed it the “opportunity cost of sensitivity.”

Over the weekend, two events occurred that inspired me to write this essay: one involving a Fox News guest, Gavin McInnes, and the other involving Jerry Hough, a professor at Duke University.

Gavin McInnes’ comments on why women earn less than men were certainly…inspiring. That they have been met with indignation is no surprise to me. However, while I understand the public aversion to his rather simplistic assertion, I am reminded of an unfortunate pattern I observe all too frequently in contemporary American politics and media: the tendency to focus on the offensive aspects of a statement and use them to justify disregarding the entire spirit of the comment.

McInnes stated:

“The big picture here is, women do earn less in America because they choose to. They would rather go to their daughter’s piano recital than stay all night at work, working on a proposal so they end up earning less. They’re less ambitious, and I think this is sort of God’s way, this is nature’s way of saying women should be at home with the kids — they’re happier there.”

His comment is laden with exasperating generalizations bordering on simple ignorance. The idea that women are less ambitious than men presupposes that ambition comes in sex-specific levels (i.e., a male level of ambition and a female level of ambition). This is the sad consequence of trying to generalize a quality that is, by its very nature, intangible. Ambition can manifest itself in many ways; it is not exclusively linked to career aspirations. It could very easily be argued that raising a family is more ambitious than punching a clock 40 hours a week.

However, McInnes’ comment gestures toward a more cerebral hypothesis: that men and women prioritize differently and that this is reflected in their incomes. This could be an interesting conversation—it could be observed, tested, proven or disproven. We could have a very worthwhile national discourse on this, were we able to trade some emotion for critical thought.

Similarly, earlier this week Jerry Hough, a professor of political science at Duke University, began receiving severely negative feedback over a six-paragraph comment he left on a New York Times editorial. This has so far culminated in his being put on leave by the university. By far the most inflammatory excerpt:

“I am a professor at Duke University. Every Asian student has a very simple old American first name that symbolizes their desire for integration. Virtually every black has a strange new name that symbolizes their lack of desire for integration. The amount of Asian-white dating is enormous and so surely will be the intermarriage. Black-white dating is almost [non-existent] because of the ostracism by blacks of anyone who dates a white.”

Again, it is easy to see why people were put off by such writing. The use of “black”, “white” and “a Chinese” as nouns describing [groups of] people connotes a particularly archaic view of race relations, as does the explicit suggestion that uniquely black names exemplify a lack of desire to “integrate”. Given the nature of the New York Times’ online community, this comment was destined to fail.

However, this is only one paragraph of what is otherwise an analytical opinion on the causes of the problems plaguing the black community en masse. In the remaining five paragraphs, Hough is critical of moneyed Democrats’ manipulation of inner-city blacks, the practical inefficiency of Baltimore’s mayor, and the general narrative that racism alone is holding back black progress. He supports the latter point by noting that Asian Americans faced similar discrimination in the 1960s and juxtaposing the two groups’ modern socio-economic standing.

There is no hate in his comment. On the contrary, his tone is that of an observer bemoaning what he sees as a misstep in causal analysis. When we strip away a small amount of unintentionally offensive material, we are left with an (admittedly subjective) opinion supported by observation.

Once again, there is a hypothesis present (actually, there are several) that is at least worth our consideration. Regarding specifically the remark about the first names of black students: many acknowledge that uniquely black first names are not usually correlated with economic success. Though this knowledge is commonplace, uniquely black names remain very common in our society. This is interesting because it means that some parents are prioritizing anti-assimilation over the financial futures of their children. Whether or not we find this fair or exemplary of the world we’d like to live in, we should not shy away from discussing it in the name of sensitivity.

But such is the nature of the beast. In the age of the Internet, we seem to trip over our own feet rushing to be outraged by the first comment that hints at insensitivity. It makes no difference whether it is supported by observation or logic; dissenting statements are quickly vilified by an eager community of censors.

Per the title of this essay, I believe that doing so comes at a high cultural cost that is not readily apparent to us. Throwing water on the spark of an idea that we find upsetting cannot possibly be the knee-jerk reaction of an intellectual society.

That personal outrage is accepted as a valid tool for debunking arguments is extremely troublesome. Used that way, outrage stymies discussion on the basis of emotion alone, a subjective value, while leaving no room for objective analysis. The very fact that hypotheses are still being offered means that a solution has not yet been found. By forgoing an argument because we find its premise offensive, we are implicitly stating that the problem it addresses is less offensive: we would rather the problem remain unchanged than tolerate the unpopular view. Is Hough’s argument that resistance to assimilation is handicapping blacks more offensive than black suffering? It would take someone very far removed from the real world to make that case.

Hopefully, people will remember that they don’t know everything and that even initially offensive theories can better our understanding of the world we live in—ultimately leading to solutions. The question we must ask ourselves is: Do we gain more by stifling these conversations than we could by having them? That is the true nature of a high opportunity cost. The answer will usually be a resounding no.