In Defense of the Center

The mushy center never inspires passion like ideological purity. The spectacle of radicalism puts asses in the seats. It’s hard, on the other hand, to imagine rebellious, mask-clad youths taking to the street in the name of fine-tuning marginal tax rates.

Oh sure, you may see a protest here and there, and practically everyone grumbles about this or that issue in which they have an interest. But as the great philosopher Calvin once said: a good compromise leaves everybody mad.


Some more so than others. Opining in the New York Times, Senator Bernie Sanders suggests Democrats can reverse their political fortunes by abandoning their “overly cautious, centrist ideology,” and more closely approximating the policy positions of a Vermont socialist.

I suppose this could be sound political advice. Everyone has an idea of the way they’d like the world to work, and Sanders’ ideas are appealing to a great many people. You could argue–as Sanders does–that Republicans have had some success with a similar strategy following the Obama years. But, as they’re finding out, ideological purity makes for better campaign slogans than successful governing strategy.

Here’s the thing: We live in a big, diverse country. People have very different wants and needs, yet we all live under the same (federal) laws. Our priorities must sometimes compete against each other, which is why we often end up with some of what we want, but not everything. Striking that balance is tough, and by necessity leaves many people unhappy. We don’t always get it right. But when you’re talking about laws that affect 320 million people, some modesty, or if you prefer, “caution,” is in order.

Alas, Bernie is not of a similar mind. In fewer than 1,000 words, he offers no shortage of progressive bromides without mention of the accompanying price tag. It’s one thing to form a platform around Medicare-for-all, higher taxes on the wealthy (their “fair share”), aggressive clean energy commitments, a trillion-dollar infrastructure plan, or free tuition at state universities and lower interest rates on student loans. But all of them? At once?!

Sanders should remember the political and economic lessons of Vermont Governor Peter Shumlin’s foray into single-payer healthcare: Government spending–and thus government activity–is constrained by the population’s tolerance for taxation (And on the other side of things, their tolerance for a deficit of public services. Looking at you, Kansas). Go too far and you risk losing support. And unless you’re willing to rule by force, as extremists often must, that will cost you your ability to shape public policy.

For what it’s worth, I don’t think the Senator’s advice would do the Democrats any favors. The Democrats didn’t move to the center-left because there was widespread and untapped support for endless government programs in America. They did it because they collided with the political and economic reality of governance in our country. Americans are willing to pay for some government programs, but not at the rate Europeans pay to have much more expansive governments. The left, therefore, shouldn’t play an all-or-nothing game, but instead think about what it does well and how it can appeal to, rather than alienate, the rest of the country. That’s going to involve compromise.

Update: Following Jon Ossoff’s narrow defeat in a Georgia special election, there’s been a lot of discussion about whether a more progressive candidate would have fared better. Personally, I find it hard to believe centrism and fiscal conservatism worked against Ossoff in a historically Republican district. Much more believable is Matt Yglesias’ related-but-different take that Ossoff’s reluctance to talk policy left a void for the opposition to exploit, allowing them to cast him as an outsider.

One thing seems certain: the rift within the Democratic party isn’t going away anytime soon.

No, the Interest on Your Student Loan Isn’t Too High. In fact…

It seems like more often than not I’m opening these blog posts with an apology for a multi-week hiatus. Since nobody’s emailed to check on my well-being, I can only infer my readership has gotten on fine without my wonk-Jr. takes on public policy and other matters of high import. Fair enough; but don’t think your demonstrated lack of interest will spare you from a quick update.

Actually, it’s all good news: I’ve been having fun learning R (a statistical language), looking for a new apartment, and testing the limits of a 27-year-old liver. I saw Chance the Rapper and a Pirates game in Pittsburgh, which was awesome. The last article I wrote had some real success and was republished in several places, even earning a shout-out from John Stossel.

The big update is that my stint as a (purely) freelance writer has mercifully drawn to a close; I now write for a non-partisan public policy group. In fact, this very blog was one of my strongest selling points, according to my manager. It just goes to show you, kids: if you toil in anonymity for two years, eventually something will go your way.

*

Okay, enough about me. Let’s talk about a topic close to the heart of many millennials: student loans. More specifically, I want to talk about the interest rates charged on undergraduate student loans.

That interest rates are too high is, unsurprisingly, a common gripe among borrowers. If I had a nickel for every twenty-something I’ve overheard complain that the federal government shouldn’t profit off student loans…well, it still wouldn’t cover one month’s interest. However, this sentiment isn’t limited to overqualified baristas; popular politicians like Elizabeth Warren and Bernie Sanders–and even unpopular politicians–have publicly called for loans to be refinanced at lower rates and decried the “profiteering” of the federal government. From Bernie Sanders’ website:

Over the next decade, it has been estimated that the federal government will make a profit of over $110 billion on student loan programs. This is morally wrong and it is bad economics. Sen. Sanders will fight to prevent the federal government from profiteering on the backs of college students and use this money instead to significantly lower student loan interest rates.

Under the Sanders plan, the formula for setting student loan interest rates would go back to where it was in 2006. If this plan were in effect today, interest rates on undergraduate loans would drop from 4.29% to just 2.37%.

It makes no sense that you can get an auto loan today with an interest rate of 2.5%, but millions of college graduates are forced to pay interest rates of 5-7% or more for decades. Under the Sanders plan, Americans would be able to refinance their student loans at today’s low interest rates.
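To put those rates in concrete terms, here’s some quick back-of-the-envelope math. The $30,000 balance and 10-year term below are my own illustrative assumptions, not figures from the Sanders plan; only the two interest rates come from the quote above.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-payment amortization: P*r / (1 - (1+r)^-n)."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

balance, years = 30_000, 10         # hypothetical balance and repayment term
current = monthly_payment(balance, 0.0429, years)    # today's 4.29% rate
proposed = monthly_payment(balance, 0.0237, years)   # 2.37% under the Sanders plan

total_current = current * years * 12 - balance       # lifetime interest at 4.29%
total_proposed = proposed * years * 12 - balance     # lifetime interest at 2.37%
print(f"4.29%: ${current:,.0f}/mo, ${total_current:,.0f} interest over the loan")
print(f"2.37%: ${proposed:,.0f}/mo, ${total_proposed:,.0f} interest over the loan")
```

On those assumptions, the lower rate saves roughly $27 a month, or a bit over $3,000 in interest across the decade. Real, but hardly life-changing–which matters for the argument that follows.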

As one of those debt-saddled graduates, and one of the chumps who took loans at a higher rate of interest, I would obviously be amenable to handing over less of my hard-earned money to the federal government. But as a person concerned with the larger picture, I have to say this is a really bad idea. In fact, rates should be higher, not lower.

First of all, the progressive case for loan refinancing or forgiveness only holds up under the lowest level of scrutiny. Such a policy would overwhelmingly benefit borrowers from wealthy families, who hold the majority of student loan debt. Conversely, most defaulters hold relatively small amounts of debt. Fiddling with interest rates shouldn’t be confused with programs that target low-income students, like the Pell Grant, which are another matter entirely and not the subject of my criticism.

More to the point, the federal government probably isn’t making any money on student loans. Senators Warren and Sanders rely on estimates from the Government Accountability Office (GAO) that put federal profit on student loans at $135 billion from 2015-2024. But the Congressional Budget Office (CBO), using fair-value estimation, shows student loans costing the federal government $88 billion over the same period.

The discrepancy between the CBO and GAO figures comes down to how each office accounts for risk. The CBO’s fair-value method discounts expected repayments at market rates, which price in the chance that a downturn pushes many borrowers into default at once–forces beyond any individual’s control. The GAO’s method discounts at the government’s own low borrowing rate, which makes the same loans look more profitable. Essentially, the CBO thinks the risk of default on student loans is higher than the GAO does.
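The accounting gap is easy to see with a toy loan. Every number below is invented for illustration; the only point is that the same stream of expected repayments can look like a profit or a loss depending on the discount rate applied to it.

```python
def present_value(cashflows, rate):
    """Discount a stream of future annual cash flows back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

disbursed = 10_000               # hypothetical loan the government issues today
expected = [1_150] * 10          # expected annual repayments, already net of defaults

# GAO/FCRA-style: discount at the government's (low) borrowing rate
fcra = present_value(expected, 0.02) - disbursed
# Fair-value: discount at a higher market rate that prices in default risk
fair_value = present_value(expected, 0.05) - disbursed

print(f"Low-discount estimate:  {fcra:+,.0f}  (looks like a profit)")
print(f"Fair-value estimate:    {fair_value:+,.0f}  (looks like a loss)")
```

Same loan, same repayments–one method books a few hundred dollars of profit, the other a loss north of a thousand. Scale that up and you get the $135 billion vs. -$88 billion disagreement.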

Evidence suggests it’s unwise to underestimate the risk associated with student loans. According to a study by the liberal think tank Demos, nearly 40% of federal student loan borrowers are in default or more than 90 days delinquent. Add to that the fact that student loans are unsecured (not backed by collateral or repossessable assets, like a car or house), and they start to look like an incredibly risky venture for the federal government, and ultimately, taxpayers.

That conclusion is deeply unpleasant, but not really surprising if you think about it. Ever notice how the interest rates on private student loans–approximately 10% of the market–are much higher? That’s not because private lenders are greedy; it’s because they can’t lend at the rate of the federal government without losing money.

This is all important because the money that finances student loans has to come from somewhere. Be it infrastructure upgrades, federal support for primary education, or Shrimp Fight Club, the money spent on student loans isn’t available for competing priorities. This is even more important when you consider the loss the federal government is taking on these loans, the cost of which is passed on to future taxpayers in the form of higher taxes or lower spending. Since higher education is only one among infinite human desires, we need to decide how much of our finite resources to devote to it. Properly set interest rates are one way (probably the best way) to figure that out.

The irony, of course, is that doing so would require the government to act more like a private lender–the very thing it’s designed not to do! Our student loan system ensures virtually anyone who wants to study has the money to do so, regardless of the likelihood they’ll be able to repay. One of the nasty side effects of this indiscriminate lending is a large number of distressed borrowers, who now find themselves in the uncomfortable position of digging out from under a mountain of debt they likely shouldn’t have been able to take on.

More so than other forms of government spending, student loans have specific, discernible beneficiaries: the students who get an expensive education financed by the taxpayer at below-market rates. Sure, you can argue there’s some spillover; society does benefit from having more highly-trained workers. But most of the time, highly skilled labor is rewarded with higher wages. That being the case, is it really too much to ask for borrowers to pay a level of interest that reflects the actual cost of issuing their loans?

Yes, this would be discouraging for some: particularly those who want to pursue non-remunerative fields of study. That’s not such a bad thing; higher interest rates would steer people away from obtaining degrees with low salary expectations, which would–by my reckoning–reduce rates of delinquency and default over the long term. They would also help mitigate some of the pain of defaults when they do happen.

But–you might protest–you can’t run the government like a business! And sure, a lot of the time, you’d be right. However, I really think this is one area where doing so is appropriate–even desirable. Hear me out.

When the government can fund itself through profitable investments rather than zero-sum transfers, it should. If we’re going to have a government of any size (and few suggest that we shouldn’t), then we need to pay for it. Which sounds like the preferable way for that to happen: voluntary, productive, and mutually beneficial investments in society; or the forceful appropriation of private resources? I’m not suggesting the former could entirely replace the latter, but when it can, I think it absolutely should.

Astute readers will realize that if the government decides to lend profitably, it will have to compete with private lenders, which would cut into its margins and make its presence in the market redundant. So maybe it’s just a pipe dream. But if profitable lending isn’t possible, the federal government should at least try to minimize losses. One way or another, that means higher interest rates.

Insurance Coverage Numbers Are Important, But Not All-Important

Whether you’re into this sort of thing or not, you’ve probably been hearing a lot about healthcare policy these days. Public debate has roiled as Republican lawmakers attempt to make good on their seven-year promise to repeal and replace the Affordable Care Act (ACA). As the debate rages on, one metric in particular appears to hold outsize importance for the American people: the number of Americans covered by health insurance.

Analysis by the Congressional Budget Office, which showed that 14 million more Americans could be uninsured by 2018 under the Republican replacement, caused intense public outcry and was frequently cited as a rationale for not abandoning the ACA. There is immense political pressure not to take actions that will lead to a large loss of coverage.

But here’s the thing: the relevant metric by which to judge Obamacare isn’t insurance coverage numbers. To do so is to move the goal posts and place undue importance on a number that might not be as significant as we imagine.

The ultimate point of health insurance, and the implied rationale for manipulating insurance markets to cover sicker people, is that people will use insurance as a means by which to improve their health, not just carry a plastic card in their wallets.

Health Insurance ≠ Health

The impulse to use insurance coverage as a proxy for health is misguided but understandable. For one thing, the uninsured rate is a simple, single number that has dropped precipitously since the implementation of the ACA; that makes it a great marketing piece for supporters. For another, health insurance is the mechanism by which most of us pay for most of our healthcare.

And yet in 2015 the uninsured rate fell to 10.5% (down from 16.4% in 2005) while age-adjusted mortality increased for the first time in a decade.

It turns out a nominal increase in the number of insured Americans doesn’t necessarily translate into improved health outcomes for those individuals. A newly released paper from the National Bureau of Economic Research (NBER) finds that while the ACA has improved access to healthcare, “no statistically significant effects on risky behaviors or self-assessed health” can be detected among the population (beyond a slight uptick in self-reported health in patients over 65).

These results are consistent with other studies, like the Oregon Medicaid Experiment, which found no improvement in patients’ blood pressure, cholesterol, or cardiovascular risk after enrolling them in Medicaid, even though they were far more likely to see a doctor. There were, however, some notable-but-mild psychic benefits, such as a reduction in depression and stress in enrollees.

In short, despite gains in coverage, we haven’t much improved the physical health of the average American, which is ostensibly the objective of the ACA.

Why Not?

To be fair, the ACA is relatively young; most of its provisions didn’t go into effect until 2014. It may well be that more time needs to pass before we start to see a positive effect on people’s health. But there are a few reasons to think those health benefits may never materialize–at least, not to a great extent.

A lot of what plagues modern Americans (especially the poorest Americans) has more to do with behavior and environment than access to a doctor. Health insurance can be a lifesaver if you need help paying for antiretroviral medication, but it won’t stop you from living in a neighborhood with a high rate of violent crime. It won’t make you exercise, or change your diet, or stop you from smoking. It won’t force you to take your medicine or stop you from abusing opioids, and it certainly won’t change how you commute to work (that’s a reference to the rapid increase in traffic deaths in 2015).

Here’s something to consider: A lot of the variables that correlate to health–like income and education–also correlate to the likelihood of having health insurance. If we want healthier Americans, there may be more efficient ways to achieve that than expanding insurance coverage, like improving employment and educational opportunities. Maybe something creative, like Oklahoma City’s quest to become more walker-friendly, could yield better results?

Of course, all things being equal, more insurance coverage is better. But nothing comes without cost, and as a society we want to be sure that benefits justify costs. So far, that’s not clear. This poses an existential question about our current pursuit of universal coverage, and, by extension, the relevance of coverage as a metric for the success of healthcare policy: If insurance isn’t the cure, why are we prescribing it with such zeal?

What’s Up with U.S. Public Education?

*I wrote this a while ago, but didn’t publish. I was on vacation–sue me. I know the internet has the attention span of a five-year-old, and people aren’t really talking about DeVos anymore, but I’m hoping this is still interesting to someone.

The confirmation of Betsy DeVos as Secretary of Education was perhaps the hardest-won victory of President Trump’s nascent administration. Opposition to DeVos ran deep enough to require Vice President Pence to cast a historic tie-breaking vote.

To hear it from those on the Left, DeVos is uniquely unqualified for the position. Her lack of personal experience with the public school system, coupled with her one-sided approach to education and purported ignorance of education policy, makes her unsuited to the job, they argue.

On the Right, the response has been to call into question the political motivations behind opposition to DeVos. Teachers’ unions, after all, are some of the biggest spenders in U.S. politics and their economic interests are threatened by the kind of reforms DeVos’ appointment might foreshadow.

It’s hard to know if either or both sides are being overly cynical. I don’t pretend to have any deep knowledge of DeVos or her new mantle. But one thing seems empirically true: the status quo of public education isn’t above reproach.

More Money, Same (Math, Science, Literacy) Problems

According to data from the National Center for Education Statistics (NCES), per-pupil spending on public education has increased roughly 1.7% annually since 1980. Student performance, however, has largely stagnated over the same period by various metrics. To somewhat immodestly quote myself:

The statistics are damning: Literacy rates among 17-year-old Americans peaked in 1971. Standardized testing reveals that math scores peaked in 1986. Test scores show a lack of improvement in math, science, and reading, in which respectively 25%, 22%, and 37% of American students are proficient.

This kind of stagnation isn’t typical among other nations; the United States showed much smaller levels of inter-generational improvement than other OECD nations. Up until about 1975, Americans were scoring significantly higher in math and literacy than Americans born before them. Since 1975, scores have plateaued, even adjusting for race and foreign-born status of students. As [Gallup’s] study states, this implicates the entire US school system.

Test scores aren’t the only indicators of educational dysfunction. Fully 60% of first-year college students need to take remedial courses in either math or English (to be fair, you might attribute this in part to college admission policies). Companies are also reporting longer vacancies for STEM positions and increasingly are forced to delay projects or look outside the U.S. for workers.

To be clear, it’s not that US public schools are producing particularly terrible outcomes (though they’re admittedly middling among the developed world). The real problem is that spending on public education is becoming increasingly inefficient; we’re putting more and more resources into it and receiving little or no additional benefit. This is a long-term trend that should be addressed immediately to avoid throwing good money after bad.
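A quick compounding check shows how much that 1.7% annual growth figure adds up to. This assumes the rate held steady across the whole period, which is a simplification of the NCES data:

```python
growth_rate = 0.017        # ~1.7% annual growth in per-pupil spending (NCES figure)
years = 2015 - 1980        # roughly the period in question

# Compound the annual growth rate over the full period
multiplier = (1 + growth_rate) ** years
print(f"After {years} years of 1.7% annual growth, spending per pupil is "
      f"{multiplier:.2f}x its 1980 level")
```

In other words, we’re spending nearly twice as much per student as we did in 1980, while literacy and math scores sit where they peaked decades ago.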

In fairness, I have to point out that speaking of public schools in national terms risks obscuring the fact that some public schools–usually found in high-income neighborhoods–perform incredibly well. Unequal educational outcomes, however, are generally considered a bug of the public school system, not a feature. Charter schools have in some cases been able to address that inequality, with varying degrees of success (though there are charges that this is only possible because charters are given greater latitude in selecting their students).

The Status Quo Is Hard on Teachers, Too

There is a perception among some that public school teachers are profiting hand over fist as a result of teachers’ unions, at the expense of students. But the truth is a little more complicated.

On one hand, strong teachers’ unions have engendered some policies that arguably favor educators over students. Teacher firing rates, for example, are extremely low. This is especially true for tenured teachers, of whom an average of just 0.2 per district are dismissed for poor performance annually, according to the National Center for Education Statistics.

This is made possible (at least in part) by what effectively amounts to state-sanctioned local monopolies on education. Constraining where families can enroll impedes the normal market mechanisms that weed out inefficient suppliers (at least, that’s the theory embraced by school choice advocates). The theory isn’t illogical, and it explains the somewhat rare rift between black parents and the Democratic party line on school choice.

Consider a thought experiment: Imagine families were forced to shop for food only in their own neighborhoods. What might we expect to happen to the quality of food consumed by people in poor areas? What if we put limits on the amount of new stores that could open?

In this light, it might be accurate to say that policies that require students to attend schools in their district prioritize the school system over the scholars.

On the other hand, a lot of teachers are being harmed by the current system–particularly the young and good ones.

Under current agreements, teacher compensation rates are in large part determined by longevity, both within the profession and teaching district. Young teachers–especially women teaching young children–are often underpaid relative to other professions.

Source: No Recovery, Gallup 2016

Additionally, collective bargaining agreements have led to pay compression (a narrowing of the pay gap between high and low performers) among teachers, which penalizes high performing teachers and benefits low performing teachers. Correspondingly, there has been a detectable decline in standardized test scores of new teachers since the 1960s.¹

The combination of longevity-driven pay and salary compression has made teaching a less attractive profession for the best candidates, who can earn more in other comparable fields. A 2014 survey by the American Federation of Teachers revealed that merely 15% of teachers report high levels of enthusiasm about their profession, despite 89% feeling highly enthusiastic at the beginning of their careers.

*

What might we say about an education system that grows increasingly expensive without improvement for students or teachers? We might say that it needs work and we should be open to new ideas, in whatever form they might come. It might also be wise to proceed with caution; for better or worse, this is the system we have right now.

I don’t know if Mrs. DeVos’ agenda will result in improvements. The divergent problems of climbing spending and poor teacher incentives could prove difficult to address simultaneously, especially in the current political climate. But we should all remember the true goal of an education system–public, private, or somewhere in between–is to efficiently increase human capital. How that happens should be of secondary concern.

  1. The study I cited found these results to be true only among female teachers. For some reason, scores of incoming male teachers improved slightly over this period. If anyone has any theories as to why this might be, I’d love to hear them.

The Lost Art of Debate

This recent election cycle has been nothing if not revelatory. Who would have guessed that Bernie Sanders and Donald Trump were the harbingers of a populist revolt that would leave both major parties in the throes of identity crisis?

But enough punditry; those are considerations for the political class. While they meet behind closed doors in Washington, We the People should revisit the ancient and sacred art of civil debate and contemplate why so many of us abandoned it.

This isn’t to say that people haven’t fought for what they believe in. Indeed, part of the problem is that many of us confuse fighting with debate. Debate requires patience, empathy, clarity, and above all, an open mind. Fighting requires very little beyond the hubris to mistake conviction for virtue (for examples, visit Facebook, YouTube, or Twitter).

To debate someone you have to respect them enough to let them finish a sentence. You have to be willing to let them construct an argument against you under the assumption that you will later be able to successfully challenge its premise. You must also open yourself to the idea that there are realistic limits to your own knowledge and perception.

The bravest thing you can do is give someone time to speak against you. Conversely, resorting to ad hominem, laying waste to innumerable straw men, and shutting out dissent are the strategies of cowards: the easy way out.

Too often these days we lack the bravery to face our ideological opponents. We go online and read news sources that confirm our opinions; we shame our detractors into silence; we withdraw to be with our own kind. Consider that this year over 1,700 counties voted with margins that diverged from the national vote by 20 points or more, while only around 250 voted within 5 points of it. At the Washington Post, Philip Bump writes that two thirds of Clinton and Trump supporters had few to no friends supporting the other candidate. The lukewarm comfort of our echo chambers keeps us content to ignore that ours is a big world.

Nowhere has this tendency become as evident as on college campuses, which have sadly become ideologically homogeneous to the point of enfeeblement. Recently, colleges have developed a disturbing trend of turning away controversial speakers at the request of students. Between 2015 and 2016, the Foundation for Individual Rights in Education counts 52 attempts to disinvite speakers based on student opposition, of which 24 were successful.

Following the election of Donald Trump, there was a widespread and embarrassing exhibition of incredulity. The dangers and consequences of the echo chamber were made apparent.

Rather than take this as an opportunity for introspection, some have chosen to double down. More than a few colleges granted accommodations to their students, ranging from postponing exams to providing “breathing spaces”—rooms containing pets and coloring books—to students who felt stressed by the results. Many smart people took to social media to excoriate others they’d never met or tried to understand. Is it any wonder that those who have shied from ideological opposition wither in its face?

A closed mind cannot aspire to persuasion. A theory is not made stronger without facing resistance. If you want people to listen to you, as presumably we all do, you should be willing to hear them out. That means reading news from sources that make you uncomfortable and trying to engage people who aren’t like you without writing them off as bigots. It means reaffirming the value of unpopular speech, denouncing censorship, and moving past identity politics. It won’t be easy, but it will be worth it.

For the good of our society, it’s time to revive the art of debate and put aside the intellectual lethargy that precludes it.

Stakes is High

In 1996 De La Soul released Stakes is High. The album contains a running theme of concern for the state of hip-hop. In various skits throughout the album, members of the group fret over the decline of industry integrity, as the genre intensified its flirtation with gangster culture. The album feels like a fitting soundtrack to this election.

In 2016 America played its own high-stakes game, electing a president who, throughout the course of his campaign, displayed an alarming unfamiliarity with, or indifference to, the United States Constitution. Perhaps more unsettling than his ignorance, he often demonstrated a predilection for authoritarian governance, at various times idolizing Saddam Hussein and Vladimir Putin.

How did we get here? It didn’t happen overnight. During times of war, American presidents have sought—and often obtained—powers that far exceed the intended scope of the office. Foreign (and at times domestic) threats have been used to justify a litany of unilateral actions and circumvent civil liberties since the First World War. The creeping expansion of Executive power is a feature of a nation inured to a perpetual state of war.

The last two presidents were no exception to this pattern. President Bush secretly authorized the National Security Agency to spy on Americans and issued hundreds of signing statements and executive orders, further eroding the system of checks and balances between the branches of government.

President Obama continued this trend of erosion, sidestepping Congress on immigration, bombing in Libya, funding the Affordable Care Act, and more.

Partisanship contributed to this phenomenon. Instead of taking principled stands against overreach at all times, members of Congress and the American people have preferred to do so only when it meant thwarting the other team. That cheapens what should be a shared concern: the imbalance of power among the branches of government.

In a few months we will have to contend with a President Trump who, as of yet, seems unable to demonstrate a hint of restraint. It’s hard to imagine that he will adhere to a narrow interpretation of the presidency, especially armed with decades of precedent suggesting there’s really no need to do so.

Democrats and Republicans in office should take every step to right the wrongs of the past and contain the power of the president. Citizens, for their part, must come to regard presidential overreach as the danger it really is or risk a further slide toward tyranny. We must all remember that while authoritarianism may be sweet when your side is winning, it can quickly turn bitter. The stakes, as they say, are high.

Americans Aren’t the Ones Who Should Be Mad about Chinese “Dumping”

One of the few issues upon which Clinton and Trump seemed capable of agreement in the second debate was that cheap steel from China was hurting America. Given how alarming Sunday’s exhibition was, it might have been a nice respite. That is, if they had not both been so wrong.

China produces about as much steel as the rest of the world combined. This is due partly to cheap labor and strong domestic demand, but mostly to heavy government subsidies. Now that China’s economic growth has slowed, markets are awash with cheap Chinese steel. This has led China’s trading partners to accuse China of “dumping” steel.

Dumping, for those not familiar with the term, refers to the act of selling a good in a foreign market for less than the cost of production. It’s against WTO rules and is penalized by tariffs implemented by importing nations. The United States recently levied a 522% tariff on Chinese cold-rolled steel, which is used for construction and to make shipping containers and cars.

The general consensus, dutifully embraced by both candidates, is that dumping is bad for the importing country and an act of aggression by the exporter. But if you think about it, this is pretty absurd.

First of all, countries don’t trade with each other. The United States doesn’t buy wine from Portugal; American companies buy wine from Portuguese companies. We’re not “getting killed” on bad trade deals as Donald Trump fears; there isn’t even a “we” in the sense that he suggests. There are only people, and people don’t habitually engage in voluntary exchanges at a loss. It should be obvious that importers (American companies, in this instance) are the ones benefiting from cheap steel from China. That’s why they prefer to buy it over more expensive steel made domestically.

It’s true that China’s not a market economy in the same way that America is; their government owns and subsidizes far more than ours. That might sound like an advantage for the Chinese, but it’s really not.

Chinese producers are able to sell steel for less because of large subsidies from their government. The people who benefit from this are the people buying and selling steel–importers and Chinese steel companies, respectively. The people who lose are non-competitive firms and those paying for the subsidies…which would be the Chinese taxpayers.

Subsidized exports are really a transfer of wealth from within a country to without. Importers become more profitable and productive, which is precisely why Donald Trump builds with Chinese steel and why we’re all better for it. Yes, it hurts American steel companies, but whatever resources are devoted to domestic steel production can be redirected to other areas with better returns.

Conversely, import tariffs are paid by the importer, and ultimately the consumer. In other words, in order to protect us (read: domestic steel companies) from what amounts to discounted steel, our government taxes the hell out of it so that we end up paying more. Saying that this helps our economy is like claiming that rolling up your sleeves makes your arms warmer. Remember that any jobs or income generated by such tariffs comes directly at the expense of American consumers who are being forced to forgo savings or purchases they would have made with the money they saved on steel.
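To make the incidence concrete, here’s a toy calculation. Only the 522% rate comes from the text above; the $500 and $700 per-ton prices are made up purely to illustrate who ends up paying:

```python
# Hypothetical, illustrative prices -- not actual market figures.
chinese_price = 500.0    # $/ton of imported cold-rolled steel, pre-tariff
domestic_price = 700.0   # $/ton from a domestic producer
tariff_rate = 5.22       # the 522% duty mentioned above

# The tariff multiplies the import's price, pricing it out of the market:
tariffed_price = chinese_price * (1 + tariff_rate)
print(round(tariffed_price, 2))   # roughly $3,110/ton

# The buyer switches to domestic steel and eats the difference,
# which is ultimately passed along to consumers:
extra_cost_per_ton = domestic_price - chinese_price
print(extra_cost_per_ton)         # $200/ton of "protection"
```

The tariff never shows up as revenue here; its whole effect is to push buyers from the $500 option to the $700 one.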

If millions of tons of steel fell from the sky would we draft legislation to tax the heavens? No, we’d take the free steel and build things with it. If China wants to take money out of its citizens’ pockets and use it to make steel for the rest of the world, Chinese citizens should be outraged. But why should the rest of us complain? When someone gives you a gift, the correct response is: “thank you.”

The Political Impoverishment of America

We’re coming off one hell of a Friday. Material released about both the Democratic and Republican nominees confirms the fears of their respective less-than-fervent supporters: namely, that Clinton is an opportunistic liar and that Trump exhibits a moral deficiency that should and will render him unelectable.

A third, fourth, or fifth voice in tomorrow’s debate would be pretty nice right about now.

Unfortunately for We the People, that’s not really up to us. The decision falls solely within the purview of the Commission on Presidential Debates: a non-profit run by the Democratic and Republican parties–because why not? The CPD sets a threshold of 15% that a prospective candidate must reach in 5 national polls, which are conducted in part by some heavily partisan organizations and may not include third party candidates’ names at all.

Somehow, in the same country where we have more types of shoes and deodorant than Bernie Sanders can shake a stick at, we’re left with a binary choice (in practical terms) when it comes to electing a national leader.

For those of us who equate choice with wealth, it’s no shock that severe barriers to entry have left us politically poorer than we should be. Most Americans, after all, are not members of either major party and probably hold an eclectic set of views. Neither candidate is viewed favorably by the public.

But alas, this is life under political duopoly where the players are also the referees. It’s understandable that the two major parties wouldn’t be excited about the prospect of ceding some of their influence. What’s less understandable is the willingness of American “intelligentsia” to play along.

The New York Times has gone into full attack mode to dissuade voters from seeking alternative presidential options. From the opinion section, Paul Krugman and Charles Blow submit missives that malign voters whose opinions diverge from their own (infuriatingly, Krugman’s column, entitled “Vote as if It Matters”, tells voters that “nobody cares” if they use their votes in protest).

Less forgivably, writers in the Politics section can be found using discredited scare tactics to frighten voters away from making their own choices:

And, in what is one of the most difficult barriers for Mrs. Clinton to break through, young people often display little understanding of how a protest vote for a third-party candidate, or not voting at all, can alter the outcome of a close election.

The vast majority of millennials were not old enough to vote in 2000, when Ralph Nader ran as the Green Party nominee and, with the strong backing of young voters, helped cost Vice President Al Gore the presidency.

Hypothesis easily turns to axiom in a feedback loop. Instead of looking inward (300,000 registered Democrats voted for Bush in Florida in 2000), partisans choose to punch down at political minorities (Nader had 90,000 votes in Florida, only 24,000 of which were from registered Democrats) because that absolves them of the responsibility to produce better candidates.

Like any cartel, the political establishment excels at serving itself while being unresponsive to clients (voters). As long as the electorate is willing to swallow the idea that they must choose between the options laid out for them by Democrats and Republicans, that isn’t going to change. The truth is we do have a choice; it’s just a matter of exercising it.

Census Data Are Weird

For those of you with better things to do than scroll through Paul Krugman’s Twitter feed, I have news: last Tuesday the Census Bureau released its annual report on Income and Poverty, and people are stoked.

Here’s the upshot: Median household income increased 5.4% from last year after nine years of general decline. It’s now only 1.6% lower than it was in 2007, the year before the recession, and 2.4% lower than its historic peak in 1999.

While Asian households didn’t see a significant increase, black, white, and Hispanic households did. Median household incomes increased in all regions of the country and, for the first time since the recession, real income gains are distributed beyond the top earners.

Sounds like great news! It might well be, but before you celebrate there are some things to note about these statistics. The following isn’t a refutation of the conclusion that the economy is improving. Rather, it’s an indictment of the statistics that lead us to such conclusions. Here are three things to consider:

  • Household income data aren’t all they’re cracked up to be

All statistics have limits, but median household income is particularly misleading in the wrong hands. For years now, economists and politicians have cited median household income data to paint grim pictures of the American economic landscape. While the story is nicer this year, the logic behind the choice to measure households, rather than individuals, is still suspect.

A positive or negative change in median household income doesn’t imply a similar change for individuals. That’s because the characteristics of households vary across time and population.

Average household size has decreased from 3.6 to 2.5 people since 1940. Demographic shifts can also affect household incomes, because average household sizes differ between races.

Another limitation of household income data is that individuals aren’t equally distributed among households of different income levels. There are far more individuals–let alone workers–in the top quintile of income-earning households than the bottom. People who have vested interests in portraying an economically lopsided America tend to cite household data for this reason, without noting this.
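A toy population makes the point. The numbers below are invented, and they build in the assumption that larger households tend to earn more (which mirrors the real pattern): the “equal” household quintiles end up holding very unequal numbers of people.

```python
# Toy population of 10 households: (income in $k, number of residents).
# All figures are hypothetical, chosen only to illustrate the mechanism.
households = [
    (15, 1), (22, 1), (35, 2), (48, 2), (60, 3),
    (75, 3), (90, 4), (120, 4), (160, 5), (250, 5),
]
households.sort()                  # sort by income, lowest first
n = len(households) // 5           # 2 households per quintile

bottom_quintile = households[:n]
top_quintile = households[-n:]

people_bottom = sum(size for _, size in bottom_quintile)
people_top = sum(size for _, size in top_quintile)
print(people_bottom, people_top)   # 2 people vs. 10 people
```

Each quintile contains the same number of *households*, but the top one holds five times as many *individuals*, so “the bottom 20% of households” describes far fewer than 20% of people.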

Households expand and contract as more people are able to afford their own places. This can strangely cause median household income to rise while people are making less money. For example, if I were demoted and had to move in with my mom because I was now making half as much money, the median household income would increase as our two households merged, despite less aggregate income for the individuals involved.

The same works in reverse. When I started making enough money, I moved out of my mom’s house. Even though our combined income was greater, median household income fell.

Speaking of which…

  • Millennials are living at home longer and in greater numbers than previous generations

Fully 32% of 18-34 year-old Americans live with their parents, making it the most common living arrangement for that group. There are a couple of reasons for this: higher unemployment among young adults; an accompanying delay in or aversion to marriage; and a changing ethnic makeup of America, among others.

While Millennials are more likely to live with mom and dad, we’ve also become the largest generation in the workforce. A larger part of the workforce consolidating in fewer households could explain part of the rise in household income.

This probably isn’t too big a factor, but since we’re measuring households, a third of young adults living at home is worth keeping in mind.

  • “Low-income households” and poor people aren’t necessarily the same

This is a big one. Part of the elation about the Census data comes from the fact that lower-earning households have seen more of a bump in income than they have in recent years.

The problem is that income isn’t the same as wealth; a year’s income is more like a single still frame pulled from a film. It’s a simplistic method of gauging standard of living, hobbled by the fact that it doesn’t consider government transfers, assets, or liabilities. Economists would probably argue that consumption data are more informative indicators of standard of living.

A wealthy elderly couple and a part-time minimum-wage earner might both be in the lowest income quintile in a given year. That doesn’t mean their standards of living are similar.

Rising incomes of the lowest earners might indicate lots of things: for example, that people are being forced back into the labor market after retiring. As I’ve noted here before, most poor households have no income earners, according to data from the Federal Reserve Bank of San Francisco. Unless the rate of employment among the poor grew at the same time, there could be reason to believe that the increase in low-earning households is due to something other than increased income of “the poor.”

Another common assumption is that the households’ positions within income brackets are stagnant, as if we lived in a world without job churn. The households in the bottom 10% of income earners this year aren’t necessarily the same ones that were there in 2008.

We’re used to seeing data based on groups of income earners, not individuals, because that’s how the Census Bureau reports them. Studying individuals, however, tells a more relevant story.

The United States Treasury tracked individuals’ tax returns from 1990 to 2005. They found that over half of people in the bottom quintile as of 1990 had moved to a higher quintile by 2005.


The Census statistics measure exactly what they measure: nothing more. That doesn’t mean the information is useless; it just means we shouldn’t lose our heads over it. Extrapolating a verdict about America’s economic health from median household income alone invites mistakes based on a deceptively simple figure.

*

Don’t mistake my skepticism of stats for pessimism about the American economy. Where long-term trends in the American economy are concerned, optimism is never a bad idea.

The Value of Inferior Products

There’s no shortage of lessons for us to learn from the unfolding Mylan scandal. It’s practically a study in the consequences of an uncompetitive market and bureaucratic cynicism. Among all the take-aways from this episode, there’s one thing that stands out as particularly interesting: the value inferior products offer consumers.

That sounds a bit counterintuitive. We typically think of inferiorities as something to be eliminated. But in many cases the presence of inferior goods–economically defined as goods for which demand increases when a consumer’s income drops–is actually beneficial, relative to their absence.

Before I talk about how this applies to the EpiPen, I have a confession to make:

I don’t have the latest iPhone.

I don’t even have the oldest iPhone. I have a Samsung Galaxy Light.

It overheats easily, has abysmal battery life, and sends snapchats that look like flipbooks. It hangs up during calls and will never catch a single Pokémon. I once tried to purge my text archive and it took over 12 hours.

Why do I own a device that’s so pathetic by modern standards? It’s a question of personal priorities and resources. I don’t care enough about the add-ons to spend $400 on a phone: a decision surely influenced by the fact that I don’t make that much money.

And yet, even though my phone is undeniably of poorer quality than the phones of my peers, I am much better off for having it. If it were to disappear tomorrow, say, because someone outlawed suboptimal cell phones, I’d be upset. I would probably end up buying an iPhone, but if I didn’t have the money, I might end up without a cell phone altogether.

So what’s the scrappy alternative (the Galaxy Light, if you will) to the EpiPen? Well, we don’t know–it’s not allowed to exist.

The FDA has the power to decide what products consumers can and can’t access. In the case of EpiPen substitutes, among others, it has imposed onerous hurdles to market entry that severely limit anyone’s ability to compete with Mylan. The Wall Street Journal writes:

But no company has been able to do so to the FDA’s satisfaction. Last year Sanofi withdrew an EpiPen rival called Auvi-Q that was introduced in 2013, after merely 26 cases in which the device malfunctioned and delivered an inaccurate dose. Though the recall was voluntary and the FDA process is not transparent, such extraordinary actions are never done without agency involvement. This suggests a regulatory motive other than patient safety.

Then in February the FDA rejected Teva’s generic EpiPen application. In June the FDA required a San Diego-based company called Adamis to expand patient trials and reliability studies for still another auto-injector rival.

Let’s be charitable and ignore that Mylan spent over $2 million lobbying in Washington in 2015. FDA risk aversion, noble or otherwise, is still hurting consumers by leaving them with fewer options and higher prices.

While the FDA has a preference (and every political incentive) for extreme vetting, it may be that some consumers of epinephrine prefer a less expensive, if less tested, model of injector.

Assuming that consumers have the same priorities as their legislators is a mistake of arrogance, and a costly one. A 2014 Tufts University study put the cost of getting a drug to market at $2.56 billion: $1.4 billion in out-of-pocket expenses and $1.16 billion in time costs (the forgone return on that $1.4 billion over the duration of the approval process).

There are reasons people buy lower-quality products. Sometimes it’s a question of personal priorities, sometimes one of finance. The reluctance to acknowledge that this is as true in healthcare as any other marketplace is wrongheaded, though understandable given human emotion and that people’s health is at stake.

But the rules governing price and availability aren’t swayed by emotion. If we want EpiPens to be less expensive, we need to let more people try to make them, even if that involves some measure of risk. Excessive caution carries risk as well: by implementing such harsh standards, the FDA has ensured that the only product remaining is not only highly effective, but also highly expensive.

Rather than accept that there exists a continuum of quality in healthcare products, as with everything, epinephrine users are forced by regulation to use the best or nothing. Because of the diffuse nature of healthcare expenses, this kind of action raises prices across the board and inhibits the development of more efficient products.

Cell phones, even smartphones, are ubiquitous today and generally affordable. When they first came out, they were strictly the province of the wealthy. The same goes for cars, HD televisions, and most other technology.

Someone like me can afford those things today because lots of different producers were able to compete against each other and figure out how to give people more for less. If, when the iPhone came out, all subsequent smartphones had been held to similarly strict standards, it’s almost certain that fewer people would have been making smartphones, there would have been fewer models to choose from, and the ones that did exist would have been markedly more expensive and less innovative, due to reduced competition.

Yes, it makes sense to worry about the quality of medical devices that people are using, and yes, the FDA (or something like it) can be useful in that regard. But it also makes sense to concern ourselves with the availability of those same devices. Climbing healthcare costs are dangerous, too.