President Reagan memorably said that the nine most terrifying words in the English language are "I'm from the government, and I'm here to help." Governments in all the major jurisdictions are now making good on that unwanted promise, lifting responsibility for everything from our shoulders.
Those receiving subsidies and loan guarantees are no doubt grateful, though they probably regard the support as the government's duty and their right. But someone has to pay for it. In the past, the redistribution of wealth through taxes meant that the haves were taxed to support the have-nots, or at least that was the story. Today, through monetary debasement, nearly everyone benefits from monetary redistribution.
This is not a costless exercise. Governments are no longer robbing Peter to pay Paul. They are robbing Peter to pay Peter as well. You would think this is widely understood, but the Peters are so distracted by the apparent benefits they might or might not get that they don’t see the cost. They fail to appreciate that printing money is not just the marginal source of financing for excess government spending, but that it has now become mainstream.
There is an almost total absence in the established media of any commentary on the consequences of monetary inflation. Instead, amid cries for more, we even have financial experts warning of a deflationary collapse and urging the Fed to introduce negative interest rates to stave off deflation. Yes, there are deflationary forces, because banks wish to reduce their loan exposure at a time of increasing risk. But we can be sure that central banks and their political masters will do everything they can to counter the trend of contracting bank credit by increasing base money. There can only be one outcome: the debasement and eventual destruction of fiat currencies.
There is an aspect of the destruction brought about by monetary policy which is almost never considered by policymakers, and that is how it distorts the allocation of capital and leads to its misallocation. In free markets, capital is scarce and must be used to greatest effect if the consumer is to be properly served and the entrepreneur is to maximize his profits.
Capital comes in several forms and encompasses every aspect of production: principally an establishment, machinery, labor, semimanufactured goods and commodities to be processed, and money. An establishment, such as a factory or offices, and the available labor are relatively fixed in their capacity: depending on how they are deployed, they can produce only a limited quantity of goods. Just one form of capital, the money and credit now provided by central banks and the banking system, is, in its unbacked form, infinitely flexible. Consequently, attempts to stimulate production by monetary means still run into the capacity constraints of the other forms of capital.
Monetary policy has been increasingly used to manipulate capital allocation since the early days of the Great Depression. Its effect has varied, but it has generally come up against the constraints of the other forms of capital. Where there is surplus labor, it takes time to retrain workers in the specialized skills required, a process hampered by trade unions that ostensibly protect their members but in reality resist the reallocation of labor resources. Government control over planning and increasingly stifling regulations, which also put a brake on change, have lengthened the time before entrepreneurial investment in new or altered establishments is rewarded with profits. Government intervention has likewise discouraged the withdrawal of monetary capital from unprofitable deployments, or malinvestments, needlessly lengthening recessions.
When the advanced nations had strong industrial cores, the periodic expansions of credit and their subsequent sudden contractions led to observable booms and busts in the classical sense, since production of labor-intensive consumer goods dominated production overall.
There have been two further developments. The first was the abandonment of the Bretton Woods agreement in 1971, which led to a substantial rise in commodity prices: the broad-based UN index of commodities rose from 33 to 157 during the decade, an increase of 376 percent.1 Prices in this input category of production capital compared unfavorably with US consumer price increases of 112 percent over the same decade, and the mismatch between these and other categories of capital allocation made economic calculation a fruitless exercise. The second development was the liberalization of financial controls, beginning with London's Big Bang in 1986 and culminating in the 1999 repeal of America's Glass-Steagall Act of 1933, which allowed commercial banks to fully embrace and exploit investment banking activities.
The banking cartel increasingly directed its ability to create credit toward purely financial activities, mainly for its own book, thereby financing financial speculation while deemphasizing the expansion of bank credit for production purposes for all but the larger corporations. Partly in response, the nineties saw businesses move production to low-cost centers in Southeast Asia, where every form of production capital except monetary capital was significantly cheaper and more flexible.
There then commenced a quarter century in which expanding international trade replaced much of the domestic production of goods in the US, the UK, and Europe. It was these events that denuded America of its manufacturing, not unfair competition as President Trump has alleged; Germany's retention of its manufactures proves the point. But the effect has been to radically alter how we should interpret the effects of monetary expansion on the US economy and others, compared with Hayekian triangles and the like.
Business cycle research had assumed a capitalistic structure in which savers save and thereby make monetary capital available to entrepreneurs. Changes in the propensity to save sent contrary signals to businesses about the propensity to consume, causing them to alter their production plans. This analytic model, based on the ratio between consumer spending and savings, has been corrupted by the state and its licensed banks: savers have been replaced by former savers who no longer save and even borrow to consume.
Today, the inflationary origins of investment funds for business development are hidden through financial intermediation by venture capital funds, quasi-government funds, and others. Being mandatory, pension funds continue to invest savings, but their beneficiaries have abandoned voluntary saving and run up debts, so even pension funds are not entirely free of monetary inflation. Insurance funds alone appear to consist of genuine savings within an inflationary system.
Pension funds and insurance companies aside, Keynes's wish for the euthanasia of the saver has been achieved. He went on to suggest there would be a time "when we might aim in practice…at an increase in the volume of capital until it ceases to be scarce, so that the functionless investor will no longer receive a bonus."2
Now that bank deposits everywhere pay no interest, his wish has been granted; but Keynes did not foresee the unintended consequences of his inflationist policies, which are now being visited upon us. Among other errors, he failed to adequately account for the limited supply of nonmonetary forms of capital, which leads to bottlenecks and rising prices as monetary expansion proceeds.
The unintended consequences of neo-Keynesian policy failures are shortly to be exposed. The checks and balances on the formation and deployment of monetary capital in the free market system have been completely destroyed and replaced by inflation. So, where do you take us from here, Mr. Powell, Mr. Bailey, Ms. Lagarde, Mr. Kuroda?
We can now say that America, the nation responsible for the world's reserve currency, has encouraged policies which have turned its economy from a producer of goods and supporting services, the source of its citizens' wealth, into little more than a financial casino. The virtues of saving and thrift have been replaced by profligate spending funded by debt. Unprofitable businesses are being supported in the hope of a return of easier times, which are now gone.
Cash and bank deposits (checking accounts and savings deposits) are created almost entirely by inflation and currently total $15.2 trillion in the US, while total commercial bank capital is a little under $2 trillion. This tells us crudely that the $13 trillion sitting in customer accounts can be attributed to bank credit inflation. Increasing proportions of those customers are financial corporations and foreign entities, and not consumers maintaining cash and savings balances.
On the other side of bank balance sheets is consumer debt, mostly off balance sheet, but ultimately funded on balance sheet. Excluding mortgages, the total consumer debt, comprising credit card, auto, and student debt, was $3.86 trillion in mid-2019, amounting to an average debt of $27,571 per household, confirming the extent to which consumer debt has replaced savings.3
At $20.5 trillion, bank balance sheets are far larger than just the sum of cash and bank deposits, giving the banks leverage of over ten times their equity. Bankers will be very nervous about the current economic situation, aware that loan and other losses of only 10 percent would wipe out their capital. Meanwhile, their corporate customers are either shut down, meaning that most of their expenses continue while they have no income, or they are suffering payment disruptions in their supply chains. In short, bank loan books are staring at disaster. Effectively, the whole banking system is underwater at the same time that the Fed is exhorting the banks to join it in rescuing the economy by expanding their balance sheets even further.
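The leverage arithmetic above can be checked in a few lines. This is a back-of-the-envelope sketch using only the two totals quoted in the text; the variable names are my own.

```python
# Back-of-the-envelope check of the bank leverage claim in the text.
# Figures: total balance sheets $20.5 trillion, total bank capital ~$2 trillion.

total_assets = 20.5e12   # total US commercial bank balance sheets, dollars
equity = 2.0e12          # total commercial bank capital, dollars

# Leverage: assets per dollar of equity.
leverage = total_assets / equity
print(round(leverage, 2))          # prints 10.25, i.e., "over ten times" equity

# The loss rate on assets that would wipe out all equity.
wipeout_loss_rate = equity / total_assets
print(f"{wipeout_loss_rate:.1%}")  # prints 9.8%, roughly the 10 percent cited
```

The point of the second figure is that with balance sheets this large relative to equity, even a single-digit loss rate on loans and other assets is existential for the banking system as a whole.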
The sums involved in supply chains are considerably larger than the US’s GDP. Onshore, it is a substantial part of the nation’s gross output, which captures supply chain payments at roughly $38 trillion. Overseas, there is a further mammoth figure feeding into the dollar supply chain, taking the total for America to perhaps $50 trillion. The Fed is backstopping the foreign element through currency swaps and the domestic element mainly through the commercial banking system. And it is indirectly funding government attempts to support consumers who are in the hole for that $27,571 on average per household.
In short, the Fed is committed to rescuing all business from the greatest economic collapse since the Great Depression and, probably greater than that, to funding the US government’s rocketing budget deficits and funding the maintenance of domestic consumption directly or indirectly through the US Treasury, while pumping financial markets to achieve these objectives and preserve the illusion of national wealth.
Clearly, we stand on the threshold of an unprecedented monetary expansion. Part of it will be, John Law–style, to ensure that inflated prices for US Treasurys are maintained. At current interest rates, debt servicing was already costing the US government 40 percent of what was expected to be this year's budget deficit. That bill will now rise beyond control even without bond yields rising. Assistance is also being provided to the corporate debt market: BlackRock has been deputed to channel the Fed's newly printed money into this market through ETFs (exchange-traded funds) specializing in corporate debt. So not only is the Fed underwriting the rapidly expanding US Treasury market, but it is underwriting commercial dollar debt as well.
In late 1929, a rally in the stock market was prolonged by a similar stimulus, with banks committed to buying stocks and the Fed injecting $100 million in liquidity into markets by buying government securities. Interest rates were cut. And when these attempts at maintaining asset prices failed, the Dow declined, losing 89 percent of its value from September 1929.
Today, similar attempts to rescue economies and financial markets by monetary expansion are common to all major central banks, with the possible exception of the European Central Bank (ECB), which faces the unexpected obstacle of a challenge by the German Federal Constitutional Court claiming primacy in these matters. There is therefore an added risk that the global inflation scheme will unravel in Europe, which would rapidly lead to funding and banking crises for the spendthrift member states. Doubtless, any financial contagion will require yet more money printing by the other major central banks to ensure that there are no bank failures in their domains.
Whither the Exit?
So far, few commentators have grasped the implications of what amounts to the total nationalization of the American economy by monetary means. They have only witnessed the start of it, with the Fed’s balance sheet reflecting the earliest stages of the new inflation which has seen its balance sheet increase by 61 percent so far this year. Not only will the Fed battle to fund everything, but it will also have to compensate for contracting bank credit, which we know stands at about $18 trillion.
The Fed must be assuming that the banks will cooperate and pass on the required liquidity to save the economy. Besides the monetary and operational hurdles such a policy faces, it cannot expect the banks to want responsibility for the management of businesses that without this funding would not exist. The Fed, or some other government agency, then has to decide on one of three broad options: further support, withdrawing support, or taking responsibility for business activities. This last option involves full nationalization.
We must not be seduced into thinking that this is an outcome that can work. The nationalization of failing banks and their eventual privatization is not a good precedent for wider nationalization, because a bank does not require the entrepreneurial flair to estimate future consumer demand and to undertake the economic calculations to provide for it. The state taking over business activities fails for this reason, as demonstrated by the collapse of totalitarian states such as the USSR and the China of Mao Zedong.
That leaves a stark choice between indefinite monetary support or pulling the rug from under failing businesses. There are no prizes for guessing that pulling rugs will be strongly resisted. Therefore, government support for failing businesses is set to continue indefinitely.
At some stage, the dawning realization that central banks and their governments are steering into this economic cul-de-sac will undermine government bond yields, despite attempts by central banks to stop it, even if the deteriorating outlook for fiat currencies’ purchasing power does not destroy government finances first.
Earlier in the descent into the socialization of money, nations had opportunities to change course. Unfortunately, they had neither the knowledge nor the guts to divine and implement a return to free markets and sound money. Those opportunities no longer exist, and there can be only one outcome: the total destruction of fiat currencies, accompanied by all the hardships that go with it.
Modern macroeconomics has made price stability the primary objective of monetary policy. It is assumed that central banks can ensure price stability by skillfully managing the money supply, thereby creating the conditions for economic growth and prosperity.
In order to provide a safety buffer against the dreaded price deflation, central banks around the world try to generate a positive but moderate rate of price inflation. Price stability thus means a stable rate of inflation. The prices of goods and services should on average rise slowly at a constant rate over the medium and long term. In the eurozone, the aim is to achieve a price inflation rate close to but below 2 percent.
However, it cannot be denied that measuring a general price level and its rate of change is associated with major problems. The formal inflation target of the central banks must be operationalized in practice. It is therefore necessary to determine which prices are targeted and how they are to be summarized in a weighted average.
The Harmonized Index of Consumer Prices
The member states of the eurozone have agreed on a standardized procedure for measuring inflation. The Harmonized Index of Consumer Prices (HICP) is the operationalized target variable of monetary policy. The calculation of the HICP is relatively complex,1 as attempts are made to eliminate possible distortions in the measurement of inflation by means of elaborate procedures and estimates. However, it is highly questionable whether this is successful. In the following, I would like to take a closer look at two important sources of bias.
The HICP consists of twelve subindices2 which group together different classes of goods. Each of the subindices consists of different subcategories, which are again subdivided until the individual prices of certain goods and services are reached at the lowest level. These unit prices must be adequately weighted for the calculation of the index. The principle is that goods and services on which a large proportion of income is spent must be given a higher weighting than those goods and services that are purchased only very sporadically and in small quantities. Formally, therefore, the weights are determined by the real turnover shares. In Germany, for example, the weight of the subindex "Food and non-alcoholic beverages" (CP01) currently stands at 11.3 percent, which means that the average German household is assumed to spend 11.3 percent of its consumption expenditure on goods in this category. By comparison, the weight for "Alcoholic beverages, tobacco and narcotics" (CP02) is 4.2 percent.
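To make the weighting principle concrete, here is a minimal sketch of how subindex weights combine into a headline figure. The CP01 and CP02 weights are the German figures quoted above; the residual weight and all the inflation rates are invented placeholders, not official data.

```python
# Minimal sketch of weight-based index aggregation, HICP-style.
# CP01 and CP02 weights are from the text; everything else is a placeholder.

subindices = {
    # name: (expenditure weight, assumed yearly price change in percent)
    "CP01 Food and non-alcoholic beverages": (0.113, 2.0),
    "CP02 Alcoholic beverages, tobacco and narcotics": (0.042, 3.0),
    "All other subindices combined": (0.845, 1.5),  # invented placeholder
}

# Expenditure weights must sum to one: spending shares exhaust the budget.
total_weight = sum(w for w, _ in subindices.values())
assert abs(total_weight - 1.0) < 1e-9

# Headline inflation is the expenditure-weighted average of subindex rates.
headline = sum(w * r for w, r in subindices.values())
print(f"Headline inflation: {headline:.2f} %")
```

The real HICP aggregates through several layers of subcategories in the same fashion, down to individual observed prices, but the weighted-average principle is identical at every level.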
As consumption decisions are constantly changing, the weighting scheme applied may lead to distortions in the measurement of inflation. In the 1990s, for example, the Boskin Commission found a systematic overestimation of price inflation rates in the US of 0.4 percentage points per year.3 The cause of the distortion was the systematic substitution behavior of households.
The argument is as follows. Let us assume a base year with a given weighting scheme for all individual goods and services included in the index. This weighting scheme reflects the consumption behavior of households in the base year. That behavior changes over time, partly because prices for some goods rise faster than for others. Over time, households will tend to buy less of those goods whose prices rise faster and instead buy more of other goods that have remained relatively cheap. Households will thus substitute goods with a relatively low rate of price inflation for goods with a relatively high rate of price inflation. If the weighting scheme is not changed, an upward distortion of the measured price inflation results: one would overestimate price inflation.
Let me illustrate this with a simple example. Imagine a price index for soft drinks. The purchase prices of Coke and Pepsi are included in the index at 50 percent each, because on average households spend proportionally the same amount on both beverages. Assume that over a certain period of time the price of Pepsi increases by 5 percent per year. The price of Coke increases by only 1 percent annually. If the weighting is not changed, the overall inflation rate is 3 percent. In fact, however, the consumption behavior has shifted due to the different inflation rates. On average, households now buy more Coke and less Pepsi. Let us assume that households now spend four times as much on Coke as on Pepsi. The weighting would therefore have to be adjusted so that Coca Cola is included in the index at 80 percent and Pepsi at only 20 percent. The adjusted annual inflation rate, using the new weighting scheme, would therefore be 1.8 percent instead of 3 percent.4
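The Coke/Pepsi arithmetic can be written out explicitly. This sketch uses only the numbers from the example; the helper function is a hypothetical convenience, not part of any official methodology.

```python
# The soft-drink substitution example in code. Rates and weights are
# taken directly from the text's example.

def weighted_inflation(weights, rates):
    """Weighted average of individual price inflation rates (percent)."""
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to one
    return sum(w * r for w, r in zip(weights, rates))

rates = [1.0, 5.0]  # Coke +1 %/yr, Pepsi +5 %/yr

# Base-year weights: households spend equally on both drinks.
print(weighted_inflation([0.5, 0.5], rates))  # prints 3.0

# After substitution: four times as much spent on Coke as on Pepsi.
print(weighted_inflation([0.8, 0.2], rates))  # prints 1.8
```

Holding the 50/50 weights fixed overstates measured inflation (3 percent); updating the weights to the new 80/20 spending shares brings it down to 1.8 percent, exactly as in footnote 4.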
As a result of this reasoning, the weighting scheme of the HICP is now continuously adjusted, with the result that reported price inflation is lower than it would have been otherwise. Let us disregard any possible inaccuracies and assume that the adjustments to the weighting scheme perfectly reflect changing consumer behavior. Wouldn't this be overlooking a crucial point?
The answer is yes. If consumers do not switch to other products because of changing preferences, but simply because the prices of the products they would actually prefer have risen disproportionately, then consumers are worse off. The economist would call this a welfare loss. This welfare loss corresponds to a real increase in the cost of living, which is not reflected in the official figures if the weighting scheme in the index is adjusted accurately according to the changing consumption decisions of households. We end up with a downward distortion of the measured price inflation. The rate of inflation is then underestimated.
The second major source of distortion in official inflation statistics is changes in the quality of goods. Here, too, the Boskin Commission of the 1990s found an upward bias of 0.4 percentage points per year in the US, because quality improvements in products were not adequately priced in.5 The measured inflation rate was therefore once again too high.
The theoretical argument is compelling. Assume that prices do not change over a given period of time, but that the quality of the goods increases steadily. Then consumers get better quality for the same money. If you now say that the inflation rate is 0 percent, you are exaggerating. In fact, ceteris paribus the standard of living has improved: you get more quality for the same money, or the same quality for less money. Hence, the reported inflation rate should be negative.
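The logic can be illustrated with a toy calculation. The convention used here, dividing the observed price by a quality index, is an illustrative simplification of my own, not the official hedonic procedure.

```python
# Toy illustration of quality adjustment: if quality rises while the
# sticker price is unchanged, the quality-adjusted price has fallen.
# Dividing price by a quality index is an illustrative convention only.

def quality_adjusted_inflation(old_price, new_price, quality_gain):
    """Percent change of the price per unit of quality.

    quality_gain: fractional quality improvement, e.g. 0.10 for +10 %.
    """
    adjusted_new = new_price / (1.0 + quality_gain)
    return (adjusted_new / old_price - 1.0) * 100.0

# Same money, a 10 % better product: measured inflation should be negative.
print(round(quality_adjusted_inflation(100.0, 100.0, 0.10), 2))  # about -9.09
```

An unchanged price paired with a 10 percent quality improvement corresponds, under this convention, to roughly a 9 percent fall in the quality-adjusted price, which is why a reported rate of 0 percent would overstate inflation.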
Subsequently, so-called hedonic methods of quality adjustment were introduced, not only in America but also in Europe. For many products, therefore, it is not simply the observed purchase prices that enter the index, but adjusted prices that are supposed to reflect the quality changes.6 In what follows, I would like to address just two fundamental problems with quality adjustment.
First, producers have an incentive to highlight the quality improvements in the products they sell. When a car or computer becomes more powerful or faster, this shows up in measurable specifications: the car has more horsepower, the computer a faster CPU. Manufacturers will openly communicate these figures and use them to promote their products. Quality improvements are thus made transparent and comprehensible for buyers, and they can therefore be taken into account relatively easily in official statistics.7
On the other hand, producers have an incentive to conceal possible deteriorations in quality from buyers. If the casing and wiring of a computer are made of inferior material, this is usually not mentioned in the product description. If you want to detect deteriorations in quality, you often have to look very closely. In many cases, they are not easily detectable and cannot be quantified.
This leads to a systematic distortion. On the one hand, quality improvements are visible and taken into account. The prices of the products in question are reduced in the official statistics. Quality deteriorations, on the other hand, remain undetected and the prices of the products concerned are not increased accordingly. It is therefore probable that the adjustments made here also create a downward bias. The official statistics then report a price inflation rate that is too low.
The second point I would like to add has not yet been taken into account at all in the relevant literature. Let us assume that all quality changes are accurately priced in by official statistics. Even if this were the case, it would create a downward bias in reported price inflation. The reason for this is that a given quality improvement in a product already creates deflationary price pressures on other goods without any adjustment being made at all. This pressure arises in particular for the previously common and now inferior predecessors of the new product.
For example, when Apple launched the iPhone, the first modern smartphone, negative price pressure arose on conventional mobile phones, because Apple took market share from competing handset manufacturers with its new product. As a result, competitors were forced to charge lower prices for their products than would otherwise have been the case. Only by offering lower prices could at least some buyers be persuaded not to switch to the new iPhone.
This negative price pressure on competing products, which results from innovation, is already reducing the measured inflation rates. This means that a given improvement in quality is partly reflected in falling prices for other goods. If the price of the quality-improved good is adjusted in addition to this market adjustment (assuming that it is even possible to do this accurately), we would overshoot the mark. Price inflation would be underestimated.
There is no doubt that both substitution effects and changes in the quality of goods and services pose practically insoluble problems for official inflation statistics. Quality changes cannot be quantified objectively. This circumstance alone opens up enormous discretionary scope for official price statistics, which also has an impact on monetary policy. The M1 money supply in the euro area has increased more than fivefold since its inception.8 This could also be politically justified, because the reported price inflation was relatively low. Prices in the euro area have officially increased by only slightly more than 40 percent since 1999. Is price inflation systematically underestimated? The suspicion is obvious.
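Footnote 8's figures can be turned into compound annual rates for comparison. This is a rough sketch: the 5.17x money supply multiple and the roughly 40 percent cumulative price rise come from the text, and the period of about 21.25 years (January 1999 to March 2020) is an approximation.

```python
# Rough annualized-rate comparison of the figures cited in the text:
# M1 up 5.17x and consumer prices up ~40 % between Jan 1999 and Mar 2020.

def annualized_pct(total_multiple, years):
    """Convert a total growth multiple into a compound annual rate (%)."""
    return (total_multiple ** (1.0 / years) - 1.0) * 100.0

years = 2020.25 - 1999.0  # January 1999 to March 2020, roughly 21.25 years

print(round(annualized_pct(5.17, years), 1))  # M1: roughly 8 % per year
print(round(annualized_pct(1.40, years), 1))  # prices: roughly 1.6 % per year
```

The gap between roughly 8 percent annual money supply growth and roughly 1.6 percent officially reported price inflation is what motivates the suspicion raised above.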
Even if the practical problems of measuring inflation that arise from substitution effects and quality changes could be solved sufficiently well, applying the procedures currently used in official statistics would lead to a systematic underestimation of price inflation. The upward biases identified in the 1990s, though undoubtedly relevant, are reversed into downward biases once the factors discussed above are considered. On both points, substitution effects and quality changes, the current methods would overshoot the mark even if they could be applied accurately and flawlessly.
In addition, there are other gaps in the official measurement of inflation. Asset prices are not taken into account. However, disproportionate price inflation has been taking place in recent decades, especially for long-term assets such as real estate and stocks. It is not surprising that the median of subjectively perceived price inflation rates in the eurozone is 5 percentage points higher per year than the officially reported inflation rate.9
- 1. It is therefore also not transparent for the outside observer. The statistical offices provide much of the data used, but far from all. In particular, they do not provide information on the raw data used, i.e., on the prices actually observed and documented before they are included in the statistics after various adjustments.
- 2. The twelve subindices are CP01: Food and non-alcoholic beverages; CP02: Alcoholic beverages, tobacco and narcotics; CP03: Clothing and footwear; CP04: Housing, water, electricity, gas and other fuels; CP05: Household goods and routine household maintenance; CP06: Health; CP07: Transport; CP08: Post and telecommunications; CP09: Recreation and culture; CP10: Education; CP11: Restaurants and hotels; and CP12: Miscellaneous goods and services.
- 3. For Germany, the distortion was found to be somewhat less pronounced shortly afterwards (0.1 percentage points). See Hoffmann "Probleme der Inflationsmessung in Deutschland" (Discussion Paper, Deutsche Bundesbank, 1998).
- 4. In this simplified example, the adjusted inflation rate results from the newly weighted average of the individual inflation rates: 0.8*1+0.2*5=1.8; in contrast to the original weighting: 0.5*1+0.5*5=3.
- 5. For Germany, the estimated distortion due to quality improvements was 0.45 percentage points.
- 6. No information on the extent of the quality adjustments is provided by the relevant offices. Raw data prior to quality adjustment are not made publicly available in Europe.
- 7. The word "relatively" is important here. In the final analysis, quality improvements are of course impossible to price in and quantify, because they are subjective.
- 8. From January 1999, when the euro was introduced as book money, M1 rose from €1,807,005 million to €9,335,181 million in March 2020, an increase of 5.17 times.
- 9. See Karl-Friedrich Israel, "Why Has There Been So Little Consumer Price Inflation?," Mises Wire, May 11, 2020, https://mises.org/wire/why-has-there-been-so-little-consumer-price-inflation.
Back in the summer of 2006, perhaps sensing momentum for Democrats going into the midterm, Daily Kos founder Markos Moulitsas gave it a push by making a play for the libertarian vote. Considering the degree to which the Bush/GOP years disappointed, it made sense.
At the outset, he targeted “government efforts to intrude in our bedrooms” and “NSA spying,” a couple of sore spots in light of President Bush’s proposed constitutional restriction of marriage to between a man and a woman and new state powers enacted to conduct the war on terror.
Slowly but surely, though, Moulitsas began bailing on his target audience, alluding to free healthcare and "poverty prevention programs." San Antonio Express-News Smart Money columnist Michael Taylor has trodden similarly shaky ground lately.
First, in astutely pointing out how some Republican senators are unwittingly greasing the skids for a huge new entitlement called a universal basic income (UBI), he erroneously asserted that “trusting people” with such a handout is a “deeply small-government idea.” Now, he’s criticizing the moralization of “poverty-reduction programs.”
To be sure, he starts out on solid footing in shaming government for picking winners and losers by deciding which types of businesses merit public help after being shut down the last couple months.
Libertarians have been at the forefront of freeing so-called victimless crimes from the chains of the supposedly morally superior. More and more jurisdictions are decriminalizing marijuana and legalizing gambling, and criminal justice reform passed a couple of years ago.
Taylor’s case becomes tenuous, however, when he equates restrictions on public welfare programs to “blaming poverty on the immorality of its victims.”
Few would disagree that children born into destitution are the most unfortunate segment of society. That consensus is reflected in the $13–18 million that the local United Way spends on helping them every year. But giving their parents checks with no strings attached is not a solution.
While anyone who demands the efficient use of taxpayer funds (assume for the sake of argument that this is possible) would certainly support reducing bureaucracy, the thrust of Mr. Taylor's pitch, giving free rein to the beneficiaries of public aid funded by resources taken from others' earnings, is the real immoral act.
Pairing the loosened strings with commensurate cuts to these programs' provision would be a logical step. Include a reduction in individual payouts, and you might even have a deal. Best of all, transfer these programs entirely to state and local jurisdictions. But even following a more proper federalist path, you can't call it "libertarian," despite the best efforts of those who try to appropriate the label.
With a headline that read “An Unusual Breed: A Libertarian Democrat,” The Economist parroted Colorado governor Jared Polis’s self-identification as such upon assuming office last year. In the same paragraph, they touted his support for “universal health care, investments in renewable energy, etc.”
Apparently wanting to lower income taxes is just enough to qualify for the moniker. Cute.
One state-run program that Mr. Taylor holds up as immorally administered is unemployment assistance.
The wages you forgo when your employer is forced to contribute to this program are already a tough pill to swallow. To not insist that those out of work look for a job while drawing benefits would not only compound that burden, but would be counterproductive. Taylor seems to lack an understanding of incentives, a tenet of economics 101.
In the immediate aftermath of the recession a decade ago, the unemployment rate hovered around 9 percent. Once expanded benefits were rolled back, it started dropping. This is roughly the inverse of what happened when Lyndon B. Johnson declared “war on poverty.”
Before Uncle Sam got involved, the poverty rate had been in clear free fall. Once he joined the battle, that trend was arrested, never to escape an 11–15 percent range.
The problem with these appeals to liberty-minded citizens is a confusion over rights.
The petitioners are cool with protecting people’s right to privacy in their home, to do whatever they please as long as they hurt no other. But they also feel that some segments of society are entitled to the resources or support of others, as embodied by Mr. Polis’s support for “paid parental leave.”
The former are negative rights, many of which are the foundation for prosperity (property, trade, etc.), while the latter are positive rights. The two cannot coexist, as the latter necessarily intrude upon the former.
Whether they’re the party faithful, trying to reform the GOP from the inside, or so irritated with the process that they lodge a protest vote for Democrats, libertarians know that “all efforts by government to redistribute wealth…are improper to a free society” and that “agreements between private employers and employees…should not be encumbered by government-mandated benefits.” Their strength of principle is impervious to flaccid linguistic sleights of hand.
The editors of The Babylon Bee have a podcast, and they invited Bob on to discuss the economic situation. Then, they asked a series of fun questions, against the backdrop of their shared Christianity.
I’d like to discuss some of Nozick’s comments on time preference in his paper “On Austrian Methodology,” but there is an obstacle to doing so. Nozick is fond of intricate arguments, and the section of the paper on time preference is especially difficult. For that reason, I’m going to concentrate on only a few of the many points he addresses.
Nozick criticizes this passage from Human Action, which he rightly recognizes to be vital for Mises’s argument for time preference:
Time preference is a categorial requisite of human action. No mode of action can be thought of in which satisfaction within a nearer period of the future is not—other things being equal—preferred to that in a later period. The very act of gratifying a desire implies that gratification at the present instant is preferred to that at a later instant. He who consumes a nonperishable good instead of postponing consumption for an indefinite later moment thereby reveals a higher valuation of present satisfaction as compared with later satisfaction. If he were not to prefer satisfaction in a nearer period of the future to that in a remoter period, he would never consume and so satisfy wants. He would always accumulate, he would never consume and enjoy. He would not consume today, but he would not consume tomorrow either, as the morrow would confront him with the same alternative. (p. 796)
Nozick raises three objections to what Mises says. First, “a person might be indifferent between doing some act now and doing it later, and do it now. (‘Why not do it now?’) So action now can show time-(weak) preference, but it need not show time-(strong) preference.” By “weak preference,” Nozick means that you weakly prefer A to B if and only if either you prefer A to B or you are indifferent between them. This notion is standard in neoclassical economics.
The problem with this objection is straightforward. Mises denies that indifference can be demonstrated in action. According to him, if you choose A over B, then your choice shows that you prefer A to B. Your “preference scale” exists only at the moment of choice. Your “demonstrated preference” is just what you do in fact choose on a given occasion. Nozick is well aware that Mises holds this view but nevertheless criticizes him on the basis of a view that Mises explicitly rejects.
And Mises is right to do so. We have a commonsense understanding of choosing something because you would rather have it than any available alternative of which you are aware. If you don’t have this understanding, you are clearly missing something, and it turns out that Nozick’s concept of preference doesn’t allow him to articulate this understanding. The best he can offer is “strong preference,” where you strongly prefer A to B if and only if you weakly prefer A to B and it’s not the case that you weakly prefer B to A. But “strong preference” doesn’t tell us what it means to prefer something. Indeed, “weak preference” is parasitic on that very notion, since you have to understand what it means to prefer A to B in order to understand the definition: you weakly prefer A to B if you prefer A to B or are indifferent between them.
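The two notions contrasted above can be written compactly. As a sketch (the relation symbols are mine, not Nozick's): write $\succ$ for preference and $\sim$ for indifference.

```latex
% Weak preference, defined from preference and indifference:
A \succeq B \iff (A \succ B) \lor (A \sim B)

% "Strong preference," defined in turn from weak preference:
A \succ_{s} B \iff (A \succeq B) \land \lnot (B \succeq A)
```

The circularity is visible in the first line: $\succeq$ is itself defined in terms of $\succ$, so "strong preference," built out of $\succeq$, cannot supply the primitive notion of preferring A to B that the definition already presupposes.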
Nozick’s next point fares no better. He says:
A person might act now to get a particular satisfaction, not caring whether it comes sooner or later. He acts now because the option of getting the satisfaction is a fleeting one which will not be available later. Thus, a person can have a reason, other than time preference, to act now; to prefer something sooner rather than later is not necessary in order to act now.
Here the problem lies in a simple oversight. Mises is talking about “nonperishable goods,” which in this context means goods that the actor has a choice of consuming now or at a later time. Satisfactions that are either “now or never” are outside the scope of the argument.
Nozick’s final point rests on a more fundamental misunderstanding. He says:
The fact that we act constantly cannot show that we always have time-preference for all goods. At most, it shows that when a person acts (and the option also is available later) he has time-preference then for the particular good that he then acts to get. This is compatible with an alternation of periods of time-preference for good G, and periods of no time-preference for good G. The person acts to get G during one of the periods of time-preference for G. This is considerably weaker than general time preference. (Emphasis in original.)
Nozick is of course right that when you prefer getting a good now to later, you are demonstrating time preference only for that particular good now. But for Austrians, preferences exist only in actions that occur at particular times. When Nozick says that we prefer G now to G in the future when we act, but that maybe we have no time preference for G when we don’t act, this is, from the Austrian perspective, vacuous. We don’t have preferences when we aren’t acting.
Nozick has more to say about time preference. He offers an evolutionary account of how time preference might have arisen and uses this account to raise a problem of “double discounting” for the standard Austrian position. I hope to address these points in another article, but I ought to issue a warning. Nozick’s discussion is even more convoluted than what I’ve been talking about in this article.
I’d like to conclude by underlining a basic difference between Nozick, on the one hand, and Mises and Rothbard, on the other. Nozick is usually concerned with counterfactuals. Preference, for example, involves not just what you do choose but what you would choose in various hypothetical circumstances. For Mises and Rothbard, by contrast, it is the individual act that matters. As Goethe says, “Im Anfang war die Tat!” (In the beginning was the deed.)
The coronavirus pandemic, and the resulting government response, have created one of the greatest disruptions to daily life in modern American history. With much of the country now focused on “reopening,” pundits and policymakers have turned their attention to what the “new normal” of a post-COVID America looks like. Although much of that attention has focused on the future of massive public gatherings and changes to American work environments, the most significant change may be to Americans’ faith in their governing structures.
The policy response to the coronavirus has already produced dramatic changes. On the positive side, both federal agencies and state governments have waived or altered many traditional regulatory requirements to bypass disastrous delays in medical testing and to better facilitate delivery of services. On the negative side, the Federal Reserve has massively escalated its interventionist policies, highlighting how radical these institutions have truly become.
Beyond specific policies, however, the most significant change may be the degree to which the COVID response alters the public view of centralized political power. In particular, three unusual aspects of this pandemic may be precursors to significant realignments going forward.
State Governments Have Taken the Lead in Public Policy
In spite of rhetoric from President Trump about the White House having “full authority” over state governments, the current administration has been largely content with allowing governors to lead the way in responding to the pandemic. This has led to significant differences in the severity of economic lockdowns, testing behavior, and even authorized treatments between states.
Given the hypertribalism of modern politics, it’s easy to simplify this into a typical “red state-blue state” division, but this overlooks significant differences in approach from governors and state legislatures within the same party. For example, although Michigan, New York, and California are high-profile examples of blue states with strong lockdown policies, Colorado is an example of a state with a Democratic governor who has largely followed the reopening guidance promoted by the Trump administration.
The significant differences in policy between states (such as New York and Florida) have meant greater attention, from both the press and the voters subjected to unprecedented restrictions, toward their state capitals and away from the usual circus of Washington. Many governors have seemed to relish this shift, such as Governor Gavin Newsom of California, who proudly declared himself leader of a “nation-state.” The power of state governments has even led some governors to engage in the sort of executive overreach that has become the norm at the national level, such as Colorado governor Jared Polis taking control of federal aid money against the wishes of the state legislature.
The stark contrast between state responses, coupled with differences in outcomes—both in terms of economic and public health measures—is an important lesson in the power of federalism that has been eroded in American politics. The precedents being set today may further embolden the growing trend of state rejection of federal authority that we’ve seen with such issues as drug laws and immigration enforcement. When we factor in the hyperpartisan environment, and a predictably polarizing presidential election later this year, the future of American politics may increasingly be defined by a battle of federal and state authority.
The State Battle over a Federal Bailout
As Ryan McMaken has noted, state budgets are going to face major shortfalls as the devastating impact of lockdowns limits tax revenue. Although no state will be spared from the economic fallout, this revenue shock will be especially devastating for those already on unsound fiscal footing.
Already we’ve seen this begin to play out in Washington, with Republicans pushing back strongly against Democrat calls for a $195 billion bailout of state and local governments. The Wall Street Journal this week summarized this growing conflict with the question, “Why Should Florida Bail Out New York?,” highlighting the differences in governing philosophy and economic health between the two similarly sized states.
Although it’s obvious that Congress has no stomach for any sort of fiscal restraint when it comes to national economic aid or stimulus programs, the more the debate focuses on state—and partisan—differences, the more we are likely to see the federal representatives of fiscally prudent states hunker down against bailouts in their own interest. Already we’ve seen blue state leaders like Governor Newsom threaten their own version of Washington Monument syndrome, stating that police and first responders will be the first victims if Washington doesn’t bend to his bailout demands.
This could easily erupt into the sort of state-on-state legislative battle we haven’t seen play out in Washington in a long time.
Shared Experience and National Unity
Lastly, one of the aspects of the coronavirus that has driven a lot of the radical differences in narrative and policy between states has been the difference in its severity around the country. In past national tragedies, there has usually been a trend toward national unity, as the event created a common experience among all Americans. Although New Yorkers dealing with the aftermath of 9/11 or Gulf Coast residents during Hurricane Katrina experienced these events in a more personal and intimate way, everyone witnessed them on television and with a similar appreciation for their significance.
This is clearly not the same with the coronavirus.
A good friend of mine, a nurse in northern Louisiana, visited recently, and he was shocked at how laxly residents of north Florida were taking the virus. Although the city he currently lives in is very red and culturally Southern, it was an early hot spot for COVID-19, and the scars from that experience had deeply affected much of the community. In Panama City Beach, Florida, the greatest fears in the last few months came from the impact that lockdowns were having on a local economy so dependent on tourism and the service industry.
Considering that common experiences can shape national unity far more powerfully than government institutions can, it’s possible that the cultural consequences of the coronavirus will fuel divisions between states in a way that disagreements on marijuana laws never could. It is both reasonable and natural for a resident of New York City, which has suffered nearly twenty thousand coronavirus-related deaths, to be far more traumatized by the virus than residents of Houston, which has suffered fewer than two hundred.
Considering that a major question for political fallout going forward will be the degree to which the economic damage inflicted on this country was “justified” by the threat of the virus, the differences in experience make it unlikely that the coronavirus will build anything resembling a national consensus.
The lasting impact of the coronavirus going forward—alongside the devastating economic consequences that we have yet to truly face—could be deepening a regional, cultural, and political polarization that has been building in recent years. These are also precisely the sort of differences that are only escalated by centralized political power, and that will only be fueled by the upcoming theater of the 2020 presidential election.
Although national tragedies tend to bring a country together, it seems clear that the coronavirus will leave America as divided as it has ever been in modern history.
Some claim "the rich" will be fine—or even better off—after the COVID panic destroys the economy for most of us. But there's a problem: the wealthy depend heavily on an economy fueled by the production and consumption of all workers and entrepreneurs.
This Audio Mises Wire is generously sponsored by Christopher Condon. Narrated by Millian Quinteros.
Original Article: "We’re All in This Together. But Not in the Way You Think."
Thanks to past interventions, the economy is now rife with malinvestments and prices that don't reflect real demand. The solution is to allow deflation and other types of painful readjustment. Otherwise true growth will elude us.
Original Article: "How Government Intervention Triggers Depressions"
Since the onset of the COVID-19 crisis, Americans have been told countless times that public policy was based on Science (with a capital S) and that the public should just obey the scientists.
But the accuracy of their predictions, and the consequent appropriateness of the resulting policies, seem to have been little better than that of Ask Dr. Science, with its 0 percent accuracy rate.
In fact, the massive errors in measurement that have been part and parcel of the scientific COVID Kops show should bring us back to what Lord Kelvin said about science and measurement: “If you cannot measure it, then it is not science” and “your theory is apt to be based more upon imagination than upon knowledge.”
To get an idea of how serious the COVID measurement problems are, one need only look to the two medical experts most commonly appearing on our TV screens. Dr. Anthony Fauci recently testified that he believes the COVID-19 death toll is “almost certainly higher” than reported, because “there may have been people who died at home who did have COVID, who were not counted as COVID because they never really got to the hospital.” In contrast, the Washington Post recently reported that Deborah Birx believes that the Centers for Disease Control and Prevention’s (CDC) accounting system is double counting some cases, boosting case and mortality measurements “by as much as 25 percent.” And what could be a clearer statement of the measurement problems than Birx’s assertion that “there is nothing from the CDC that I can trust”?
The mangled measurements have been with us from the beginning of the COVID crisis.
Mild cases were (and still are) frequently undetected. That means that we have undercounted how many people have (or have had) the disease. It also means that we have overestimated the risk of contagion, which is perhaps the most crucial determinant of COVID’s risk to others.
Early on, tests were in very limited supply, and many of the first ones were faulty. So, as more people are tested, especially systematically rather than just those already suspected of having COVID, we must disentangle how much of the uptick in reported cases simply reflects broader testing (which also implies a downward adjustment of the odds of death and the risk of spread) from any genuine increase in the incidence of the disease. When antibody testing began, it suggested that still more people had already been exposed, changing the critical numbers again. And then there are questions about herd immunity, including whether sheltering at home actually undermines its development. Similarly, the constantly updated counts of COVID cases in particular areas overstate the risk to others, since those who have recovered and are no longer a potential source of contagion are still included in those counts.
This continuing evolution of what Science tells us reveals that what we are being told at any given time is highly likely to be revised, if not reversed, soon, and perhaps repeatedly. That should make us leery of all claims, including forecasts, premised on the truth of current Science. And if that weren’t bad enough, even the accuracy of the basic data has been compromised.
In some places, reported COVID deaths have included everyone who has it when they die, overstating (to a degree that we can’t know without more detailed information than we now have, and may ever have, for many cases) COVID risks. The director of the Illinois Department of Public Health, Dr. Ngozi Ezike, illustrated the problem when she said, “if you were in hospice and had already been given a few weeks to live, and then you also were found to have COVID, that would be counted as a COVID death….[E]ven if you died of clear alternative cause, but you had COVID at the same time, it’s still listed as a COVID death.” Further, the miscounting is often not due to judgments about shades of gray. For instance, Colorado counted a man who died of acute alcohol poisoning (his blood alcohol content (BAC) was 0.55, when 0.30 is considered lethal) as a COVID death. And when the state recounted to include only deaths caused by COVID, its total fell from 1,150 to only 878.
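The scale of the Colorado revision is worth making explicit (the arithmetic below is mine, computed from the figures just quoted):

```latex
\frac{1150 - 878}{1150} \approx 0.24
```

That is, roughly 24 percent of the originally reported deaths disappeared once only deaths actually caused by COVID were counted, strikingly close to the “as much as 25 percent” overcount that Birx attributed to the CDC’s accounting.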
New York has also counted as COVID deaths cases involving flulike symptoms, even when postmortem COVID tests have been negative. CDC guidance explicitly advises that “suspected” cases, even in the absence of test evidence, can be reported as COVID deaths. That is why the New York Times could report that on April 21 the city death toll was augmented by “3,700 additional people who were presumed to have died of the coronavirus but had never tested positive.”
Then there is also lots of evidence that bears on appropriate COVID policy. For instance, Charles Murray has demonstrated that “The relationship of population density to the spread of the coronavirus creates sets of policy options that are radically different in high-density and low-density areas,” so that “too many people in high places, in government and the media, have been acting as if there is a right and moral policy toward the pandemic that applies throughout America. That’s wrong.”
Randal O’Toole has also cited studies finding that “mass transportation systems offer an effective way of accelerating the spread of infectious diseases,” that “people who use mass transit were nearly six times more likely to have acute respiratory infections than those who don’t,” that New York City subways were “a major disseminator—if not the principal transmission vehicle—of coronavirus infection,” and that there is “a strong state‐by‐state correlation between transit and coronavirus,” to ask why mass transit systems were not shuttered to stop the harm. Elsewhere, he noted that “The head of New York’s Metropolitan Transit Authority was infected by the virus and the head of New Jersey Transit actually died from it.”
All this evidence reveals that the COVID Science and conclusions Americans were supposed to follow unquestioningly have been incredibly incomplete or wrong, with the stability of quicksand. Such Science is too frail a reed to depend on in making policies with multitrillion-dollar price tags. What it does support is much more humility, reflecting Kelvin’s recognition that:
When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.