The video is here. (It may not be visible on mobile devices.) For my testimony, jump ahead to 13:45 and again to 51:20.
The written testimony is here.
As national and state government debt levels have risen, governments are facing the fact that they must soon default on their obligations, cut their spending, increase their revenue, or adopt some combination of the three.
Default is an intriguing option, but it rarely occurs on government debt, even when the alternatives are painful. That is because default sharply increases the cost of future borrowing, and possibly because creditors enjoy a lot of political influence.
That leaves tax increases and government spending cuts, or “fiscal austerity” as it is often called. Austerity seems to be the opposite of fiscal stimulus policies, the tax cuts and spending increases that governments sometimes carry out for the stated purpose of trying to expand the economy.
But economic theory and experience show that the growth effects of both austerity and stimulus depend on the form they take.
For brevity and simplicity, I focus on the employment effects of fiscal policies and partition the public into two groups: high earners and low earners. In this view, four policy options (plus their combinations) might qualify as austerity, depending on whether taxes or spending are adjusted and whether high earners or low earners experience those changes.
Both raising taxes on high earners and cutting subsidies — food stamps, for example — for low earners would qualify as austerity, but they have different employment effects.
The typical reason that people work is to have earnings – income from their jobs that they can use to support themselves and their families. This monetary reward is the difference between the resources a person has available if she works and what she has available if she doesn’t work. The better the government treats low earners, the less incentive people have to keep their earnings high.
Thus, both raising taxes on high earners and cutting subsidies for low earners would reduce the government deficit, but the former would reduce employment while the latter would increase it. For the same reasons, raising subsidies and cutting taxes for low earners, as the 2009 American Recovery and Reinvestment Act did, would both add to the deficit and reduce employment.
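The incentive logic above can be made concrete with a small sketch. The dollar figures here are illustrative assumptions of mine, not estimates from the text: the reward to working is after-tax earnings minus whatever subsidy is forgone by working.

```python
# Hypothetical illustration of the reward to working: the gap between
# resources available when working and resources available when not.
def reward_to_work(earnings, tax_rate, subsidy_if_not_working):
    """Net gain from working: after-tax earnings minus the subsidy forgone."""
    return earnings * (1 - tax_rate) - subsidy_if_not_working

# Baseline: $30,000 earnings, 20% tax, $8,000 subsidy when not working.
baseline = reward_to_work(30_000, 0.20, 8_000)      # 16,000

# Austerity via subsidy cut (to $6,000): the reward to work rises.
subsidy_cut = reward_to_work(30_000, 0.20, 6_000)   # 18,000

# Austerity via a tax increase (to 25%): the reward to work falls.
tax_increase = reward_to_work(30_000, 0.25, 8_000)  # 14,500

print(baseline, subsidy_cut, tax_increase)
```

Both policy options shrink the deficit by the same accounting logic, but they push the reward to working in opposite directions, which is the point of the paragraph above.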
From this perspective it might appear that the best way for a government to put austerity into effect is by cutting subsidies and raising taxes for low earners, and the best way to carry out stimulus is by cutting taxes for high earners. But, of course, there is no free lunch. The same policies that increase the incentives to work also redistribute from those with low earnings to those with high earnings.
Thus, while the American Recovery and Reinvestment Act probably reduced employment, it deserves credit for giving resources to those in poverty.
The interplay of the social good of helping the poor and providing them incentives creates tough policy choices for governments looking to reduce their deficits. The more resources available to those living below the poverty line, the less incentive they have to raise their income above that line.
More research is needed to quantify work incentives in various countries, but I suspect that Western European nations have been pursuing exactly such redistributive austerity and will continue to do so, which means that they can expect austerity to depress their economies.
For home mortgage borrowers who appear to be headed for foreclosure, mortgage programs typically recommend a revised mortgage payment amount that is lower than the payment specified in the original mortgage contract. The new payment is set in proportion to the borrower’s income at the time of the modification.
The more the borrower is earning at the time of the modification, the more she will be required to pay her lender over several years. Typically, each additional $100 a borrower is earning (on an annual basis) at the time of the modification adds $31 to the annual mortgage payment recommended by the Treasury’s mortgage modification guidelines. (The modification is not revisited over time; income is examined once and payments are set accordingly.)
The Home Affordable Modification Program (HAMP), and its predecessor at the Federal Deposit Insurance Corporation, usually modified the mortgage payments by adjusting the loan interest rate over the subsequent five to seven years.
Thus, assuming a five-year modification time frame, each $100 earned at the time of the modification would add $155 to the borrower’s total mortgage payments, or about $130 in present value.
It is done this way with the intention of creating a monthly payment that is “affordable” (defined as 31 percent of income). But there is a flip side: the higher the borrower’s earnings, the higher the resulting payment. To an economist, that is the equivalent of a 130 percent marginal tax rate: a $130 payment differential solely as a consequence of earning an extra $100.
This year the Treasury decided to encourage changes in this procedure. In particular, it will now subsidize lenders for modifying mortgage principal balances rather than interest payments. Because the principal balance determines payments for the life of the loan, in effect Treasury is asking lenders to modify payments for the life of the loan and not just five to seven years.
Take a 30-year mortgage originated in 2006: it has 24 years left. Under the new rules, an extra $100 earned by the borrower at the time of modification costs her $31 a year for 24 years, which amounts to a total of about $390 in present value. That’s a 390 percent marginal tax rate that applies to borrowers who are having, or expect to have, their mortgage modified.
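The present-value figures above follow from simple annuity arithmetic. A sketch, assuming a 6 percent annual discount rate (the discount rate is my assumption; the $31-per-$100 payment rule comes from the modification guidelines described above):

```python
# Present value of the extra mortgage payments triggered by an extra
# $100 of annual income at modification time. The 6% discount rate is
# an assumption chosen for illustration.
def pv_of_annuity(payment, years, rate=0.06):
    """Present value of a constant annual payment received for `years` years."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

extra_annual_payment = 31  # added payment per extra $100 of annual income

# Five-year interest-rate modification: $31 a year for 5 years.
pv_5 = pv_of_annuity(extra_annual_payment, 5)    # about $131

# Principal modification on a loan with 24 years remaining.
pv_24 = pv_of_annuity(extra_annual_payment, 24)  # about $389

print(round(pv_5), round(pv_24))
```

Dividing each present value by the extra $100 earned gives the implied marginal tax rates of roughly 130 percent and 390 percent cited in the text.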
Economists agree that marginal income tax rates of 100 percent or more are destructive to the labor market and strongly encourage corruption. The best we can hope for is that people subject to such confiscatory marginal tax rates remain oblivious to the incentives that Treasury is presenting them.
Marginal tax rates in excess of 100 percent are also present in antipoverty programs, especially in what is known as the Medicaid notch, where an additional $1 of income can mean the complete loss of coverage. In a sense, the Medicaid notch is a marginal tax rate in the thousands of percent, because beneficiaries lose benefits valued in thousands of dollars as a consequence of earning an additional $1.
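The notch can be illustrated with a small sketch. The $5,000 benefit value and $20,000 eligibility cutoff below are hypothetical numbers of mine, not actual Medicaid parameters:

```python
# Hypothetical Medicaid notch: one extra dollar of income eliminates
# the entire benefit. Both parameters below are illustrative assumptions.
MEDICAID_VALUE = 5_000       # assumed dollar value of Medicaid coverage
ELIGIBILITY_CUTOFF = 20_000  # assumed income limit for eligibility

def net_resources(income):
    """Income plus Medicaid benefits, which vanish above the cutoff."""
    benefit = MEDICAID_VALUE if income <= ELIGIBILITY_CUTOFF else 0
    return income + benefit

# Earning one extra dollar at the cutoff loses the whole benefit:
change_in_net = net_resources(ELIGIBILITY_CUTOFF + 1) - net_resources(ELIGIBILITY_CUTOFF)
marginal_tax = 1 - change_in_net  # tax paid per extra dollar earned

print(change_in_net, marginal_tax)  # -4999, 5000 (i.e., 500,000 percent)
```

Under these assumed parameters, the extra dollar of earnings leaves the family $4,999 poorer, a marginal tax rate in the hundreds of thousands of percent at the notch itself.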
But while a few thousand dollars are at stake with one family’s Medicaid coverage, tens of thousands of dollars, sometimes hundreds of thousands, are at stake in each mortgage modification transaction.
For this reason, I think Treasury officials have earned the award for the largest marginal income tax rate ever. Let’s hope they are not in training to break their record yet again.
A recent study by my University of Chicago colleague Erik Hurst and Profs. Orazio Attanasio and Luigi Pistaferri attempts some repairs on the little consumption data we have, in order to get a better understanding of how living standards relate to income.
Economists believe that maintaining and enhancing one’s standard of living are among the most important motivators of economic behavior. A few people work purely for the sake of working, but many others work with the primary intention of providing for themselves and their families. A few people save purely for the joy of saving, but many others save so that they can maintain their living standard in the future, when they might be unemployed or retired. One criticism of the Soviet Union’s economic approach was that while it could create a vast military infrastructure it generated relatively little for consumers.
For these reasons, consumption may be a more important economic indicator than even gross domestic product, which is the total production in the economy.
Although much attention is accorded to the various monthly reports of United States consumer-spending statistics, we economists have surprisingly little information about the amount and nature of consumer spending. A large number of household surveys carefully measure family incomes and their various sources but offer little or no indication of how much those households spend.
One reflection of the lack of consumer spending data is that we know a lot more about the income situation of the poor than we know about what they are spending and what their standards of living are.
We also know a lot about how incomes have evolved over time, especially how less-skilled workers saw little growth in their earning power over the last several decades while skilled workers saw sharp increases in their incomes. These trends are sometimes described as growing wage or income “inequality” – that is, a growing gap between the wages of people above average and the wages of those below average.
Less is known about the equality or inequality of living standards, because there is less data on consumer spending by households. To make matters worse, some of the relatively little household spending data we have appears to be ill suited for measuring the amount of inequality in living standards and how it has changed over time.
Among other things, higher-income households appear to underreport their consumer spending relative to low-income households and to an increasing degree over time.
The study by Professors Hurst, Attanasio and Pistaferri tried to correct for these measurement challenges by looking at, among other things, the gaps between high- and low-income households in terms of their spending on entertainment and on food and in terms of the number of vehicles they own.
The results suggest that consumer-spending inequality has increased over the last 30 years, much like the well-known increases in wage and income inequality.
In previous posts I have shown how women who are heads of households or spouses have had different labor market experiences since 2007, depending on their marital status. Employment rates fell more for unmarried women, largely because they returned to work more slowly after layoffs than married women did.
I suggested that safety-net program expansions were an important reason for the different experiences of unmarried women, because they reduce the reward for working, especially among unmarried women.
The chart below puts these changes in a broad historical perspective. I used annual tables from the Bureau of Labor Statistics on employment among women by marital status for all women 16 and over (the tables are not separated for heads of households and spouses, as I did for my 2007-10 analysis).
From 1980 through the mid-1990s, the fraction of unmarried women who worked increased less than the fraction of married women working, which is why the series shown in the chart declines over that time frame. Then, coincident with 1996 federal legislation reforming welfare, the trend sharply reversed itself.
Some main components of the 1996 welfare reform were to require that a significant fraction of welfare recipients be working and to limit the amount of time that households could receive welfare. (Welfare was called Aid to Families with Dependent Children before the law, and Temporary Assistance for Needy Families since its enactment.) One intention of the reform was to give welfare participants more incentives to work and maintain their own living standards.
Economists expected that the reforms would increase work among unmarried women, who had been disproportionately represented among the welfare caseloads.
But as Rebecca M. Blank, now the deputy secretary of commerce, wrote in a review of welfare studies published in 2002 about the “stunning changes in behavior” she found: “Nobody of any political persuasion predicted or would have believed possible the magnitude of change that occurred in the behavior of low-income single-parent families” in the 1990s. Some of those changes were clearly consequences of the welfare rule changes.
Although the welfare program per se has not changed much since 2007, a number of safety-net programs have changed, and changed in the direction of significantly reducing the reward for working among unmarried women who are heads of households.
In this sense, the 1996 welfare reform was reversed over the last couple of years; while the 1996 law increased the reward for working, the recent expansions have reduced it.
The farm bill of 2002 began a gentle push in the direction of more help for the poor – and a lesser reward for work – as it provided for state-level expansions of the food stamp program, a program less affected than welfare was by the 1996 work requirements.
The more drastic changes occurred in 2008 and 2009 when the food stamp program was expanded again (twice), and unemployment insurance was expanded in several dimensions. It was at these times that unmarried women began again to lose ground to married women in terms of their propensities to be employed.
Policy experts still debate whether the 1996 welfare reforms were a good idea. For those who prefer an approach that offers more help to the poor even if it provides fewer incentives to work, the good news is that several pieces of legislation since 2007 have expanded the social safety net and effectively reversed the 1996 law.