Monday, November 30, 2015

Douglass North and Institutions

Douglass North, who shared the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 1993 with Robert W. Fogel "for having renewed research in economic history by applying economic theory and quantitative methods in order to explain economic and institutional change," died last week. Those looking for an accessible overview of his work might start with a couple of essays in the Journal of Economic Perspectives. North wrote an article called "Institutions" for the Winter 1991 issue. Claudia Goldin discussed the intellectual legacies of North and Fogel after they won the Nobel prize in "Cliometrics and the Nobel" in the Spring 1995 issue.

As the one-word title of North's JEP article implies, he is perhaps best known for broadening the view of economics beyond the specifics of producing, buying, and selling, and for emphasizing how a broader institutional context sets the stage for economic interactions. In her essay, Goldin traces this focus on the importance of institutions back to some of North's early work on transportation costs and economic growth. Goldin wrote (footnotes omitted): 

In the 1950s, a primarily theoretical literature emerged conjecturing that economic growth could be enhanced by decreased transport costs, at least under special circumstances. Even when productivity change is moving at a snail's pace in the goods-producing sectors, a decrease in the price of transportation can increase national income substantially. Developing economies were advised to increase certain capital expenditures if they wanted to grow, especially "infrastructure" and, most especially, transportation. How decreased transport costs affected the economic growth of the United States—the great success story—was a natural. ...
Douglass North's best-known research in transportation concerns ocean shipping from 1600 to 1860. The costs of ocean shipping decreased during much of the period, more so in the nineteenth century than before. A large part of the decrease, argued North (1968), came from an increase in total factor productivity. But the question was whether total factor productivity gains were rooted in technological advances or some other innovation. North found that from 1600 to 1784 productivity advanced at a slow rate, but that virtually all of the gains were due to decreased crew size and less time spent idle in ports. For the period from 1814 to 1860, productivity increased faster, at almost 10 times the annual rate in the previous two centuries. Virtually all the gain here was due to an increase in the size of ships and to their greater load factor. For most of the two and a half centuries considered, goods coming from the New World to the Old World were bulky raw materials, whereas those moving in the other direction were compact manufactured goods. In the 1840s and 1850s, however, there was a large increase in immigration, which meant that ships returned to the New World with cargo, not in ballast. The load factor thereby increased. 
The surprising finding is that for both periods, technological change was less responsible for the increase in productivity than were other innovations—a sharp reduction in piracy and organizational changes that increased round-trips per year by a factor of three. With less piracy, ships needed fewer crew members and could carry more goods and fewer armaments. With less need to arm ships, technologically superior vessels could be used. For example, the Dutch "flute," a sailing vessel with a rounded stern, had been used in the Baltic long before the modified flute made it to the ocean. But the reason these superior vessels were used in the Baltic was that piracy had been significantly reduced there, and the flute generally carried no armament. The important point for economic history and for North's intellectual development is that institutions interact with technology. One without the other does not produce economic growth. North learned the lesson well and shifted his attention for the next 25 years to a study of institutions.
North also pointed out how groups in power could use institutions to perpetuate their authority, and that such groups had an incentive to act in this way and hold on to power, even if the overall effects on growth were negative. For example, here's Goldin describing North's analysis of institutions in the US slave-holding South before the Civil War. 

According to Douglass North, the roots of southern stagnation are to be found in the geographic patterns of trade in the antebellum period. The South, using slave labor, grew cotton and exported it to the American North and to Britain. With the receipts from its northern shipments it purchased foodstuffs from the Midwest and industrial goods from the North. With its receipts from European shipments, it purchased luxury items and other industrial wares. Little was ploughed back into the South as internal improvements. Schooling was denied slaves and was poorly provided to southerners in general. Cities, those generators of agglomeration economies, were rare in the South. Innovation was thereby stifled. 
The North ran a very different ship. With far more equality of income and wealth, northerners purchased goods produced by local tradesmen and local firms. Its funds were ploughed back into local industry and internal improvements. Its people were the best educated in the world. The North established institutions that served an egalitarian society and that furthered an industrial and growing region. The South had norms that reinforced a caste and race-based society and that inhibited growth at the service of a master class. Such institutions have long lives. 
The message, repeated in many of Douglass North's later works, is that when institutions serve to enrich one group (masters, feudal lords) at the expense of another (slaves, serfs), it does not matter that these institutions also reduce the potential income of the elite. Pareto-improving trades are generally impossible between the two groups, and thus there is no assurance that more efficient institutions will drive out less efficient ones.
In his JEP essay, North begins with a succinct summary of his view of the centrality of institutional development in understanding economic history and the performance of economies over time. North wrote: 
Institutions are the humanly devised constraints that structure political, economic and social interaction. They consist of both informal constraints (sanctions, taboos, customs, traditions, and codes of conduct), and formal rules (constitutions, laws, property rights). Throughout history, institutions have been devised by human beings to create order and reduce uncertainty in exchange. Together with the standard constraints of economics they define the choice set and therefore determine transaction and production costs and hence the profitability and feasibility of engaging in economic activity. They evolve incrementally, connecting the past with the present and the future; history in consequence is largely a story of institutional evolution in which the historical performance of economies can only be understood as a part of a sequential story. Institutions provide the incentive structure of an economy; as that structure evolves, it shapes the direction of economic change towards growth, stagnation, or decline.
In some ways, North's emphasis on institutions has become so embedded in economic thinking that it runs the risk of sounding obvious. By now, everyone is familiar with the big idea that institutional traits like property rights and the rule of law play a central role in economic performance. But every big insight--like "institutions matter"--sounds obvious when it is raised to a high level of abstraction. The more lasting insights come from a double process: first digging down into the specifics of different times and places so that you can be specific about which institutions mattered at which times and for which reasons, and then taking the next step of looking for commonalities and patterns across the landscape of these specific studies. North led the way in showing how to do these kinds of studies, and did far more than his fair share of them. But as North wrote at the end of his JEP essay in 1991:
The foregoing comparative sketch probably raises more questions than it answers about institutions and the role that they play in the performance of economies. Under what conditions does a path get reversed, like the revival of Spain in modern times? What is it about informal constraints that gives them such a pervasive influence upon the long-run character of economies? What is the relationship between formal and informal constraints? How does an economy develop the informal constraints that make individuals constrain their behavior so that they make political and judicial systems effective forces for third party enforcement? Clearly we have a long way to go for complete answers, but the modern study of institutions offers the promise of dramatic new understanding of economic performance and economic change. 
(Full disclosure: I've worked as Managing Editor of the JEP since the first issue in 1987. All JEP articles from the first issue to the present are freely available online courtesy of the journal's publisher, the American Economic Association.) 

Friday, November 27, 2015

Capitalism for Growth, Government for Fairness

When I hear discussions of how to encourage economic growth, along with equality and fairness, I sometimes feel as if the discussants are in the grip of a category confusion, like someone who rinses their vegetables in the shower and then tries to bathe in the kitchen sink. Here's the confusion as expressed in an October 1990 opinion column by Donald Kaul, who was a prominent opinion columnist, mainly with the Des Moines Register, from the 1970s through the 1990s. Kaul wrote: 
We have come to rely upon capitalism for justice and the government for economic stimulation, precisely the opposite of what reason would suggest. Capitalism does not produce justice, any more than knife fights do. It produces winners and energy and growth. It is the job of government to channel that energy and growth into socially useful avenues, without stifling what it seeks to channel. That's the basic problem of our form of government: how to achieve a balance between economic vitality and justice. It is a problem that we increasingly ignore.
In the modern version of this category confusion, a number of politicians and my fellow citizens seem to view it as the role of companies to provide justice and fairness. Their policy prescriptions seem to be all about how companies should be held responsible for providing higher wages, a more equal distribution of wages, health insurance, pensions and retirement accounts, parental leave and sick leave, job training, healthy foods, affordable housing, a sufficient number of parking spaces, cleaning up the environment, paying more taxes, and so on. 

Meanwhile, when the discussion turns to encouraging growth in the US economy, the topics that seem to come up most often are how the government can encourage growth. Sometimes the focus is on how the Federal Reserve should be boosting growth through its monetary policies. Sometimes the focus is on how government should be boosting growth through tax cuts or spending boosts, either in general terms or by subsidizing preferred sectors of the economy like noncarbon sources of energy and building more roads and bridges.  

Striking the balance between "economic vitality and justice" (in Kaul's phrase) can be an intertwined and delicate business. But framing the discussion as if government is supposed to provide growth and companies are supposed to provide fairness is topsy-turvy, and blurs the lines of responsibility in a way that benefits neither economic growth nor concerns of fairness and justice. 

Thursday, November 26, 2015

Thanksgiving Tidbits: George Washington, Sarah J. Hale, Abraham Lincoln

The first presidential proclamation of Thanksgiving as a national holiday was issued by George Washington on October 3, 1789. But it was a one-time event. Individual states (especially those in New England) continued to issue Thanksgiving proclamations on various days in the decades to come. But it wasn't until 1863 that a magazine editor named Sarah Josepha Hale, after 15 years of letter-writing, prompted Abraham Lincoln to designate the last Thursday in November as a national holiday--a pattern which then continued into the future.

An original and thus hard-to-read version of George Washington's Thanksgiving proclamation can be viewed through the Library of Congress website. The economist in me was intrigued to notice that some of the causes for giving of thanks included "the means we have of acquiring and diffusing useful knowledge ... the encrease of science among them and us—and generally to grant unto all Mankind such a degree of temporal prosperity as he alone knows to be best."

Also, the original Thanksgiving proclamation was not without some controversy and dissent in the House of Representatives, as reported by the Papers of George Washington website at the University of Virginia.

The House was not unanimous in its determination to give thanks. Aedanus Burke of South Carolina objected that he “did not like this mimicking of European customs, where they made a mere mockery of thanksgivings.” Thomas Tudor Tucker “thought the House had no business to interfere in a matter which did not concern them. Why should the President direct the people to do what, perhaps, they have no mind to do? They may not be inclined to return thanks for a Constitution until they have experienced that it promotes their safety and happiness. We do not yet know but they may have reason to be dissatisfied with the effects it has already produced; but whether this be so or not, it is a business with which Congress have nothing to do; it is a religious matter, and, as such, is proscribed to us. If a day of thanksgiving must take place, let it be done by the authority of the several States.”

Here's the transcript of George Washington's Thanksgiving proclamation from the National Archives.
Thanksgiving Proclamation
By the President of the United States of America. a Proclamation.
Whereas it is the duty of all Nations to acknowledge the providence of Almighty God, to obey his will, to be grateful for his benefits, and humbly to implore his protection and favor—and whereas both Houses of Congress have by their joint Committee requested me “to recommend to the People of the United States a day of public thanksgiving and prayer to be observed by acknowledging with grateful hearts the many signal favors of Almighty God especially by affording them an opportunity peaceably to establish a form of government for their safety and happiness.”
Now therefore I do recommend and assign Thursday the 26th day of November next to be devoted by the People of these States to the service of that great and glorious Being, who is the beneficent Author of all the good that was, that is, or that will be—That we may then all unite in rendering unto him our sincere and humble thanks—for his kind care and protection of the People of this Country previous to their becoming a Nation—for the signal and manifold mercies, and the favorable interpositions of his Providence which we experienced in the course and conclusion of the late war—for the great degree of tranquillity, union, and plenty, which we have since enjoyed—for the peaceable and rational manner, in which we have been enabled to establish constitutions of government for our safety and happiness, and particularly the national One now lately instituted—for the civil and religious liberty with which we are blessed; and the means we have of acquiring and diffusing useful knowledge; and in general for all the great and various favors which he hath been pleased to confer upon us.
and also that we may then unite in most humbly offering our prayers and supplications to the great Lord and Ruler of Nations and beseech him to pardon our national and other transgressions—to enable us all, whether in public or private stations, to perform our several and relative duties properly and punctually—to render our national government a blessing to all the people, by constantly being a Government of wise, just, and constitutional laws, discreetly and faithfully executed and obeyed—to protect and guide all Sovereigns and Nations (especially such as have shewn kindness unto us) and to bless them with good government, peace, and concord—To promote the knowledge and practice of true religion and virtue, and the encrease of science among them and us—and generally to grant unto all Mankind such a degree of temporal prosperity as he alone knows to be best.
Given under my hand at the City of New-York the third day of October in the year of our Lord 1789.

Go: Washington

Sarah Josepha Hale was editor of a magazine first called Ladies' Magazine and later called Ladies' Book from 1828 to 1877. It was among the most widely known and influential magazines for women of its time. Hale wrote to Abraham Lincoln on September 28, 1863, suggesting that he set a national date for a Thanksgiving holiday. From the Library of Congress, here's a PDF file of Hale's actual letter to Lincoln, along with a typed transcript for 21st-century eyes. Here are a few sentences from Hale's letter to Lincoln:
You may have observed that, for some years past, there has been an increasing interest felt in our land to have the Thanksgiving held on the same day, in all the States; it now needs National recognition and authoritive fixation, only, to become permanently, an American custom and institution. ...  For the last fifteen years I have set forth this idea in the "Lady's Book", and placed the papers before the Governors of all the States and Territories -- also I have sent these to our Ministers abroad, and our Missionaries to the heathen -- and commanders in the Navy. From the recipients I have received, uniformly the most kind approval. ... But I find there are obstacles not possible to be overcome without legislative aid -- that each State should, by statute, make it obligatory on the Governor to appoint the last Thursday of November, annually, as Thanksgiving Day; -- or, as this way would require years to be realized, it has ocurred to me that a proclamation from the President of the United States would be the best, surest and most fitting method of National appointment. I have written to my friend, Hon. Wm. H. Seward, and requested him to confer with President Lincoln on this subject ... 

William Seward was Lincoln's Secretary of State. In a remarkable example of rapid government decision-making, Lincoln responded to Hale's September 28 letter by issuing a proclamation on October 3. It seems likely that Seward actually wrote the proclamation, and then Lincoln signed off. Here's the text of Lincoln's Thanksgiving proclamation, which characteristically mixed themes of thankfulness, mercy, and penitence:

Washington, D.C.
October 3, 1863
By the President of the United States of America.
A Proclamation.
The year that is drawing towards its close, has been filled with the blessings of fruitful fields and healthful skies. To these bounties, which are so constantly enjoyed that we are prone to forget the source from which they come, others have been added, which are of so extraordinary a nature, that they cannot fail to penetrate and soften even the heart which is habitually insensible to the ever watchful providence of Almighty God. In the midst of a civil war of unequaled magnitude and severity, which has sometimes seemed to foreign States to invite and to provoke their aggression, peace has been preserved with all nations, order has been maintained, the laws have been respected and obeyed, and harmony has prevailed everywhere except in the theatre of military conflict; while that theatre has been greatly contracted by the advancing armies and navies of the Union. Needful diversions of wealth and of strength from the fields of peaceful industry to the national defence, have not arrested the plough, the shuttle or the ship; the axe has enlarged the borders of our settlements, and the mines, as well of iron and coal as of the precious metals, have yielded even more abundantly than heretofore. Population has steadily increased, notwithstanding the waste that has been made in the camp, the siege and the battle-field; and the country, rejoicing in the consiousness of augmented strength and vigor, is permitted to expect continuance of years with large increase of freedom. No human counsel hath devised nor hath any mortal hand worked out these great things. They are the gracious gifts of the Most High God, who, while dealing with us in anger for our sins, hath nevertheless remembered mercy. It has seemed to me fit and proper that they should be solemnly, reverently and gratefully acknowledged as with one heart and one voice by the whole American People. 
I do therefore invite my fellow citizens in every part of the United States, and also those who are at sea and those who are sojourning in foreign lands, to set apart and observe the last Thursday of November next, as a day of Thanksgiving and Praise to our beneficent Father who dwelleth in the Heavens. And I recommend to them that while offering up the ascriptions justly due to Him for such singular deliverances and blessings, they do also, with humble penitence for our national perverseness and disobedience, commend to His tender care all those who have become widows, orphans, mourners or sufferers in the lamentable civil strife in which we are unavoidably engaged, and fervently implore the interposition of the Almighty Hand to heal the wounds of the nation and to restore it as soon as may be consistent with the Divine purposes to the full enjoyment of peace, harmony, tranquillity and Union.
In testimony whereof, I have hereunto set my hand and caused the Seal of the United States to be affixed.
Done at the City of Washington, this Third day of October, in the year of our Lord one thousand eight hundred and sixty-three, and of the Independence of the United States the Eighty-eighth.
By the President: Abraham Lincoln
William H. Seward,
Secretary of State

Wednesday, November 25, 2015

An Economist Chews Over Thanksgiving

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there's anything wrong with that. [Note: This is an updated and amended version of a post that was first published on Thanksgiving Day 2011.]

The last time the U.S. Department of Agriculture did a detailed "Overview of the U.S. Turkey Industry" appears to be back in 2007, although an update was published in April 2014. Some themes about the turkey market waddle out from those reports on both the demand and supply sides.

On the demand side, the quantity of turkey per person consumed rose dramatically from the mid-1970s up to about 1990, but since then has declined somewhat. The figure below is from the website run by the National Turkey Federation. Apparently, the Classic Thanksgiving Dinner is becoming slightly less widespread.

On the production side, the National Turkey Federation explains: "Turkey companies are vertically integrated, meaning they control or contract for all phases of production and processing - from breeding through delivery to retail." However, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which the steps of turkey production have become separated and specialized--with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys. Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

"In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.
Some decades ago, turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised while supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.
Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.
Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent."
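The weight figures in that USDA quote imply the efficiency number directly; as a quick arithmetic check, using only the figures quoted above:

```python
# Average live turkey weights, from the USDA figures quoted above.
weight_1986 = 20.0  # pounds per bird
weight_2006 = 28.2  # pounds per bird

# Percentage gain in meat per bird -- the "efficiency gain" for growers.
gain = (weight_2006 - weight_1986) / weight_1986
print(f"Efficiency gain: {gain:.0%}")  # → Efficiency gain: 41%
```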
The 2014 report points out that the capacity of eggs per hatchery has continued to rise (again, references to charts omitted):
For several decades, the number of turkey hatcheries has declined steadily. During the last six years, however, this decrease began to slow down. As of 2013, there are 54 turkey hatcheries in the United States, down from 58 in 2008, but up from the historical low of 49 reached in 2012. The total capacity of these facilities remained steady during this period at approximately 39.4 million eggs. The average capacity per hatchery reached a record high in 2012. During 2013, average capacity per hatchery was 730 thousand (data records are available from 1965 to present).

U.S. agriculture is full of examples of remarkable increases in yields over a few decades, but they always drop my jaw. I tend to think of a "turkey" as a product that doesn't have a lot of opportunity for technological development, but clearly I'm wrong. Here's a graph showing the rise in size of turkeys over time from the 2007 report.

The production of turkey remains an industry that is not very concentrated, with three relatively large producers and then more than a dozen mid-sized producers. Here's a list of top turkey producers in 2014 from the National Turkey Federation:

In the last couple of years, the US turkey industry has been affected by an outbreak of HPAI (Highly Pathogenic Avian Influenza). In the November 17, 2015 issue of the "Livestock, Dairy, and Poultry Outlook" from the US Department of Agriculture, Kenneth Mathews and Mildred Haley offer some details.
U.S. turkey meat production in third-quarter 2015 was 1.35 billion pounds, down 9 percent from a year earlier. This continued the downward path for turkey production in 2015 ... The third-quarter decline was due to both a lower number of turkeys slaughtered and a drop in their average live weight at slaughter. The slaughter number fell to 57.5 million, 6 percent lower than a year earlier, while the average live weight at slaughter declined to 29.3 pounds, a drop of 3 percent from the previous year. Since April the average live weight at slaughter has been lower than the previous year, for a period of 6 consecutive months—reflecting the impact of the HPAI outbreak, which caused processors to slaughter birds somewhat earlier than they normally would in order to maintain supply levels. ... Lower turkey meat production during third-quarter 2015 helped to lower overall turkey stocks, which, in turn, put upward pressure on whole bird prices. ... Turkey meat production in 2016 is forecast at 6 billion pounds, which would be an increase of 8 percent from the HPAI-reduced production of the previous year; much of the increase will come in the second half of the year. ... Prices for whole frozen hen turkeys at the wholesale level averaged $1.36 per pound in October, up from $1.16 per pound the previous year (17 percent). ... The quarterly price for frozen whole hens in 2016 is forecast higher through the first half of the year, but then to average below year-earlier levels in the second half, as higher production mitigates traditional seasonal price increases.
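The price move in that outlook can be verified with the same back-of-the-envelope arithmetic:

```python
# Wholesale prices for whole frozen hen turkeys, from the USDA quote above.
price_oct_2014 = 1.16  # dollars per pound
price_oct_2015 = 1.36  # dollars per pound

# Year-over-year percentage change, matching the quoted "17 percent."
pct_change = (price_oct_2015 - price_oct_2014) / price_oct_2014
print(f"Year-over-year change: {pct_change:.0%}")  # → Year-over-year change: 17%
```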

For some reason, this entire post is reminding me of the old line that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?

Anyway, the starting point for measuring inflation is to define a relevant "basket" or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical U.S. household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

The cost of buying the Classic Thanksgiving Dinner rose by a bit less than 1% in 2015, compared with 2014. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The line is relatively flat, especially since 1990 or so, which means that inflation in the Classic Thanksgiving Dinner has actually been a pretty good measure of the overall inflation rate.
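As a minimal sketch of how a basket price index of this kind is computed -- with made-up item prices for illustration, not the Farm Bureau's actual survey numbers:

```python
# Hypothetical prices for a (truncated) dinner basket in two years.
# These numbers are illustrative only, not Farm Bureau survey data.
basket_2014 = {"16-lb turkey": 21.65, "stuffing": 2.54, "pumpkin pie mix": 3.12}
basket_2015 = {"16-lb turkey": 21.85, "stuffing": 2.56, "pumpkin pie mix": 3.14}

# Cost of the fixed basket in each year.
cost_2014 = sum(basket_2014.values())
cost_2015 = sum(basket_2015.values())

# Dinner inflation: the change in the cost of the fixed basket.
dinner_inflation = cost_2015 / cost_2014 - 1

# The "real" (inflation-adjusted) line deflates the nominal cost by overall
# CPI inflation (a hypothetical 0.5% here). When dinner inflation tracks
# overall inflation, this adjusted line stays roughly flat.
cpi_inflation = 0.005
real_cost_2015 = cost_2015 / (1 + cpi_inflation)
```

With these illustrative numbers the basket rises a bit less than 1%, echoing the 2015 result in the post.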

Thanksgiving is a distinctively American holiday, and it's my favorite. Good food, good company, no presents--and all these good topics for conversation. What's not to like?

Will Convergence Occur?

Economic theory suggests that low-income countries have the potential to grow rapidly in a way that would allow them to converge toward the per capita income levels of high-income countries. After all, the low wages and lack of capital in low-income countries should make them an attractive place for international firms and investors. Moreover, today's low-income countries don't have to invent all the technologies developed in the last century all over again; instead, they can draw on knowledge and technology already available. A prominent essay by the economist Alexander Gerschenkron back in 1962 (available at various places on the web, like here) referred in the spirit of these arguments to "the advantages of backwardness."

So how is that process of convergence actually coming along? Maria A. Arias and Yi Wen offer some facts and analysis in "Trapped: Few Developing Countries Can Climb the Economic Ladder or Stay There," which appears in the October 2015 issue of the Regional Economist, published by the Federal Reserve Bank of St. Louis (pp. 4-9). Consider a couple of figures showing where some convergence has happened in the last 60 years--and where it has not.

First, consider countries that were "middle income" by global standards in 1950--that is, their per capita incomes at that time were between roughly 10% and 40% of the US level. On the figure, the US level of per capita GDP is used as the baseline, represented by 1, and the per capita GDP of other countries is expressed relative to that baseline. The rising lines show some examples of convergence: Hong Kong, Ireland, Spain, and Taiwan. Other examples would include some countries of east Asia like South Korea. But the other four countries, all from Latin America--Mexico, Brazil, Ecuador, and Guatemala--have seen at best a very modest degree of convergence over the last 60 years.

The low-income countries of the world back in 1950, those starting at well below 10% of the US level of per capita income, show a mixed pattern as well. In the figure below, the recent rise of China and India is put in perspective. In terms of per capita GDP, they have now reached the lower part of "middle-income" by global terms. But a number of other low-income countries around the world haven't shown much convergence. The examples given here, which can be thought of as representing other low-income swaths of south Asia, sub-Saharan Africa, and Latin America, are Bangladesh, El Salvador, Mozambique, and Nepal.

More systematic evidence shows that countries often remain in the low-income and middle-income positions for a long time. By the authors' calculations, based on all the countries for which estimates are available (references to tables omitted):
The probability of remaining trapped in the low-income range is 94 percent after 10 years, 90 percent after 20 years, and 80 percent in the entire observational period, 30 to 61 years.  ... [T]he probability of escaping the middle-income trap is 11 percent after a 10-year period, 21 percent after a 20-year period and 36 percent after 30 to 61 years. Also interesting to note is that countries almost never degrade to low- or middle-income status once they have reached the high-income status: The probability of remaining at a high-income status is at least 97 percent.
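A quick back-of-the-envelope check (using nothing beyond the two figures quoted above) shows these persistence numbers are not what a constant annual escape rate would produce:

```python
# If the annual chance of escaping low-income status were constant, the
# probability of remaining trapped after n years would be p**n for some
# annual "stay" probability p.

# Calibrate p from the reported 94 percent ten-year figure:
p_annual = 0.94 ** (1 / 10)      # about 0.9938 per year

# A constant escape hazard would then predict, for a 20-year horizon:
predicted_20yr = p_annual ** 20  # equals 0.94**2, about 0.88

reported_20yr = 0.90

print(f"constant-hazard prediction at 20 years: {predicted_20yr:.3f}")
print(f"reported figure at 20 years:            {reported_20yr:.2f}")
```

The reported 90 percent exceeds the constant-hazard prediction of roughly 88 percent, which suggests that the annual chance of escaping shrinks the longer a country remains low-income.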
One of the central questions of development economics is: "What is holding back convergence?" It's easy to come up with theories. For example, low-income countries might have economic or political institutions that aren't conducive to growth: perhaps they don't support the rule of law or private property in a way that helps economic growth. Or perhaps political elites in a low-income country would rather control, and sometimes close off, interactions with the rest of the world than open up and risk that alternative centers of power and wealth might form. But as the authors point out, many of these explanations are at best partial, and it's not hard to think of exceptions to whatever rule you have formulated.

All of which leaves me with a few thoughts: 

1) It doesn't seem quite right to argue that countries are "trapped" in their current level of income. Aart Kraay and David McKenzie make a persuasive case for skepticism about this view in their essay "Do Poverty Traps Exist? Assessing the Evidence," in the Summer 2014 issue of the Journal of Economic Perspectives. They point out that while there has been a lack of convergence, the world's poorest economies circa 1960 have actually grown at about the same rate as the world's higher-income economies since then. Moreover, whatever kind of "trap" exists apparently applies not just to the poorest countries, but also to middle-income countries and high-income countries--which rarely switch their positions, either.

2) The take-off of China and India from low-income toward middle-income status is a phenomenal change. These countries each have populations over 1.2 billion, and together they represent something like one-third of the world population. Their growth in recent decades suggests that we need to rethink our beliefs from a few decades ago about countries being stuck. I find myself wondering if something about the huge size of these countries has perhaps helped their growth. Maybe once a large-population country gets its economy rolling, it has more sustained momentum than would a small-population country that took similar steps.

3) As a way of thinking concretely, if anecdotally, about this issue, Arias and Wen compare two middle-income countries: Ireland, which has experienced convergence in recent decades, and Mexico, which has not. Both countries are near high-income countries. They give Ireland credit for its willingness to be open to foreign direct investment, which helped link it to the global economy, as well as for its investments in education and its ability (most of the time) to avoid high government budget deficits or inflation. In contrast, Mexico for many decades focused mainly on exporting oil, while its government did not invest heavily in education and instead ran up large debts and periods of high inflation. These kinds of differences surely aren't a full description of why one country converges and another doesn't, but it's a start.

4) My own sense, for what it's worth, is that convergence requires in some substantial way a broad national commitment to welcoming extensive and continual change. Sustained economic growth will shake up the lives of people in low-income and middle-income countries: not just what they can buy, but what jobs they do, what their living spaces look like, how their children will be educated, ties with family, what firms prosper or go under, and an ongoing transformation of villages, towns and cities. No nation can have a genuine commitment to sustained and powerful economic growth if it doesn't also have a broad-based willingness to experience extensive change. But change is disruption, and disruption involves losses as well as gains.

Tuesday, November 24, 2015

The Economics of the Retail Sector

Lots of economic analysis focuses on production, or on consumption. But there is less focus on the economic characteristics of what happens in between production and consumption--which is called the retail sector. In the Fall 2015 issue of the Journal of Economic Perspectives, Ali Hortaçsu and Chad Syverson look at "The Ongoing Evolution of US Retail: A Format Tug-of-War," while Bart J. Bronnenberg and Paul B. Ellickson take an international view in "Adolescence and the Path to Maturity in Global Retail." (Full disclosure: I've been Managing Editor of JEP since the first issue in 1987.)

Broadly understood, the retail sector includes all of the activities between the producer and the consumer: purchases by a wholesaler, transportation costs of shipping to warehouses, the costs of holding that inventory for a time, transportation costs of shipping to the retailer, and the costs of retailing itself, including physical facilities and inventory costs. The evolution of retailing means that these costs are reshuffled among different players. For example, if I order a giant bulk-pack of paper towels from an e-retailer, then store it in the basement and use it over several months, I am bearing some of the storage and inventory costs that would otherwise be carried by a brick-and-mortar retailer. However, the transportation costs of delivering that bulk-pack from the e-retailer involve a relatively small shipment in a delivery van from a warehouse to my house, while obtaining it from a warehouse store involves a large truck shipment to the store, my own time and energy to pick up the bulk-pack and take it through check-out, and the use of my own car to complete the delivery to my home. In various ways, the economics of retail involves issues of coordination, inventory-holding, and economies of scale, as well as questions of how much variety is provided.

Hortaçsu and Syverson point out that the US retail sector accounts for about 11% of all jobs, and about 6% of the economy, as measured by the economic value-added by the sector. If one measures productivity by value-added per employee, then productivity is relatively low in the retail sector, which helps to explain why the average job in retail is relatively low-paid.

The story in US retail over the last few decades is the appearance of two new sets of players, each with a powerful gravitational pull that has greatly disrupted traditional retailers. One new set of players is the big-box retailers, sometimes known as "warehouse clubs" and "supercenters," led by Walmart but also including Costco, Target, and others. The other is the e-commerce retailers, led by Amazon and eBay, and including many others. In different ways, both new sets of players are driven by changes in information technology. For the big-box retailers, information technology is how they manage their huge set of suppliers and their inventories, allowing them to take advantage of economies of scale and scope. For the e-commerce retailers, information technology creates their virtual stores, ties them to their customers, and coordinates their shipping and billing. Hortaçsu and Syverson summarize the situation in US retail in this way:

"One can imagine the future of the retail sector as being pulled in one direction by the growth of e-commerce, which involves smaller employment firms, less market concentration, more geographical dispersion, and higher productivity. At the same time, the sector is being pulled in another direction by the warehouse clubs and supercenters, with higher employment firms, very high market concentration, location near population centers, and lower productivity relative to online channels. While warehouse clubs/supercenters have had more influence on the sector to this point, e-commerce has had its own effects and may be growing in relative importance. Perhaps this concurrent expansion and strength of e-commerce and a physical format portends a retail future not dominated by either, but rather with a substantial role for a “bricks-and-clicks” hybrid. The formats may end up being as much complements as substitutes, with online technologies specializing in product search and discovery, and physical locations facilitating consumers’ testing, purchase, and returns of products ...."
The international view of retail in the article by Bronnenberg and Ellickson shows a related transformation of retail happening at different speeds and in different ways around the world. They emphasize a broad view of retailing that includes potential roles for customers and government, as well as for retail firms themselves. For example, if customers have cars for transporting goods, spacious living accommodations for storage, and sufficient income, they will be more likely to buy bulk-packs of goods from warehouse retailers. The value consumers place on their time affects retail as well: for example, by encouraging e-commerce purchases that will be delivered.

A number of government policies affect retailing, including the road infrastructure, but also "the ease of obtaining building permits, the regulation of corruption, the availability of autos (through policies allowing the imports of used cars), and the minimum wage structure. ... In many emerging markets, another way in which government affects the retail sector lies in its ability to set policies regarding foreign direct investment."

Of course, large firms also play a role in what they call "modern retailing." They write:
"Firms are clearly the foremost strategic players driving the adoption of modern retailing technology. A modern chain of vertically integrated, large-format stores relies on an upstream distribution system of local producers, third-party logistics firms, and either third-party or integrated wholesalers who must all modernize together. Transactions that were often historically informal must be formalized through contracts with local suppliers and intermediaries. In a case study of Chile, Berdegué (2001) found that small farming cooperatives had to incur significant costs to deliver products of homogeneous quality, to coordinate harvest cycles, and to grade, sort, and package in a manner that met the downstream chain’s requirements. Also, adopting formal accounting processes makes previously informal transactions subject to taxes. ... Among the toughest coordination problems is the joint adoption of commonly used technology."
"In developed markets, the transition to modern retailing is nearly complete. In contrast, many low-income and emerging markets continue to rely on traditional retail formats, that is, a collection of independent stores and open air markets supplied by small-scale wholesalers, although modern retail has begun to spread to these markets as well. ... E-commerce is a notable exception: the penetration of e-commerce in China and several developing nations in Asia has already surpassed that of high-income countries for some types of consumer goods."
This graph shows the takeoff in e-commerce in China, as measured by retail sales in the two sectors of "Apparel/Footwear" and "Electronics/Appliances."

Bronnenberg and Ellickson agree that while e-commerce is going to be a big player in the future, it's not going to take over retail in general. Warehouses and supercenters will still play a large role, quite likely the dominant role, for some time to come. Other kinds of niche retailers--for example, those who specialize in a certain product, or those located in urban areas where huge store-spaces and parking places aren't available--will also play a role. In thinking about the tradeoffs of the various kinds of retail, they write: 
"Online purchases have benefits and costs that vary by product category. For example, online purchase of physical goods introduces a delay between purchase and delivery, but also gives consumers a greater opportunity to comparison-shop by lowering search costs and travel time and provides a seamless method of gathering information on the experience of previous customers (through online reviews). On the other hand, online retail offers less ability to inspect goods before purchase (and adds the risk of not having a product delivered at all), which renders the reputation of the firm all the more important. Whether a purchase is made online or in-store clearly depends on the frequency of purchase, the homogeneity of the product, and the number of products typically purchased in a given occasion, amongst other factors. Books fall at one end of this spectrum, and thus, in modern retailing systems, are primarily bought online, while groceries fall at the other end, and are typically bought in-store."

Monday, November 23, 2015

The Size of Automatic Stabilizers in the US Budget

The notions that bigger budget deficits (or smaller surpluses) can help to stimulate an economy in recession, and that smaller budget deficits (or bigger surpluses) can help to prevent an economy from being overstimulated into inflation, are the core ideas of countercyclical fiscal policy. As every intro econ textbook points out, this countercyclical fiscal policy can be "automatic" or "discretionary."

Discretionary fiscal policy is perhaps easier to understand: for example, it's when government passes new laws to raise spending or cut taxes in a recession. But automatic countercyclical fiscal policy--also called "automatic stabilizers"--happens without any new legislation being passed. When the economy heads south so that incomes and profits fall, less in taxes is collected automatically, with no need for new legislation. When the economy booms so that incomes rise, there is automatically less need for government programs like Medicaid or welfare payments, again with no need for new legislation.

How big are the automatic stabilizers? Frank Russek and Kim Kowalewski offer some estimates, along with lots of detail about how these calculations are made, in "How CBO Estimates Automatic Stabilizers," published in November 2015 as Congressional Budget Office Working Paper 2015-07. They write:
Most types of revenues—mainly personal, corporate, and social insurance taxes—are sensitive to the business cycle and account for most of the value of the automatic stabilizers. A relatively small part of total outlays—those for the programs that are intended to support people’s income and have a cyclical component—contribute to the value of the automatic stabilizers; those benefits include ones from unemployment insurance, Medicaid, and SNAP (the Supplemental Nutrition Assistance Program). The automatic stabilizers do not include discretionary spending because that spending (which requires legislation) is not automatic or interest payments because those outlays are not designed to provide income support. CBO’s estimates of the automatic stabilizers are based on the estimated cyclical elements of those revenues and outlays. The magnitude of the automatic stabilizers is zero when the economy is operating at its potential and grows as the economy operates further away from its potential.
To get a sense of the size of automatic stabilizers in the US economy, here are a few figures. The first shows how automatic stabilizers on the revenue side affect federal budget deficits. The second shows how automatic stabilizers on the spending side affect budget deficits.

A few patterns emerge from these figures. The timing of the automatic stabilizers is about right: if you look at the two most recent recessions, in 2001 and in 2007-2009, you can see the automatic drop in tax revenues and the automatic rise in spending. The automatic changes in tax revenues are typically larger than the spending changes. And taken together, the effects of automatic stabilizers are substantial, combining in the 2007-2009 recession to equal about 3% of GDP.
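As a rough illustration of how a CBO-style cyclical-revenue number gets built, here is a back-of-the-envelope sketch; all three inputs are hypothetical round figures of my own, not CBO's actual parameters:

```python
# Hypothetical inputs for a cyclical-revenue calculation:
revenue_share = 0.18   # federal revenues as a share of GDP
elasticity = 1.3       # percent change in revenues per 1 percent output gap
output_gap = -0.06     # GDP running 6 percent below potential (deep recession)

# Cyclical revenue shortfall, expressed as a share of GDP:
revenue_stabilizer = revenue_share * elasticity * output_gap
print(f"revenue-side stabilizer: {revenue_stabilizer:.2%} of GDP")
```

With these invented numbers, the revenue side alone contributes a stabilizer of about 1.4% of GDP; adding a spending-side component for unemployment insurance, Medicaid, and SNAP would push the combined total toward the roughly 3% of GDP seen in the 2007-2009 recession.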

If you combine the automatic stabilizers on the revenue and spending sides, and put it on a graph with actual deficits, here's what it looks like. The light blue line shows how the budget deficit actually changed, including both automatic stabilizers and discretionary changes. The dark blue line shows how the deficit would have changed with the automatic stabilizers subtracted out. Clearly, discretionary policies have a bigger effect on overall deficits than do automatic stabilizers. But in helping to counterbalance economic swings, the automatic stabilizers remain useful.

The existence of automatic stabilizers is one reason why it would be foolish to require that the federal budget be balanced in every year. Think about it: a recession arrives, and so tax revenues automatically fall and spending in recession-related categories automatically rises. A true believer that the budget should be balanced each year would have to argue that in the face of that recession, taxes should be hiked and spending cut to offset the changes of the automatic stabilizers.

Friday, November 20, 2015

Refugees, Displaced, Resettled: Some Global Snapshots

Each year the United Nations High Commissioner for Refugees publishes a "Global Trends" report. The report for 2014, published in June 2015, was titled: "World at War: Forced Displacement in 2014." The report doesn't have much to say about policy details: it's focused mainly on describing the scope of the problem. It's perhaps useful to lay out some terms that are used in specific ways in the report, although they are often used more-or-less interchangeably in media reports and conversation.

Displaced persons is an overall category for those who have been forced to move "as a result of persecution, conflict, generalized violence, or human rights violations." The UNHCR report notes:
"The year 2014 has seen continuing dramatic growth in mass displacement from wars and conflict, once again reaching levels unprecedented in recent history. One year ago, UNHCR announced that worldwide forced displacement numbers had reached 51.2 million, a level not previously seen in the post-World War II era. Twelve months later, this figure has grown to a staggering 59.5 million,  roughly equalling the population of Italy or the United Kingdom. Persecution, conflict, generalized violence, and human rights violations have formed a ‘nation of the displaced’ that, if they were a country, would make up the 24th largest in the world."

According to the report, the total of 59.5 million displaced people can then be divided up into three groups: 19.5 million refugees, 38.2 million internally displaced persons, and 1.8 million asylum-seekers. This total does not include as many as 10 million "stateless" persons, who are living in a country but not recognized as citizens of that country--or of any other country. According to a 2014 UNHCR report, some examples of the stateless include
 More than two decades after the disintegration of the Soviet Union, over 600,000 people remain stateless. Some 300,000 Urdu-speaking Biharis were denied citizenship by the government of Bangladesh when the country gained its independence in 1971. A 2013 Constitutional Court ruling in the Dominican Republic led to tens of thousands of Dominicans, the vast majority of Haitian descent, being deprived of their nationality, and of the rights that flowed from it. More than 800,000 Rohingya in Myanmar have been refused nationality under the 1982 citizenship law and their freedom of movement, religion and education severely curtailed.
Refugees are defined in this way: "According to the 1951 Refugee Convention, a refugee is a person who is outside the country of his or her nationality and is unable or unwilling to avail him- or herself of the protection of that country because of a well-founded fear of being persecuted for reasons of “race”, religion, nationality, political opinion or membership of a particular social group in case of return. People fleeing conflicts or generalized violence are also generally considered as refugees, although sometimes under legal mechanisms other than the 1951 Convention."

Of the 19.5 million refugees, 5.1 million are Palestinians. Of the remaining 14.4 million, here are their countries of origin. While the large recent increase in refugees from Syria has moved that country to the top of this list in 2014, the top place on the list for the previous 30 years had been held by Afghanistan. Indeed, there are still 2.6 million refugees from Afghanistan living outside their country, some of whom have now been refugees for more than three decades since the late 1970s and early 1980s.

Other than the countries on this list: "Other main source countries of refugees were Colombia, Pakistan, and Ukraine. The number of Colombian refugees (360,300) decreased by 36,300 persons compared to the start of the year, mainly as a result of a revision in the number in the Bolivarian Republic of Venezuela. In contrast, figures for both Pakistan and Ukraine increased dramatically. In Pakistan, some 283,500 individuals fled to Afghanistan as armed conflict in their country unfolded during the year; likewise, fighting in eastern Ukraine not only displaced more than 800,000 people within the country but also led to 271,200 persons applying for refugee status or temporary asylum in the Russian Federation."

The goal of the UNHCR is to find a "durable solution" for refugees: return to their original country; local integration into the country where they have ended up, which would eventually involve full legal recognition and citizenship; or resettlement in a different country. Return to the original country has traditionally been the way in which most refugee issues were ultimately resolved. However, in the last few years, return to the original country has diminished.

There doesn't seem to be good data on what numbers of refugees end up being locally integrated in a given year. Such integration is often a slow and evolving process. The category of "resettlement" is what the current US disputes are about. The US has in recent years been the destination for about two-thirds of resettled refugees.
The cumulative number of resettled refugees (900,000) for the past decade is almost at par with the previous decade, 1995-2004 (923,000). Among the 105,200 refugees admitted during the year, Iraqi refugees constituted the largest group (25,800). This was followed by those from Myanmar (17,900), Somalia (11,900), Bhutan (8,200), the Democratic Republic of the Congo (7,100), and the Syrian Arab Republic (6,400). Under its resettlement programme, the United States of America continued to admit the largest number of refugees worldwide. It admitted 73,000 refugees during 2014, more than two-thirds (70%) of total resettlement admissions. Other countries that admitted large numbers of refugees included Canada (12,300), Australia (11,600), Sweden (2,000), Norway (1,300), and Finland (1,100).
This overview of displaced persons, refugees, and resettlement suggests that in a global perspective, US discussions of whether to resettle perhaps 10,000 refugees from Syria are heavier on symbolism and emotion than on addressing the actual underlying humanitarian situation. It won't make much of a dent in the total of 3.8 million refugees from Syria, or the 2.6 million Afghan refugees, or the millions of other refugees. It doesn't start to address the question of whether the US or the international community should seek to address the humanitarian needs of the 38.2 million internally displaced persons, who aren't counted in the total number of refugees.

Along with the immediate issues of how to support refugees and other displaced persons wherever they are, the ultimate question is about what the UNHCR calls the "durable solution." Is the vision here that most of the refugees will ultimately return to their home countries, or not?  The historical presumption has been that most refugees will return to their home countries, and resettlement elsewhere is for a few extreme situations. If there is a shift to a presumption that a high proportion of those refugees will be resettled in high-income countries, then the issue of refugees quickly becomes entangled in broader questions about freedom of international immigration.

Wednesday, November 18, 2015

Remembering Herbert Scarf: 1930-2015

Herbert Scarf, one of the giants of economic theory and operations research, died on November 15. I only met him in passing once or twice, but the Journal of Economic Perspectives (where I have toiled in the fields as Managing Editor since 1987) ran a couple of Scarf-related articles in the Fall 1994 issue. One was an overview of Scarf's career by Kenneth J. Arrow and Timothy J. Kehoe, called "Distinguished Fellow: Herbert Scarf's Contributions to Economics," because in 1991 Scarf was named a "Distinguished Fellow" of the American Economic Association. The other was an article by Scarf himself on one of the many theoretical subjects where his contributions loom large: "The Allocation of Resources in the Presence of Indivisibilities."

The article by Arrow and Kehoe lays out some of Scarf's most prominent work. For example, there is a theory of the optimal holding of inventories called the (S, s) theory: basically, the idea is that firms and stores don't re-order more supplies every day. They re-order in batches. They wait until the quantity on hand falls to some lower level s, and then place an order that raises the quantity on hand back up to the higher level S. How far apart s and S should be depends on various measures of volatility and risk. The question of when or if (S, s) theory was the right way to think about inventory problems was a hot topic in the 1950s. Scarf provided an answer that was as nearly definitive as these things get in economic theory, arguing that it was.

It turns out that the (S, s) theory isn't just about business inventories. More broadly, it's a theory about how economic agents make decisions when there are costs of adjustment, which can often lead to long periods where nothing much seems to happen, followed by sharp changes. For example, this pattern often arises in business investment of many kinds, in hiring and firing decisions by firms, in consumer purchases of big durables like cars and houses, and even in small-scale decisions like taking a larger fixed amount of cash out of the ATM rather than stopping by the machine every time you need $20. It further turns out that these sharp and lumpy changes can be related to overall macroeconomic business cycles.
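A minimal simulation makes the lumpiness visible; the demand process and the trigger levels below are hypothetical, chosen only to illustrate the policy:

```python
import random

def simulate_Ss(s=20, S=100, periods=30, seed=0):
    """Simulate a simple (S, s) reorder policy: when inventory on hand
    falls to the lower trigger s (or below), place one batch order that
    restores it to the upper level S; otherwise do nothing."""
    random.seed(seed)
    inventory = S
    orders = []
    for _ in range(periods):
        demand = random.randint(0, 15)    # hypothetical per-period demand
        inventory = max(inventory - demand, 0)
        if inventory <= s:
            orders.append(S - inventory)  # one lumpy order, at least S - s
            inventory = S
        else:
            orders.append(0)              # long stretches with no activity
    return orders

print(simulate_Ss())
```

Most periods show an order of zero; when the trigger s is finally hit, a single large batch order (of at least S - s units) restores the stock--exactly the long-quiet-then-sharp-change pattern described above.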

For those who want an overview of (S, s) theory, useful starting points in JEP would be the article by Andrew Caplin and John Leahy in the Winter 2010 issue, "Economic Theory and the World of Practice: A Celebration of the (S, s) Model," and farther back, the article by Alan S. Blinder and Louis J. Maccini in the Winter 1991 issue, "Taking Stock: A Critical Assessment of Recent Research on Inventories."

Scarf was a major player in many of the central topics of research into economic theory for several decades after the 1950s. Arrow and Kehoe discuss his work on describing the "core" of an economy, on how to calculate a fixed point as the equilibrium of an overall economy (not just an individual market), how the presence of increasing returns affected the existence of equilibrium, and other issues. I would only embarrass myself by trying to summarize this work here, but I'll note that while Scarf's work was often deeply technical and mathematical, he also had a gift for suggesting straightforward phrases and analogies that clarified what was at issue.

Scarf's article in the Fall 1994 issue of JEP took up the topic of "indivisibilities," which overlaps with aspects of these issues of determining an optimal outcome in the presence of increasing returns and lumpy choices. Scarf wrote:
I am, I believe, not alone in thinking that the essence of economies of scale in production is the presence of large and significant indivisibilities in production. What I have in mind are assembly lines, bridges, transportation and communication networks, giant presses and complex manufacturing plants, which are available in specific discrete sizes, and whose economic usefulness manifests itself only when the scale of operation is large. If the technology giving rise to a large firm is based on indivisibilities, then this technology can be described by, say, an activity analysis model in which the activity levels referring to indivisible goods are required to assume integral values, like 0, 1, 2, . . . , only. When factor levels are specified and a particular objective function is chosen, we are led directly to that class of difficult optimization problems known as integer programs.
Of course, this problem of considering big lumpy changes is conceptually similar to the inventory problem, which also involved thinking about lumpy changes. Scarf uses a series of numerical examples in JEP to argue that when there are indivisibilities, the optimal answer will be a "neighborhood system"--which is to say that there often is not a single correct answer, but rather a group of closely related possibilities. Here's his conclusion in the JEP article. For those not initiated into economics, it may not carry a lot of meaning. For those of us who have drunk the economics Kool-Aid, it's an example of Scarf's facility for using the language of technical economics with a mixture of concreteness and fluidity that keeps the economic themes front and center:
But let us leave this example with only two discrete choices concerning types of plants, and remember that in a large manufacturing enterprise there will be many discrete choices involving a large menu of tasks and machinery, each of which has its own capacity, set-up cost and marginal cost. The equipment may be placed in a number of different locations on the shop floor; the work may be passed from one piece of machinery to another with complex requirements of scheduling and precedence, and the tasks may alter from one job lot to another as the product specification varies. Demands may be revised capriciously and unexpectedly over time; output may be shipped to many different regions. The enterprise may have a host of competitors or none at all. In the absence of internal market prices, combinatorial arguments and quantity tests are necessary to regulate the flow of activity inside the enterprise in an optimal fashion. 
My message boils down to a simple straightforward piece of advice; if economists are to study economies of scale, and the division of labor in the large firm, the first step is to take our trusty derivatives, pack them up carefully in mothballs and put them away respectfully; they have served us well for many a year. But derivatives are prices, and in the presence of indivisibilities in production, prices simply don't do the jobs that they were meant to do. They do not detect optimality; they aren't useful in comparative statics; and they tell us very little about the organized complexity of the large firm. Neighborhood systems are the discrete approximations to the marginal rates of substitution revealed by prices. They are relatively easy to compute, seem to behave pretty well under continuous changes in the technology, and will ultimately lead to even better algorithms than we have now. 
We know much more about the structure of neighborhood systems than I have been able to describe here—not enough, perhaps, to derive a really satisfactory theory of the internal organization of the large firm at the present time. But my own intuition is that this is an important way to proceed. I am confident that serious, ultimately useful insights about the large firm will eventually be obtained by thinking very hard and long about indivisibilities in production. 

Tuesday, November 17, 2015

Why More Humanitarian Aid Should be Given in Cash

My mental image of humanitarian relief workers after a crisis is people who are unloading and handing out supplies. But for some purposes and in some cases, handing out cash might work better. That's the case made in the "Report of the High Level Panel on Humanitarian Cash Transfers," called "Doing Cash Differently: How Cash Transfers Can Transform Humanitarian Aid" (published in September 2015 by the Center for Global Development). The panel includes a selection of academics, representatives from humanitarian relief organizations, and organizations with experience transferring money into low-income countries. I'll list the membership of the group at the bottom of this post.

Let me start by sketching the challenge of humanitarian aid.
"The main components of international humanitarian action are donor governments, the United Nations and its implementing organisations, the Red Cross and Red Crescent Movement and international NGOs. While sometimes described as a ‘system’, it is actually a complicated and constantly evolving web of organisations. In 2014, the humanitarian system comprised some 4,480 operational aid organisations and more than 450,000 professional humanitarian aid workers. It had a combined expenditure of over $25 billion. ...  
"However, this system is under great and growing strain. The 2015 State of the Humanitarian System report concludes that international humanitarian action is at the ‘wrong scale and is structurally deficient to meet the multiple demands that have been placed upon it’. 2014 was an exceedingly difficult year, with simultaneous large-scale disasters in South Sudan, the Central African Republic, Syria and the Philippines, and in West Africa with the Ebola outbreak. ...  In 2014 there were nearly 60 million people around the world who had been displaced by conflict. Natural disasters affect on average 218 million people a year. Conflict in the Central African Republic has touched more than half its population. Almost 12 million people have been forced to flee their homes in Syria."
The Panel estimates that at present about 5-6% of humanitarian aid is distributed in the form of cash payments. "If sectors where cash is often less appropriate (health, water and sanitation) and not appropriate at all (mine action, coordination, security) are removed from the equation, then cash and vouchers were roughly 10% of the total."

It may seem counterintuitive to think of money as an answer to humanitarian crises. After all, isn't it obvious that in such a crisis, the problem is a shortage of food, shelter, water, tents, clothing, medical care, and the like? But actually, this point isn't at all obvious. If there is buying power, market forces are often extremely clever and flexible about making goods available. One of the most famous works by Amartya Sen, the 1998 Nobel laureate in economics, looked at causes of famine. As he pointed out, famines sometimes happened in places where there had been no drop in crop production; even in famine-stricken areas, large groups of people did get food; and in some cases, food was even being exported out of famine areas. In short, Sen pointed out that famine was often not a result of a physical shortage of food. Instead, it was the result of a large group of people who found themselves unable to pay for food, often because some sort of disaster had wiped out their way of making a living. If the government provided income to people, perhaps through make-work jobs that pay for showing up, then that local buying power would often bring a supply of food to the area.

The Panel's top recommendation is "Give more unconditional cash transfers. The questions should always be asked: ‘why not cash?’ and, ‘if not now, when?’" Let me first run through some of the advantages of making greater use of cash in humanitarian relief, in no particular order, and then consider some objections.

Humanitarian cash relief offers greater flexibility for the recipients, who can prioritize what is most important to them.

"A consistent theme in research and evaluations is the flexibility of cash transfers, enabling assistance to meet a more diverse array of needs. In the Philippines, for example, people reported using the money for food, building materials, agricultural inputs, health fees, school fees, sharing, debt repayment, clothing, hygiene, fishing equipment and transport. Often people spend the vast majority of cash in fairly predictable ways – during the Somalia famine, cash transfers were mainly used to buy food and repay loans. Sometimes there are surprises. In Lebanon, for example, while UNHCR provided cash to Syrian refugees to cope with the harsh winter conditions as an alternative to ‘winterisation kits’, most directed their additional income towards food and water. It is not that they did not need fuel – it was that they needed other things more. The element of choice is critical. ... 
"The evidence shows that cash in humanitarian settings can be effective at achieving a wide range of aims – such as improving access to food, enabling households to meet basic needs, supporting livelihoods and reconstructing homes. ...  Cash impacts local economies and market recovery by increasing demand and generating positive multiplier effects. In Zimbabwe, every dollar of cash transfers generated $2.59 in income (compared to $1.67 for food aid). It can encourage the recovery of credit markets by enabling repayment of loans."
Cash lets humanitarian dollars help more people, because the humanitarian organization doesn't need to gather, transport, and store physical objects.
"Cash transfers can also make limited humanitarian resources go further. ...  It usually costs less to get money to people than in-kind assistance because aid agencies do not need to transport and store relief goods. A four-country study comparing cash transfers and food aid found that 18% more people could be assisted at no extra cost if everyone received cash instead of food."
A focus on cash can simplify and reduce overlap among the many aid organizations. It can also let aid organizations focus on broader issues of reconstruction after a disaster.

In Lebanon in 2014, 30 aid agencies provided cash transfers and vouchers for 14 different objectives, including winterisation, legal assistance and food. People do not divide their needs by sectors and clusters. A more logical approach is to have fewer, larger-scale interventions providing unconditional cash grants using common delivery infrastructure where possible, complemented by other forms of humanitarian aid in sectors where cash is not appropriate. ...
Providing cash does not and should not mean that humanitarian actors lose a focus on a key public good that they are uniquely placed to provide: proximity, presence and bearing witness to the suffering of disaster-affected populations. On the contrary, streamlining aid delivery should allow them more time to focus on exactly that. Giving people cash, therefore, does not imply simply dumping the money and leaving them to fend for themselves. People receiving cash intended to help meet shelter needs may require help to secure land rights, build disaster-resistant housing or manage procurement and contractors. Where people use cash to buy agricultural inputs this can be complemented with extension advice.
Humanitarian cash payments can help build links between low-income people and the financial sector. As I noted in an earlier post: "For the individual, it provides safety for saving, a channel for receiving and making payments, and the possibility of getting a loan at a more reasonable rate than offered by an informal money-lender. An economy in which many people have bank accounts will find it easier to make transactions, both because buying and selling are easier and because whether a payment was in fact made can be verified by a third party." Many low-income countries are moving toward providing government payments through electronic accounts already. Providing humanitarian aid through these channels may be less prone to corruption, and easier to audit, than providing in-kind assistance.

Perhaps the main concern with using cash for humanitarian relief is that it won't benefit those in need. It could be skimmed off in some way, or those who receive it might spend it on local intoxicants rather than feeding their children. Cash assistance is surely susceptible to these problems, but so are other sorts of aid. There are plenty of stories of emergency supplies of food being stolen and sold. Those who get physical aid can sell what they have received on the black or gray market for cash, and then buy whatever else they want instead. The report cites one study finding that 70% of Syrian refugees in Iraq have sold or traded some of the in-kind aid they received.
"Evidence from humanitarian settings and from social protection overwhelmingly demonstrates that people receiving money tend to buy what they most need and do not spend it on alcohol or tobacco or for other anti-social purposes. There are inevitably some exceptions, because crises and disasters do not change the fact that there are some irresponsible people in the world, but the evidence is clear that cash is no more likely to be used irresponsibly than other kinds of assistance (which can be sold to buy other things, and often is)."
Of course, humanitarian aid in the form of cash doesn't work all the time. But as the report notes:
Nobody expects cash to replace vaccines or therapeutic feeding for malnourished children, or that money alone can enable the safe rebuilding of shelters. But the times and contexts when cash isn’t appropriate are narrow and limited, and should not be used as excuses to continue providing in-kind assistance if cash becomes possible. Markets recover quickly after disasters and continue during conflicts. 
Here's the membership of the Panel:

Friday, November 13, 2015

Uber: What are the Real Economic Gains?

A common accusation against Uber and other web-facilitated car-hire services is that what looks like a competitive advantage arises only because they operate under a different and more lax set of rules than regular taxicabs. In other words, the newfangled service looks great until you are in a situation with an unsafe and undermaintained vehicle, along with an untrained or underinsured driver. In "The Social Costs of Uber," Brishen Rogers points out two sources of genuine economic gains from Uber and similar firms (The University of Chicago Law Review Dialogue, 2015, 82: pp. 85-102). He also describes the evolving negotiations over rules that Uber and other companies seem sure to face.

A company like Uber offers two sources of genuine economic gains: reduced search costs for both passengers and drivers, and gains from horizontal and vertical integration. Here's Rogers on the mess that search costs on the part of both drivers and passengers create for conventional taxicab markets, and how Uber addresses them (with footnotes omitted).
"[B]oth regulated and deregulated taxi sectors suffer from high search costs. Riders have difficulty finding empty cabs when needed. Taxis therefore tend to congregate in spaces of high demand, such as airports and hotels. Deregulation arguably made this worse. Since supply went up, cab drivers had even greater incentives to stay in high-demand areas, and yet they had to raise fares to stay afloat.
High search costs and low effective supply may also reduce demand for cabs in two ways. First, if consumers have difficulty finding cabs because cabs are scarce, they may tend not to search in the first place. Second, high search costs may create a vicious cycle for phone-dispatched cabs. Riders who get tired of waiting for a dispatched cab may simply hail another on the street; drivers en route to a rider may also decide to take another fare from the street, rationally estimating that the rider who called may have already found another car. In some cities, the result is that dispatched cabs may never arrive—full stop.
Uber has basically eradicated search costs. Rather than calling a dispatcher and waiting, or standing on the street, users can hail a car from indoors and watch its progress toward their location. Drivers also cannot poach one another’s pre-committed fares. This is a real boon for consumers who don’t like long waits or uncertainty—which is to say everyone. Uber can also advise drivers on when to enter and exit the market—for example, by encouraging part-time drivers to work a few hours on weekend nights.
The article cites some evidence from a few years back in San Francisco that fewer than half of the attempts to dispatch a cab to a certain address ended up with a cab actually arriving.

For economists, "vertical integration" refers to whether a few or many economic actors are involved in the successive steps along the chain of production from start to finish. In contrast, "horizontal integration" refers to whether a few or many are involved at a particular stage of the production process. Rogers argues that the taxicab industry has evolved in ways that don't involve much vertical or horizontal integration, and that Uber and other ride-sharing services are creating efficiency gains by bringing greater integration along both dimensions. Rogers writes:
Uber is also extremely important for another reason that has received little attention: it is encouraging vertical and horizontal integration in the car-hire sector. ... In Chicago, for example, medallion owners often lease their operating rights to management companies; management companies in turn purchase or lease cars and outfit them as required per local regulations; drivers then lease those cars from management companies on a weekly, daily, or even hourly basis. Other cities have different licensing systems, but any licensing system that does not mandate owner operation or direct employment of drivers will encourage similar vertical fragmentation. Taxi companies will rationally (and lawfully) lease cars to drivers rather than employ drivers in order to avoid the costs associated with employment, which include minimum wage laws, unemployment and workers’ compensation taxes, and possible unionization. Uber is now reducing such vertical fragmentation, since it has a direct contractual relationship with its drivers. It is also integrating the sector horizontally as it gains market share within cities. Meanwhile, the company is compiling a massive database of driver and rider behavior. Those data are essential to Uber’s price-setting and market-making functions but would be all-but-impossible to compile in a fragmented industry. 
In short, the economics behind Uber and other ride-sharing services suggests the possibility of substantial and real economic gains. Rogers quickly mentions some other gains, as well: "For example, Uber reduces consumers’ incentives to purchase automobiles, almost certainly saving them money and reducing environmental harms. As consumers buy fewer cars, Uber also opens up the remarkable possibility of converting parking spaces to new and environmentally sound uses. Uber may also reduce drunk driving and other accidents."

But even if Uber isn't just a case of those who can sidestep existing regulations having a cost advantage, it is nonetheless true that Uber like any company providing service to the public is going to find itself facing some rules and regulations. For example, basic checks on driver competence, as well as rules about vehicle safety and appropriate insurance, seem to be on their way.

What is perhaps more interesting is that the web-enabled car-hire model raises some questions that didn't arise in the same way in the previous taxicab industry.

For example, there are a combination of old and new concerns about discrimination. The old concern is that taxis may not be available for hire in certain neighborhoods, or drivers may not pick up riders from certain racial or ethnic groups. A web-connected car-hire service seems likely to reduce this problem. The new concern is that Uber riders are expected to evaluate drivers. What if such evaluations carry a dose of racial/ethnic or gender prejudice?

Another issue is whether Uber drivers should be treated as "employees." Rogers doubts that ultimately Uber drivers will be treated in this way, and notes that there are similar cases involving whether FedEx drivers are employees. He writes:
The most analogous recent cases, in which courts have split, involve FedEx drivers. Those that found for the workers have noted, for example, that FedEx requires uniforms and other trade dress, that it requires drivers to show up at sorting facilities at designated times each day, and that it requires them to deliver packages every day. Uber drivers are different in each respect. They use their own cars, need not wear uniforms, and most importantly they work whatever hours they please.
But ultimately, as these kinds of regulations are discussed and debated, the very success of Uber and similar services is likely to help in enacting and enforcing certain standards. As Rogers notes: "These developments could make it relatively simple to ensure that Uber complies with the law and plays its part in advancing public goals. The reason is simple: as scholars have documented, large, sophisticated firms can detect and root out internal legal violations—and otherwise alter employees’ and contractors’ behavior—far more easily than public authorities or outside private attorneys."

In other words, Uber and similar companies are not going to be both enormous commercial successes and also untouched by regulatory concerns. Instead, Uber's huge and growing database of drivers, fares, prices, times of day, locations, accidents, evaluations of drivers by passengers, and evaluations of passengers by drivers will all tend to provide information that can be used to monitor what happens and to motivate improvements where needed. Moreover, if enough potential customers or drivers are discontented with Uber and the existing web-enabled car-hire companies, the barriers to entry for other firms to start up Uber-like companies on a city-by-city basis are not very high. As Rogers writes:
Moreover, it is not clear that Uber’s position at the top of the ride-sharing sector is stable. While Uber’s app is revolutionary, it is also easy to replicate. Uber already faces intense competition from Lyft and other ride-sharing companies, competition that should only become more intense given Uber’s repeated public relations disasters. While Uber’s success relies in part on network effects—more riders and drivers enable a more efficient market—the switching costs for riders and drivers appear to be fairly minimal. Uber may become the Myspace or Netscape of ride sharing—that is, a pioneer that could not maintain its market position. Concerns about monopoly therefore seem premature.   

Those interested in this subject might also want to check out an earlier post on "Who are the Uber Drivers?" (February 18, 2015).

Thursday, November 12, 2015

How Many Deaths from Mistakes in US Health Care?

Back in 1999, the Institute of Medicine (part of the National Academies of Science) estimated in its report To Err is Human that in 1997 at least 44,000 and as many as 98,000 patients died in hospitals as the result of medical errors that could have been prevented. Current estimates are higher, as Thomas R. Krause points out in "Department of Measurement: Scorecard Needed" in the Milken Institute Review (Fourth Quarter 2015, pp. 91-94). Krause writes:
"You've seen the astounding numbers: hundreds of thousands of Americans die each year due to medical treatment errors. Indeed, the median credible estimate is 350,000, more than U.S. combat deaths in all of World War II. If you measure the “value of life” the way economists and federal agencies do it – that is, by observing how much individuals voluntarily pay in daily life to reduce the risk of accidental death – those 350,000 lives represent a loss exceeding $3 trillion, or one-sixth of GDP. But when decades pass and little seems to change, even these figures lose their power to shock, and the public is inclined to focus its outrage on apparently more tractable problems."
In case you're one of the vast majority who actually haven't seen those estimates, or at least haven't mentally registered that they exist, here are a couple of the more recent underlying sources.

The Agency for Healthcare Research and Quality (part of the US Department of Health and Human Services) published in May 2015 the 2014 National Healthcare Quality and Disparities Report. Here are some good news/bad news statistics from the report:
From 2010 to 2013, the overall rate of hospital-acquired conditions declined from 145 to 121 per 1,000 hospital discharges. This decline is estimated to correspond to 1.3 million fewer hospital-acquired conditions, 50,000 fewer inpatient deaths, and $12 billion savings in health care costs. Large declines were observed in rates of adverse drug events, healthcare-associated infections, and pressure ulcers.
The good news is 50,000 fewer deaths, along with health improvements and cost savings. The bad news is that the rate of hospital-acquired conditions basically fell from one in every seven patients to one in every eight. Sure, hospital-acquired conditions will never fall to zero. But it certainly looks to me as if at least tens of thousands of lives were being lost each year because that rate had not been reduced, and that tens of thousands of additional lives could be saved by reducing the rate further. For another analysis in a different setting, here's a 2014 US government study about adverse and preventable effects of care in nursing care facilities.
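The "one in seven" and "one in eight" figures above are just the report's per-1,000-discharge rates restated; a quick sketch of that conversion (the rates are from the AHRQ report, the restatement is my own arithmetic):

```python
# Convert hospital-acquired-condition rates per 1,000 discharges
# into "about 1 in N patients" (figures from the AHRQ report).
for year, rate_per_1000 in [(2010, 145), (2013, 121)]:
    one_in_n = 1000 / rate_per_1000
    print(f"{year}: {rate_per_1000} per 1,000 discharges is about 1 in {one_in_n:.0f} patients")
# 2010: 145 per 1,000 discharges is about 1 in 7 patients
# 2013: 121 per 1,000 discharges is about 1 in 8 patients
```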

John T. James published "A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care" in the Journal of Patient Safety (September 2013, pp. 122-128). James reviews four studies of quality of care that focus on relatively small numbers of patients (three of the studies involve fewer than 1,000 patient records each; the fourth involves 2,300). He uses a software package called the Global Trigger Tool to flag cases where preventable errors might have occurred, and then those cases are examined by physicians. James describes the process this way:
The GTT depends on systematic review of medical records by persons trained to find specific clues or triggers suggesting that an adverse event has taken place. For example, triggers might include orders to stop a medication, an abnormal lab result, or prescription of an antidote medication such as naloxone. As a final step, the examination of the record must be validated by 1 or more physicians. As will be shown shortly, the methods used to find adverse events in hospital medical records target primarily errors of commission and are much less likely to find harm from errors of omission, communication, context, or missed diagnosis.
Projecting from four small studies to national patterns is obviously a little dicey, but for what it's worth, James finds:
Using a weighted average of the 4 studies, a lower limit of 210,000 deaths per year was associated with preventable harm in hospitals. Given limitations in the search capability of the Global Trigger Tool and the incompleteness of medical records on which the Tool depends, the true number of premature deaths associated with preventable harm to patients was estimated at more than 400,000 per year. Serious harm seems to be 10- to 20-fold more common than lethal harm.
My reactions to this body of evidence on the prevalence and costs of mistakes in the US health care system can be summarized in two bits of skepticism and one burst of outrage.

It seems sensible to be skeptical about the largest estimates of the size of the problem. There are obviously issues in deciding what was "preventable" or a "mistake."

The other bit of skepticism is that seeking to reduce the problem of medical errors is harder than it might at first sound. For example, Christine K. Cassel, Patrick H. Conway, Suzanne F. Delbanco, Ashish K. Jha, Robert S. Saunders, and Thomas H. Lee wrote about some efforts to measure and set guidelines for health care in "Getting More Performance from Performance Measurement," which appeared in the New England Journal of Medicine on December 4, 2014. They point out that there are often literally hundreds of measures of quality of care, some important, some not, and many that turn out to be useless or even harmful.
Many observers fear that a proliferation of measures is leading to measurement fatigue without commensurate results. An analysis of 48 state and regional measure sets found that they included more than 500 different measures, only 20% of which were used by more than one program. Similarly, a study of 29 private health plans identified approximately 550 distinct measures, which overlapped little with the measures used by public programs. Health care organizations are therefore devoting substantial resources to reporting their performance to regulators and payers; one northeastern health system, for instance, uses 1% of its net patient-service revenue for that purpose. Beyond the problem of too many measures, there is concern that programs are not using the right ones. Some metrics capture health outcomes or processes that have major effects on overall health, but others focus on activities that may have minimal effects. ...
Unfortunately, for every instance in which performance initiatives improved care, there were cases in which our good intentions for measurement simply enraged colleagues or inspired expenditures that produced no care improvements. One example of a measurement effort that had unintended consequences was the CMS quality measure for community-acquired pneumonia. This metric assessed whether providers administered the first dose of antibiotics to a patient within 6 hours after presentation, since analyses of Medicare databases had shown that an interval exceeding 4 hours was associated with increased in-hospital mortality. But the measure led to inappropriate antibiotic use in patients without community-acquired pneumonia, had adverse consequences such as Clostridium difficile colitis, and did not reduce mortality. The measure therefore lost its endorsement by the National Quality Forum in 2012, and CMS removed it from its Hospital Inpatient Quality Reporting and Hospital Compare programs.
But even after acknowledging that quantifying death and injury caused by health care mistakes is an inexact process, and fixing it isn't simple, the sheer scale of the issue remains.

The US economy will spend about $3 trillion this year on health care. As Krause noted at the start, the loss of 350,000 lives from preventable errors, if we value a life at about $9 million as is commonly done by federal regulators, means that the total cost of deaths from health care mistakes is about $3 trillion. On one side, perhaps this total is overstated. On the other side, it includes only the costs of deaths, not the health costs from serious but nonlethal harms (which James estimates are 10 to 20 times as common), and not the costs of resources used by the health care system in seeking to deal with mistakes already made.
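The back-of-the-envelope arithmetic behind Krause's "$3 trillion" figure is straightforward; here is a quick check, where the deaths estimate and the $9 million value of a statistical life come from the text, and the round $18 trillion GDP figure is my own assumption for the comparison:

```python
# Value-of-life arithmetic cited from Krause: deaths from preventable
# medical errors times the value of a statistical life (VSL).
deaths_per_year = 350_000      # median credible estimate of deaths
vsl = 9_000_000                # dollars; figure commonly used by federal regulators
total_loss = deaths_per_year * vsl
print(f"${total_loss / 1e12:.2f} trillion")  # $3.15 trillion

# Compared with US GDP (roughly $18 trillion in 2015; my round figure),
# that is on the order of one-sixth of GDP, as Krause says.
us_gdp = 18e12
print(f"{total_loss / us_gdp:.1%} of GDP")
```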

There is considerable public debate over how to make sure all Americans have health insurance. But the issue of the enormous costs of the US health care system doesn't get the same airtime. Sure, there are arguments over how much or why the rate of growth of US health care spending has changed. In the meantime, the US continues to vastly outspend other countries. For example, here's a figure from the OECD showing health care spending as a share of GDP: the US share is 50% higher than any other country's and roughly double the OECD average. Based on this data, the US is spending about $8,500 per person per year on health care, while Canada and Germany are spending about $4,400 per person per year, and the United Kingdom and Japan are spending about $3,300 per person per year.

I understand the reasons why high US health care spending doesn't buy better health. But it's a bitter irony indeed that the extremely high levels of US health care spending are actually causing at least tens of thousands, and quite possibly hundreds of thousands, of deaths each year.