
The Economic History of the Fur Trade: 1670 to 1870

Introduction

A commercial fur trade in North America grew out of the early contact between Indians and European fishermen who were netting cod on the Grand Banks off Newfoundland and on the Bay of Gaspé near Quebec. Indians would trade the pelts of small animals, such as mink, for knives and other iron-based products, or for textiles. Exchange at first was haphazard, and it was only in the late sixteenth century, when the wearing of beaver hats became fashionable, that firms were established that dealt exclusively in furs. High-quality pelts are available only where winters are severe, so the trade took place predominantly in the regions we now know as Canada, although some activity took place further south along the Mississippi River and in the Rocky Mountains. There was also a market in deer skins that predominated in the Appalachians.


The first firms to participate in the fur trade were French, and under French rule the trade spread along the St. Lawrence and Ottawa Rivers, and down the Mississippi. In the seventeenth century, following the Dutch, the English developed a trade through Albany. Then in 1670, a charter was granted by the British crown to the Hudson’s Bay Company, which began operating from posts along the coast of Hudson Bay (see Figure 1). For roughly the next hundred years, this northern region saw competition of varying intensity between the French and the English. With the conquest of New France in 1763, the French trade shifted to Scottish merchants operating out of Montreal. After the negotiation of Jay’s Treaty (1794), the northern border was defined and trade along the Mississippi passed to the American Fur Company under John Jacob Astor. In 1821, the northern participants merged under the name of the Hudson’s Bay Company, and for many decades this merged company continued to trade in furs. Finally, in the 1990s, under pressure from animal rights groups, the Hudson’s Bay Company, which in the twentieth century had become a large Canadian retailer, ended the fur component of its operation.

Figure 1 Hudson’s Bay Company Hinterlands

Source: Ray (1987, plate 60)

The fur trade was based on pelts destined either for the luxury clothing market or for the felting industries, of which hatting was the most important. This was a transatlantic trade. The animals were trapped and exchanged for goods in North America, and the pelts were transported to Europe for processing and final sale. As a result, forces operating on the demand side of the market in Europe and on the supply side in North America determined prices and volumes; while intermediaries, who linked the two geographically separated areas, determined how the trade was conducted.

The Demand for Fur: Hats, Pelts and Prices

However much hats may be considered an accessory today, they were for centuries a mandatory part of everyday dress, for both men and women. Of course styles changed, and, in response to the vagaries of fashion and politics, hats took on various forms and shapes, from the high-crowned, broad-brimmed hat of the first two Stuarts to the conically-shaped, plainer hat of the Puritans. The Restoration of Charles II of England in 1660 and the Glorious Revolution in 1689 brought their own changes in style (Clarke, 1982, chapter 1). What remained constant was the material from which hats were made – wool felt. The wool came from various animals, but towards the end of the fifteenth century beaver wool began to predominate. Over time, beaver hats became increasingly popular, eventually dominating the market. Only in the nineteenth century did silk replace beaver in high-fashion men’s hats.

Wool Felt

Furs have long been classified as either fancy or staple. Fancy furs are those demanded for the beauty and luster of their pelt. These furs – mink, fox, otter – are fashioned by furriers into garments or robes. Staple furs are sought for their wool. All staple furs have a double coating of hair with long, stiff, smooth hairs called guard hairs which protect the shorter, softer hair, called wool, that grows next to the animal skin. Only the wool can be felted. Each of the shorter hairs is barbed and once the barbs at the ends of the hair are open, the wool can be compressed into a solid piece of material called felt. The prime staple fur has been beaver, although muskrat and rabbit have also been used.

Wool felt was used for over two centuries to make high-fashion hats. Felt is stronger than a woven material. It will not tear or unravel in a straight line; it is more resistant to water, and it will hold its shape even if it gets wet. These characteristics made felt the prime material for hatters especially when fashion called for hats with large brims. The highest quality hats would be made fully from beaver wool, whereas lower quality hats included inferior wool, such as rabbit.

Felt Making

The transformation of beaver skins into felt and then hats was a highly skilled activity. The process required first that the beaver wool be separated from the guard hairs and the skin, and that some of the wool have open barbs, since felt required some open-barbed wool in the mixture. Felt dates back to the nomads of Central Asia, who are said to have invented the process of felting and made their tents from this light but durable material. Although the art of felting disappeared from much of western Europe during the first millennium, felt-making survived in Russia, Sweden, and Asia Minor. As a result of the Medieval Crusades, felting was reintroduced through the Mediterranean into France (Crean, 1962).

In Russia, the felting industry was based on the European beaver (Castor fiber). Given their long tradition of working with beaver pelts, the Russians had perfected the art of combing out the short barbed hairs from among the longer guard hairs, a technology that they safeguarded. As a consequence, the early felting trades in England and France had to rely on beaver wool imported from Russia, although they also used domestic supplies of wool from other animals, such as rabbit, sheep and goat. But by the end of the seventeenth century, Russian supplies were drying up, reflecting the serious depletion of the European beaver population.

Coincident with the decline in European beaver stocks was the emergence of a North American trade. North American beaver (Castor canadensis) was imported through agents in the English, French and Dutch colonies. Although many of the pelts were shipped to Russia for initial processing, the growth of the beaver market in England and France led to the development of local technologies, and more knowledge of the art of combing. Separating the beaver wool from the skin and guard hairs was only the first step in the felting process. It was also necessary that some of the barbs on the short hairs be raised or open. On the animal these hairs were naturally covered with keratin to prevent the barbs from opening, so to make felt the keratin had to be stripped from at least some of the hairs. The process was difficult to refine and entailed considerable experimentation by felt-makers. For instance, one felt-maker “bundled [the skins] in a sack of linen and boiled [them] for twelve hours in water containing several fatty substances and nitric acid” (Crean, 1962, p. 381). Although such processes removed the keratin, they did so at the price of a lower quality wool.

The opening of the North American trade not only increased the supply of skins for the felting industry, it also provided a subset of skins whose guard hairs had already been removed and the keratin broken down. Beaver pelts imported from North America were classified as either parchment beaver (castor sec – dry beaver), or coat beaver (castor gras – greasy beaver). Parchment beaver were from freshly caught animals, whose skins were simply dried before being presented for trade. Coat beaver were skins that had been worn by the Indians for a year or more. With wear, the guard hairs fell out and the pelt became oily and more pliable. In addition, the keratin covering the shorter hairs broke down. By the middle of the seventeenth century, hatters and felt-makers came to learn that parchment and coat beaver could be combined to produce a strong, smooth, pliable, top-quality waterproof material.

Until the 1720s, beaver felt was produced with relatively fixed proportions of coat and parchment skins, which led to periodic shortages of one or the other type of pelt. The constraint was relaxed with the development of carotting, a chemical process by which parchment skins were transformed into a type of coat beaver. The original carotting formula consisted of salts of mercury diluted in nitric acid, which was brushed on the pelts. The use of mercury was a big advance, but it also had serious health consequences for hatters and felters, who were forced to breathe the mercury vapor for extended periods. The expression “mad as a hatter” dates from this period, as the vapor attacked the nervous systems of these workers.

The Prices of Parchment and Coat Beaver

Drawn from the accounts of the Hudson’s Bay Company, Table 1 presents some eighteenth-century prices of parchment and coat beaver pelts. From 1713 to 1726, before the carotting process had become established, coat beaver generally fetched a higher price than parchment beaver, averaging 6.6 shillings per pelt as compared to 5.5 shillings. Once carotting was widely used, however, the prices were reversed, and from 1730 to 1770 parchment exceeded coat in almost every year. The same general pattern is seen in the Paris data, although there the reversal was delayed, suggesting slower diffusion in France of the carotting technology. As Crean (1962, p. 382) notes, Nollet’s L’Art de faire des chapeaux included the exact formula, but it was not published until 1765.

A weighted average of parchment and coat prices in London reveals three episodes. From 1713 to 1722 prices were quite stable, fluctuating within the narrow band of 5.0 to 5.5 shillings per pelt. During the period 1723 to 1745, prices moved sharply higher and remained in the range of 7 to 9 shillings. The years 1746 to 1763 saw another big increase to over 12 shillings per pelt. There are far fewer prices available for Paris, but we do know that in the period 1739 to 1753 the trend was also sharply higher, with prices more than doubling.

Table 1 Price of Beaver Pelts in Britain: 1713-1763 (shillings per skin)

Year   Parchment    Coat   Average(a)        Year   Parchment    Coat   Average(a)
1713        5.21    4.62       5.03          1739        8.51    7.11       8.05
1714        5.24    7.86       5.66          1740        8.44    6.66       7.88
1715        4.88      –        5.49          1741        8.30    6.83       7.84
1716        4.68    8.81       5.16          1742        7.72    6.41       7.36
1717        5.29    8.37       5.65          1743        8.98    6.74       8.27
1718        4.77    7.81       5.22          1744        9.18    6.61       8.52
1719        5.30    6.86       5.51          1745        9.76    6.08       8.76
1720        5.31    6.05       5.38          1746       12.73    7.18      10.88
1721        5.27    5.79       5.29          1747       10.68    6.99       9.50
1722        4.55    4.97       4.55          1748        9.27    6.22       8.44
1723        8.54    5.56       7.84          1749       11.27    6.49       9.77
1724        7.47    5.97       7.17          1750       17.11    8.42      14.00
1725        5.82    6.62       5.88          1751       14.31   10.42      12.90
1726        5.41    7.49       5.83          1752       12.94   10.18      11.84
1727          –       –        7.22          1753       10.71   11.97      10.87
1728          –       –        8.13          1754       12.19   12.68      12.08
1729          –       –        9.56          1755       12.05   12.04      11.99
1730          –       –        8.71          1756       13.46   12.02      12.84
1731          –       –        6.27          1757       12.59   11.60      12.17
1732          –       –        7.12          1758       13.07   11.32      12.49
1733          –       –        8.07          1759       15.99     –        14.68
1734          –       –        7.39          1760       13.37   13.06      13.22
1735          –       –        8.33          1761       10.94   13.03      11.36
1736        8.72    7.07       8.38          1762       13.17   16.33      13.83
1737        7.94    6.46       7.50          1763       16.33   17.56      16.34
1738        8.95    6.47       8.32

(a) A weighted average of the prices of parchment, coat and half parchment beaver pelts. Weights are based on the trade in these types of furs at Fort Albany. Prices of the individual types of pelts are not available for the years 1727 to 1735; for those years only the average is shown.

Source: Carlos and Lewis, 1999.
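To make the construction of the average concrete, the short sketch below reproduces the kind of calculation behind footnote (a). The weights and the half parchment price are hypothetical stand-ins, since the actual year-by-year Fort Albany trade weights are not reported in the text.

```python
# Sketch of the trade-weighted average price behind Table 1, footnote (a).
# The weights and the half parchment price are hypothetical; the actual
# weights vary year by year with the mix of pelts traded at Fort Albany.

def weighted_price(parchment, coat, half_parchment, weights=(0.6, 0.3, 0.1)):
    """Return the weighted average price per pelt in shillings."""
    w_parchment, w_coat, w_half = weights
    return w_parchment * parchment + w_coat * coat + w_half * half_parchment

# 1713 prices from Table 1: parchment 5.21s, coat 4.62s; half parchment assumed 4.90s
print(round(weighted_price(5.21, 4.62, 4.90), 2))  # ~5.00, close to the 5.03 in Table 1
```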

The Demand for Beaver Hats

The main cause of the rising beaver pelt prices in England and France was the increasing demand for beaver hats, a category that included both hats made exclusively of beaver wool, referred to as “beaver hats,” and hats containing a combination of beaver and a lower-cost wool, such as rabbit, which were called “felt hats.” Unfortunately, aggregate consumption series for eighteenth-century Europe are not available. We do, however, have Gregory King’s contemporary work for England, which provides a good starting point. In a table entitled “Annual Consumption of Apparell, anno 1688,” King calculated that consumption of all types of hats was about 3.3 million, or nearly one hat per person. King also included a second category, caps of all sorts, for which he estimated consumption at 1.6 million (Harte, 1991, p. 293). This means that as early as 1700, the potential market for hats in England alone was nearly 5 million per year. Over the next century, the rising demand for beaver pelts was a result of a number of factors, including population growth, a greater export market, a shift toward beaver hats from hats made of other materials, and a shift from caps to hats.

The British export data indicate that demand for beaver hats was growing not just in England, but in Europe as well. In 1700 a modest 69,500 beaver hats were exported from England, along with almost the same number of felt hats; but by 1760, slightly over 500,000 beaver hats and 370,000 felt hats were shipped from English ports (Lawson, 1943, app. I). In total, over the seventy years to 1770, 21 million beaver and felt hats were exported from England. In addition to the final product, England exported the raw material, beaver pelts. In 1760, £15,000 in beaver pelts were exported along with a range of other furs. The hats and the pelts tended to go to different parts of Europe. Raw pelts were shipped mainly to northern Europe, including Germany, Flanders, Holland and Russia; whereas hats went to the southern European markets of Spain and Portugal. In 1750, Germany imported 16,500 beaver hats, while Spain imported 110,000 and Portugal 175,000 (Lawson, 1943, appendices F & G). Over the first six decades of the eighteenth century, these markets grew dramatically, such that the value of beaver hat sales to Portugal alone was £89,000 in 1756-1760, representing about 300,000 hats, or two-thirds of the entire export trade.

European Intermediaries in the Fur Trade

By the eighteenth century, the demand for furs in Europe was being met mainly by exports from North America with intermediaries playing an essential role. The American trade, which moved along the main water systems, was organized largely through chartered companies. At the far north, operating out of Hudson Bay, was the Hudson’s Bay Company, chartered in 1670. The Compagnie d’Occident, founded in 1718, was the most successful of a series of monopoly French companies. It operated through the St. Lawrence River and in the region of the eastern Great Lakes. There was also an English trade through Albany and New York, and a French trade down the Mississippi.

The Hudson’s Bay Company and the Compagnie d’Occident, although similar in title, had very different internal structures. The English trade was organized along hierarchical lines with salaried managers, whereas the French monopoly issued licenses (congés) or leased out the use of its posts. The structure of the English company allowed for more control from the London head office, but required systems that could monitor the managers of the trading posts (Carlos and Nicholas, 1990). The leasing and licensing arrangements of the French made monitoring unnecessary, but led to a system where the center had little influence over the conduct of the trade.

The French and English were distinguished as well by how they interacted with the Natives. The Hudson’s Bay Company established posts around the Bay and waited for the Indians, often middlemen, to come to them. The French, by contrast, moved into the interior, directly trading with the Indians who harvested the furs. The French arrangement was more conducive to expansion, and by the end of the seventeenth century, they had moved beyond the St. Lawrence and Ottawa rivers into the western Great Lakes region (see Figure 1). Later they established posts in the heart of the Hudson Bay hinterland. In addition, the French explored the river systems to the south, setting up a post at the mouth of the Mississippi. As noted earlier, after Jay’s Treaty was signed, the French were replaced in the Mississippi region by U.S. interests which later formed the American Fur Company (Haeger, 1991).

The English takeover of New France at the end of the French and Indian Wars in 1763 did not, at first, fundamentally change the structure of the trade. Rather, French management was replaced by Scottish and English merchants operating in Montreal. But, within a decade, the Montreal trade was reorganized into partnerships between merchants in Montreal and traders who wintered in the interior. The most important of these arrangements led to the formation of the Northwest Company, which, for the first two decades of the nineteenth century, competed with the Hudson’s Bay Company (Carlos and Hoffman, 1986). By the early decades of the nineteenth century, the Hudson’s Bay Company, the Northwest Company, and the American Fur Company had, combined, a system of trading posts across North America, including posts in Oregon and British Columbia and on the Mackenzie River. In 1821, the Northwest Company and the Hudson’s Bay Company merged under the name of the Hudson’s Bay Company. The Hudson’s Bay Company then ran the trade as a monopsony until the late 1840s, when it began facing serious competition from trappers to the south. The Company’s role in the northwest changed again with the Canadian Confederation in 1867. Over the next decades, treaties were signed with many of the northern tribes, forever changing the old fur trade order in Canada.

The Supply of Furs: The Harvesting of Beaver and Depletion

During the eighteenth century, the changing technology of felt production and the growing demand for felt hats were met by attempts to increase the supply of furs, especially the supply of beaver pelts. Any permanent increase, however, was ultimately dependent on the animal resource base. How that base changed over time must be a matter of speculation since no animal counts exist from that period; nevertheless, the evidence we do have points to a scenario in which over-harvesting, at least in some years, gave rise to serious depletion of the beaver and possibly other animals such as marten that were also being traded. Why the beaver were over-harvested was closely related to the prices Natives were receiving, but important as well was the nature of Native property rights to the resource.

Harvests in the Fort Albany and York Factory Regions

That beaver populations along the Eastern seaboard of North America were depleted as the fur trade advanced is widely accepted. In fact the search for new sources of supply further west, including the region of Hudson Bay, has been attributed in part to dwindling beaver stocks in areas where the fur trade had long been established. Although there has been little discussion of the impact that the Hudson’s Bay Company and the French, who traded in the region of Hudson Bay, were having on the beaver stock, the remarkably complete records of the Hudson’s Bay Company provide the basis for reasonable inferences about depletion. From 1700 there is an uninterrupted annual series of fur returns at Fort Albany; the fur returns from York Factory begin in 1716 (see Figure 1).

The beaver returns at Fort Albany and York Factory for the period 1700 to 1770 are described in Figure 2. At Fort Albany the number of beaver skins over the period 1700 to 1720 averaged roughly 19,000, with wide year-to-year fluctuations; the range was about 15,000 to 30,000. After 1720 and until the late 1740s average returns declined by about 5,000 skins, and remained within the somewhat narrower range of roughly 10,000 to 20,000 skins. The period of relative stability was broken in the final years of the 1740s. In 1748 and 1749, returns increased to an average of nearly 23,000. Following these unusually strong years, the trade fell precipitously, so that in 1756 fewer than 6,000 beaver pelts were received. There was a brief recovery in the early 1760s, but by the end of the decade trade had fallen below even the mid-1750s levels. In 1770, Fort Albany took in just 3,600 beaver pelts. This pattern – unusually large returns in the late 1740s and low returns thereafter – indicates that the beaver in the Fort Albany region were being seriously depleted.

Figure 2 Beaver Traded at Fort Albany and York Factory 1700 – 1770

Source: Carlos and Lewis, 1993.

The beaver returns at York Factory from 1716 to 1770, also described in Figure 2, have some of the key features of the Fort Albany data. After some low returns early on (from 1716 to 1720), the number of beaver pelts increased to an average of 35,000. There were extraordinary returns in 1730 and 1731, when the average was 55,600 skins, but beaver receipts then stabilized at about 31,000 over the remainder of the decade. The first break in the pattern came in the early 1740s, shortly after the French established several trading posts in the area. Surprisingly perhaps, given the increased competition, trade in beaver pelts at the Hudson’s Bay Company post increased to an average of 34,300 over the period 1740 to 1743. Indeed, the 1742 return of 38,791 skins was the largest since the French had established any posts in the region. The returns in 1745 were also strong, but after that year the trade in beaver pelts began a decline that continued through to 1770. Average returns over the rest of the decade were 25,000; the average during the 1750s was 18,000, and just 15,500 in the 1760s. The pattern of beaver returns at York Factory – high returns in the early 1740s followed by a large decline – strongly suggests that, as in the Fort Albany hinterland, the beaver population had been greatly reduced.

The overall carrying capacity of any region, or the size of the animal stock, depends on the nature of the terrain and the underlying biological determinants such as birth and death rates. A standard relationship between the annual harvest and the animal population is the Lotka-Volterra logistic, commonly used in natural resource models to relate the natural growth of a population to the size of that population:
F(X) = aX − bX², a, b > 0    (1)

where X is the population, F(X) is the natural growth in the population, a is the maximum proportional growth rate of the population, and b = a/X̄, where X̄ is the upper limit to population size. The population dynamics of the exploited species depend on the harvest each period:

ΔX = aX − bX² − H    (2)

where ΔX is the annual change in the population and H is the harvest. The choice of the growth parameter a and the maximum population X̄ is central to the population estimates; the values used have been based largely on estimates from the beaver ecology literature and Ontario provincial field reports of beaver densities (Carlos and Lewis, 1993).

Simulations based on equation 2 suggest that, until the 1730s, beaver populations remained at levels roughly consistent with maximum sustained yield management, sometimes referred to as the biological optimum. But after the 1730s there was a decline in beaver stocks to about half the maximum sustained yield levels. The cause of the depletion was closely related to what was happening in Europe. There, buoyant demand for felt hats and dwindling local fur supplies resulted in much higher prices for beaver pelts. These higher prices, in conjunction with the resulting competition from the French in the Hudson Bay region, led the Hudson’s Bay Company to offer much better terms to Natives who came to their trading posts (Carlos and Lewis, 1999).
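To illustrate the mechanics behind these simulations, the sketch below iterates equation (2) under assumed parameter values; the numbers are illustrative only, not the calibrated estimates of Carlos and Lewis (1993).

```python
# Minimal sketch of the harvest-depletion dynamics of equation (2).
# All parameter values are illustrative assumptions, not the calibrated
# estimates used by Carlos and Lewis (1993).

a = 0.25           # assumed maximum proportional growth rate of the population
X_MAX = 100_000    # assumed upper limit on the population (X-bar)
b = a / X_MAX

def natural_growth(x):
    """F(X) = aX - bX^2, the logistic growth of equation (1)."""
    return a * x - b * x ** 2

# F(X) peaks at X = X_MAX / 2: the maximum-sustained-yield (MSY) stock.
msy_stock = X_MAX / 2
msy_harvest = natural_growth(msy_stock)

def simulate(x0, harvest, years):
    """Iterate equation (2): X(t+1) = X(t) + F(X(t)) - H."""
    x = x0
    for _ in range(years):
        x = max(x + natural_growth(x) - harvest, 0.0)
    return x

# A harvest held even 10 percent above the MSY steadily depletes the stock,
# and the decline accelerates as the population falls below the MSY level.
final_stock = simulate(x0=msy_stock, harvest=1.1 * msy_harvest, years=30)
print(f"MSY harvest: {msy_harvest:.0f} animals per year")
print(f"Stock after 30 years of over-harvesting: {final_stock:.0f}")
```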

Figure 3 reports a price index for furs at Fort Albany and at York Factory. The index represents a measure of what Natives received in European goods for their furs. At Fort Albany, fur prices were close to 70 from 1713 to 1731, but in 1732, in response to higher European fur prices and the entry of la Vérendrye, an important French trader, the price jumped to 81. After that year, prices continued to rise. The pattern at York Factory was similar. Although prices were high in the early years when the post was being established, beginning in 1724 the price settled down to about 70. At York Factory, the jump in price came in 1738, which was the year la Vérendrye set up a trading post in the York Factory hinterland. Prices then continued to increase. It was these higher fur prices that led to over-harvesting and, ultimately, a decline in beaver stocks.

Figure 3 Price Index for Furs: Fort Albany and York Factory, 1713 – 1770

Source: Carlos and Lewis, 2001.

Property Rights Regimes

An increase in the price paid to Native hunters did not have to lead to a decline in the animal stocks, because Indians could have chosen to limit their harvesting. Why they did not was closely related to their system of property rights. One can classify property rights along a spectrum with, at one end, open access, where anyone can hunt or fish, and at the other, complete private property, where a sole owner has full control over the resource. In between lies a range of property rights regimes with access controlled by a community or a government, and where individual members of the group do not necessarily have private property rights. Open access creates a situation where there is less incentive to conserve, because animals not harvested by a particular hunter will be available to other hunters in the future. Thus the closer a system is to open access, the more likely it is that the resource will be depleted.

Across aboriginal societies in North America, one finds a range of property rights regimes. Native Americans did have a concept of trespass and of property, but individual and family rights to resources were not absolute. Under what is sometimes referred to as the Good Samaritan principle (McManus, 1972), outsiders were not permitted to harvest furs on another’s territory for trade, but they were allowed to hunt game and even beaver for food. Combined with this limitation to private property was an Ethic of Generosity that included liberal gift-giving, whereby any visitor to one’s encampment was to be supplied with food and shelter.

Why a social norm such as gift-giving or the related Good Samaritan principle emerged was due to the nature of the aboriginal environment. The primary objective of aboriginal societies was survival. Hunting was risky, and so rules were put in place that would reduce the risk of starvation. As Berkes et al. (1989, p. 153) note, for such societies: “all resources are subject to the overriding principle that no one can prevent a person from obtaining what he needs for his family’s survival.” Such actions were reciprocal and, especially in the sub-arctic world, served as an insurance mechanism. These norms, however, also reduced the incentive to conserve the beaver and other animals that were part of the fur trade. The combination of these norms and the increasing price paid to Native traders led to the large harvests in the 1740s and ultimately depletion of the animal stock.

The Trade in European Goods

Indians were the primary agents in the North American commercial fur trade. It was they who hunted the animals, and transported and traded the pelts or skins to European intermediaries. The exchange was voluntary. In return for their furs, Indians obtained both access to an iron technology to improve production and access to a wide range of new consumer goods. It is important to recognize, however, that although the European goods were new to aboriginals, the concept of exchange was not. The archaeological evidence indicates an extensive trade between Native tribes in the north and south of North America prior to European contact.

The extraordinary records of the Hudson’s Bay Company allow us to form a clear picture of what Indians were buying. Table 2 lists the goods received by Natives at York Factory, which was by far the largest of the Hudson’s Bay Company trading posts. As is evident from the table, the commercial trade involved more than beads and baubles, or even guns and alcohol; rather, Native traders were receiving a wide range of products that improved their ability to meet their subsistence requirements and allowed them to raise their living standards. The items have been grouped by use. The producer goods category was dominated by firearms, including guns, shot and powder, but also includes knives, awls and twine. The Natives traded for guns of different lengths. The 3-foot gun was used mainly for waterfowl and in heavily forested areas where game could be shot at close range. The 4-foot gun was more accurate and suitable for open spaces. In addition, the 4-foot gun could play a role in warfare. Maintaining guns in the harsh sub-arctic environment was a serious problem, and ultimately, the Hudson’s Bay Company was forced to send gunsmiths to its trading posts to assess quality and help with repairs. Kettles and blankets were the main items in the “household goods” category. These goods probably became necessities to the Natives who adopted them. Then there were the luxury goods, which have been divided into two broad categories: “tobacco and alcohol,” and “other luxuries,” dominated by cloth of various kinds (Carlos and Lewis, 2001; 2002).

Table 2 Value of Goods Received at York Factory in 1740 (made beaver)

We have much less information about the French trade. The French are reported to have exchanged similar items, although given their higher transport costs, both the furs received and the goods traded tended to be higher in value relative to weight. The Europeans, it might be noted, supplied no food to the trade in the eighteenth century. In fact, Indians helped provision the posts with fish and fowl. This role of food purveyor grew in the nineteenth century as groups known as the “home guard Cree” came to live around the posts; as well, pemmican, supplied by Natives, became an important source of nourishment for Europeans involved in the buffalo hunts.

The value of the goods listed in Table 2 is expressed in terms of the unit of account, the made beaver, which the Hudson’s Bay Company used to record its transactions and determine the rate of exchange between furs and European goods. The price of a prime beaver pelt was 1 made beaver, and every other type of fur and good was assigned a price based on that unit. For example, a marten (a type of mink) was ⅓ of a made beaver, a blanket was 7 made beaver, a gallon of brandy, 4 made beaver, and a yard of cloth, 3½ made beaver. These were the official prices at York Factory. Thus Indians, who traded at these prices, received, for example, a gallon of brandy for four prime beaver pelts, two yards of cloth for seven beaver pelts, and a blanket for 21 marten pelts. This was barter trade in that no currency was used; and although the official prices implied certain rates of exchange between furs and goods, Hudson’s Bay Company factors were encouraged to trade at rates more favorable to the Company. The actual rates, however, depended on market conditions in Europe and, most importantly, the extent of French competition in Canada. Figure 3 illustrates the rise in the price of furs at York Factory and Fort Albany in response to higher beaver prices in London and Paris, as well as to a greater French presence in the region (Carlos and Lewis, 1999). The increase in price also reflects the bargaining ability of Native traders during periods of direct competition between the English and French and later the Hudson’s Bay Company and the Northwest Company. At such times, the Native traders would play both parties off against each other (Ray and Freeman, 1978).
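A small worked example of this made-beaver accounting, using the official prices quoted above (the layout and function are illustrative, not the Company’s own bookkeeping):

```python
# Official York Factory prices in made beaver (MB), as quoted in the text.
PRICES_MB = {
    "prime beaver pelt": 1.0,
    "marten pelt": 1.0 / 3.0,
    "blanket": 7.0,
    "gallon of brandy": 4.0,
    "yard of cloth": 3.5,
}

def barter_rate(good, fur):
    """Number of pelts of `fur` needed to buy one unit of `good`."""
    return PRICES_MB[good] / PRICES_MB[fur]

print(round(barter_rate("gallon of brandy", "prime beaver pelt"), 2))   # 4.0 beaver pelts
print(round(barter_rate("blanket", "marten pelt"), 2))                  # 21.0 marten pelts
print(round(2 * barter_rate("yard of cloth", "prime beaver pelt"), 2))  # 7.0 beaver for two yards
```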

The records of the Hudson’s Bay Company provide us with a unique window onto the trading process, including the bargaining ability of Native traders, which is evident in the range of commodities received. Natives bought only goods they wanted. Clear from the Company records is that it was the Natives who largely determined the nature and quality of those goods. The records also tell us how income from the trade was being allocated. The breakdown differed by post and varied over time; but, for example, in 1740 at York Factory, the distribution was: producer goods – 44 percent; household goods – 9 percent; alcohol and tobacco – 24 percent; and other luxuries – 23 percent. An important implication of the trade data is that, like many Europeans and most American colonists, Native Americans were taking part in the consumer revolution of the eighteenth century (de Vries, 1993; Shammas, 1993). In addition to necessities, they were consuming a remarkable variety of luxury products. Cloth, including baize, duffel, flannel, and gartering, was by far the largest class, but they also purchased beads, combs, looking glasses, rings, shirts, and vermillion, among a much longer list. Because these items were heterogeneous in nature, the Hudson’s Bay Company’s head office went to great lengths to satisfy the specific tastes of Native consumers. Attempts were also made, not always successfully, to introduce new products (Carlos and Lewis, 2002).

Perhaps surprising, given the emphasis that has been placed on it in the historical literature, was the comparatively small role of alcohol in the trade. At York Factory, Native traders received in 1740 a total of 494 gallons of brandy and “strong water,” which had a value of 1,976 made beaver. More than twice this amount was spent on tobacco in that year, nearly five times as much was spent on firearms, twice as much on cloth, and more was spent on blankets and kettles than on alcohol. Thus, brandy, although a significant item of trade, was by no means a dominant one. In addition, alcohol could hardly have created serious social problems during this period. The amount received would have allowed for no more than ten two-ounce drinks per year for the adult Native population living in the region.

The Labor Supply of Natives

Another important question can be addressed using the trade data. Were Natives “lazy and improvident,” as they have been described by some contemporaries, or were they “industrious,” like the American colonists and many Europeans? Central to answering this question is how Native groups responded to the price of furs, which began rising in the 1730s. Much of the literature argues that Indian trappers reduced their effort in response to higher fur prices; that is, they had backward-bending supply curves of labor. The view is that Natives had a fixed demand for European goods that, at higher fur prices, could be met with fewer furs, and hence less effort. Although widely cited, this argument does not stand up. Not only were higher fur prices accompanied by larger total harvests of furs in the region, but the pattern of Native expenditure also points to a scenario of greater effort. From the late 1730s to the 1760s, as the price of furs rose, the share of expenditure on luxury goods increased dramatically (see Figure 4). Thus Natives were not content simply to accept their good fortune by working less; rather, they seized the opportunity provided by the strong fur market, increasing their effort in the commercial sector and thereby dramatically augmenting their purchases of those goods, namely the luxuries, that could raise their living standards.
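A toy calculation with hypothetical numbers shows what the fixed-demand, backward-bending-supply hypothesis would imply, and why the observed pattern contradicts it:

```python
# Hypothetical illustration of the backward-bending supply argument.
# Under a FIXED demand for European goods, a higher price per pelt
# would mean fewer pelts (less effort) are needed to reach the target.

TARGET_GOODS_MB = 100.0   # assumed fixed demand for goods, in made beaver

for price_index in (0.7, 1.0, 1.4):       # price received per pelt (index)
    pelts_needed = TARGET_GOODS_MB / price_index
    print(f"price {price_index:.1f}: {pelts_needed:.0f} pelts suffice")

# The Hudson's Bay Company data show the opposite pattern: as fur prices
# rose, total harvests and the luxury share of expenditure rose too,
# implying Natives expanded effort rather than holding demand fixed.
```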

Figure 4 Native Expenditure Shares at York Factory 1716 – 1770

Source: Carlos and Lewis, 2001.

A Note on the Non-commercial Sector

As important as the fur trade was to Native Americans in the sub-arctic regions of Canada, commerce with the Europeans comprised just one, relatively small, part of their overall economy. Exact figures are not available, but the traditional sectors – hunting, gathering, food preparation and, to some extent, agriculture – must have accounted for at least 75 to 80 percent of Native labor during these decades. Nevertheless, despite the limited time spent in commercial activity, the fur trade had a profound effect on the nature of the Native economy and Native society. The introduction of European producer goods, such as guns, and household goods, mainly kettles and blankets, changed the way Native Americans achieved subsistence; and the European luxury goods expanded the range of products that allowed them to move beyond subsistence. Most importantly, the fur trade connected Natives to Europeans in ways that affected how and how much they chose to work, where they chose to live, and how they exploited the resources on which the trade and their survival were based.

References

Berkes, Fikret, David Feeny, Bonnie J. McCay, and James M. Acheson. “The Benefits of the Commons.” Nature 340 (July 13, 1989): 91-93.

Braund, Kathryn E. Holland. Deerskins and Duffels: The Creek Indian Trade with Anglo-America, 1685-1815. Lincoln: University of Nebraska Press, 1993.

Carlos, Ann M., and Elizabeth Hoffman. “The North American Fur Trade: Bargaining to a Joint Profit Maximum under Incomplete Information, 1804-1821.” Journal of Economic History 46, no. 4 (1986): 967-86.

Carlos, Ann M., and Frank D. Lewis. “Indians, the Beaver and the Bay: The Economics of Depletion in the Lands of the Hudson’s Bay Company, 1700-1763.” Journal of Economic History 53, no. 3 (1993): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Property Rights, Competition and Depletion in the Eighteenth-Century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann M., and Frank D. Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company.” In The Other Side of the Frontier: Economic Explorations in Native American History, edited by Linda Barrington, 131-149. Boulder, CO: Westview Press, 1999.

Carlos, Ann M., and Frank D. Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 2 (2002): 285-317.

Carlos, Ann M., and Stephen Nicholas. “Agency Problems in Early Chartered Companies: The Case of the Hudson’s Bay Company.” Journal of Economic History 50, no. 4 (1990): 853-75.

Clarke, Fiona. Hats. London: Batsford, 1982.

Crean, J. F. “Hats and the Fur Trade.” Canadian Journal of Economics and Political Science 28, no. 3 (1962): 373-386.

Corner, David. “The Tyranny of Fashion: The Case of the Felt-Hatting Trade in the Late Seventeenth and Eighteenth Centuries.” Textile History 22, no. 2 (1991): 153-178.

de Vries, Jan. “Between Purchasing Power and the World of Goods: Understanding the Household Economy in Early Modern Europe.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 85-132. London: Routledge, 1993.

Ginsburg, Madeleine. The Hat: Trends and Traditions. London: Studio Editions, 1990.

Haeger, John D. John Jacob Astor: Business and Finance in the Early Republic. Detroit: Wayne State University Press, 1991.

Harte, N.B. “The Economics of Clothing in the Late Seventeenth Century.” Textile History 22, no. 2 (1991): 277-296.

Heidenreich, Conrad E., and Arthur J. Ray. The Early Fur Trade: A Study in Cultural Interaction. Toronto: McClelland and Stewart, 1976.

Helm, June, ed. Handbook of North American Indians 6, Subarctic. Washington: Smithsonian, 1981.

Innis, Harold. The Fur Trade in Canada (revised edition). Toronto: University of Toronto Press, 1956.

Krech III, Shepard. The Ecological Indian: Myth and History. New York: Norton, 1999.

Lawson, Murray G. Fur: A Study in English Mercantilism. Toronto: University of Toronto Press, 1943.

McManus, John. “An Economic Analysis of Indian Behavior in the North American Fur Trade.” Journal of Economic History 32, no. 1 (1972): 36-53.

Ray, Arthur J. Indians in the Fur Trade: Their Role as Hunters, Trappers and Middlemen in the Lands Southwest of Hudson Bay, 1660-1870. Toronto: University of Toronto Press, 1974.

Ray, Arthur J. and Donald Freeman. “Give Us Good Measure”: An Economic Analysis of Relations between the Indians and the Hudson’s Bay Company before 1763. Toronto: University of Toronto Press, 1978.

Ray, Arthur J. “Bayside Trade, 1720-1780.” In Historical Atlas of Canada 1, edited by R. Cole Harris, plate 60. Toronto: University of Toronto Press, 1987.

Rich, E. E. Hudson’s Bay Company, 1670-1870. 2 vols. Toronto: McClelland and Stewart, 1960.

Rich, E.E. “Trade Habits and Economic Motivation among the Indians of North America.” Canadian Journal of Economics and Political Science 26, no. 1 (1960): 35-53.

Shammas, Carole. “Changes in English and Anglo-American Consumption from 1550-1800.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 177-205. London: Routledge, 1993.

Wien, Thomas. “Selling Beaver Skins in North America and Europe, 1720-1760: The Uses of Fur-Trade Imperialism.” Journal of the Canadian Historical Association, New Series 1 (1990): 293-317.

The Economic History of the International Film Industry

Gerben Bakker, University of Essex

Introduction

Like other major innovations such as the automobile, electricity, chemicals and the airplane, cinema emerged in most Western countries at the same time. As the first form of industrialized mass-entertainment, it was all-pervasive. From the 1910s onwards, each year billions of cinema-tickets were sold and consumers who did not regularly visit the cinema became a minority. In Italy, today hardly significant in international entertainment, the film industry was the fourth-largest export industry before the First World War. In the depression-struck U.S., film was the tenth most profitable industry, and in 1930s France it was the fastest-growing industry, followed by paper and electricity, while in Britain the number of cinema-tickets sold rose to almost one billion a year (Bakker 2001b). Despite this economic significance, despite its rapid emergence and growth, despite its pronounced effect on the everyday life of consumers, and despite its importance as an early case of the industrialization of services, the economic history of the film industry has hardly been examined.

This article limits itself to the economic development of the industry. It discusses just a few countries, mainly the U.S., Britain and France, and only insofar as they illustrate the economic issues it addresses; it does not give complete histories of the industries in those countries. Given the nature of an encyclopedia article, this entry cannot do justice to developments in each and every country. It also limits itself to the evolution of the Western film industry, which has been and still is the largest film industry in the world in revenue terms, although this may well change in the future.

Before Cinema

In the late eighteenth century most consumers enjoyed their entertainment in an informal, haphazard and often non-commercial way. When making a trip they could suddenly meet a roadside entertainer, and their villages were often visited by traveling showmen, clowns and troubadours. Seasonal fairs attracted a large variety of musicians, magicians, dancers, fortune-tellers and sword-swallowers. Only a few large cities harbored legitimate theaters, strictly regulated by the local and national rulers. This world was torn apart in two stages.

First, most Western countries started to deregulate their entertainment industries, enabling many more entrepreneurs to enter the business and make far larger investments, for example in circuits of fixed stone theaters. The U.S. was the first, liberalizing in the late eighteenth century. Most European countries followed during the nineteenth century. Britain, for example, deregulated in the mid-1840s, and France in the late 1860s. The result was that commercial, formalized and standardized live entertainment emerged and destroyed a fair part of traditional entertainment. The combined effect of liberalization, innovation and changes in business organization made the industry grow rapidly throughout the nineteenth century, and integrated local and regional entertainment markets into national ones. By the end of the nineteenth century, integrated national entertainment industries and markets had maximized the productivity attainable through process innovations. Creative inputs, for example, circulated swiftly along the venues – often in dedicated trains – coordinated by centralized booking offices, maximizing capital and labor utilization.

At the end of the nineteenth century, in the era of the second industrial revolution, falling working hours, rising disposable income, increasing urbanization, rapidly expanding transport networks and strong population growth resulted in a sharp rise in the demand for entertainment. The effect of this boom was further rapid growth of live entertainment through process innovations. At the turn of the century, the production possibilities of the existing industry configuration were fully realized and further innovation within the existing live-entertainment industry could only increase productivity incrementally.

At this moment, in a second stage, cinema emerged and in its turn destroyed this world, by industrializing it into the modern world of automated, standardized, tradable mass-entertainment, integrating the national entertainment markets into an international one.

Technological Origins

In the early 1890s, Thomas Edison introduced the kinetoscope, which enabled the shooting of films and their playback in coin-operated machines for individual viewing. In the mid-1890s, the Lumière brothers added projection to the invention and started to show films in theater-like settings. Cinema reconfigured several technologies that were all available from the late 1880s onwards: photography (1830s), taking negative pictures and printing positives (1880s), roll films (1850s), celluloid (1868), high-sensitivity photographic emulsion (late 1880s), projection (1645) and movement dissection/persistence of vision (1872).

After the preconditions for motion pictures had been established, cinema technology itself was invented. Already in 1860/1861 patents were filed for viewing and projecting motion pictures, but not for the taking of pictures. The scientist Étienne-Jules Marey completed the first working model of a film camera in 1888 in Paris. Edison visited Georges Demeney in 1888 and saw his films. In 1891, Edison filed an American patent for a film camera, which had a different moving mechanism than the Marey camera. In 1890, the Englishman Friese-Greene presented a working camera to a group of enthusiasts. In 1893 the Frenchman Demeney filed a patent for a camera. Finally, the Lumière brothers filed a patent for their type of camera and for projection in February 1895. In December of that year they gave the first projection for a paying audience. They were followed in February 1896 by the Englishman Robert W. Paul. Paul also invented the ‘Maltese cross,’ a device which is still used in film cameras today. It is instrumental in the smooth rolling of the film, and in the correcting of the lens for the space between the exposures (Michaelis 1958; Musser 1990: 65-67; Low and Manvell 1948).

Three characteristics stand out in this innovation process. First, it was an international process of invention, taking place in several countries at the same time, with the inventors building upon and improving each other’s inventions. This connects to Joel Mokyr’s observation that in the nineteenth century communication became increasingly important to innovation, and many innovations depended on international communication between inventors (Mokyr 1990: 123-124). Second, it was what Mokyr calls a typical nineteenth-century invention, in that it was a smart combination of many existing technologies. Many different innovations in the technologies that it combined had been necessary to make the innovation of cinema possible. Third, cinema was a major innovation in the sense that it was quickly and universally adopted throughout the Western world, quicker than the steam engine, the railroad or the steamship.

The Emergence of Cinema

For about the first ten years of its existence, cinema in the United States and elsewhere was mainly a trick and a gadget. Before 1896 the coin-operated kinetoscope of Edison was present at fairs and in entertainment venues. Spectators had to drop a coin in the machine and peer through a viewer to see the film. The first projections, from 1896 onwards, attracted large audiences. Lumière had a group of operators who traveled around the world with the cinematograph and showed the pictures in theaters. After a few years films became a part of the program in vaudeville and sometimes in theater as well. At the same time traveling cinema emerged: operators who traveled around with a tent or mobile theater and set up shop for a short time in towns and villages. These differed from the Lumière operators and others in that they catered to general, popular audiences, while the former offered more upscale parts of theater programs, or a special program for the bourgeoisie (Musser 1990: 140, 299, 417-20).

This whole era, which in the U.S. lasted up to about 1905, was a time in which cinema seemed just one of many new fashions, and it was not at all certain that it would persist rather than be forgotten or marginalized quickly, as happened to the contemporary boom in skating rinks and bowling alleys. This changed when Nickelodeons, fixed cinemas with a few hundred seats, emerged and quickly spread all over the country between 1905 and 1907. From this time onwards cinema became an industry in its own right, distinct from other entertainments, since it had its own buildings and its own advertising. The emergence of fixed cinemas coincided with a huge growth phase in the business in general; film production increased greatly, and film distribution developed into a specialized activity, often managed by large film producers. However, until about 1914, besides the cinemas, films also continued to be combined with live entertainment in vaudeville and other theaters (Musser 1990; Allen 1980).

Figure 1 shows the total length of negatives released on the U.S., British and French film markets. In the U.S., the total released negative length increased from 38,000 feet in 1897, to two million feet in 1910, to twenty million feet in 1920. Clearly, the initial U.S. growth between 1893 and 1898 was very strong: the market increased by over three orders of magnitude, but from an infinitesimal initial base. Between 1898 and 1906, far less growth took place, and in this period it may well have looked like the cinematograph would remain a niche product, a gimmick shown at fairs and interspersed with live entertainment. From 1907, however, a new, sharp, sustained growth phase started: the market increased by a further two orders of magnitude – and from a far higher base this time. At the same time, the average film length increased considerably, from eighty feet in 1897 to seven hundred feet in 1910 to three thousand feet in 1920. One reel of film held about 1,500 feet and had a playing time of about fifteen minutes.

Between the mid-1900s and 1914 the British and French markets were growing at roughly the same rates as the U.S. one. World War I constituted a discontinuity: from 1914 onwards European growth rates are far lower than those in the U.S.

The prices the Nickelodeons charged were between five and ten cents, for which spectators could stay as long as they liked. Around 1910, when larger cinemas emerged in hot city-center locations, more closely resembling theaters than the small and shabby Nickelodeons, prices increased. They varied from one dollar to one dollar and a half at ‘first-run’ cinemas down to five cents at sixth-run neighborhood cinemas (see also Sedgwick 1998).

Figure 1

Total Released Length on the U.S., British and French Film Markets (in Meters), 1893-1922


Note: The length refers to the total length of original negatives that were released commercially.

See Bakker 2005, appendix I for the method of estimation and for a discussion of the sources.

Source: Bakker 2001b; American Film Institute Catalogue, 1893-1910; Motion Picture World, 1907-1920.

The Quality Race

Once Nickelodeons and other types of cinemas were established, the industry entered a new stage with the emergence of the feature film. Before 1915, cinemagoers saw a succession of many different films, each between one and fifteen minutes, of varying genres such as cartoons, newsreels, comedies, travelogues, sports films, ‘gymnastics’ pictures and dramas. After the mid-1910s, going to the cinema meant watching a feature film, a heavily promoted dramatic film with a length that came closer to that of a theater play, based on a famous story and featuring famous stars. Shorts remained only as side dishes.

The feature film emerged when cinema owners discovered that films of far higher quality and length enabled them to charge far higher ticket prices and draw far more people into their cinemas, resulting in far higher profits, even if cinemas needed to pay far more for the film rental. The discovery that consumers would turn their backs on packages of shorts (newsreels, sports, cartoons and the like) as the quality of features increased set in motion a quality race between film producers (Bakker 2005). They all started investing heavily in portfolios of feature films, spending large sums on well-known stars, rights to famous novels and theater plays, extravagant sets, star directors, and the like. A contributing factor in the U.S. was the demise of the Motion Picture Patents Company (MPPC), a cartel that tried to monopolize film production and distribution. Between about 1908 and 1912 the Edison-backed MPPC had restricted quality artificially by setting limits on film length and film rental prices. When William Fox and the Department of Justice started legal action in 1912, the power of the MPPC quickly waned and the ‘independents’ came to dominate the industry.

In the U.S., the motion picture industry became the internet of the 1910s: when companies put the words ‘motion pictures’ in their IPO prospectuses, investors would flock to them. Many of these companies went bankrupt, were dissolved or were taken over. A few survived and became the Hollywood studios, most of which we still know today: Paramount, Metro-Goldwyn-Mayer (MGM), Warner Brothers, Universal, Radio-Keith-Orpheum (RKO), Twentieth Century-Fox, Columbia and United Artists.

A necessary condition for the quality race was some form of vertical integration. In the early film industry, films were sold outright. This meant that the cinema owner who bought a film would receive all the marginal revenues the film generated. In the film industry, these revenues were largely marginal profits, as most costs were fixed, so an additional film ticket sold was pure (gross) profit. Because the producer did not receive any of these revenues, at the margin there was little incentive to increase quality. When outright sales made way for the rental of films to cinemas for a fixed fee, producers gained a stronger incentive to increase a film’s quality, because a better film would generate more rentals (Bakker 2005). The incentive strengthened further when percentage contracts were introduced for large city-center cinemas, and when producer-distributors actually started to buy large cinemas. The changing contractual relationship between cinemas and producers was paralleled between producers and distributors.

The Decline and Fall of the European Film Industry

Because the quality race happened when Europe was at war, European companies could not participate in the escalation of quality (and production costs) discussed above. This does not mean all of them were in crisis. Many made high profits during the war from newsreels, other short films, propaganda films and distribution. They also were able to participate in the shift towards the feature film, substantially increasing output in the new genre during the war (Figure 2). However, it was difficult for them to secure the massive amount of venture capital necessary to participate in the quality race while their countries were at war. Even if they had managed to raise it, it might have been difficult to justify such lavish expenditures when people were dying in the trenches.

Yet a few European companies did participate in the escalation phase. The Danish Nordisk company invested heavily in long feature-type films, and bought cinema chains and distributors in Germany, Austria and Switzerland. Its strategy ended when the German government forced it to sell its German assets to the newly founded UFA company, in return for a 33 percent minority stake. The French Pathé company was also one of the largest film producers in the U.S. It set up its own American distribution network and invested in heavily advertised serials (films in weekly installments), expecting that these would become the industry standard. As it turned out, Pathé bet on the wrong horse and was overtaken by competitors riding high on the feature film. It eventually switched to features and remained a significant company. In the early 1920s, its U.S. assets were sold to Merrill Lynch and eventually became part of RKO.

Figure 2

Number of Feature Films Produced in Britain, France and the U.S., 1911-1925

(semi-logarithmic scale)


Source: Bakker 2005 [American Film Institute Catalogue; British Film Institute; Screen Digest; Globe, World Film Index, Chirat, Longue métrage.]

Because it could not participate in the quality race, the European film industry started to decline in relative terms. Its market share at home and abroad diminished substantially (Figure 3). In the 1900s European companies supplied at least half of the films shown in the U.S. In the early 1910s this dropped to about twenty percent. In the mid-1910s, when the feature film emerged, the European market share declined to nearly undetectable levels.

By the 1920s, most large European companies had given up film production altogether. Pathé and Gaumont sold their U.S. and international businesses, left film making and focused on distribution in France. Éclair, their major competitor, went bankrupt. Nordisk continued as an insignificant Danish film company and eventually collapsed into receivership. The eleven largest Italian film producers formed a trust, which failed dismally, and one by one they fell into financial disaster. The famous British producer Cecil Hepworth went bankrupt. By late 1924, hardly any films were being made in Britain. American films were shown everywhere.

Figure 3

Market Shares by National Film Industries, U.S., Britain, France, 1893-1930


Note: EU/US is the share of European companies on the U.S. market, EU/UK is the share of European companies on the British market, and so on. For further details see Bakker 2005.

The Rise of Hollywood

Once they had lost out, it was difficult for European companies to catch up. First of all, since the sharply rising film production costs were fixed and sunk, market size became essential, as it determined the amount of money that could be spent on a film. Exactly at this crucial moment, the European film market disintegrated, first because of war and later because of protectionism. Market size was further diminished by heavy taxes on cinema tickets, which sharply increased the price of cinema relative to live entertainment.

Second, the emerging Hollywood studios benefited from first mover advantages in feature film production: they owned international distribution networks, they could offer cinemas large portfolios of films at a discount (block-booking), sometimes before they were even made (blind-bidding), the quality gap with European features was so large it would be difficult to close in one go, and, finally, the American origin of the feature films in the 1910s had established U.S. films as a kind of brand, leaving consumers with high switching costs to try out films from other national origins. It would be extremely costly for European companies to re-enter international distribution, produce large portfolios, jump-start film quality, and establish a new brand of films – all at the same time (Bakker 2005).

A third factor was the rise of Hollywood as a production location. The large existing American Northeast coast film industry and the newly emerging film industry in Florida declined as U.S. film companies located in Southern California. First of all, the ‘sharing’ of inputs facilitated knowledge spillovers and allowed higher returns. The studios lowered costs because creative inputs had less down-time, needed to travel less, could take part in many try-outs to achieve optimal casting, and could easily be rented out to competitors when not immediately needed. Hollywood also attracted new creative inputs through non-monetary means: even more than money, creative inputs wanted to maximize fame and professional recognition. For an actress, an offer to work with the world’s best directors, costume designers, lighting specialists and make-up artists was difficult to decline.

Second, a thick market for specialized supply and demand existed. Companies could easily rent out excess studio capacity (during the night, for example, B-films were made), and a producer was quite likely to find the highly specific products or services needed somewhere in Hollywood (Christopherson and Storper 1987, 1989). While a European industrial ‘film’ district might have been competitive, and might even have had a lower overall cost/quality ratio than Hollywood, the first European major would have faced a substantially higher cost/quality ratio (lacking external economies) and would therefore not easily enter (see, for example, Krugman and Obstfeld 2003, chapter 6). If entry did happen, the Hollywood studios could and would buy successful creative inputs away, since they could realize higher returns on these inputs, resulting in American films with an even higher perceived quality, thus perpetuating the situation.

Sunlight, climate and the variety of landscape in California were of course favorable to film production, but were not unique. Locations such as Florida, Italy, Spain and Southern France offered similar conditions.

The Coming of Sound

In 1927, sound films were introduced. The main innovator was Warner Brothers, backed by the bank Goldman, Sachs, which went so far as to parachute a vice-president into Warner. Although many other sound systems had been tried and marketed from the 1900s onwards, the electrical microphone, invented at Bell Labs in the mid-1920s, sharply increased the quality of sound films and made the industry’s transformation possible. Sound increased the interest in the film industry of large industrial companies such as General Electric, Western Electric and RCA, as well as that of banks eager to finance the new technology, such as the Bank of America and Goldman, Sachs.

In economic terms, sound represented an exogenous jump in sunk costs (and product quality) that did not affect the basic industry structure very much: the industry was already highly concentrated before sound, and the European, New York/New Jersey and Florida film industries were already shattered. What sound did do was industrialize away most of the musicians and entertainers who had accompanied the silent films, especially those working in the smaller cinemas. This led to massive unemployment among musicians (see, for example, Gomery 1975; Kraft 1996).

The effect of sound film in Europe was to increase the domestic revenues of European films, because they became more culture-specific once they were in the local language, but at the same time it decreased the foreign revenues European films received (Bakker 2004b). It is difficult to assess the impact of sound film completely, as it coincided with increased protection; shortly before the coming of sound, many European countries set quotas on the number of foreign films that could be shown. In France, for example, where sound was widely adopted from 1930 onwards, the U.S. share of films dropped from eighty to fifty percent between 1926 and 1929, mainly as a result of protectionist legislation. During the 1930s, the share temporarily declined to about forty percent, and then hovered between fifty and sixty percent. In short, protectionism decreased the U.S. market share and increased the French market shares of French and other European films, while sound film increased the French market share, mostly at the expense of other European films and less so at the expense of U.S. films.

In Britain, the share of releases of American films declined from eighty percent in 1927 to seventy percent in 1930, while British films increased from five percent to twenty percent, exactly in line with the requirements of the 1927 quota act. After 1930, the American share remained roughly stable. This suggests that sound film did not have a large influence, and that the share of U.S. films was mainly brought down by the introduction of the Cinematograph Films Act in 1927, which set quotas for British films. Nevertheless, revenue data, which are unfortunately lacking, would be needed to give a definitive answer, as little is known about effects on the revenue per film.

The Economics of the Interwar Film Trade

Because film production costs were mainly fixed and sunk, international sales and distribution were important: they represented additional sales with little additional cost to the producer, since the film itself had already been made. Films had special characteristics that necessitated international sales. Because they were essentially copyrights rather than physical products, the cost of additional sales was theoretically zero. Film production involved high endogenous sunk costs, recouped by renting out the copyright to the film. Marginal foreign revenue therefore equaled marginal net revenue (and marginal profit, once the film’s production costs had been fully amortized). All companies, large or small, had to take foreign sales into account when setting film budgets (Bakker 2004b).

Films were intermediate products sold to foreign distributors and cinemas. While the rent paid varied with perceived quality and the general conditions of supply and demand, the ticket price paid by consumers generally did not vary. It varied only by cinema: highest in first-run city-center cinemas and lowest in sixth-run ramshackle neighborhood cinemas. Cinemas used films to produce ‘spectator-hours’: a five-hundred-seat cinema showing one hour of film produced five hundred spectator-hours of entertainment. If it sold three hundred tickets, the other two hundred spectator-hours produced perished.

Because film was an intermediate product and a capital good at that, international competition could not be on price alone, just as sales of machines depend on the price/performance ratio. If we consider a film’s ‘capacity to sell spectator-hours’ (hereafter called selling capacity) as proportional to production costs, a low-budget producer could not simply push down a film’s rental price in line with its quality in order to make a sale; even at a price of zero, some low-budget films could not be sold. The reasons were twofold.

First, because cinemas had mostly fixed costs and few variable costs, a film’s selling capacity needed to be at least as large as the cinema’s fixed costs plus its rental price. A seven-hundred-seat cinema with a production capacity of 39,200 spectator-hours a week, weekly fixed costs of five hundred dollars, and an average admission price of five cents per spectator-hour needed a film selling at least ten thousand spectator-hours simply to cover its fixed costs, and would not be prepared to pay anything for that (marginal) film, because it would only just recoup those fixed costs. Films therefore needed a minimum selling capacity to cover cinema fixed costs. Producers could price down low-budget films only to just above this threshold; with a lower expected selling capacity, films could not be sold at any price.
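This threshold logic is easy to verify numerically. The sketch below is an illustration of the argument, not anything from the article itself, and uses only the figures quoted above:

```python
# A minimal sketch of the break-even reasoning above, using only the
# numbers quoted in the text: $500 weekly fixed costs, five cents per
# spectator-hour, and 39,200 spectator-hours of weekly capacity.

FIXED_COSTS = 500.00   # weekly cinema fixed costs, in dollars
PRICE = 0.05           # average admission price per spectator-hour
CAPACITY = 39_200      # weekly production capacity, in spectator-hours

# Minimum selling capacity a film needs just to cover the cinema's fixed costs.
threshold = FIXED_COSTS / PRICE
print(threshold)       # 10000.0 spectator-hours

def max_rental(expected_hours: float) -> float:
    """The most a cinema could pay for a film expected to sell expected_hours."""
    return max(0.0, expected_hours * PRICE - FIXED_COSTS)

print(max_rental(10_000))    # 0.0    -> the marginal film commands no rental
print(max_rental(CAPACITY))  # 1460.0 -> a sell-out film could rent for up to $1,460
```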

This reasoning assumes that we know a film’s selling capacity ex ante. A main feature distinguishing foreign markets from domestic ones was that uncertainty was markedly lower: from a film’s domestic launch its audience appeal was known, and each subsequent country added further information. While a film’s audience appeal across countries was not perfectly correlated, uncertainty was reduced. For various companies, correlations between foreign and domestic revenues for entire film portfolios fluctuated between 0.60 and 0.95 (Bakker 2004b). Given the riskiness of film production, this reduction in uncertainty was undoubtedly important.

The second reason for limited price competition was opportunity cost, given cinemas’ fixed production capacities. If the hypothetical cinema obtained a high-capacity film for a weekly rental of twelve hundred dollars, and it sold all 39,200 spectator-hours, the cinema made a profit of $260 ($0.05 × 39,200 – $1,200 – $500 = $260). If a film with half the budget and, we assume, half the selling capacity rented for half the price, the cinema-owner would lose $120 ($0.05 × 19,600 – $600 – $500 = –$120). The cinema owner would therefore pay no more than $220 for the lower-budget film, given that the high-budget film was available ($0.05 × 19,600 – $220 – $500 = $260). So a film with half the selling capacity of the high-capacity film needed to rent for under a fifth of the latter’s price before a transaction became possible at all.
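The arithmetic in this comparison can be reproduced directly (again, a sketch built from the article’s own numbers rather than new data):

```python
# Reproduces the opportunity-cost arithmetic above for a high-capacity film
# and a film with half the budget and, by assumption, half the selling capacity.

PRICE = 0.05    # dollars per spectator-hour
FIXED = 500.00  # weekly cinema fixed costs, in dollars

def weekly_profit(hours_sold: float, rental: float) -> float:
    """Cinema profit: ticket revenue minus film rental minus fixed costs."""
    return hours_sold * PRICE - rental - FIXED

print(weekly_profit(39_200, 1_200))  # 260.0  -> high-capacity film at $1,200
print(weekly_profit(19_600, 600))    # -120.0 -> half-capacity film at half price
print(weekly_profit(19_600, 220))    # 260.0  -> rental that leaves the cinema
                                     #           indifferent between the two films
print(220 / 1_200)                   # ~0.18  -> under a fifth of the rental price
```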

These sharply increasing returns to selling capacity made the setting of production outlays important, as the right price/capacity ratio was crucial to winning foreign markets.

How Films Became Branded Products

To make sure film revenues cleared cinema fixed costs, film companies transformed films into branded products. With the emergence of the feature film, they started to pay large sums to actors, actresses and directors, and for the rights to famous plays and novels. This is still a major characteristic of the film industry today, and one that fascinates many people. Yet the huge sums paid for stars and stories are not as irrational and haphazard as they may sometimes seem. They might be just as ‘rational,’ and have just as quantifiable a return, as direct spending on marketing and promotion (Bakker 2001a).

To secure an audience, film producers borrowed branding techniques from other consumer-goods industries, but the short product life-cycle forced them to extend the brand beyond one product (using trademarks or stars), to buy existing ‘brands,’ such as famous plays or novels, and to deepen the product life-cycle by licensing their brands.

Thus, the main value of stars and stories lay not in their ability to predict successes, but in their services as giant ‘publicity machines’ which optimized advertising effectiveness by rapidly amassing high levels of brand-awareness. After a film’s release, information such as word-of-mouth and reviews would affect its success. The young age at which stars reached their peak, and the disproportionate income distribution even among the superstars, confirm that stars were paid for their ability to generate publicity. Likewise, because ‘stories’ were paid several times as much as original screenplays, they were at least partially bought for their popular appeal (Bakker 2001a).

Stars and stories also signaled a film’s qualities to some extent, if only because a film verifiably contained the star or the story it advertised. Surveys of consumer preferences confirm that stars and stories were the main reasons to see a film. Further, the fame of stars was distributed disproportionately, possibly even twice as unequally as income. Film companies, aided by long-term contracts, probably captured part of the rent from their stars’ popularity. Gradually these companies specialized in developing and leasing their ‘instant brands’ to other consumer-goods industries in the form of merchandising.

Already from the late 1930s onwards, the Hollywood studios used the new scientific market research techniques of George Gallup to track continuously the brand-awareness among the public of their major stars (Bakker 2003). Figure 4 is based on one such graph used by Hollywood. It shows that Lana Turner was a rising star, that Gable was consistently a top star, and that Stewart’s popularity was high but volatile. James Stewart was eleven percentage points more popular among the richest consumers than among the poorest, while Lana Turner’s popularity differed by only a few percentage points. Additional segmentation by city size also seemed to matter, since substantial differences were found: Clark Gable was ten percentage points more popular in small cities than in large ones. Of the richest consumers, 51 percent wanted to see a movie starring Gable, yet they constituted just 14 percent of Gable’s market, while the poorest consumers, of whom 57 percent were Gable fans, constituted 34 percent. The increases in Gable’s popularity roughly coincided with his releases, suggesting that while producers used Gable partly for the brand-awareness of his name, each use (film) subsequently increased or maintained that awareness in what seems to have been a self-reinforcing process.

Figure 4

Popularity of Clark Gable, James Stewart and Lana Turner among U.S. respondents

April 1940 – October 1942, in percentage


Source: Audience Research Inc.; Bakker 2003.

The Film Industry’s Contribution to Economic Growth and Welfare

By the late 1930s, cinema had become an important mass entertainment industry. Nearly everyone in the Western world went to the cinema, and many went at least once a week. Cinema had made possible a massive growth in productivity in the entertainment industry, and thereby disproved the notion of some economists that productivity growth in certain service industries is inherently impossible. Between 1900 and 1938, output of the entertainment industry, measured in spectator-hours, grew substantially in the U.S., Britain and France, varying from three to eleven percent per year over a period of nearly forty years (Table 1). Output per worker increased from 2,453 spectator-hours in the U.S. in 1900 to 34,879 in 1938. In Britain it increased from 16,404 to 37,537 spectator-hours, and in France from 1,575 to 8,175 spectator-hours. This phenomenal growth can be explained partly by the addition of more capital (in the form of film technology and film production outlays) and partly by simply producing more efficiently with the existing amounts of capital and labor. The increase in efficiency (‘total factor productivity’) varied from about one percent per year in Britain to over five percent in the U.S., with France somewhere in between. In all three countries, this increase in efficiency was at least one and a half times the increase in efficiency of the economy as a whole. For the U.S. it was as much as five times, and for France more than three times, the national increase in efficiency (Bakker 2004a).
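As a rough cross-check (the calculation is mine, not the article’s), the endpoint figures for output per worker imply the following compound annual growth rates over the 38 years from 1900 to 1938:

```python
# Back-of-the-envelope compound annual growth rates implied by the
# endpoint figures quoted above for output per worker, 1900-1938.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

print(f"U.S.:    {cagr(2_453, 34_879, 38):.1%}")   # ~7.2% per year
print(f"Britain: {cagr(16_404, 37_537, 38):.1%}")  # ~2.2% per year
print(f"France:  {cagr(1_575, 8_175, 38):.1%}")    # ~4.4% per year
```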


Another noteworthy feature is that labor productivity in entertainment varied less across countries in the late 1930s than it did in 1900. Part of the reason is that cinema technology made entertainment partially tradable and therefore pushed productivity in similar directions in all countries; the tradable part of the entertainment industry would now exert competitive pressure on the non-tradable part (Bakker 2004a). It is therefore not surprising that cinema caused the lowest efficiency increase in Britain, which already had a well-developed and competitive entertainment industry (with the highest labor and capital productivity both in 1900 and in 1938), and higher efficiency increases in the U.S. and, to a lesser extent, in France, which had less well-developed entertainment industries in 1900.

Another way to measure the contribution of film technology to the economy in the late 1930s is by using a social savings methodology. If we assume that cinema did not exist and all demand for entertainment (measured in spectator-hours) would have to be met by live entertainment, we can calculate the extra costs to society and thus the amount saved by film technology. In the U.S., these social savings amounted to as much as 2.2 percent ($2.5 billion) of GDP, in France to just 1.4 percent ($0.16 billion) and in Britain to only 0.3 percent ($0.07 billion) of GDP.
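In general form, the social savings approach values the observed quantity of cinema entertainment at the cost of its live alternative. A minimal statement of the calculation, in notation that is mine rather than Bakker’s, is

$$ SS = \left(c_{\text{live}} - c_{\text{cinema}}\right) \cdot Q_{\text{cinema}}, \qquad \text{social savings share} = \frac{SS}{GDP}, $$

where $c_{\text{live}}$ and $c_{\text{cinema}}$ are the unit costs per spectator-hour of live and filmed entertainment, and $Q_{\text{cinema}}$ is the quantity of spectator-hours actually consumed at the cinema.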

A third and different way to look at the contribution of film technology to the economy is to look at the consumer surplus generated by cinema. In contrast to the TFP and social savings techniques used above, which assume that cinema is a substitute for live entertainment, this approach assumes that cinema is a wholly new good, so that the entire consumer surplus it generates is ‘new’ and would not have existed without cinema. For an individual consumer, the surplus is the difference between the price she was willing to pay and the price she actually paid. This difference varies from consumer to consumer, but with econometric techniques one can estimate the sum of individual surpluses for an entire country. The resulting national consumer surpluses for entertainment varied from about a fifth of total entertainment expenditure in the U.S., to about half in Britain, and as much as three quarters in France.
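Formally, and again in my notation rather than the article’s, the aggregate consumer surplus is the area under the estimated demand curve above the price actually paid:

$$ CS = \int_{p^{*}}^{\bar{p}} Q(p)\, dp, $$

where $p^{*}$ is the actual ticket price, $\bar{p}$ the price at which demand falls to zero, and $Q(p)$ the estimated market demand for spectator-hours.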

All the measures show that by the late 1930s cinema was making an essential contribution to total welfare as well as to the entertainment industry’s productivity.

Vertical Disintegration

After the Second World War, the Hollywood film industry disintegrated: production, distribution and exhibition became separate activities that were not always owned by the same organization. Three main causes brought about the vertical disintegration. First, the U.S. Supreme Court forced the studios to divest their cinema chains in 1948. Second, changes in the social-demographic structure in the U.S. brought about a shift towards entertainment within the home: many young couples started to live in the new suburbs and wanted to stay home for entertainment. Initially, they mainly used radio for this purpose and later they switched to television (Gomery 1985). Third, television broadcasting in itself (without the social-demographic changes that increased demand for it) constituted a new distribution channel for audiovisual entertainment and thus decreased the scarcity of distribution capacity. This meant that television took over the focus on the lowest common denominator from radio and cinema, while the latter two differentiated their output and started to focus more on specific market segments.

Figure 5

Real Cinema Box Office Revenue, Real Ticket Price and Number of Screens in the U.S., 1945-2002


Note: The values are in dollars of 2002, using the EH.Net consumer price deflator.

Source: Adapted from Vogel 2004 and Robertson 2001.

The consequence was a sharp fall in real box office revenue in the decade after the war (Figure 5). After the mid-1950s, real revenue stabilized and remained roughly constant, with some fluctuations, until the mid-1990s. The decline in screens was more limited. After 1963 the number of screens increased again steadily, to reach nearly twice the 1945 level in the 1990s. Since the 1990s there have been more movie screens in the U.S. than ever before. The proliferation of screens, coinciding with declining capacity per screen, facilitated market segmentation. Revenue per screen nearly halved in the decade after the war, then rebounded during the 1960s, before starting a long and steady decline from 1970 onwards. The real price of a cinema ticket was quite stable until the 1960s, during which it more than doubled. Since the early 1970s, the price has been declining again, and nowadays the real admission price is about what it was in 1965.

It was in this adverse post-war climate that the vertical disintegration unfolded. It took place at three levels. First (obviously), the Hollywood studios divested their cinema chains. Second, they outsourced part of their film production and most of their production factors to independent companies. This meant that the Hollywood studios themselves produced only part of the films they distributed, that they exchanged the long-term, seven-year contracts with star actors for per-film contracts, and that they sold off part of their studio facilities, renting them back for individual films. Third, the Hollywood studios’ main business became film distribution and financing. They specialized in planning and assembling portfolios of films, contracting and financing most of them, and marketing and distributing them world-wide.

These developments had three important effects. First, production by a few large companies was replaced by production by many small, flexibly specialized companies. Southern California became an industrial district for the film industry, harboring an intricate network of these businesses, from set design companies and costume makers to special effects firms and equipment rental outfits (Storper and Christopherson 1989). Only at the level of distribution and financing did concentration remain high. Second, films became more differentiated and tailored to specific market segments; they were now aimed at a younger and more affluent audience. Third, the European film market gained in importance: because the social-demographic changes (suburbanization) and the advent of television happened somewhat later in Europe, the drop in cinema attendance also happened later there. The result was that the Hollywood studios off-shored a large chunk, at times over half, of their production to Europe in the 1960s. This was stimulated by lower European production costs, by difficulties in repatriating foreign film revenues, and by the vertical disintegration in California, which severed the studios’ ties with their production units and facilitated outside contracting.

European production companies could better adapt to changes in post-war demand because they were already flexibly specialized. The British film production industry, for example, had been fragmented almost from its emergence in the 1890s. In the late 1930s, distribution became concentrated, mainly through the efforts of J. Arthur Rank, while the production sector, a network of flexibly specialized companies in and around London, boomed. After the war, the drop in admissions followed the U.S. with about a ten-year delay (Figure 6). The drop in the number of screens experienced the same lag but was more severe: about two-thirds of British cinema screens disappeared, versus only one-third in the U.S. In France, after the First World War, film production disintegrated rapidly and chaotically into a network of numerous small companies, while a few large firms dominated distribution and production finance. The result was a burgeoning industry, actually one of the fastest-growing French industries in the 1930s.

Figure 6

Admissions and Number of Screens in Britain, 1945-2005


Source: Screen Digest/Screen Finance/British Film Institute and Robertson 2001.

Several European companies attempted to (re-)enter international film distribution: Rank in the 1930s and 1950s, the International Film Finance Corporation in the 1960s, Gaumont in the 1970s, PolyGram in the 1970s and again in the 1990s, and Cannon in the 1980s. All of them failed in terms of long-run survival, even if some made profits in some years. The only postwar entry strategy that succeeded in terms of survival was the direct acquisition of a Hollywood studio (Bakker 2000).

The Come-Back of Hollywood

From the mid-1970s onwards, the Hollywood studios revived. The slide in box office revenue was halted. Revenues were stabilized by the joint effect of seven factors. First, the blockbuster movie increased cinema attendance. These movies were heavily marketed and supported by intensive television advertising; Jaws was one of the first of this kind and an enormous success. Second, the U.S. film industry received several kinds of tax breaks from the early 1970s onwards, which were kept in force until the mid-1980s, when Hollywood was in good shape again. Third, coinciding with the blockbuster movie and the tax breaks, film budgets increased substantially, resulting in higher perceived quality and a larger quality difference with television, drawing more consumers into the cinema. Fourth, the rise of multiplex cinemas (cinemas with several screens) increased consumer choice and the appeal of cinema by offering more variety within a single venue, narrowing the difference with television in this respect. Fifth, one could argue that the process of flexible specialization in the California film industry was completed in the early 1970s, making the industry able to adapt more flexibly to changes in the market; MGM’s sale of its studio complex in 1970 marked the definitive end of an era. Sixth, new income streams from video sales and rentals and from cable television increased the revenues a high-quality film could generate. Seventh, European broadcasting deregulation substantially increased the demand for films from television stations.

From the 1990s onwards, further growth was driven by newer markets in Eastern Europe and Asia. Film industries outside the West also grew substantially, such as those of Japan, Hong Kong, India and China. At the same time, the European Union started a large-scale subsidy program for its audiovisual industry, with mixed economic effects. By 1997, ten years after the start of the program, a film made in the European Union cost 500,000 euros on average, was seventy to eighty percent state-financed, and grossed 800,000 euros world-wide, reaching an audience of 150,000 persons. In contrast, the average American film cost fifteen million euros, was nearly one hundred percent privately financed, grossed 58 million euros, and reached 10.5 million persons (Dale 1997). This seventy-fold difference in performance is remarkable. Even when measured by gross return on investment or gross margin, the U.S. still had a fivefold and twofold lead over Europe, respectively.[1] In few other industries does such a pronounced difference exist.
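These figures are internally consistent, as a quick computation shows (the sketch below simply recombines the numbers quoted above and in note [1]):

```python
# Recombining the average per-film figures quoted in the text (circa 1997,
# in euros) to check the ratios reported in note [1].

eu_cost, eu_gross, eu_viewers = 500_000, 800_000, 150_000
us_cost, us_gross, us_viewers = 15_000_000, 58_000_000, 10_500_000

print(us_viewers / eu_viewers)          # 70.0  -> the "seventy-fold" gap
print((eu_gross - eu_cost) / eu_cost)   # 0.60  -> EU gross return on investment
print((us_gross - us_cost) / us_cost)   # ~2.87 -> U.S. gross return on investment
print((eu_gross - eu_cost) / eu_gross)  # ~0.38 -> EU gross margin
print((us_gross - us_cost) / us_gross)  # ~0.74 -> U.S. gross margin
```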

During the 1990s, the film industry moved into television broadcasting. In Europe, broadcasters often co-funded small-scale boutique film production. In the U.S., the Hollywood studios started to merge with broadcasters. In the 1950s they had experienced difficulties obtaining broadcasting licenses, because their reputation had been compromised by the antitrust actions; they had to wait forty years before they could finally complete what they had intended.[2] Disney, for example, bought the ABC network, Paramount’s owner Viacom bought CBS, and General Electric, owner of NBC, bought Universal. At the same time, the feature film industry was becoming more connected to other entertainment industries, such as videogames, theme parks and musicals. With video game revenues now exceeding films’ box office revenues, it seems likely that feature films will simply become the flagship part of a large entertainment supply system that exploits the intellectual property in feature films in many different formats and markets.

Conclusion

The take-off of the film industry in the early twentieth century was driven mainly by changes in demand. Cinema industrialized entertainment by standardizing it, automating it and making it tradable. After its early years, the industry experienced a quality race that led to increasing industrial concentration. Only later did geographical concentration take place, in Southern California. Cinema made a substantial contribution to productivity and total welfare, especially before television. After television, the industry experienced vertical disintegration, the flexible specialization of production, and a self-reinforcing process of increasing distribution channels and capacity as well as market growth. Cinema, then, was not only the first of a succession of media industries that industrialized entertainment, but also the first of a series of international industries that industrialized services. The evolution of the film industry may thus give insight into technological change, and its attendant welfare gains, in many service industries to come.

Selected Bibliography

Allen, Robert C. Vaudeville and Film, 1895-1915. New York: Arno Press, 1980.

Bächlin, Peter. Der Film als Ware. Basel: Burg-Verlag, 1945.

Bakker, Gerben. “American Dreams: The European Film Industry from Dominance to Decline.” EUI Review (2000): 28-36.

Bakker, Gerben. “Stars and Stories: How Films Became Branded Products.” Enterprise and Society 2, no. 3 (2001a): 461-502.

Bakker, Gerben. Entertainment Industrialised: The Emergence of the International Film Industry, 1890-1940. Ph.D. dissertation, European University Institute, 2001b.

Bakker, Gerben. “Building Knowledge about the Consumer: The Emergence of Market Research in the Motion Picture Industry.” Business History 45, no. 1 (2003): 101-27.

Bakker, Gerben. “At the Origins of Increased Productivity Growth in Services: Productivity, Social Savings and the Consumer Surplus of the Film Industry, 1900-1938.” Working Papers in Economic History, No. 81, Department of Economic History, London School of Economics, 2004a.

Bakker, Gerben. “Selling French Films on Foreign Markets: The International Strategy of a Medium-Sized Film Company.” Enterprise and Society 5 (2004b): 45-76.

Bakker, Gerben. “The Decline and Fall of the European Film Industry: Sunk Costs, Market Size and Market Structure, 1895-1926.” Economic History Review 58, no. 2 (2005): 311-52.

Caves, Richard E. Creative Industries: Contracts between Art and Commerce. Cambridge, MA: Harvard University Press, 2000.

Christopherson, Susan, and Michael Storper. “Flexible Specialization and Regional Agglomerations: The Case of the U.S. Motion Picture Industry.” Annals of the Association of American Geographers 77, no. 1 (1987).

Christopherson, Susan, and Michael Storper. “The Effects of Flexible Specialization on Industrial Politics and the Labor Market: The Motion Picture Industry.” Industrial and Labor Relations Review 42, no. 3 (1989): 331-47.

Gomery, Douglas. The Coming of Sound to the American Cinema: A History of the Transformation of an Industry. Ph.D. dissertation, University of Wisconsin, 1975.

Gomery, Douglas. “The Coming of Television and the ‘Lost’ Motion Picture Audience.” Journal of Film and Video 37, no. 3 (1985): 5-11.

Gomery, Douglas. The Hollywood Studio System. London: MacMillan/British Film Institute, 1986; reprinted 2005.

Kraft, James P. Stage to Studio: Musicians and the Sound Revolution, 1890-1950. Baltimore: Johns Hopkins University Press, 1996.

Krugman, Paul R., and Maurice Obstfeld. International Economics: Theory and Policy, sixth edition. Reading, MA: Addison-Wesley, 2003.

Low, Rachael, and Roger Manvell. The History of the British Film, 1896-1906. London: George Allen & Unwin, 1948.

Michaelis, Anthony R. “The Photographic Arts: Cinematography.” In A History of Technology, Vol. V: The Late Nineteenth Century, c. 1850 to c. 1900, edited by Charles Singer, 734-51. Oxford: Clarendon Press, 1958; reprint 1980.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press, 1990.

Musser, Charles. The Emergence of Cinema: The American Screen to 1907. The History of American Cinema, Vol. I. New York: Scribner, 1990.

Sedgwick, John. “Product Differentiation at the Movies: Hollywood, 1946-65.” Journal of Economic History 63 (2002): 676-705.

Sedgwick, John, and Michael Pokorny. “The Film Business in Britain and the United States during the 1930s.” Economic History Review 57, no. 1 (2005): 79-112.

Sedgwick, John, and Mike Pokorny, editors. An Economic History of Film. London: Routledge, 2004.

Thompson, Kristin. Exporting Entertainment: America in the World Film Market, 1907-1934. London: British Film Institute, 1985.

Vogel, Harold L. Entertainment Industry Economics: A Guide for Financial Analysis. Cambridge: Cambridge University Press, Sixth Edition, 2004.

Gerben Bakker may be contacted at gbakker at essex.ac.uk


[1] Gross return on investment, disregarding interest costs and distribution charges, was 60 percent for European versus 287 percent for U.S. films. Gross margin was 37 percent for European versus 74 percent for U.S. films. Costs per viewer were 3.33 versus 1.43 euros; revenues per viewer were 5.30 versus 5.52 euros.

[2] The author is indebted to Douglas Gomery for this point.

Citation: Bakker, Gerben. “The Economic History of the International Film Industry”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-international-film-industry/

The Economics of American Farm Unrest, 1865-1900

James I. Stewart, Reed College

American farmers have often expressed dissatisfaction with their lot but the decades after the Civil War were extraordinary in this regard. The period was one of persistent and acute political unrest. The specific concerns of farmers were varied, but at their core was what farmers perceived to be their deteriorating political and economic status.

The defining feature of farm unrest was the effort of farmers to join together for mutual gain. Farmers formed cooperatives, interest groups, and political parties to protest their declining fortunes and to increase their political and economic power. The first such group to appear was the Grange, or Patrons of Husbandry, founded in the 1860s to address farmers’ grievances against the railroads and their desire for greater cooperation in business matters. The agrarian-dominated Greenback Party followed in the 1870s. Its main goal was to increase the amount of money in circulation and thus to lower the cost of credit to farmers. The Farmers’ Alliance appeared in the 1880s. Its members practiced cooperative marketing and lobbied the government for various kinds of business and banking regulation. In the 1890s, aggrieved farmers took their most ambitious step yet, forming the independent People’s or Populist Party to challenge the dominance of the unsympathetic Republican and Democratic parties.

Although farmers in every region of the country had cause for agitation, unrest was probably greatest in the northern prairie and Plains states. A series of droughts there between 1870 and 1900 created recurring hardships, and Midwestern grain farmers faced growing price competition from producers abroad. Farmers in the South also revolted, but their protests were muted by racism. Black farmers were excluded from most farm groups, and many white farmers were reluctant to join the attack on established politics and business for fear of undermining the system of social control that kept blacks inferior to whites (Goodwyn, 1978).

The Debate about the Causes of Farm Unrest

For a long time, a debate raged about the causes of farm unrest. Historians could not reconcile the complaints of farmers with the evidence about the agricultural terms of trade: the prices farmers received for their output, especially relative to the prices of the goods and services farmers purchased, such as transportation, credit, and manufactures. Now, however, there appears to be some consensus. Before exploring the basis for this consensus, it will be useful to examine briefly the complaints of farmers. What were farmers so upset about? Why did they feel so threatened?

The Complaints of Farmers

The complaints of farmers are well documented (Buck, 1913; Hicks, 1931) and relatively uncontroversial. They concerned primarily farmers’ declining incomes and fractious business relationships. First, farmers claimed that farm prices were falling and, as a consequence, so were their incomes. They generally blamed low prices on over-production. Second, farmers alleged that monopolistic railroads and grain elevators charged unfair prices for their services. Government regulation was the farmers’ solution to the problem of monopoly. Third, there was a perceived shortage of credit and money. Farmers believed that interest rates were too high because of monopolistic lenders, and that the money supply was inadequate, producing deflation. A falling price level increased the real burden of debt, as farmers repaid loans with dollars worth significantly more than those they had borrowed. Farmers demanded ceilings on interest rates, public boards to mediate foreclosure proceedings, and the free coinage of silver by the U.S. Treasury to increase the money supply. Finally, farmers complained about the political influence of the railroads, big business, and money lenders, which had undue influence over policy making in the state legislatures and the U.S. Congress. In short, farmers felt their economic and political interests were being shortchanged by a gang of greedy railroads, creditors, and industrialists.
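The deflation grievance can be illustrated with the Fisher relation between nominal and real interest rates (the numbers here are illustrative, not historical estimates):

$$ r \approx i - \pi. $$

With a nominal mortgage rate of $i = 8\%$ and deflation of $\pi = -3\%$ per year, the realized real rate is roughly $r \approx 8\% - (-3\%) = 11\%$: a farmer who had expected stable prices ended up paying considerably more, in real terms, than he had bargained for.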

The Puzzle of Farm Unrest

Economic historians have subjected the complaints of farmers to rigorous statistical testing. Each claim has been found inconsistent to some extent with the available evidence about the terms of trade.

First, consider farmers’ complaints about prices. Farm prices were falling, along with the prices of most other goods during this period. This does not imply, however, that farm incomes were also falling. First, real prices (farm prices relative to the general price level) are a better measure of the value that farmers were receiving for their output. When real prices over the post-Civil War period are examined, there is an approximately horizontal trend (North, 1974). Moreover, even if real farm prices had been falling, farmers were not necessarily worse off (Fogel and Rutner, 1972). Rising farm productivity could have offset the negative effects of falling real prices on incomes. Finally, direct evidence about the incomes of farmers is scarce, but estimates suggest that farm incomes were not falling (Bowman, 1965). Some regions experienced periods of distress—Iowa and Illinois in the 1870s and Kansas and Nebraska in the 1890s, for instance—but there was no general agricultural depression. If anything, data on wages, land rents, and returns to capital suggest that land in the West was opened to settlement too slowly (Fogel and Rutner, 1972).

Next, consider farmers’ claims about interest rates and mortgage debt. It is true that interest rates on the frontier were high, averaging two to three percentage points more than those in the Northeast. Naturally, frontier farmers complained bitterly about paying so much for credit. Lenders, however, may have been well justified in the rates they charged. The susceptibility of the frontier to drought and the financial insecurity of many settlers created above-average lending risks for which creditors had to be compensated (Bogue, 1955). For instance, borrowers often defaulted, leaving land worth only a fraction of the loan as security. This casts doubt on the exploitation hypothesis. Furthermore, when the claims of farmers were subjected to rigorous statistical testing, there was little evidence to substantiate the monopoly hypothesis (Eichengreen, 1984). Instead, consistent with the unique features of the frontier mortgage market, high interest rates appear to have been compensation for the inherent risks of lending to frontier farmers. Finally, regarding the burden on borrowers of a falling price level, deflation may not have been as onerous as farmers alleged. The typical mortgage had a short term, less than five years, implying that lenders and borrowers could often anticipate changes in the price level (North, 1974).
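The risk-compensation argument can be made concrete with a simple zero-profit condition for a competitive lender (the parameter values below are illustrative only):

$$ (1 - p)(1 + i) + p\lambda = 1 + i_{\text{safe}}, $$

where $p$ is the probability of default, $\lambda$ the fraction of the loan recovered from the seized land, and $i_{\text{safe}}$ the rate on a comparable low-risk eastern loan. With $i_{\text{safe}} = 6\%$, $p = 5\%$, and $\lambda = 0.70$, the break-even frontier rate is $i \approx 7.9\%$, about two percentage points above the safe rate even in the complete absence of monopoly power, which is consistent with the premiums actually observed.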

Last, consider farmers’ complaints about the railroads. These appear to have the most merit. Nevertheless, for a long time, most historians dismissed farmers’ grievances, assuming that the real cost to farmers of shipping their produce to market must have been steadily falling because of productivity improvements in the railroad sector. As Robert Higgs (1970) shows, however, gains in productivity in rail shipping did not necessarily translate into lower rates for farmers and thus higher farm gate prices. Real rates (railroad rates relative to the prices farmers received for their output) were highly variable between 1865 and 1900. More important, over the whole period, there was little decrease in rail rates relative to farm prices. Only in the 1890s did the terms of trade begin to improve in farmers’ favor. Employing different data, Aldrich (1985) finds a downward trend in railroad rates before 1880 but then no trend or an increasing trend thereafter.

The Causes of Farm Unrest

Many of the complaints of farmers are weakly supported or even contradicted by the available evidence, leaving questions about the true causes of farm unrest. If the monopoly power of the railroads and creditors was not responsible for farmers’ woes, what or who was?

Most economic historians now believe that agrarian unrest reflected the growing risks and uncertainties of agriculture after the Civil War. Uncertainty or risk can be thought of as an economic force that reduces welfare. Today, farmers use sophisticated production technologies and agricultural futures markets to reduce their exposure to environmental and economic uncertainty at little cost. In the late 1800s, the avoidance of risk was much more costly. As a result, increases in risk and uncertainty made farmers worse off. These uncertainties and risks appear to have been particularly severe for farmers on the frontier.

What were the sources of risk? First, agriculture had become more commercial after the Civil War (Mayhew, 1972). Formerly self-sufficient farmers were now dependent on creditors, merchants, and railroads for their livelihoods. These relationships created opportunities for economic gain but also obligations, hardships, and risks that many farmers did not welcome. Second, world grain markets were becoming ever more integrated, creating competition in markets abroad once dominated by U.S. producers and greater price uncertainty (North, 1974). Third, agriculture was now occurring in the semi-arid region of the United States. In Kansas, Nebraska, and the Dakotas, farmers encountered unfamiliar and adverse growing conditions. Recurring but unpredictable droughts caused economic hardship for many Plains farmers. Their plight was made worse by the greater price elasticity (responsiveness) of world agricultural supply (North, 1974): drought-stricken farmers with diminished harvests could no longer count on higher domestic prices for their crops.

A growing body of research now supports the hypothesis that discontent was caused by increasing risks and uncertainties in U.S. agriculture. First, there are strong correlations between different measures of economic risk and uncertainty and the geographic distribution of unrest in fourteen northern states between 1866 and 1909 (McGuire, 1981; 1982). Farm unrest was closely tied to the variability in farm prices, yields, and incomes across the northern states. Second, unrest was highest in states with high rates of farm foreclosures (Stock, 1984). On the frontier, the typical farmer would have had a neighbor whose farm was seized by creditors, and thus cause to worry about his own future financial security. Third, Populist agitation in Kansas in the 1890s coincided with unexpected variability in crop prices that resulted in lost profits and lower incomes (DeCanio, 1980). Finally, as mentioned already, high interest rates were not a sign of monopoly but rather compensation to creditors for the greater risks of frontier lending (Eichengreen, 1984).

The Historical Significance of Farm Unrest

Farm unrest had profound and lasting consequences for American economic development. Above all, it ushered in fundamental and lasting institutional change (Hughes, 1991; Libecap, 1992).

The change began in the 1870s. In response to the complaints of farmers, Midwestern state legislatures enacted a series of laws regulating the prices and practices of railroads, grain elevators, and warehouses. These “Granger” laws were a turning point because they reversed a longstanding trend of decreasing government regulation of the private sector. They also prompted a series of landmark court rulings affirming the regulatory prerogatives of government (Hughes, 1991). In Munn v. Illinois (1877), the U.S. Supreme Court rejected a challenge to the legality of the Granger laws, famously ruling that government had the legal right to regulate any commerce “affected with the public interest.”

Farmers also sought redress of their grievances at the federal level. In 1886, the U.S. Supreme Court ruled in Wabash, St. Louis, and Pacific Railway v. Illinois that only the federal government had the right to regulate commerce between the states. This meant the states could not regulate many matters of concern to farmers. In 1887, Congress passed the Interstate Commerce Act, which gave the Interstate Commerce Commission regulatory oversight of long-distance rail shipping. This legislation was followed by the Sherman Antitrust Act of 1890, which prohibited monopolies and certain conspiracies, combinations, and restraints of trade. Midwestern cattle farmers had urged the passage of an antitrust law, alleging that the notorious Chicago meat packers had conspired to keep cattle prices artificially low (Libecap, 1992). Both laws marked the beginning of growing federal involvement in private economic activity (Hughes, 1991; Ulen, 1980).

Not all agrarian proposals were acted upon, but even demands that fell on deaf ears in Congress and the state legislatures had lasting impacts (Hicks, 1931). For instance, many Alliance and Populist demands such as the graduated income tax and the direct election of U.S. Senators became law during the Progressive Era.

Historians disagree about the legacy of the late nineteenth century farm movements. Some view their contributions to U.S. institutional development positively (Hicks, 1931), while others do not (Hughes, 1991). Nonetheless, few would dispute their impact. In fact, it is possible to see much institutional change in the U.S. over the last century as the logical consequence of political and legal developments initiated by farmers during the late 1800s (Hughes, 1991).

The Sources of Cooperation in the Farm Protest Movement

Nineteenth-century farmers were remarkably successful at joining together to increase their economic and political power. Nevertheless, one aspect of farm unrest that scholars have largely neglected is the sources of cooperation in promoting agrarian interests. According to Olson (1965), large lobbying or interest groups like the Grange and the Farmers’ Alliance should have been plagued by free-riding: individuals have an incentive not to contribute to the collective production of public goods, that is, goods from whose enjoyment it is impossible or very costly to exclude others. A rational and self-interested farmer would not join a lobbying group, because he could enjoy the benefits of its work without incurring any of the costs.

Judging by their political power, most farm interest groups were, however, able to curb free-riding. Stewart (2006) studies how the Dakota Farmers’ Alliance did this between 1885 and 1890. First, the Dakota Farmers’ Alliance provided valuable goods and services to its members that were not available to outsiders, creating economic incentives for membership. These goods and services included better terms of trade through cooperative marketing and the sharing of productivity-enhancing information about agriculture. Second, the structure of the Dakota Farmers’ Alliance as a federation of township chapters enabled the group to monitor and sanction free-riders. Within townships, Alliance members were able to pressure others to join the group. This strategy appears to have succeeded among German and Norwegian immigrants, who were much more likely than others to join the Dakota Farmers’ Alliance and whose probability of joining was increasing in the share of their nativity group in the township population. This is consistent with long-standing social norms of cooperation in Germany and Norway and economic theory about the use of social norms to elicit cooperation in collective action.

References

Aldrich, Mark. “A Note on Railroad Rates and the Populist Uprising.” Agricultural History 41 (1985): 835-52.

Bogue, Allan G. Money at Interest: The Farm Mortgage on the Middle Border. Ithaca, NY: Cornell University Press, 1955.

Bowman, John. “An Economic Analysis of Midwestern Farm Values and Farm Land Income, 1860-1900.” Yale Economic Essays 5 (1965): 317-52.

Buck, Solon J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic, and Social Manifestations, 1870-1880. Cambridge: Harvard University Press, 1913.

DeCanio, Stephen J. “Economic Losses from Forecasting Error in Agriculture.” Journal of Political Economy 88 (1980): 234-57.

Eichengreen, Barry. “Mortgage Interest Rates in the Populist Era.” American Economic Review 74 (1984): 995-1015.

Fogel, Robert W., and Jack L. Rutner. “The Efficiency Effects of Federal Land Policy, 1850-1900: A Report of Some Provisional Findings.” In The Dimensions of Quantitative Research in History, edited by Wayne O. Aydelotte, Allan G. Bogue and Robert W. Fogel. Princeton, NJ: Princeton University Press, 1972.

Goodwyn, Lawrence. The Populist Moment: A Short History of the Agrarian Revolt in America. New York: Oxford University Press, 1978.

Hicks, John D. The Populist Revolt: A History of the Farmers’ Alliance and the People’s Party. Minneapolis: University of Minnesota Press, 1931.

Higgs, Robert. “Railroad Rates and the Populist Uprising.” Agricultural History 44 (1970): 291-97.

Hughes, Jonathan T. The Government Habit Redux: Economic Controls from Colonial Times to the Present. Princeton, NJ: Princeton University Press, 1991.

Libecap, Gary D. “The Rise of the Chicago Packers and the Origins of Meat Inspection and Antitrust.” Economic Inquiry 30 (1992): 242-62.

Mayhew, Anne. “A Reappraisal of the Causes of the Farm Protest Movement in the United States, 1870-1900.” Journal of Economic History 32 (1972): 464-75.

McGuire, Robert A. “Economic Causes of Late Nineteenth Century Agrarian Unrest: New Evidence.” Journal of Economic History 41 (1981): 835-52.

McGuire, Robert A. “Economic Causes of Late Nineteenth Century Agrarian Unrest: Reply.” Journal of Economic History 42 (1982): 697-99.

North, Douglass. Growth and Welfare in the American Past: A New Economic History. Englewood Cliffs, NJ: Prentice Hall, 1974.

Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press, 1965.

Stewart, James I. “Free-riding, Collective Action, and Farm Interest Group Membership.” Reed College Working Paper, 2006. Available at http://www.reed.edu/~stewartj.

Stock, James H. “Real Estate Mortgages, Foreclosures, and Midwestern Agrarian Unrest, 1865-1920.” Journal of Economic History 44 (1984): 89-105.

Ulen, Thomas C. “The Market for Regulation: The ICC from 1887 to 1920.” American Economic Review 70 (1980): 306-10.

Citation: Stewart, James. “The Economics of American Farm Unrest, 1865-1900”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-economics-of-american-farm-unrest-1865-1900/

The Euro and Its Antecedents

Jerry Mushin, Victoria University of Wellington

The establishment, in 1999, of the euro was not an isolated event. It was the latest installment in the continuing story of attempts to move towards economic and monetary integration in western Europe. Its relationship with developments since 1972, when the Bretton Woods system of fixed (but adjustable) exchange rates in terms of the United States dollar was collapsing, is of particular interest.

Political moves towards monetary cooperation in western Europe began at the end of the Second World War, but events before 1972 are beyond the scope of this article. Coffey and Presley (1971) have described and analyzed relevant events between 1945 and 1971.

The Snake

In May 1972, at the end of the Bretton Woods (adjustable-peg) system, many countries in western Europe attempted to stabilize their currencies in relation to each other’s currencies. The arrangements known as the Snake in the Tunnel (or, more frequently, as the Snake), which were set up by members of the European Economic Community (EEC), one of the forerunners of the European Union, lasted until 1979. Each member agreed to limit, by market intervention, the fluctuations of its currency’s exchange rate in terms of other members’ currencies. The maximum divergence between the strongest and the weakest currencies was 2.25%. The agreement meant that the French government, for example, would ensure that the value of the French franc would show very limited fluctuation in terms of the Italian lira or the Netherlands guilder, but that there would be no commitment to stabilize its fluctuations against the United States dollar, the Japanese yen, or other currencies outside the agreement.
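The band rule can be sketched in a few lines (the exchange-rate values below are invented for illustration; each is a currency’s value against a common external numeraire such as the United States dollar):

```python
# A minimal sketch of the Snake's 2.25% divergence rule, with invented values.

values = {"DEM": 1.015, "FRF": 0.990, "NLG": 1.004, "DKK": 0.995}

strongest = max(values.values())
weakest = min(values.values())
divergence_pct = (strongest - weakest) / weakest * 100

print(f"divergence: {divergence_pct:.2f}%")  # 2.53%
if divergence_pct > 2.25:
    # The governments concerned intervene: selling the strong currency
    # and buying the weak one until the band is restored.
    print("band breached: intervention required")
```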

This was a narrower objective than the aim of the adjustable-peg system, which was intended to stabilize the value of each currency in terms of the values of all other major currencies, but for which the amount of reserves held by governments had proved to be insufficient. It was felt that this limited objective could be achieved with the amount of reserves available to member governments.

The agreement also had a political dimension. Stable exchange rates are likely to encourage international trade, and it was hoped that the new exchange-rate regime would stimulate members’ trade within western Europe at the expense of their trade with the rest of the world. This was one of the objectives of the EEC from its inception.

Exchange rates within the group of currencies were to be managed by market intervention; member governments undertook to buy and sell their currencies in sufficiently large quantities to influence their exchange rates. There was an agreed maximum divergence between the strongest and weakest currencies. Exchange rates of the whole group of currencies fluctuated together against external denominators such as the United States dollar.
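
These mechanics can be illustrated with a short calculation. The following Python sketch checks whether a set of member currencies respects the band; all parities and market rates in it are hypothetical, invented only for the example:

    # A minimal sketch of the Snake band rule; all rates are hypothetical.
    # Each member currency has a central parity against an external
    # denominator (here, US dollars per unit of currency). To a first
    # approximation, the bilateral limits are respected when the spread
    # between the strongest and weakest members' percentage deviations
    # from parity is no more than 2.25%.
    def snake_spread(market, parity):
        deviations = {c: 100 * (market[c] / parity[c] - 1) for c in market}
        return max(deviations.values()) - min(deviations.values())

    parity = {"DEM": 0.3106, "FRF": 0.1989, "NLG": 0.3082}  # hypothetical parities
    market = {"DEM": 0.3139, "FRF": 0.1975, "NLG": 0.3090}  # hypothetical market rates

    spread = snake_spread(market, parity)
    print(f"spread = {spread:.2f}% -> {'inside' if spread <= 2.25 else 'outside'} the band")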

The Snake is generally regarded as a failure. Membership was very unstable; the United Kingdom and the Irish Republic withdrew after only a few weeks, and only the German Federal Republic remained a member for the whole of its existence. Other members withdrew and rejoined, and some did this several times. In addition, the political context of the Snake was not clearly defined. Sweden and Norway participated in the Snake although, at that time, neither country was a member of the EEC and Sweden was not a candidate for admission.

The curious name of the Snake in the Tunnel comes from the appearance of exchange-rate graphs. In terms of a non-member currency, the value of each currency in the system could fluctuate but only within a narrow band that was also fluctuating. The trend of each exchange rate showed some resemblance to a snake inside the narrow confines of a tunnel.

European Monetary System

The Snake came to an end in 1979 and was replaced with the European Monetary System (EMS). The exchange-rate mechanism of the EMS had the same objectives as the Snake, but the procedure for allocating intervention responsibilities among member governments was more precisely specified.

The details of the EMS arrangements have been explained by Adams (1990). Membership of the EMS obliged each member government to stabilize its currency’s value with respect to the value of a basket of EMS-member currencies called the European Currency Unit (ECU). Each country’s currency had a weight in the ECU that was related to the importance of that country’s trade within the EEC. An autonomous shift in the external value of any EMS-member currency changed the value of the ECU and therefore imposed exchange-rate adjustment obligations on all members of the system. The magnitude of each of these obligations was related to the weight allocated to the currency experiencing the initial disturbance.

The effects of the EMS requirements on each individual member depended upon that country’s weight in the ECU. The system ensured that major members delegated to their smaller partners a greater proportion of their exchange-rate adjustment responsibilities than the smaller members could impose on the dominant countries. The reason for this lack of symmetry is that a given percentage shift in the external value of a major member’s currency, with its high weight in the ECU, moved the external value of the ECU further than the same percentage shift in a minor member’s currency, and it therefore imposed greater exchange-rate adjustment obligations on the remaining members. While each of the major members could thus delegate to the remaining members a high proportion of its adjustment obligations, the smaller countries in the system could not. This burden was, however, seen by the smaller nations (including Denmark, Belgium, and the Netherlands) as an acceptable price for exchange-rate stability with their main trading partners (including France and the German Federal Republic).

The position of the Irish Republic, which joined the EMS in 1979 despite both the very low weight of its currency in the ECU and the absence of the UK, its dominant trading partner, appears anomalous. The explanation is that the Irish government was principally concerned with imported inflation arising from the rising prices of its British imports, and assumed that, once the rigid link between the two currencies was broken, inflation in the UK would cause the value of the British pound to fall relative to the value of the Irish Republic pound. However, purchasing power is not the only determinant of exchange rates, and the value of the British pound increased sharply in 1979, causing increased imported inflation in the Irish Republic. The appreciation of the British pound was probably caused principally by developments in the UK oil industry and by the monetarist style of UK macroeconomic policy.

Partly because it had different rules for different countries, the EMS had a more stable membership than had the Snake. The standard maximum fluctuation from its reference value that was permitted for each EMS currency was ±2.25%. However, there were wider bands (±6%) for weaker members (Italy from 1979, Spain from 1989, and the UK from 1990), and the Netherlands observed a band of ±1%. The system was also subject to frequent realignments of the parity grid. The Irish Republic joined the EMS in 1979 but the UK did not, thus ending the link between the British pound and the Irish Republic pound. The UK joined in 1990 but, as a result of substantial international capital flows, left in 1992. The bands were widened to ±15% in 1993.

Incentives to join the EMS were comparable to those that applied to the Snake and included the desire for stable exchange rates with a country’s principal trading partners and the desire to encourage trade within the group of EMS members rather than with countries in the rest of the world. Cohen (2003), in his analysis of monetary unions, has explained the advantages and disadvantages of trans-national monetary integration.

The UK decided not to participate in the exchange-rate mechanism of the EMS at its inception. It was influenced by the fact that the weight allocated to the British pound (0.13) in the definition of the ECU was insufficient to allow the UK government to delegate to other EMS members a large proportion of the exchange-rate stabilization responsibilities that it would acquire under EMS rules. The outcome of EMS membership for the UK in 1979 would therefore have been in marked contrast to the outcome for France (with an ECU weight of 0.20) and, especially, for the German Federal Republic (with an ECU weight of 0.33). The proportion of the UK’s exports that was, at that time, sold in EMS countries was low relative to that of any other EMS member, and this was reflected in its ECU weight. As explained above, a given percentage shift in the external value of a heavily weighted currency moved the ECU, and hence the adjustment obligations of the remaining members, further than the same shift in a lightly weighted currency.
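
The arithmetic behind this non-symmetry can be made concrete. The short Python sketch below uses the ECU weights quoted above (0.33 for the German mark, 0.20 for the French franc, 0.13 for the British pound); the residual weight of 0.34, standing for all other members combined, is an assumption added only to complete the basket:

    # First-order effect on the ECU's external value of a 1% fall in one
    # member currency: roughly the currency's ECU weight times the shift.
    weights = {"DEM": 0.33, "FRF": 0.20, "GBP": 0.13, "others": 0.34}  # residual assumed

    def ecu_shift(currency, pct_change):
        return weights[currency] * pct_change

    for c in ("DEM", "FRF", "GBP"):
        print(f"a 1% fall in {c} moves the ECU by about {ecu_shift(c, 1.0):.2f}%")
    # The mark moves the ECU by about 0.33%, the franc by 0.20%, and the
    # pound by only 0.13%, so a shock to the mark imposes the largest
    # adjustment obligations on the remaining members.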

A second reason for the refusal of the UK to join the EMS in 1979 was that membership would not have led to greater stability of its exchange rates with respect to the currencies of its major trading partners, which were, at that time, outside the EMS group of countries.

An important reason for the British government’s continued refusal, for more than eleven years, to participate in the EMS was its concern about the loss of sovereignty that membership would imply. A floating exchange rate (even a managed floating exchange rate such as was operated by the UK government from 1972 to 1990) permits an independent monetary policy, but EMS obligations made this impossible. Monetarist views on the efficacy of restraining the rate of inflation by controlling the rate of growth of the money supply were dominant during the early years of the EMS, and an independent monetary policy was therefore seen as particularly significant.

By 1990, when the UK government decided to join the EMS, a number of economic conditions had changed. It is significant that the proportion of UK exports sold in EMS countries had risen markedly. Following substantial speculative selling of British currency in September 1992, however, the UK withdrew from the EMS. One of the causes of this was the substantial flow of short-term capital from the UK, where interest rates were relatively low, to Germany, which was implementing a very tight monetary policy and hence had very high interest rates. This illustrates that a common monetary policy is one of the necessary conditions for the operation of agreements, such as the EMS, that are intended to limit exchange-rate fluctuations.

The Euro

Despite the partial collapse of the EMS in 1992, a common currency, the euro, was introduced in 1999 by eleven of the fifteen members of the European Union, and a twelfth country joined the euro zone in 2001. From 1999, each national currency in this group had a rigidly fixed exchange rate with the euro (and, hence, with each other). Fixed exchange rates, in national currency units per euro, are listed in Table 1. In 2002, euro notes and coins replaced national currencies in these countries. The intention of the new currency arrangement was to reduce transaction costs and to encourage economic integration. The Snake and the EMS can perhaps be regarded as transitional structures leading to the introduction of the euro, which is the single currency of a single integrated economy.

Table 1 Value of the Euro (in terms of national currencies)

Austria 13.7603
Belgium 40.3399
Finland 5.94573
France 6.55957
Germany 1.95583
Greece 340.750
Irish Republic 0.787564
Italy 1936.27
Luxembourg 40.3399
Netherlands 2.20371
Portugal 200.482
Spain 166.386

Source: European Central Bank
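
Because these rates were rigidly fixed, conversion between any two legacy currencies passed through the euro rather than through a quoted cross rate. The following Python sketch illustrates this triangulation; the rounding of the intermediate euro amount to three decimal places reflects the EU conversion rules as I understand them, and the example should be read as a sketch rather than a definitive statement of the procedure:

    # Converting between legacy currencies via the euro (rates from Table 1).
    RATES = {            # national currency units per euro
        "DEM": 1.95583,  # German mark
        "FRF": 6.55957,  # French franc
        "ITL": 1936.27,  # Italian lira
    }

    def legacy_to_legacy(amount, from_ccy, to_ccy):
        euros = round(amount / RATES[from_ccy], 3)  # intermediate euro amount
        return euros * RATES[to_ccy]

    print(f"100 DEM = {legacy_to_legacy(100, 'DEM', 'FRF'):.2f} FRF")
    # 100 DEM -> 51.129 EUR -> 335.38 FRF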

Of the members of the European Union, to which participation in this innovation was restricted, Denmark, Sweden, and the UK chose not to introduce the euro in place of their existing currencies. The countries that adopted the euro in 1999 were Austria, Belgium, France, Finland, Germany, Irish Republic, Italy, Luxembourg, Netherlands, Portugal, and Spain.

Greece, which adopted the euro in 2001, was initially excluded from the new currency arrangement because it had failed to satisfy the conditions described in the Treaty of Maastricht, 1991. The limit specified in the Treaty for each of five variables is listed in Table 2.

Table 2 Conditions for Euro Introduction (Treaty of Maastricht, 1991)

Inflation rate 1.5 percentage points above the average of the three euro countries with the lowest rates
Long-term interest rates 2.0 percentage points above the average of the three euro countries with the lowest rates
Exchange-rate stability fluctuations within the EMS band for at least two years
Budget deficit/GDP ratio 3%
Government debt/GDP ratio 60%

Source: The Economist, May 31, 1997.
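
The conditions in Table 2 amount to a simple checklist, as the following Python sketch shows. The reference values (the averages for the three lowest-inflation members) and the candidate country’s figures are invented for the example and are not historical data:

    # Schematic check of the Table 2 conditions for a hypothetical candidate.
    def meets_maastricht(c, ref_inflation, ref_interest):
        return all([
            c["inflation"] <= ref_inflation + 1.5,  # percentage points
            c["long_rate"] <= ref_interest + 2.0,   # percentage points
            c["years_in_band"] >= 2,                # exchange-rate stability
            c["deficit_gdp"] <= 3.0,                # percent of GDP
            c["debt_gdp"] <= 60.0,                  # percent of GDP
        ])

    candidate = {"inflation": 2.1, "long_rate": 5.8, "years_in_band": 3,
                 "deficit_gdp": 2.4, "debt_gdp": 58.0}  # invented figures
    print(meets_maastricht(candidate, ref_inflation=1.4, ref_interest=4.5))  # True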

The euro is also used in countries that, before 1999, used currencies that it has replaced: Andorra (French franc and Spanish peseta), Kosovo (German mark), Monaco (French franc), Montenegro (German mark), San Marino (Italian lira), and Vatican (Italian lira). The euro is also the currency of French Guiana, Guadeloupe, Martinique, Mayotte, Réunion, and St Pierre-Miquelon that, as départements d’outre-mer, are constitutionally part of France.

The euro was adopted by Slovenia in 2007, by Cyprus (South) and Malta in 2008, by Slovakia in 2009, by Estonia in 2011, by Latvia in 2014, and by Lithuania in 2015. Table 3 shows the exchange rates between the euro and the currencies of these countries.

Table 3 Value of the Euro (in terms of national currencies)

Cyprus (South) 0.585274
Estonia 15.6466
Latvia 0.702804
Lithuania 3.4528
Malta 0.4293
Slovakia 30.126
Slovenia 239.64

Source: European Central Bank

Currencies whose exchange rates were, in 1998, pegged to currencies that have been replaced by the euro have had exchange rates defined in terms of the euro since its inception. The Communauté Financière Africaine (CFA) franc, which is used by Benin, Burkina Faso, Cameroon, Central African Republic, Chad, Congo Republic, Côte d’Ivoire, Equatorial Guinea, Gabon, Guinea-Bissau, Mali, Niger, Sénégal, and Togo was defined in terms of the French franc until 1998, and is now pegged to the euro. The Comptoirs Français du Pacifique (CFP) franc, which is used in the three French territories in the south Pacific (Wallis and Futuna Islands, French Polynesia, and New Caledonia), was also defined in terms of the French franc and is now pegged to the euro. The Comoros franc has similarly moved from a French-franc peg to a euro peg. The Cape Verde escudo, which was pegged to the Portuguese escudo, is also now pegged to the euro. Bosnia-Herzegovina and Bulgaria, which previously operated currency-board arrangements with respect to the German mark, now fix the exchange rates of their currencies in terms of the euro. Albania, Botswana, Croatia, Czech Republic, Denmark, Hungary, Iran, North Macedonia, Poland, Romania, São Tomé-Príncipe, and Serbia also peg their currencies to the euro. Additional countries that peg their currencies to a basket that includes the euro are Algeria, Belarus, China, Fiji, Kuwait, Libya, Morocco, Samoa (Western), Singapore, Syria, Tunisia, Turkey, and Vanuatu. (European Central Bank, 2020).

The group of countries that use the euro or that have linked the values of their currencies to the euro might be called the “greater euro zone.” It is interesting that membership of this group of countries has been determined largely by historical accident. Its members exhibit a marked absence of macroeconomic commonality. Within this bloc, macroeconomic indicators, including the values of GDP and of GDP per person, have a wide range of values. The degree of financial integration with international markets also varies substantially in these countries. Countries that stabilize their exchange rates with respect to a basket of currencies that includes the euro have adjustment systems that are less closely related to its value. This weaker connection means that these countries should not be regarded as part of the greater euro zone.

The establishment of the euro is a remarkable development whose economic effects, especially in the long term, are uncertain. This type of exercise, involving the rigid fixing of certain exchange rates and then the replacement of a group of existing currencies, has rarely been undertaken in the recent past. Other than the introduction of the euro, and the much less significant case of the merger in 1990 of the former People’s Democratic Republic of Yemen (Aden) and the former Arab Republic of Yemen (Sana’a), the monetary union that accompanied the expansion of the German Federal Republic to incorporate the former German Democratic Republic in 1990 is the sole recent example. However, the very distinctive political situation of post-1945 Germany (and its economic consequences) make it difficult to draw relevant conclusions from this experience. The creation of the euro is especially noteworthy at a time when the majority, and an increasing proportion, of countries have chosen floating (or managed floating) exchange rates for their currencies. With the important exception of China, this includes most major economies. This statement should be treated with caution, however, because countries that claim to operate a managed floating exchange rate frequently aim, as described by Calvo and Reinhart (2002), to stabilize their currencies with respect to the United States dollar.

When the euro was established, it replaced national currencies. However, this is not the same as the process known as dollarization, in which a country adopts another country’s currency. For example, the United States dollar is the sole legal tender in Ecuador, El Salvador, Marshall Islands, Micronesia, Palau, Panama, Timor-Leste, and Zimbabwe. It is also the sole legal tender in the overseas possessions of the United States (American Samoa, Guam, Northern Mariana Islands, Puerto Rico, and U.S. Virgin Islands), in two British territories (Turks and Caicos Islands and British Virgin Islands) and in the Caribbean Netherlands. Like the countries that use the euro, a dollarized country cannot operate an independent monetary policy. A euro-using country will, however, have some input into the formation of monetary policy, whereas dollarized countries have none. In addition, unlike euro-using countries, dollarized countries probably receive none of the seigniorage that is derived from the issue of currency.

Prospects for the Euro

The expansion of the greater euro zone, which is likely to continue with the economic integration of the new members of the European Union, and with the probable admission of additional new members, has enhanced the importance of the euro. However, this expansion is unlikely to make the greater euro zone into a major currency bloc comparable to, for example, the Sterling Area even at the time of its collapse in 1972. Mushin (2012) has described the nature and role of the Sterling Area.

Mundell (2003) has predicted that the establishment of the euro will be the model for a new currency bloc in Asia. However, there is no evidence yet of any significant movement in this direction. Eichengreen et al. (1995) have argued that monetary unification in the emerging industrial economies of Asia is unlikely to occur. A feature of Mundell’s paper is that he assumes that the benefits of joining a currency area almost necessarily exceed the costs, but this remains unproven.

The creation of the euro will have, and might already have had, macroeconomic consequences for the countries that comprise the greater euro zone. Since 1999, the influences on the import prices and export prices of these countries have included the effects of monetary policy run by the European Central Bank (www.ecb.int), a non-elected supra-national institution that is directly accountable neither to individual national governments nor to individual national parliaments, and developments, including capital flows, in world financial markets. Neither of these can be relied upon to ensure stable prices at an acceptable level in price-taking economies. The consequences of the introduction of the euro might be severe in some parts of the greater euro zone, especially in the low-GDP economies. For example, unemployment might increase if exports cease to have competitive prices. Further, domestic macroeconomic policy is not independent of exchange-rate policy. One of the costs of joining a monetary union is the loss of monetary-policy independence.

Data on Exchange-rate Policies

The best source of data on exchange-rate policies is probably the International Monetary Fund (IMF) (see www.imf.org). Almost all countries of significant size are members of the IMF; notable exceptions are Cuba (since 1964), the Republic of China (Taiwan) (since 1981), and the Democratic People’s Republic of Korea (North Korea). The most significant IMF publications that contain exchange-rate data are International Financial Statistics and the Annual Report on Exchange Arrangements and Exchange Restrictions.

Since 2009, the IMF has allocated each country’s exchange-rate policy to one of ten categories. Unfortunately, the definitions of these mean that the members of the greater euro zone are not easy to identify. In this taxonomy, the exchange-rate systems of countries that are part of a monetary union are classified according to the arrangements that govern the joint currency. The exchange-rate policies of the eleven countries that introduced the euro in 1999, Cyprus (South), Estonia, Greece, Latvia, Lithuania, Malta, Slovakia, and Slovenia are classified as “Free floating.” Kosovo, Montenegro, and San Marino have “No separate legal tender.” Bosnia-Herzegovina and Bulgaria have “Currency boards.” Cape Verde, Comoros, Denmark, Fiji, Kuwait, Libya, São Tomé and Príncipe, and the fourteen African countries that use the CFA franc have “Conventional pegs.” Botswana has a “Crawling peg.” Croatia, North Macedonia, and Morocco have a “Stabilized arrangement.” Romania and Singapore have a “Crawl-like arrangement.” Andorra, Monaco, Vatican, and the three territories in the south Pacific that use the CFP franc are not IMF members. Anderson, Habermeier, Kokenyne, and Veyrune (2009) explain and discuss the definitions of these categories and compare them to the definitions that the IMF used previously. Information on the exchange-rate policy of each of its members is published by the International Monetary Fund (2020).

Other Monetary Unions in Europe

The establishment of the Snake, the EMS, and the euro has affected some of the other monetary unions in Europe. The monetary unions of Belgium-Luxembourg, of France-Monaco, and of Italy-Vatican-San Marino predate the Snake, survived within the EMS, and have now been absorbed into the euro zone. Unchanged by the introduction of the euro are the UK-Gibraltar-Guernsey-Isle of Man-Jersey monetary union (which is the remnant of the Sterling Area that also includes the Falkland Islands and St. Helena), the Switzerland-Liechtenstein monetary union, and the use of the Turkish lira in Northern Cyprus.

The relationship between the currencies of the Irish Republic (previously the Irish Free State) and the UK is an interesting case study of the interaction of political and economic forces in the development of macroeconomic (including exchange-rate) policy. Despite the non-participation of the UK, the Irish Republic was a foundation member of the EMS. This ended the link between the British pound and the Irish Republic pound (also called the punt) that had existed since the establishment of the Irish currency following the partition of Ireland (1922), so that a step towards one monetary union destroyed another. Until 1979, the Irish Republic pound had a rigidly fixed exchange rate with the British pound, and each of the two banking systems cleared the other’s checks as if denominated in its own currency. These very close financial links meant that every policy decision of monetary importance in the UK coincided with an identical change in the Irish Republic, including the currency reforms of 1939 (US-dollar peg), 1949 (devaluation), 1967 (devaluation), 1971 (decimalization), 1972 (floating exchange rate), and 1972 (brief membership of the Snake). From 1979 until 1999, when the Irish Republic adopted the euro, there was a floating exchange rate between the British pound and the Irish Republic pound. South of the Irish border, the dominant political mood in the 1920s was the need to develop a distinct non-British national identity, but there were perceived to be good economic grounds for retaining a very close link with the British pound. By 1979, although political rhetoric still referred to the desire for a united Ireland, the economic situation had changed, and the decision to join the EMS without the membership of the UK meant that, for the first time, different currencies were used on each side of the Irish border. In both of these cases, political objectives were tempered by economic pressures.

Effects of the Global Financial Crisis

One of the ways of analyzing the significance of a new system is to observe the effects of circumstances that have not been predicted. The global financial crisis (GFC) that began in 2007 provides such an opportunity. In the UK and in the Irish Republic, whose business cycles are usually comparable, the problems that followed the GFC were similar in nature and in severity. In both of these countries, major banks (and therefore their depositors) were rescued from collapse by their governments. However, the macroeconomic outcomes have been different. The increase in the unemployment rate has been much greater in the Irish Republic than in the UK. The explanation for this is that an independent monetary policy is not possible in the Irish Republic, which is part of the euro zone. The UK, which does not use the euro, responded to the GFC by operating a very loose monetary policy (with a very low discount rate and large-scale “quantitative easing”). The effects of this have been compounded by depreciation of the British pound. Although, partly because of the common language, labor is mobile between the UK and the Irish Republic, the unemployment rate in the Irish Republic remains high because its real exchange rate is high and its real interest rates are high. The effect of the GFC is that the Irish Republic now has an overvalued currency, which has made an inefficient economy more inefficient. Simultaneously, the more efficient economies in the euro zone (and some countries that are outside the euro zone, including the UK, whose currencies have depreciated) now have undervalued currencies, which have encouraged their economies to expand. This illustrates one of the consequences of membership of the euro zone. Had the GFC been predicted, the estimation of the economic benefits for the Irish Republic (and for Greece, Italy, Portugal, Spain, and other countries) would probably have been different. The political consequences for the more efficient countries in the euro zone, including Germany, might also be significant. At great cost, these countries have provided financial assistance to the weaker members of the euro zone, especially Greece.

Conclusion

The future role of the euro is uncertain. Especially in view of the British decision to withdraw from the European Union, even its survival is not guaranteed. It is clear, however, that the outcome will depend on both political and economic forces.

References:

Adams, J. J. “The Exchange-Rate Mechanism in the European Monetary System.” Bank of England Quarterly Bulletin 30, no. 4 (1990): 479-81.

Anderson, Harald, Karl Habermeier, Annamaria Kokenyne, and Romain Veyrune. Revised System for the Classification of Exchange Rate Arrangements. Washington, DC: International Monetary Fund, 2009.

Calvo, Guillermo and Carmen Reinhart. “Fear of Floating.” Quarterly Journal of Economics 117, no. 2 (2002): 379-408.

Coffey, Peter and John Presley. European Monetary Integration. London: Macmillan Press, 1971.

Cohen, Benjamin. “Monetary Unions.” In Encyclopedia of Economic and Business History, edited by Robert Whaples, 2003. http://eh.net/encyclopedia/monetary-unions/

Eichengreen, Barry, James Tobin, and Charles Wyplosz. “Two Cases for Sand in the Wheels of International Finance.” Economic Journal 105, no. 1 (1995): 162-72.

European Central Bank. The International Role of the Euro. 2020.

International Monetary Fund. Annual Report of the Executive Board, 2020.

Mundell, Robert. “Prospects for an Asian Currency Area.” Journal of Asian Economics 14, no. 1 (2003): 1-10.

Mushin, Jerry. “The Sterling Area.” In Encyclopedia of Economic and Business History, edited by Robert Whaples, 2012. http://eh.net/encyclopedia/the-sterling-area/

Endnote:

Jerry Mushin can be reached at  .  This article includes material from some of the author’s publications:

Mushin, Jerry. “A Simulation of the European Monetary System.” Computer Education 35 (1980): 8-19.

Mushin, Jerry. “The Irish Pound: Recent Developments.” Atlantic Economic Journal 8, no. 4 (1980): 100-10.

Mushin, Jerry. “Exchange-Rate Adjustment in a Multi-Currency Monetary System.” Simulation 36, no. 5 (1981): 157-63.

Mushin, Jerry. “Non-Symmetry in the European Monetary System.” British Review of Economic Issues 8, no. 2 (1986): 85-89.

Mushin, Jerry. “Exchange-Rate Stability and the Euro.” New Zealand Banker 11, no. 4 (1999): 27-32.

Mushin, Jerry. “A Taxonomy of Fixed Exchange Rates.” Australian Stock Exchange Perspective 7, no. 2 (2001): 28-32.

Mushin, Jerry. “Exchange-Rate Policy and the Efficacy of Aggregate Demand Management.” The Business Economist 33, no. 2 (2002): 16-24.

Mushin, Jerry. Output and the Role of Money. New York, London and Singapore: World Scientific Publishing Company, 2002.

Mushin, Jerry. “The Deceptive Resilience of Fixed Exchange Rates.” Journal of Economics, Business and Law 6, no. 1 (2004): 1-27.

Mushin, Jerry. “The Uncertain Prospect of Asian Monetary Integration.” International Economics and Finance Journal 1, no. 1 (2006): 89-94.

Mushin, Jerry. “Increasing Stability in the Mix of Exchange Rate Policies.” Studies in Business and Economics 14, no. 1 (2008): 17-30.

Mushin, Jerry. “Predicting Monetary Unions.” International Journal of Economic Research 5, no. 1 (2008): 27-33.

Mushin, Jerry. Interest Rates, Prices, and the Economy. Jodhpur: Scientific Publishers (India), 2009.

Mushin, Jerry. “Infrequently Asked Questions on the Monetary Union of the Countries of the Gulf Cooperation Council.” Economics and Business Journal: Inquiries and Perspectives 3, no. 1 (2010): 1-12.

Mushin, Jerry. “Common Currencies: Economic and Political Causes and Consequences.” The Business Economist 42, no. 2 (2011): 19-26.

Mushin, Jerry. “Exchange Rates, Monetary Aggregates, and Inflation.” Bulletin of Political Economy 7, no. 1 (2013): 69-88.

Mushin, Jerry. “Monetary-Policy Targets and Exchange Rates.” Economics and Business Journal: Inquiries and Perspectives 5, no. 1 (2015): 1-12.

Mushin, Jerry and Uduakobong Edy-Ewoh. Output, Prices and Interest Rates. Ilishan-Remo: Babcock University Press, 2019.

Citation: Mushin, Jerry. “The Euro and Its Antecedents”. EH.Net Encyclopedia, edited by Robert Whaples. December 4, 2020. URL http://eh.net/encyclopedia/the-euro-and-its-antecedents/

History of Food and Drug Regulation in the United States

Marc T. Law, University of Vermont

Throughout history, governments have regulated food and drug products. In general, the focus of this regulation has been on ensuring the quality and safety of food and drugs. Food and drug regulation as we know it today in the United States had its roots in the late nineteenth century when state and local governments began to enact food and drug regulations in earnest. Federal regulation of the industry began on a large scale in the early twentieth century when Congress enacted the Pure Food and Drugs Act of 1906. The regulatory agency spawned by this law – the U.S. Food and Drug Administration (FDA) – now directly regulates between one-fifth and one-quarter of U.S. gross domestic product (GDP) and possesses significant power over product entry, the ways in which food and drugs are marketed to consumers, and the manufacturing practices of food and drug firms. This article will focus on the evolution of food and drug regulation in the United States from the middle of the nineteenth century until the present day.1

General Issues in Food and Drug Regulation

Perhaps the most enduring problem in the food and drug industry has been the issue of “adulteration” – the cheapening of products through the addition of impure or inferior ingredients. Since ancient times, producers of food and drug products have attempted to alter their wares in an effort to obtain dear prices for cheaper goods. For instance, water has often been added to wine, the cream skimmed from milk, and chalk added to bread. Hence, regulations governing what could or could not be added to food and drug products have been very common, as have regulations that require the use of official weights and measures. Because the adulteration of food and drugs may pose both economic and health risks to consumers, the stated public interest motivation for food and drug regulation has generally been to protect consumers from fraudulent and/or unsafe food and drug products.

From an economic perspective, regulations like these may be justified in markets where producers know more about product quality than consumers. As Akerlof (1970) demonstrates, when consumers have less information about product quality than producers, lower quality products (which are generally cheaper to produce) may drive out higher quality products. Asymmetric information about product quality may thus result in lower quality products – the so-called “lemons” – dominating the market. To the extent that regulators are better informed about quality than consumers, regulation that punishes firms that cheat on quality or that requires firms to disclose information about product quality can improve efficiency. Thus, regulations governing what can or cannot be added to products, how products are labeled, and whether certain products can be safely sold to consumers, can be justified in the public interest if consumers do not possess the information to accurately discern these aspects of product quality on their own. Regulations that solve the asymmetric information problem benefit consumers who desire better information about product quality, as well as producers of higher quality products, who desire to segment the market for their wares.

For certain products, it may be relatively easy for consumers to know whether or not they have been deceived into purchasing a low quality product after consuming it. For such goods, sometimes called “experience goods,” market mechanisms like branding or repeat purchase may be adequate to solve the asymmetric information problem. Consumers can “punish” firms that cheat on quality by taking their business elsewhere (Klein and Leffler 1981). Hence, as long as consumers are able to identify whether or not they have been cheated, regulation may not be needed to solve the asymmetric information problem. However, for those products where quality is not easily ascertained by consumers even after consuming the product, market mechanisms are unlikely to be adequate since it is impossible for consumers to punish cheaters if they cannot determine whether or not they have in fact been cheated (Darby and Karni 1973; McCluskey 2000). For such “credence goods,” market mechanisms may therefore be insufficient to ensure that the right level of quality is delivered. Like all goods, food and drugs are multidimensional in terms of product quality. Some dimensions of quality (for instance, flavor or texture) are experience goods because they can be easily determined upon consumption. Other dimensions (for instance, the ingredients contained in certain foods, the caloric content of foods, whether or not an item is “organic,” or the therapeutic merits of medicines) are better characterized as credence goods since it may not be obvious to even a sophisticated consumer whether or not he has been cheated. Hence, there are a priori reasons to believe that market forces will not be adequate to solve the asymmetric information problem that plagues many dimensions of food and drug quality.
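
The unraveling that Akerlof describes can be shown in a stylized simulation. In the Python sketch below, buyers cannot observe individual quality and so offer only the average quality of whatever remains on the market, while each seller’s reservation value equals its product’s quality; all numbers are illustrative:

    # Stylized "lemons" unraveling: buyers offer the mean quality of goods
    # still on the market; sellers whose quality exceeds the offer withdraw,
    # which lowers the mean, and the process repeats.
    def surviving_market(qualities):
        market = sorted(qualities)
        while market:
            offer = sum(market) / len(market)     # buyers pay expected quality
            stayers = [q for q in market if q <= offer]
            if stayers == market:                 # no remaining seller exits
                return market, offer
            market = stayers
        return [], 0.0

    market, price = surviving_market([20, 40, 60, 80, 100])
    print(market, price)  # [20] 20.0 -- only the lowest quality trades

Under these assumptions the market unravels until only the lowest-quality good trades, which is the sense in which “lemons” drive out higher-quality products.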

Economists have long recognized that regulation is not always enacted to improve efficiency and advance the public interest. Indeed, since Stigler (1971) and Peltzman (1976), it has often been argued that regulation is sought by specific industry groups in order to tilt the competitive playing field to their advantage. For instance, by functioning as an entry barrier, regulation may raise the profits of incumbent firms by precluding the entry of new firms and new products. In these instances of “regulatory capture,” regulation harms efficiency by limiting the extent of competition and innovation in the market. In the context of product quality regulations like those applying to food and drugs, regulation may help incumbent producers by making it more costly for newer products to enter the market. Indeed, regulations that require producers to meet certain minimum standards or that ban the use of certain additives may benefit incumbent producers at the expense of producers of cheaper substitutes. Such regulations may also harm consumers, whose needs may be better met by these new prohibited products. The observation that select producer interests are often among the most vocal proponents of regulation is consistent with this regulatory capture explanation for regulation. Indeed, as we will see, a desire to shift the competitive playing field in favor of the producers of certain products has historically been an important motivation for food and drug regulation.

The fact that producer groups are often among the most important political constituencies in favor of regulation need not, however, imply that regulation necessarily advances the interests of these producers at the expense of efficiency. As noted earlier, to the extent that regulation reduces informational asymmetries about product quality, regulation may benefit producers of higher quality items as well as the consumers of such goods. Indeed, such efficiency-enhancing regulation may be particularly desirable for those producers whose goods are least amenable to market-based solutions to the asymmetric information problem (i.e., credence goods) precisely because it helps these producers expand the market for their wares and increase their profits. Hence, because it is possible for regulation that benefits certain producers to also improve welfare, producer support for regulation should not be taken as prima facie evidence of Stiglerian regulation.

United States’ Experience with Food and Drug Regulation

From colonial times until the mid to late nineteenth century, most food and drug regulation in America was enacted at the state and local level. Additionally, these regulations were generally targeted toward specific food products (Hutt and Hutt 1984). For instance, in 1641 Massachusetts introduced its first food adulteration law, which required the official inspection of beef, pork and fish; this was followed in the 1650s with legislation that regulated the quality of bread. Meanwhile, Virginia in the 1600s enacted laws to regulate weights and measures for corn and to outlaw the sale of adulterated wines.

During the latter half of the nineteenth century, the scale and scope of state level food regulation expanded considerably. Several factors contributed to this growth in legislation. For instance:

  • Specialization and urbanization made households more dependent on food purchased in impersonal markets. While these forces increased the variety of foods available, they also increased uncertainty about quality, since the more specialized and urbanized consumers became, the less they knew about the quality of products purchased from others (Wallis and North 1986).
  • Technological change in food manufacturing gave rise to new products and increased product complexity. The late nineteenth century witnessed the introduction of several new food products including alum-based baking powders, oleomargarine (the first viable substitute for butter), glucose, canned foods, “dressed” (i.e. refrigerated) beef, blended whiskey, chemical preservatives, and so on (Strasser 1989; Young 1989; Goodwin 1999). Unfamiliarity with these new products generated consumer concerns about food safety and food adulteration. Moreover, because many of these new products directly challenged the dominant position enjoyed by more traditional foods, these developments also gave rise to demands for regulation on the part of traditional food producers, who desired regulation to disadvantage these new competitors (Wood 1986).
  • Related to the previous point, the rise of analytic chemistry facilitated the “cheapening” of food in ways that were difficult for consumers to detect. For instance, the introduction of preservatives made it possible for food manufacturers to mask food deterioration. Additionally, the development of glucose as a cheap alternative to sugar facilitated deception on the part of producers of high-priced products like maple syrup. Hence, concerns about adulteration intensified. At the same time, however, the rise of analytic chemistry also improved the ability of experts to detect these more subtle forms of food adulteration.
  • Because food adulteration became more difficult to detect, market mechanisms that relied on the ability of consumers to detect cheating ex post became less effective in solving the food adulteration problem. Hence, there was a growing perception that regulation by experts was necessary.2

Given this environment, it is perhaps unsurprising that a mixture of incentives gave rise to food regulation in the late nineteenth century. General pure food and dairy laws that required producers to properly label their products to indicate whether mixtures or impurities were added were likely enacted to help reduce asymmetric information about product quality (Law 2003). While producers of “pure” items also played a role in demanding these regulations, consumer groups – specifically women’s groups and leaders of the fledgling home economics movement – were also an important constituency in favor of regulation because they desired better information about food ingredients (Young 1989; Goodwin 1999). In contrast, narrow producer interest motivations seem to have been more important in generating a demand for more specific food regulations. For instance, state and federal oleomargarine restrictions were clearly enacted at the behest of dairy producing interests, who wanted to limit the availability of oleomargarine (Dupré 1999). Additionally, state and federal meat inspection laws were introduced to placate local butchers and local slaughterhouses in eastern markets who desired to reduce the competitive threat posed by the large mid-western meat packers (Yeager 1981; Libecap 1992).

Federal regulation of the food and drug industry was mostly piecemeal until the early 1900s. In 1848, Congress enacted the Drug Importation Act to curb the import of adulterated medicines. The 1886 oleomargarine tax required margarine manufacturers to stamp their product in various ways, imposed an internal revenue tax of 2 cents per pound on all oleomargarine produced in the United States, and levied a fee of $600 per year on oleomargarine producers, $480 per year on oleomargarine wholesalers, and $48 per year on oleomargarine retailers (Lee 1973; Dupré 1999). The 1891 Meat Inspection Act mandated the inspection of all live cattle for export as well as of all live cattle that were to be slaughtered and the meat exported. In 1897 the Tea Importation Act was passed, which required Customs inspection of tea imported into the United States. Finally, in 1902 Congress enacted the Biologics Control Act to regulate the safety of vaccinations and serums used to prevent diseases in humans.
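
The cumulative weight of these levies is easy to see with a back-of-the-envelope calculation; the production volume in the following Python snippet is hypothetical, chosen only to show the arithmetic:

    # Annual federal burden under the 1886 oleomargarine tax (illustrative).
    pounds_produced = 50_000          # hypothetical annual output
    excise = 0.02 * pounds_produced   # 2 cents per pound
    license_fee = 600                 # annual fee for a producer
    print(f"${excise + license_fee:,.0f} per year")  # $1,600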

The 1906 Pure Food and Drugs Act and the 1906 Meat Inspection Act

The first general pure food and drug law at the federal level was not enacted until 1906 with the passage of the Pure Food and Drugs Act. While interest in federal regulation arose contemporaneously with interest in state regulation, conflict among competing interest groups regarding the provisions of a federal law made it difficult to build an effective political constituency in favor of federal regulation (Anderson 1958; Young 1989; Law and Libecap 2004). The law that emerged from this long legislative battle was similar in character to the state pure food laws that preceded it in that its focus was on accurate product labeling: it outlawed interstate trade in “adulterated” and “misbranded” foods, and required producers to indicate the presence of mixtures and/or impurities on product labels. Unlike earlier state legislation, however, the adulteration and misbranding provisions of this law also applied to drugs. Additionally, drugs listed in the United States Pharmacopoeia (USP) and the National Formulary (NF) were required to conform to USP and NF standards. Congress enacted the Pure Food and Drugs Act along with the 1906 Meat Inspection Act, which tightened the USDA’s oversight of meat production. This new meat inspection law mandated ante- and post-mortem inspection of livestock, established sanitary standards for slaughterhouses and processing plants, and required continuous USDA inspection of meat processing and packaging. While the desire to create more uniform national food regulations was an important underlying motivation for regulation, it is noteworthy that both of these laws were enacted following a flurry of investigative journalism about the quality of meat and patent medicines. Specifically, the publication of Upton Sinclair’s The Jungle, with its vivid description of the conditions of the meat packing industry, as well as a series of articles by Samuel Hopkins Adams published in Collier’s Weekly about the dangers associated with patent medicine use, played a key role in provoking legislators to enact federal regulation of food and drugs (Wood 1986; Young 1989; Carpenter 2001; Law and Libecap 2004).3

Responsibility for enforcing the Pure Food and Drugs Act fell to the Bureau of Chemistry, a division within the USDA, which conducted some of the earliest studies of food adulteration within the United States. The Bureau of Chemistry was renamed the Food, Drug, and Insecticide Administration in 1927. In 1931 the name was shortened to the Food and Drug Administration (FDA). In 1940 the FDA was transferred from the USDA to the Federal Security Agency, which, in 1953, was renamed the Department of Health, Education and Welfare.

Whether the 1906 Pure Food and Drugs Act was enacted to advance special interests or to improve efficiency is a subject of some debate. Kolko (1967), for instance, suggests that the law reflected regulatory capture by large, national food manufacturers, who wanted to use federal legislation to disadvantage smaller, local firms. Coppin and High (1999) argue that rent-seeking on the part of bureaucrats within the government – specifically, Dr. Harvey Wiley, chief of the Bureau of Chemistry – was a critical factor in the emergence of this law. According to Coppin and High, Wiley was a “bureaucratic entrepreneur” who sought to ensure the future of his agency. By building ties with pro-regulation interest groups and lobbying in favor of a federal food and drug law, Wiley secured a lasting policy area for his organization. Law and Libecap (2004) argue that a mixture of bureaucratic, producer and consumer interests were in favor of federal food and drugs regulation, but that the last-minute onset of consumer interest in regulation (provoked by muckraking journalism about food and drug quality) played a key role in influencing the timing of regulation.

Enforcement of the Pure Food and Drugs Act met with mixed success. Indeed, the evidence from the enforcement of this law suggests that neither the pure industry-capture hypothesis nor the public-interest hypothesis provides an adequate account of this regulation. On the one hand, some evidence suggests that the fledgling FDA’s enforcement work helped raise standards and reduce informational asymmetries about food quality. For instance, under the Net Weight Amendment of 1919, food and drug packages shipped in interstate commerce were required to be “plainly and conspicuously marked to show the quantity of contents in terms of weight, measure, and numerical count” (Weber 1928, p. 28). Similarly, under the Seafood Amendment of 1934, Gulf coast shrimp packaged under FDA supervision was required to be stamped with a label stating “Production supervised by the U.S. Food and Drug Administration” as a mechanism for ensuring quality and freshness. Additionally, during this period, investigators from the FDA played a key role in helping manufacturers improve the quality and reliability of processed foods, poultry products, food colorings, and canned items (Robinson 1900; Young 1992; Law 2003). On the other hand, the FDA’s efforts to regulate the patent medicine industry – specifically, to regulate the therapeutic claims that patent medicine firms made about their products – were largely unsuccessful. In U.S. vs. Johnson (1911), the Supreme Court ruled that therapeutic claims were essentially subjective and hence beyond the reach of this law. This situation was partially alleviated by the Sherley Amendment of 1912, which made it possible for the government to prosecute patent medicine producers who intended to defraud consumers. Effective regulation of pharmaceuticals was generally not possible, however, because under this amendment the government needed to prove fraud in order to successfully prosecute a patent medicine firm for making false therapeutic claims about its products (Young 1967). Hence, until new legislation was enacted in 1938, the patent medicine industry continued to escape effective federal control.

The 1938 Food, Drugs and Cosmetics Act

Like the law it replaced (the 1906 Pure Food and Drugs Act), the Food, Drugs and Cosmetics Act of 1938 was enacted following a protracted legislative battle. In the early 1930s, the FDA and its Congressional supporters began to lobby in favor of replacing the Pure Food and Drugs Act with stronger legislation that would give the agency greater authority to regulate the patent medicine industry. These efforts were successfully challenged by the patent medicine industry and its Congressional allies until 1938, when the so-called “Elixir Sulfanilamide tragedy” made it impossible for Congress to continue to ignore demands for tighter regulation. The story behind the Elixir Sulfanilamide tragedy is as follows. In 1937, Massengill, a Tennessee drug company, began to market a liquid sulfa drug called Elixir Sulfanilamide. Unfortunately, the solvent in this drug was diethylene glycol, a highly toxic chemical used in antifreeze; as a result, over 100 people died from taking this drug. Public outcry over this tragedy was critical in breaking the Congressional deadlock over tighter regulation (Young 1967; Jackson 1970; Carpenter and Sin 2002).

Under the 1938 law, the FDA was given considerably greater authority over the food and drug industry. The FDA was granted the power to regulate the therapeutic claims drug manufacturers printed on their product labels; authority over drug advertising, however, rested with the Federal Trade Commission (FTC) under the Wheeler-Lea Act of 1938. Additionally, the new law required that drugs be marketed with adequate directions for safe use, and FDA authority was extended to include medical devices and cosmetics. Perhaps the most striking and novel feature of the 1938 law was that it introduced mandatory pre-market approval for new drugs. Under this new law, drug manufacturers were required to demonstrate to the FDA that a new drug was safe before it could be released to the market. This feature of the legislation was clearly a reaction to the Elixir Sulfanilamide incident; food and drug bills introduced in Congress prior to 1938 did not include provisions requiring mandatory pre-market approval of new drugs.

Within a short period of time, the FDA began to deem some drugs to be so dangerous that no adequate directions could be written for direct use by patients. As a consequence, the FDA created a new class of drugs which would only be available with a physician’s prescription. Ambiguity over whether certain medicines – specifically, amphetamines and barbiturates – could be safely marketed directly to consumers or required a physician’s prescription led to disagreements between physicians, pharmacists, drug companies, and the FDA (Temin 1980). The political response to these conflicts was the Durham-Humphrey Amendment of 1951, which permitted a drug to be sold directly to patients unless, “because of its toxicity or other potentiality for harmful effect, or the method of its use, or the collateral measures necessary to its use,” it could safely be sold and used only under the supervision of a practitioner.

The most significant expansion in FDA authority over drugs in the post World War II period occurred when Congress enacted the 1962 Drug Amendments (also known as the Kefauver-Harris Amendments) to the Food, Drugs and Cosmetics Act. Like the 1938 law, the 1962 Drug Amendments were passed in response to a therapeutic crisis – in this instance, the discovery that the use of thalidomide (a sedative that was marketed to combat the symptoms associated with morning sickness) by pregnant women caused birth deformities in thousands of babies in Europe.4 As a result of these amendments, drug companies were required to establish that drugs were both safe and effective prior to market release (the 1938 law only required proof of safety) and the FDA was granted greater authority to oversee clinical trials for new drugs. Under the 1962 Drug Amendments, responsibility for regulating prescription drug advertising was transferred from the FTC to the FDA; furthermore, the FDA was given the authority to establish good manufacturing practices in the drug industry and the power to access company records to monitor these practices. As a result of these amendments, the United States today has among the toughest drug approval regimes in the developed world.

A large and growing body of scholarship has been devoted to analyzing the economics and politics of the drug approval process. Early work focused on the extent to which the FDA’s pre-market approval process has affected the rate of innovation and the availability of new pharmaceuticals.5 Peltzman (1973), among others, argues that the 1962 Drug Amendments significantly reduced the flow of new drugs onto the market and imposed large welfare losses on society. These views have been challenged by Temin (1980), who maintains that much of the decline in new drug introductions occurred prior to the 1962 Drug Amendments. More recent work, however, suggests that the FDA’s pre-market approval process has indeed reduced the availability of new medicines (Wiggins 1981). In international comparisons, scholars have also found that new medicines generally become available more quickly in Europe than in America, suggesting that tighter regulation in the U.S. has induced a drug lag (Wardell and Lasagna 1975; Grabowski and Vernon 1983; Kaitin and Brown 1995). Some critics believe that the costs of this drug lag are large relative to the benefits because delay in the introduction of new drugs prevents patients from accessing new and more effective medicines. Gieringer (1985), for instance, estimates that the number of deaths that can be attributed to the drug lag far exceeds the number of lives saved by extra caution on the part of the FDA. Hence, according to these authors, the 1962 Drug Amendments may have had adverse consequences for overall welfare.

Other scholarship has examined the pattern of drug approval times in the post 1962 period. It is commonly observed that larger pharmaceutical firms receive faster drug approvals than smaller firms. One interpretation of this fact is that larger firms have “captured” the drug approval process and use the process to disadvantage their smaller competitors. Empirical work by Olson (1997) and Carpenter (2002), however, casts some doubt on this Stiglerian interpretation.6 These authors find that while larger firms do generally receive quicker drug approvals, drug approval times are also responsive to several other factors, including the specific disease at which a drug is directed, the number of applications submitted by the drug company, and the existence of a disease-specific interest group. Indeed, in other work, Carpenter (2004a) demonstrates that a regulator that seeks to maximize its reputation for protecting consumer safety may approve new drugs in ways that appear to benefit large firms.7 Hence, the fact that large pharmaceutical firms obtain faster drug approvals than small firms need not imply that the FDA has been “captured” by these corporations.

Food and Drug Regulation since the 1960s

Since the passage of the 1962 Drug Amendments, federal food and drug regulation in the United States has evolved along several lines. In some cases, regulation has strengthened the government’s authority over various aspects of the food and drug trade. For instance, the 1976 Medical Device Amendments required medical device manufacturers to register with the FDA and to follow quality control guidelines. These amendments also established pre-market approval guidelines for medical devices. Along similar lines, the 1990 Nutrition Labeling and Education Act required all packaged foods to contain standardized nutritional information and standardized information on serving sizes.8

In other cases, regulations have been enacted to streamline the pre-market approval process for new drugs. Concerns that mandatory pre-market approval of new drugs may have reduced the rate at which new pharmaceuticals become available to consumers prompted the FDA to issue new rules in 1991 to accelerate the review of drugs for life-threatening diseases. Similar concerns also motivated Congress to enact the Prescription Drug User Fee Act of 1992, which required drug manufacturers to pay fees to the FDA to review drug approval applications and required the FDA to use these fees to pay for more reviewers to assess new drug applications.9 Speedier drug approval times have not, however, come without costs. Evidence presented by Olson (2002) suggests that faster drug approval times have also contributed to a higher incidence of adverse drug reactions from new pharmaceuticals.

Finally, in a few instances, legislation has weakened the government’s authority over food and drug products. For example, the 1976 Vitamins and Minerals Amendments precluded the FDA from establishing standards that limited the potency of vitamins and minerals added to foods. Similarly, the 1994 Dietary Supplement Health and Education Act weakened the FDA’s ability to regulate dietary supplements by classifying them as foods rather than drugs. In these cases, the consumers and producers of “natural” or “herbal” remedies played a key role in pushing Congress to limit the FDA’s authority.

References

Akerlof, George A. “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” Quarterly Journal of Economics 84, no. 3 (1970): 488-500.

Anderson, Oscar E. Jr. The Health of a Nation: Harvey W. Wiley and the Fight for Pure Food. Chicago: University of Chicago Press, 1958.

Carpenter, Daniel P. The Forging of Bureaucratic Autonomy: Reputation, Networks, and Policy Innovation in Executive Agencies, 1862-1928. Princeton: Princeton University Press, 2001.

Carpenter, Daniel P. “Groups, the Media, Agency Waiting Costs, and FDA Drug Approval.” American Journal of Political Science 46, no. 2 (2002): 490-505.

Carpenter, Daniel P. “Protection without Capture: Drug Approval by a Politically Responsive, Bayesian Regulator.” American Political Science Review (2004a): forthcoming.

Carpenter, Daniel P. “The Political Economy of FDA Drug Review: Processing, Politics, and Lessons for Policy.” Health Affairs 23, no. 1 (2004b):52-63.

Carpenter, Daniel P. and Gisela Sin. “Crisis and the Emergence of Economic Regulation: The Food, Drug and Cosmetics Act of 1938.” University of Michigan, Department of Political Science, unpublished manuscript, 2002.

Comanor, William S. “The Political Economy of the Pharmaceutical Industry.” Journal of Economic Literature 24, no. 3 (1986): 1178-1217.

Coppin, Clayton and Jack High. The Politics of Purity: Harvey Washington Wiley and the Origins of Federal Food Policy. Ann Arbor: University of Michigan Press, 1999.

Darby, Michael R. and Edi Karni. “Free Competition and the Optimal Amount of Fraud.” Journal of Law and Economics 16, no. 1 (1973): 67-88.

Dupré, Ruth. “If It’s Yellow, It Must be Butter: Margarine Regulation in North America since 1886.” Journal of Economic History 59, no 2 (1999): 353-71.

French, Michael and Jim Phillips. Cheated Not Poisoned? Food Regulation in the United Kingdom, 1875-1938. Manchester: Manchester University Press, 2000.

Gieringer, Dale H. “The Safety and Efficacy of New Drug Approvals.” Cato Journal 5, no. 1 (1985): 177-201.

Goodwin, Lorine S. The Pure Food, Drink, and Drug Crusaders, 1879-1914. Jefferson, NC: McFarland & Company, 1999.

Grabowski, Henry G. and John M. Vernon. The Regulation of Pharmaceuticals: Balancing the Benefits and Risks. Washington, DC: American Enterprise Institute, 1983.

Harris, Steven B. “The Right Lesson to Learn from Thalidomide.” 1992. Available at: http://w3.aces.uiuc.edu:8001/Liberty/Tales/Thalidomide.html.

Hutt, Peter Barton and Peter Barton Hutt II. “A History of Government Regulation of Adulteration and Misbranding of Food.” Food, Drug and Cosmetic Law Journal 39 (1984): 2-73.

Ippolito, Pauline M. and Janis K. Pappalardo. Advertising, Nutrition, and Health: Evidence from Food Advertising, 1977-1997. Bureau of Economics Staff Report. Washington, DC: Federal Trade Commission, 2002.

Jackson, Charles O. Food and Drug Legislation in the New Deal. Princeton: Princeton University Press, 1970.

Kaitin, Kenneth I. and Jeffrey S. Brown. “A Drug Lag Update.” Drug Information Journal 29, no. 2 (1995): 361-73.

Klein, Benjamin and Keith B. Leffler. “The Role of Market Forces in Assuring Contractual Performance.” Journal of Political Economy 89, no. 4 (1981): 615-41.

Kolko, Gabriel. The Triumph of Conservatism: A Reinterpretation of American History. New York: MacMillan, 1967.

Law, Marc T. “The Origins of State Pure Food Regulation.” Journal of Economic History 63, no. 4 (2003): 1103-1130.

Law, Marc T. “How Do Regulators Regulate? Enforcement of the Pure Food and Drugs Act, 1907-38.” University of Vermont, Department of Economics, unpublished manuscript, 2003.

Law, Marc T. and Gary D. Libecap. “The Determinants of Progressive Era Reform: The Pure Food and Drug Act of 1906.” In Corruption and Reform: Lessons from America’s History, edited by Edward Glaeser and Claudia Goldin. Chicago: University of Chicago Press, 2004 (forthcoming).

Lee, R. Alton. A History of Regulatory Taxation. Lexington: University of Kentucky Press, 1973.

Libecap, Gary D. “The Rise of the Chicago Packers and the Origins of Meat Inspection and Antitrust.” Economic Inquiry 30, no. 2 (1992): 242-262.

Mathios, Alan D. “The Impact of Mandatory Disclosure Laws on Product Choices: An Analysis of the Salad Dressing Market.” Journal of Law and Economics 43, no. 2 (2000): 651-77.

McCluskey, Jill J. “A Game Theoretic Approach to Organic Foods: An Analysis of Asymmetric Information and Policy.” Agricultural and Resource Economics Review 29, no. 1 (2000): 1-9.

Olson, Mary K. “Regulatory Agency Discretion Among Competing Industries: Inside the FDA.” Journal of Law, Economics, and Organization 11, no. 2 (1995): 379-401.

Olson, Mary K. “Explaining Regulatory Behavior in the FDA: Political Control vs. Agency Discretion.” In Advances in the Study of Entrepreneurship, Innovation, and Economic Growth, edited by Gary D. Libecap, 71-108, Greenwich: JAI Press, 1996a.

Olson, Mary K. “Substitution in Regulatory Agencies: FDA Enforcement Alternatives.” Journal of Law, Economics, and Organization 12, no. 2 (1996b): 376-407.

Olson, Mary K. “Firms’ Influences on FDA Drug Approval.” Journal of Economics and Management Strategy 6, no. 2 (1997): 377-401.

Olson, Mary K. “Regulatory Reform and Bureaucratic Responsiveness to Firms: The Impact of User Fees in the FDA.” Journal of Economics and Management Strategy 9, no. 3 (2000): 363-95.

Olson, Mary K. “Pharmaceutical Policy Change and the Safety of New Drugs.” Journal of Law and Economics 45, no 2, Part II (2002): 615-42.

Peltzman, Sam. “An Evaluation of Consumer Protection Legislation: The 1962 Drug Amendments.” Journal of Political Economy 81, no. 5 (1973): 1049-91.

Peltzman, Sam. “Toward a More General Theory of Regulation.” Journal of Law and Economics 19, no. 2 (1976): 211-40.

Robinson, Lisa M. “Regulating What We Eat: Mary Engle Pennington and the Food Research Laboratory.” Agricultural History 64 (1990): 143-53.

Stigler, George J. “The Theory of Economic Regulation.” Bell Journal of Economics and Management Science 2, no. 1 (1971): 3-21.

Strasser, Susan. Satisfaction Guaranteed: The Making of the American Mass Market. New York: Pantheon Books, 1989.

Temin, Peter. Taking Your Medicine: Drug Regulation in the United States. Cambridge: Harvard University Press, 1980.

Wallis, John J. and Douglass C. North. “Measuring the Transaction Sector of the American Economy, 1870-1970.” In Long Term Factors in American Economic Growth, edited by Stanley Engerman and Robert Gallman, 95-148. Chicago: University of Chicago Press, 1986.

Wardell, William M. and Louis Lasagna. Regulation and Drug Development. Washington, DC: American Enterprise Institute, 1975.

Weber, Gustavus. The Food, Drug and Insecticide Administration: Its History, Activities, and Organization. Baltimore: Johns Hopkins University Press, 1928.

Wiggins, Steven N. “Product Quality Regulation and New Drug Introductions: Some New Evidence from the 1970s.” Review of Economics and Statistics 63, no. 4 (1981): 615-19.

Wood, Donna J. The Strategic Use of Public Policy: Business and Government in the Progressive Era. Marshfield, MA: Pitman Publishing, 1986.

Yeager, Mary A. Competition and Regulation: The Development of Oligopoly in the Meat Packing Industry. Greenwich, CT: JAI Press, 1981.

Young, James H. The Medical Messiahs: A Social History of Quackery in Twentieth Century America. Princeton: Princeton University Press, 1967.

Young, James H. Pure Food: Securing the Federal Food and Drugs Act of 1906. Princeton: Princeton University Press. 1986.

Young, James H. “Food and Drug Enforcers in the 1920s: Restraining and Educating Business.” Business and Economic History 21 (1992): 119-128.

1 See Hutt and Hutt (1984) for an excellent survey of the history of food regulation in earlier times. French and Phillips (2000) discuss the development of food regulation in the United Kingdom.

3 It is noteworthy that in writing The Jungle, Sinclair’s motivation was not to obtain federal meat inspection legislation, but rather, to provoke public outrage over industrial working conditions. “I aimed at the public’s heart,” he later wrote, “and by accident I hit it in the stomach.” (Quoted in Kolko 1967, p. 103.)

4 Thalidomide was not approved for sale in the U.S. The fact that an FDA official – Dr. Frances Kelsey, an FDA drug examiner – played a key role in blocking its availability in the United States gave even more legitimacy to the view that the FDA’s authority over pharmaceuticals needed to be strengthened. See Temin (1980, pp. 123-24). Ironically, Dr. Kelsey’s efforts to block the introduction of thalidomide in the United States stemmed not from knowledge that thalidomide caused birth defects, but from concerns that thalidomide might cause neuropathy (a disease of the nervous system) in some of its users. Indeed, the association between thalidomide and birth defects was discovered by researchers in Europe, not by drug investigators at the FDA. Hence, the FDA may not in fact have deserved the credit it was given for preventing the thalidomide tragedy from spreading to the U.S. (Harris 1992).

5 See Comanor (1986) for a summary of this literature.

6 Along these lines, Olson (1995, 1996a, 1996b) also finds that other aspects of the FDA’s enforcement work from the 1970s until the present are generally responsive to pressures from multiple interest groups including firms, consumer groups, the media, and Congress.

7 For a very readable discussion of this perspective see Carpenter (2004b).

8 See Mathios (2000) and Ippolito and Pappalardo (2002) for analyses of the effects of this law on food consumption choices.

9 See Olson (2000) for analysis of the effects of these user fees on approval times.

Citation: Law, Marc. “History of Food and Drug Regulation in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. October 11, 2004. URL http://eh.net/encyclopedia/history-of-food-and-drug-regulation-in-the-united-states/

An Economic History of Denmark

Ingrid Henriksen, University of Copenhagen

Denmark is located in Northern Europe between the North Sea and the Baltic. Today Denmark consists of the Jutland Peninsula bordering Germany and the Danish Isles and covers 43,069 square kilometers (16,629 square miles).1 The present nation is the result of several cessions of territory throughout history. The last of the former Danish territories in southern Sweden were lost to Sweden in 1658, following one of the numerous wars between the two nations, which especially marred the sixteenth and seventeenth centuries. Following defeat in the Napoleonic Wars, Norway was separated from Denmark in 1814. After the last major war, the Second Schleswig War in 1864, Danish territory was further reduced by a third when Schleswig and Holstein were ceded to Germany. After a regional referendum in 1920 only North Schleswig returned to Denmark. Finally, Iceland withdrew from the union with Denmark in 1944. The following account deals with the geographical unit of today’s Denmark.

Prerequisites of Growth

Throughout history a number of advantageous factors have shaped the Danish economy. From this perspective it may not be surprising to find today’s Denmark among the richest societies in the world. According to the OECD, it ranked seventh in 2004, with income of $29,231 per capita (PPP). Although we can identify a number of turning points and breaks, this long-run position has changed little over the period for which we have quantitative evidence. Thus Maddison (2001), in his estimate of GDP per capita around 1600, places Denmark at number six. One interpretation could be that favorable circumstances, rather than ingenious institutions or policies, have determined Danish economic development. Nevertheless, this article also deals with time periods in which the Danish economy was either diverging from or converging towards the leading economies.

Table 1: Average Annual GDP Growth (at factor costs)

Period       Total   Per capita
1870-1880    1.9%    0.9%
1880-1890    2.5%    1.5%
1890-1900    2.9%    1.8%
1900-1913    3.2%    2.0%
1913-1929    3.0%    1.6%
1929-1938    2.2%    1.4%
1938-1950    2.4%    1.4%
1950-1960    3.4%    2.6%
1960-1973    4.6%    3.8%
1973-1982    1.5%    1.3%
1982-1993    1.6%    1.5%
1993-2004    2.2%    2.0%

Sources: Johansen (1985) and Statistics Denmark ‘Statistikbanken’ online.
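The decadal figures in Table 1 are compound average annual growth rates. As a purely illustrative sketch of the arithmetic (in Python, using hypothetical index numbers rather than the actual series behind the table), such a rate can be recovered from the level of GDP at the start and end of a period:

    # Compound average annual growth rate (CAGR) - a minimal sketch.
    # The index values below are hypothetical, not Danish data.
    def avg_annual_growth(gdp_start: float, gdp_end: float, years: int) -> float:
        """Return the rate g that solves gdp_end = gdp_start * (1 + g) ** years."""
        return (gdp_end / gdp_start) ** (1.0 / years) - 1.0

    # An economy whose GDP index rises from 100 to 137 over ten years
    # grows at roughly 3.2 percent per year.
    print(f"{avg_annual_growth(100.0, 137.0, 10):.1%}")  # -> 3.2%

The per capita rows are built the same way from GDP divided by population, which is why they run below the totals by roughly the rate of population growth.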

Denmark’s geographical location in close proximity to the most dynamic nations of sixteenth-century Europe, the Netherlands and the United Kingdom, no doubt exerted a positive influence on the Danish economy and Danish institutions. The North German area influenced Denmark both through long-term economic links and through the Lutheran Protestant Reformation, which the Danes embraced in 1536.

The Danish economy traditionally specialized in agriculture, like most other small and medium-sized European countries. It is, however, rare to find a rich European country that retained such a strong agrarian bias into the late nineteenth and mid-twentieth centuries. Only in the late 1950s did the workforce of manufacturing industry overtake that of agriculture. An economic history of Denmark must therefore take its point of departure in agricultural development for quite a long stretch of time.

Looking at resource endowments, Denmark enjoyed a relatively high agricultural land-to-labor ratio compared to other European countries, with the exception of the UK. This was significant because, in the Danish case, it was accompanied by a comparatively wealthy peasantry.

Denmark had no mineral resources to speak of until the exploitation of oil and gas in the North Sea began in 1972 and 1984, respectively. From 1991 on Denmark has been a net exporter of energy although on a very modest scale compared to neighboring Norway and Britain. The small deposits are currently projected to be depleted by the end of the second decade of the twenty-first century.

[Figure not reproduced]

Source: Johansen (1985) and Statistics Denmark ’Nationalregnskaber’

Good logistics can be regarded as a resource in pre-industrial economies. The Danish coastline of 7,314 km, and the fact that no point in the country is more than 50 km from the sea, were advantages in an age in which transport by sea was more economical than land transport.

Decline and Transformation, 1500-1750

The year of the Lutheran Reformation (1536) conventionally marks the end of the Middle Ages in Danish historiography. Only around 1500 did population growth begin to pick up after the devastating effect of the Black Death. Growth thereafter was modest and at times probably stagnant, with large fluctuations in mortality following major wars, particularly during the seventeenth century, and years of bad harvests. About 80-85 percent of the population lived from subsistence agriculture in small rural communities, and this changed little over the period. Exports are estimated to have been about 5 percent of GDP between 1550 and 1650. The main export products were oxen and grain. The period after 1650 was characterized by a long-lasting slump, with a marked decline in exports to the neighboring countries, the Netherlands in particular.

The institutional development after the Black Death showed a return to more archaic forms. Unlike in other parts of northwestern Europe, the peasantry on the Danish Isles became victim to a process of re-feudalization during the last decades of the fifteenth century. A likely explanation is the low population density, which encouraged large landowners to hold on to their labor by all means. Freehold tenure among peasants effectively disappeared during the seventeenth century. Institutions like bonded labor, which forced peasants to stay on the estate where they were born, and labor services on the demesne as part of the land rent, bring to mind similar arrangements in Europe east of the Elbe River. One exception to the East European model was crucial, however. The demesne land, that is the land worked directly under the estate, never made up more than nine percent of total land by the mid-eighteenth century. Although some estate owners saw an interest in encroaching on peasant land, the state protected the latter as production units and, more importantly, as a tax base. Bonded labor was codified in the all-encompassing Danish Law of Christian V in 1683. It was further intensified by being extended, though under another label, to all of Denmark during 1733-88, as a means for the state to tide the large landlords over an agrarian crisis. One explanation for the long life of such an authoritarian institution could be that the tenants were relatively well off, with 25-50 acres of land on average. Another could be that reality differed from the formal rigor of the institutions.

Following the Protestant Reformation in 1536, the Crown took over all church land, thereby becoming the owner of 50 percent of all land. The costs of warfare during most of the sixteenth century could still be covered by the revenue of these substantial possessions. Around 1600, income from taxation and customs, mostly the Sound Toll collected from ships passing through the narrow strait between Denmark and today’s Sweden, was roughly as large as the revenue from Crown lands. About 50 years later, after a major fiscal crisis had led to the sale of about half of all Crown lands, the revenue from royal demesnes had declined in relative terms to about one-third, and after 1660 the transition from domain state to tax state was complete.

The bulk of the former Crown land had been sold to nobles and a few commoners who owned estates. Consequently, although the Danish constitution of 1665 was the most stringent version of absolutism found anywhere in Europe at the time, the Crown depended heavily on estate owners to perform a number of important local tasks. Thus, conscription of troops for warfare, collection of land taxes and maintenance of law and order enhanced the landlords’ power over their tenants.

Reform and International Market Integration, 1750-1870

The driving force of Danish economic growth, which took off during the late eighteenth century, was population growth at home and abroad, which triggered technological and institutional innovation. Whereas the Danish population during the previous hundred years had grown by about 0.4 percent per annum, growth climbed to about 0.6 percent, accelerating after 1775 and especially from the second decade of the nineteenth century (Johansen 2002). As elsewhere in Northern Europe, accelerating growth can be ascribed to a decline in mortality, mainly child mortality. This development was probably initiated by fewer spells of epidemic disease, due to fewer wars and to greater inherited immunity against contagious diseases. Vaccination against smallpox and the formal education of midwives from the early nineteenth century may also have played a role (Banggaard 2004). Land reforms that entailed some scattering of the farm population may also have had a positive influence. Prices rose from the late eighteenth century in response to the increase in population in Northern Europe, but also following a number of international conflicts. This in turn caused a boom in Danish transit shipping and in grain exports.

Population growth rendered the old institutional setup obsolete. Landlords no longer needed to bind labor to their estates, as a new class of landless laborers, or cottagers with little land, emerged. The work of these day-laborers came to replace the labor services of tenant farmers on the demesnes. The old system of labor services presented an obvious incentive problem, all the more so since the services were often carried out by the live-in servants of the tenant farmers. Thus, the labor days on the demesnes represented a loss to both landlords and tenants (Henriksen 2003). Part of the land rent was originally paid in grain. Some of it had been converted to money, which meant that real rents declined during the inflation. The solution to these problems was massive land sales, both from the remaining Crown lands and from private landlords to their tenants. As a result, two-thirds of all Danish farmers became owner-occupiers, compared to only ten percent in the mid-eighteenth century. This development was halted during the next two and a half decades but resumed as the business cycle picked up during the 1840s and 1850s. It was to become of vital importance to the modernization of Danish agriculture towards the end of the nineteenth century that 75 percent of all agricultural land was farmed by owners of middle-sized farms of about 50 acres. Population growth may also have put pressure on common lands in the villages. At any rate, enclosure began in the 1760s, accelerated in the 1790s with the support of legislation, and was almost complete by the third decade of the nineteenth century.

The initiative for the sweeping land reforms from the 1780s is thought to have come from below – that is from the landlords and in some instances also from the peasantry. The absolute monarch and his counselors were, however, strongly supportive of these measures. The desire for peasant land as a tax base weighed heavily and the reforms were believed to enhance the efficiency of peasant farming. Besides, the central government was by now more powerful than in the preceding centuries and less dependent on landlords for local administrative tasks.

Production per capita rose modestly before the 1830s and more markedly thereafter, when a better allocation of labor and land followed the reforms and new crops like clover and potatoes were introduced on a larger scale. Most importantly, the Danes no longer lived at the margin of hunger. We no longer find a correlation between demographic variables (deaths and births) and bad harvest years (Johansen 2002).

A liberalization of import tariffs in 1797 marked the end of a short spell of late mercantilism. Further liberalizations during the nineteenth and the beginning of the twentieth century established the Danish liberal tradition in international trade that was only to be broken by the protectionism of the 1930s.

Following the loss of the secured Norwegian market for grain in 1814, Danish exports began to target the British market. The great rush forward came as the British Corn Law was repealed in 1846. The export share of the production value in agriculture rose from roughly 10 to around 30 percent between 1800 and 1870.

In 1849 absolute monarchy was peacefully replaced by a free constitution. The long-term benefits of fundamental principles such as the inviolability of private property rights, the freedom of contracting and the freedom of association were probably essential to future growth though hard to quantify.

Modernization and Convergence, 1870-1914

During this period Danish economic growth outperformed that of most other European countries. A convergence in real wages towards the richest countries, Britain and the U.S., as shown by O’Rourke and Williamson (1999), can only in part be explained by open-economy forces. Denmark became a net importer of foreign capital from the 1890s, and foreign debt was well above 40 percent of GDP on the eve of World War I. Overseas emigration reduced the potential workforce, but as mortality declined population growth stayed around one percent per annum. The increase in foreign trade was substantial, as in many other economies during the heyday of the gold standard. The export share of Danish agriculture surged to 60 percent.

The background for the latter development has featured prominently in many international comparative analyses. Part of the explanation for the success, as in other Protestant parts of Northern Europe, was a high rate of literacy that allowed a fast spread of new ideas and new technology.

The driving force of growth was that of a small open economy responding effectively to a change in international product prices, in this instance caused by the invasion of cheap grain into Western Europe from North America and Eastern Europe. Like Britain, the Netherlands and Belgium, Denmark did not impose a tariff on grain, in spite of the strong agrarian dominance in society and politics.

Proposals to impose tariffs on grain, and later on cattle and butter, were turned down by Danish farmers. The majority seems to have realized the advantages accruing from the free import of cheap animal feed during the ongoing transition from vegetable to animal production, at a time when the prices of animal products did not decline as much as grain prices. The dominant middle-sized farm was inefficient for wheat but had its comparative advantage in intensive animal farming with the given technology. O’Rourke (1997) found that the grain invasion lowered Danish rents by only 4-5 percent, while real wages rose (as expected), and by more than in any other agrarian economy and more than in industrialized Britain.

The move from grain exports to exports of animal products, mainly butter and bacon, was to a great extent facilitated by the spread of agricultural cooperatives. This form of organization allowed the middle-sized and small farms that dominated Danish agriculture to benefit from economies of scale in processing and marketing. The newly invented steam-driven continuous cream separator skimmed more cream from a kilo of milk than conventional methods and had the further advantage of allowing milk collected and transported from a number of suppliers to be skimmed. From the 1880s the majority of these creameries in Denmark were established as cooperatives, and about 20 years later, in 1903, the owners of 81 percent of all milk cows supplied a cooperative (Henriksen 1999). The Danish dairy industry captured over a third of the rapidly expanding British butter-import market, establishing a reputation for consistent quality that was reflected in high prices. Furthermore, the cooperatives played an active role in persuading dairy farmers to expand production from summer to year-round dairying. The costs of intensive feeding during the wintertime were more than made up for by a winter price premium (Henriksen and O’Rourke 2005). Year-round dairying resulted in a higher rate of utilization of agrarian capital – that is, of farm animals and of the modern cooperative creameries. Not least, this intensive production meant a higher utilization of hitherto underemployed labor. From the late 1890s in particular, labor productivity in agriculture rose at an unanticipated speed, on par with productivity growth in the urban trades.

Industrialization in Denmark made its modest beginning in the 1870s, with a temporary acceleration in the late 1890s. It may be a prime example of an industrialization process governed by domestic demand for industrial goods. Industrial exports never exceeded 10 percent of value added before 1914, compared to agriculture’s export share of 60 percent. The export drive of agriculture towards the end of the nineteenth century was a major force in developing other sectors of the economy, not least transport, trade and finance.

Weathering War and Depression, 1914-1950

Denmark, as a neutral nation, escaped the devastating effects of World War I and was even allowed to carry on exporting to both sides in the conflict. The ensuing trade surplus resulted in a trebling of the money supply. As the monetary authorities failed to contain the inflationary effects of this development, the value of the Danish currency slumped to about 60 percent of its pre-war value in 1920. The effects of this monetary policy failure were aggravated by a decision to return to the gold standard at the 1913 parity. When monetary policy was finally tightened in 1924, it resulted in fierce speculation on an appreciation of the Krone. During 1925-26 the currency quickly returned to its pre-war parity. As this was not counterbalanced by an equal decline in prices, the result was a sharp real appreciation and a subsequent deterioration in Denmark’s competitive position (Klovland 1998).

[Figure not reproduced]

Source: Abildgren (2005)

Note: Trade with Germany is included in the calculation of the real effective exchange rate for the whole period, including 1921-23.
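The mechanism at work here, a nominal appreciation unmatched by a corresponding fall in relative prices, can be illustrated with a back-of-the-envelope real exchange rate calculation. The Python sketch below uses hypothetical price and exchange rate indices, not Abildgren’s (2005) actual series:

    # Real exchange rate index: the nominal rate adjusted by relative price levels.
    # A rise in the index means a real appreciation (weaker competitiveness).
    # All numbers are hypothetical illustrations, not Danish data.
    def real_rate_index(nominal: float, home_prices: float, foreign_prices: float) -> float:
        return nominal * home_prices / foreign_prices

    # 1920: currency at 60 percent of pre-war parity, home prices well above foreign.
    slumped = real_rate_index(nominal=60, home_prices=180, foreign_prices=150)
    # 1926: nominal rate back at parity, but home prices have fallen only a little.
    restored = real_rate_index(nominal=100, home_prices=160, foreign_prices=150)
    print(f"real appreciation: {restored / slumped - 1:.0%}")  # -> 48%

On this convention, a jump of that size in the real index is precisely the kind of deterioration in competitiveness described above.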

When Britain decided to leave the gold standard again in September 1931, Denmark, together with Sweden and Norway, followed only a week later. This move was beneficial, as the large real depreciation led to a long-lasting improvement in Denmark’s competitiveness in the 1930s. It was, no doubt, the single most important policy decision of the depression years. Keynesian demand management, even if it had been fully understood, was barred by a small public sector, only about 13 percent of GDP. As it was, fiscal orthodoxy ruled, and policy was slightly procyclical, as taxes were raised to cover the deficit created by crisis and unemployment (Topp 1995).

Structural development during the 1920s, surprisingly for a rich nation at this stage, favored agriculture. The total labor force in Danish agriculture grew by 5 percent from 1920 to 1930. The number of employees in agriculture stagnated, while the number of self-employed farmers rose. The development in relative incomes cannot account for this trend; part of the explanation must be found in a flawed Danish land policy, which actively supported the further parceling out of land into small holdings and restricted consolidation into larger, more viable farms. It took until the early 1960s before this policy began to be unwound.

When the world depression hit Denmark with a minor time lag, agriculture still employed one-third of the total workforce while its contribution to total GDP was a bit less than one-fifth. Perhaps more importantly, agricultural goods still made up 80 percent of total exports.

Denmark’s terms of trade, as a consequence, declined by 24 percent from 1930 to 1932. In 1933 and 1934 bilateral trade agreements were forced upon Denmark by Britain and Germany. In 1932 Denmark had adopted exchange control, a harsh measure even for its time, to stem the net flow of foreign exchange out of the country. By rationing imports, exchange control also offered some protection to domestic industry. At the end of the decade manufacturing’s GDP had surpassed that of agriculture. In spite of the protectionist policy, unemployment soared to 13-15 percent of the workforce.

The policy mistakes during World War I and its immediate aftermath served as a lesson for policymakers during World War II. The German occupation force (April 9, 1940 until May 5, 1945) drew the funds for its sustenance and for exports to Germany on the Danish central bank, whereby the money supply more than doubled. In response, the Danish authorities in 1943 launched a policy of absorbing money through open market operations and, for the first time in history, through a surplus on the state budget.

Economic reconstruction after World War II was swift, as again Denmark had been spared the worst consequences of a major war. In 1946 GDP recovered its highest pre-war level. In spite of this, Denmark received relatively generous support through the Marshall Plan of 1948-52, when measured in dollars per capita.

From Riches to Crisis, 1950-1973: Liberalizations and International Integration Once Again

Growth performance during 1950-1957 was markedly lower than the Western European average. The main reason was the high share of agricultural goods in Danish exports, 63 percent in 1950, at a time when international trade in agricultural products to a large extent remained regulated. Large deteriorations in the terms of trade, caused by the British devaluation of 1949 (which Denmark followed), the outbreak of the Korean War in 1950, and the Suez crisis of 1956, made matters worse. The ensuing deficits on the balance of payments led the government to adopt contractionary policy measures that restrained growth.

The liberalization of the flow of goods and capital in Western Europe within the framework of the OEEC (the Organization for European Economic Cooperation) during the 1950s probably dealt a blow to some Danish manufacturing firms, especially in the textile industry, that had been sheltered by exchange control and wartime restrictions. Nevertheless, the export share of industrial production doubled from 10 percent to 20 percent before 1957, at the same time as employment in industry surpassed agricultural employment.

On the question of European economic integration, Denmark linked up with its largest trading partner, Britain. After the establishment of the European Common Market in 1958, and when the attempts to create a large European free trade area failed, Denmark entered the European Free Trade Association (EFTA), created under British leadership in 1960. When Britain was finally able to join the European Economic Community (EEC) in 1973, Denmark followed, after a referendum on the issue. Long before admission to the EEC, the advantages to Danish agriculture from the Common Agricultural Policy (CAP) had been emphasized. The higher prices within the EEC were capitalized into higher land prices, at the same time as investment increased in anticipation of the gains from membership. As a result, the most indebted farmers, who had borrowed at fixed interest rates, were hit hard by two developments from the early 1980s. The EEC started to reduce producers’ benefits under the CAP because of overproduction, and, after 1982, the Danish economy adjusted to a lower level of inflation and, therefore, of nominal interest rates. According to Andersen (2001), Danish farmers were left with the highest interest burden of all European Union (EU) farmers in the 1990s.

Denmark’s relations with the EU, while enthusiastic at the beginning, have since been characterized by a certain amount of reserve. A national referendum in 1992 turned down the treaty on the European Union, the Maastricht Treaty. The Danes then opted out of four areas: common citizenship, a common currency, common foreign and defense policies, and a common policy on police and legal matters. Once more, in 2000, adoption of the common currency, the Euro, was turned down by the Danish electorate. In the debate leading up to the referendum, the possible economic advantages of the Euro in the form of lower transaction costs were considered modest compared to the existing regime of fixed exchange rates vis-à-vis the Euro. All the major political parties, nevertheless, are pro-European, with only the extreme Right and the extreme Left being against. There seems to be a discrepancy between the general public and the politicians on this particular issue.

As far as domestic economic policy is concerned, the heritage from the 1940s was a new commitment to high employment, modified by a balance of payments constraint. Danish policy differed from that of some other parts of Europe in that the remains of the planned economy from the war and reconstruction period, in the form of rationing and price control, were dismantled around 1950, and in that no nationalizations took place.

Instead of direct regulation, economic policy relied on demand management with fiscal policy as its main instrument. Monetary policy remained a bone of contention between politicians and economists. Coordination of policies was the buzzword but within that framework monetary policy was allotted a passive role. The major political parties for a long time were wary of letting the market rate of interest clear the loan market. Instead, some quantitative measures were carried out with the purpose of dampening the demand for loans.

From Agricultural Society to Service Society: The Growth of the Welfare State

Structural problems in foreign trade extended into the high-growth period of 1958-73, as Danish agricultural exports met with constraints from both the then EEC-member countries and most EFTA countries. During the same decade, the 1960s, as the importance of agriculture declined, the share of employment in the public sector grew rapidly until 1983. Building and construction also took a growing share of the workforce until 1970. These developments left manufacturing industry in a secondary position. Consequently, as pointed out by Pedersen (1995), the sheltered sectors of the economy crowded out the sectors exposed to international competition, mostly industry and agriculture, by putting pressure on labor and other costs during the years of strong expansion.

Perhaps the most conspicuous feature of the Danish economy during the Golden Age was the steep increase in welfare-related costs from the mid 1960s and not least the corresponding increases in the number of public employees. Although the seeds of the modern Scandinavian welfare state were sown at a much earlier date, the 1960s was the time when public expenditure as a share of GDP exceeded that of most other countries.

As in other modern welfare states, important elements in the growth of the public sector during the 1960s were the expansion of public health care and education, both free for all citizens. Behind much of the increase in the number of public employees from the late 1960s was the rise in labor force participation by married women, which lasted until about 1990 and was itself partly a consequence of the public sector’s expansion. In response, public day care facilities for young children and old people were expanded. Whereas in 1965 only 7 percent of 0-6 year olds were in a day nursery or kindergarten, this share rose to 77 percent in 2000. This in turn spawned more employment opportunities for women in the public sector. Today labor force participation among women, around 75 percent of 16-66 year olds, is among the highest in the world.

Originally social welfare programs targeted low income earners who were encouraged to take out insurance against sickness (1892), unemployment (1907) and disability (1922). The public subsidized these schemes and initiated a program for the poor among old people (1891). The high unemployment period in the 1930s inspired some temporary relief and some administrative reform, but little fundamental change.

Welfare policy in the first four decades following World War II is commonly believed to have been strongly influenced by the Social Democrat party, which held around 30 percent of the votes in general elections and was the party in power for long periods. One distinctive feature of the Danish welfare state has been its focus on the needs of the individual person rather than on the family context. Another important characteristic is the universal nature of a number of benefits, starting with a basic old age pension for all in 1956. The compensation rates in a number of schemes are high by international standards, particularly for low income earners. Public transfers gained a larger share of total public outlays both because standards were raised – that is, benefits became higher – and because the number of recipients increased dramatically following the high-unemployment regime from the mid 1970s to the mid 1990s. To pay for the high transfers and the large public sector – around 30 percent of the workforce – the tax burden is also high by international standards. The share of public sector and social expenditure has risen to above 50 percent of GDP, second only to the share in Sweden.

[Figure not reproduced]

Source: Statistics Denmark ‘50 års-oversigten’ and ADAM’s databank

The Danish labor market model has recently attracted favorable international attention (OECD 2005). It has been declared successful in fighting unemployment – especially compared to the policies of countries like Germany and France. The so-called Flexicurity model rests on three pillars: low employment protection, relatively high compensation rates for the unemployed, and a requirement of active participation by the unemployed. Low employment protection has a long tradition in Denmark, and there is no change in this factor when comparing the twenty years of high unemployment – 8-12 percent of the labor force – from the mid 1970s to the mid 1990s with the past ten years, during which unemployment has declined to a mere 4.5 percent in 2006. The rules governing compensation to the unemployed were tightened from 1994, limiting the number of years the unemployed could receive benefits from 7 to 4. Most noticeably, labor market policy in 1994 turned from ‘passive’ measures – besides unemployment benefits, an early retirement scheme and a temporary paid leave scheme – toward ‘active’ measures devoted to getting people back to work by providing training and jobs. It is commonly supposed that the strengthening of economic incentives helped to lower unemployment. However, as Andersen and Svarer (2006) point out, while unemployment has declined substantially, a large and growing share of Danes of employable age receives transfers other than unemployment benefits – that is, benefits related to sickness or social problems of various kinds, early retirement benefits, and so on. This makes it hazardous to compare the Danish labor market model with that of many other countries.

Exchange Rates and Macroeconomic Policy

Denmark has traditionally adhered to a fixed exchange rate regime, in the belief that for a small and open economy a floating exchange rate could lead to very volatile exchange rates that would harm foreign trade. After abandoning the gold standard in 1931, the Danish currency (the Krone) was for a while pegged to the British pound, only to join the IMF system of fixed but adjustable exchange rates, the so-called Bretton Woods system, after World War II. The close link with the British economy still manifested itself when the Danish currency was devalued along with the pound in 1949 and, by half as much, in 1967. The latter devaluation also reflected the fact that after 1960 Denmark’s international competitiveness had gradually been eroded by rising real wages, corresponding to a 30 percent real appreciation of the currency (Pedersen 1996).

When the Bretton Woods system broke down in the early 1970s, Denmark joined the European exchange rate cooperation, the “Snake” arrangement, set up in 1972, which was continued in the form of the Exchange Rate Mechanism within the European Monetary System from 1979. The Deutschmark was effectively the nominal anchor in European currency cooperation until the launch of the Euro in 1999, a fact that put Danish competitiveness under severe pressure because inflation was markedly higher in Denmark than in Germany. In the end the Danish government gave way under the pressure and undertook four discrete devaluations from 1979 to 1982. Since compensatory increases in wages were held back, the balance of trade improved perceptibly.

This improvement could not, however, make up for the soaring costs of old loans at a time when international real rates of interest were high. The Danish devaluation strategy exacerbated the problem: the anticipation of further devaluations was mirrored in a steep increase in the long-term rate of interest, which peaked at 22 percent in nominal terms in 1982, with an interest spread to Germany of 10 percentage points. Combined with the effects of the second oil crisis on the Danish terms of trade, unemployment rose to 10 percent of the labor force. Given the relatively high compensation ratios for the unemployed, the public deficit increased rapidly, and public debt grew to about 70 percent of GDP.

[Figure not reproduced]

Source: Statistics Denmark Statistical Yearbooks and ADAM’s Databank
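The burden of high real interest rates mentioned above can be made concrete with the standard Fisher decomposition of a nominal rate into a real rate and inflation. In the Python sketch below, the 22 percent nominal rate comes from the text, while the 10 percent inflation figure is a hypothetical placeholder rather than an actual Danish statistic:

    # Ex-post real interest rate via the exact Fisher relation:
    # (1 + real) = (1 + nominal) / (1 + inflation).
    # The 22% nominal rate is from the text; the inflation rate is hypothetical.
    def real_interest(nominal: float, inflation: float) -> float:
        return (1 + nominal) / (1 + inflation) - 1

    print(f"{real_interest(0.22, 0.10):.1%}")  # -> 10.9%

Even with double-digit inflation, a 22 percent nominal rate implies a real borrowing cost on the order of 10 percent, which is why old fixed-rate loans weighed so heavily despite the improving trade balance.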

In September 1982 the Social Democrat minority government resigned without a general election and was replaced by a Conservative-Liberal minority government. The new government launched a program to improve the competitiveness of the private sector and to rebalance public finances. An important element was a disinflationary economic policy based on fixed exchange rates, pegging the Krone to the currencies of the EMS participants and, from 1999, to the Euro. Furthermore, automatic wage indexation, which had operated with short interruptions since 1920 (with a short lag and high coverage), was abolished. Fiscal policy was tightened, bringing an end to the real increases in public expenditure that had lasted since the 1960s.

The stabilization policy was successful in bringing down inflation and long-term interest rates. Pedersen (1995) finds that this process was nevertheless slower than might have been expected: in view of Denmark’s earlier exchange rate policy, it took some time for markets to regard the commitment to fixed exchange rates as credible. Since the late 1990s, however, the interest spread to Germany/Euroland has been negligible.

The initial success of the stabilization policy brought a boom to the Danish economy that, once again, caused overheating in the form of high wage increases (in 1987) and a deterioration of the current account. The solution was a series of reforms in 1986-87 aimed at encouraging private saving, which had by then fallen to a historic low. Most notable was the reform that reduced the tax deductibility of interest on private debt. These measures resulted in a hard landing for the economy when the housing market collapsed.

The period of low growth was further prolonged by the international recession in 1992. In 1993 yet another shift of regime occurred in Danish economic policy. A new Social Democrat government decided to ‘kick start’ the economy by means of a moderate fiscal expansion whereas, in 1994, the same government tightened labor market policies substantially, as we have seen. Mainly as a consequence of these measures the Danish economy from 1994 entered a period of moderate growth with unemployment steadily falling to the level of the 1970s. A new feature that still puzzles Danish economists is that the decline in unemployment over these years has not yet resulted in any increase in wage inflation.

Denmark at the beginning of the twenty-first century in many ways fits Mokyr’s (2006) description of a Small Successful European Economy. Unlike most of the other small economies, however, Denmark has broad-based exports with no particular “niche” in the world market. As in some other small European countries – Ireland, Finland and Sweden – short-term economic fluctuations have not followed the European business cycle very closely over the past thirty years (Andersen 2001). Domestic demand and domestic economic policy have, after all, played a crucial role, even in a very small and very open economy.

References

Abildgren, Kim. “Real Effective Exchange Rates and Purchasing-Power-Parity Convergence: Empirical Evidence for Denmark, 1875-2002.” Scandinavian Economic History Review 53, no. 3 (2005): 58-70.

Andersen, Torben M. et al. The Danish Economy: An International Perspective. Copenhagen: DJØF Publishing, 2001.

Andersen, Torben M. and Michael Svarer. “Flexicurity: den danska arbetsmarknadsmodellen.” Ekonomisk debatt 34, no. 1 (2006): 17-29.

Banggaard, Grethe. Befolkningsfremmende foranstaltninger og faldende børnedødelighed. Danmark, ca. 1750-1850. Odense: Syddansk Universitetsforlag, 2004.

Hansen, Sv. Aage. Økonomisk vækst i Danmark: Volume I: 1720-1914 and Volume II: 1914-1983. København: Akademisk Forlag, 1984.

Henriksen, Ingrid. “Avoiding Lock-in: Cooperative Creameries in Denmark, 1882-1903.” European Review of Economic History 3, no. 1 (1999): 57-78.

Henriksen, Ingrid. “Freehold Tenure in Late Eighteenth-Century Denmark.” Advances in Agricultural Economic History 2 (2003): 21-40.

Henriksen, Ingrid and Kevin H. O’Rourke. “Incentives, Technology and the Shift to Year-round Dairying in Late Nineteenth-century Denmark.” Economic History Review 58, no. 3 (2005): 520-54.

Johansen, Hans Chr. Danish Population History, 1600-1939. Odense: University Press of Southern Denmark, 2002.

Johansen, Hans Chr. Dansk historisk statistik, 1814-1980. København: Gyldendal, 1985.

Klovland, Jan T. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 3 (1998): 309-44.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Mokyr, Joel. “Successful Small Open Economies and the Importance of Good Institutions.” In The Road to Prosperity. An Economic History of Finland, edited by Jari Ojala, Jari Eloranta and Jukka Jalava, 8-14. Helsinki: SKS, 2006.

Pedersen, Peder J. “Postwar Growth of the Danish Economy.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. Cambridge: Cambridge University Press, 1995.

OECD. Employment Outlook. Paris: OECD, 2005.

O’Rourke, Kevin H. “The European Grain Invasion, 1870-1913.” Journal of Economic History 57, no. 4 (1997): 775-99.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Topp, Niels-Henrik. “Influence of the Public Sector on Activity in Denmark, 1929-39.” Scandinavian Economic History Review 43, no. 3 (1995): 339-56.


Footnotes

1 Denmark also includes the Faeroe Islands, with home rule since 1948, and Greenland, with home rule since 1979, both in the North Atlantic. These territories are left out of this account.

Citation: Henriksen, Ingrid. “An Economic History of Denmark”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2006. URL http://eh.net/encyclopedia/an-economic-history-of-denmark/

Council of Economic Advisers

Robert Stanley Herren, North Dakota State University

“The Council of Economic Advisers was established by the Employment Act of 1946 to provide the President with objective economic analysis and advice on the development and implementation of a wide range of domestic and international economic policy issues” (Economic Report of the President 2001: 257). Although it has been the most enduring and important result of the Employment Act of 1946, the Council of Economic Advisers (CEA) was not the legislation’s major focus. As the Second World War ended, many feared that the United States would return to being a depressed economy. Many felt that the United States had the ability, through discretionary fiscal policy, to prevent such an economic collapse but needed legislation to force the federal government to promote continued economic prosperity. Thus, Keynesian economists in government convinced their congressional allies to introduce the Full Employment Act of 1945. Because critics thought the proposed legislation would result in higher inflation, the final legislation (Employment Act of 1946) included vague goals of “maximum production and employment consistent with price stability.”

Neither Congress nor President Truman possessed a clear vision concerning the purpose of the three-member Council of Economic Advisers (CEA). President Truman complicated the CEA’s early years by appointing three people (Edwin Nourse, chair; Leon Keyserling, vice-chair; and John D. Clark) who held disparate views concerning the CEA’s purpose and economic policies. Nourse preferred the CEA to provide impartial economic advice to the President and to avoid the political process; for example, he did not believe that it was appropriate for CEA members to participate in Congressional hearings. Keyserling, who came to Washington during the 1930s to work in President Franklin Roosevelt’s administration, wanted to participate in the political process by being a forceful advocate of the President’s economic program. The squabbling continued until Nourse resigned, and Keyserling became the CEA’s second chair in 1949.

During the first months of the Eisenhower administration, there was substantial debate about whether the three-member form of the CEA should be continued. Critics of Truman’s CEA noted that it did not always speak with a unified voice; more damaging was the belief that Keyserling had become a Democratic partisan in his vigorous defense of presidential initiatives. President Eisenhower wanted to maintain the CEA in some form because he appreciated receiving expert advice from his staff. He chose Arthur Burns to chair his first CEA and to reorganize the CEA. Burns kept three members but eliminated the vice-chair position to make it clear that the chair controlled the CEA; this structure still exists.

The three-member Council of Economic Advisers has continually provided professional economic advice to presidents, who have appointed to the CEA many prominent mainstream economists, including several recipients of the Nobel Prize in economics. Its staff has remained small, with between 25 and 30 people, including senior staff economists (usually on leave from universities), junior staff economists (most often graduate students), and several permanent statisticians. Writing its annual Economic Report gives each CEA the opportunity to explain the economic rationale for an administration’s economic programs.

Advocacy of Economic Growth

Each Council of Economic Advisers has stressed the importance of adopting policies to ensure a high rate of economic growth. CEAs have been advocates within administrations for emphasizing economic growth as a national priority. CEAs have been most successful in promoting economic growth by consistently supporting microeconomic policies to promote competition and to make markets work better. Because they contend that free international trade improves a nation’s economic growth, CEAs have supported presidential efforts to enact policies that would result in more open trade among nations. Former CEA members have often noted that much of their and the staff’s time dealt with microeconomic policies, often to provide arguments against ill-conceived proposals coming from other parts of the administration or from Congress. Clinton’s CEA described well this function: “The Council’s mission within the Executive Office of the President is unique: it serves as a tenacious advocate for policies that facilitate the workings of the market and that emphasize the importance of incentives, efficiency, productivity, and long-term growth. …The Council has also been important in helping to weed out proposals that are ill-advised or unworkable, proposals that cannot be supported by the existing economic data, and proposals that could have damaging consequences for the economy” (Economic Report of the President 1996:11).

Although CEAs in both Democratic and Republican administrations have given similar advice regarding microeconomic and international trade policies, they have not agreed on how to use fiscal policy to increase the growth of potential real output. Republican CEAs, particularly in the Reagan and Bush administrations, have recommended lower marginal tax rates to increase work effort, saving, and investment. Democratic CEAs have generally thought that such effects are small. For example, Clinton’s CEA vigorously defended the increase in marginal tax rates imposed by the Omnibus Budget Reconciliation Act of 1993. It argued, similar to other Democratic CEAs, that an increase in marginal tax rates would not adversely affect economic growth because it would not significantly reduce work effort, saving, and investment.

Fiscal Policies and Business Cycles

The Employment Act of 1946 focused on using discretionary fiscal policy to prevent another Great Depression. CEAs have helped convince presidents not to raise tax rates or reduce government expenditures during recessions in an attempt to balance the budget. This effort started early in the CEA’s history, with the recessions of 1948-1949 and 1953-1954, because both Truman’s CEA and Eisenhower’s CEA accepted the idea that budgets should be balanced over the business cycle rather than annually.

Although it was easy to avoid using contractionary fiscal policy during an economic downturn, it was more difficult to know when to advocate expansionary fiscal policy. For example, many have criticized the Eisenhower administration for not moving more aggressively in using fiscal policy to stimulate aggregate demand between 1958 and 1960. Eisenhower’s CEA, however, never found an appropriate time to recommend a tax cut. It perceived the economy to be too strong in 1958 to warrant additional demand. It saw the economic slowdown in 1959 as caused by a supply disturbance (a lengthy steel strike) rather than by a lack of aggregate demand. The fear in 1960 was that any tax legislation, enacted during a presidential election year, would contain too many provisions that would adversely affect long-term economic growth.

The CEA’s most famous success in using discretionary fiscal policy occurred during the 1960s. President Kennedy appointed Walter Heller as his first chair. Heller, joined by Kermit Gordon and James Tobin, formed the most Keynesian CEA in the council’s history. They thought that unemployment could be reduced from its then-current level of seven percent to four percent without increasing inflation. In its 1962 report, the CEA explicitly set four percent unemployment as the interim target for the full-employment rate of unemployment. Heller’s excellent rapport with President Kennedy allowed the CEA to successfully promote the investment tax credit (1962) and the reduction of marginal tax rates on personal income (1964); the latter legislation was primarily designed to increase consumer demand.

However, even this success demonstrated the extensive time required to enact fiscal policy. Later in the 1960s, during President Johnson’s administration, aggregate demand increased faster than expected because of increasing government spending arising from both military expenditures in Vietnam and the creation of many new government programs. To prevent inflation, the CEA recommended a tax increase. President Johnson did not immediately accept this advice; he ultimately proposed and obtained a tax surcharge (1968) that was too little and too late to prevent rising inflation.

Over time, there has been a growing realization that the political process reduces the opportunities for timely enactment of discretionary fiscal policies. Moreover, a long and variable effectiveness (impact) lag, combined with uncertainty about the magnitude of fiscal policy multipliers, further weakens the case for discretionary fiscal policy as a means of reducing cyclical fluctuations. Instead, CEAs have stressed the importance of strengthening the automatic stabilizing aspects of the fiscal system.

Monetary Policy

While fiscal policy has declined in importance as a countercyclical tool, monetary policy has become relatively more important. The CEA does not directly influence monetary policy, but it regularly communicates with the Federal Reserve to convey its view of the economy. It is uniquely qualified to explain the economic consequences of monetary policy to the President and White House staff.

Most CEAs have publicly supported the concept of an independent Federal Reserve; the most notable exception was Truman’s CEA, which under chair Leon Keyserling opposed the 1951 Treasury-Federal Reserve Accord. Although CEAs were often frustrated by the Federal Reserve’s monetary policy, particularly when they preferred a more expansionary stance, they vigorously attempted to prevent administrations from excessive criticism of the Federal Reserve. CEAs viewed such “Fed-bashing” as counterproductive for several reasons. The Federal Reserve vigorously protects the appearance of its independence; it does not want to appear to be caving in to congressional or presidential pressure. Moreover, since the early 1980s the CEA has not wanted to undermine the Federal Reserve’s credibility in restraining inflation, because CEAs believe that the Federal Reserve can best promote economic growth by keeping inflation low and stable.

Inflation

Although since 1980 CEAs have agreed that monetary policy is the primary long-run determinant of inflation, earlier CEAs held a variety of views concerning methods to prevent inflation. Truman’s CEA contended that a lack of supply in specific sectors, rather than excess aggregate demand, was the underlying cause of inflation; it recommended selective price and wage controls rather than contractionary monetary policy to reduce inflation.

A perceived problem during the 1950s and 1960s was that administered prices and cost-push pressures caused inflation to rise before the economy could reach full employment. Eisenhower’s CEA used a policy of exhortation, appealing for voluntary restraint with business and labor sharing responsibility for obtaining price stability. The Kennedy-Johnson CEAs formulated wage-price guideposts that gave a quantitative dimension to this exhortation; the guideposts crumbled when aggregate demand grew too fast.

President Nixon’s CEA faced the challenge of devising a policy to reduce inflation without causing a major recession. The CEA recommended using monetary and fiscal policy to gradually reduce the growth of aggregate demand. However, inflation did not slow even though the nation went through a recession. The slow fall in inflation led the Nixon administration to formulate the “New Economic Policy” in August 1971, which suspended convertibility of the dollar into gold and instituted a temporary comprehensive freeze on wages and prices. Nixon’s CEA, which initially opposed the imposition of mandatory wage and price controls, would spend much of the next three years helping to provide an orderly transition from the freeze. Subsequent CEAs, with the exception of Carter’s CEA, did not consider wage-price policies to be a viable tool for preventing inflation.

During the 1960s, the Kennedy-Johnson CEAs believed that the relationship between inflation and unemployment (the Phillips curve) was relatively flat at unemployment rates greater than four percent; lower unemployment rates were associated with higher rates of inflation. Since 1969, CEAs, with the exception of Carter’s CEA, have used a natural rate theory of inflation. The natural rate theory indicates that there is no permanent tradeoff between inflation and unemployment; instead the economy tends to move toward a given level of unemployment, often termed the natural rate of unemployment or the full-employment rate of unemployment. The Nixon and Ford CEAs both thought that the natural rate of unemployment had risen since the early 1960s, but for political reasons the CEA was reluctant to abandon the 4 percent target established in 1962. Finally, in 1977, the CEA wrote that the full-employment unemployment rate had risen to at least 4.9 percent due to demographic shifts; other factors may have raised it to 5.5 percent. Between 1981 and 1996, the CEA generally thought the natural rate of unemployment was about 6 percent. During the latter half of the 1990s, it reduced its estimate because unemployment fell without inflation increasing. Both the last report written by Clinton’s CEA (2001) and the most recent report written by Bush’s CEA (2004) put the natural rate of unemployment at about 5 percent.

Evolving Role and Influence

The CEA has been most influential in affecting economic policy when its chair has been able to develop an excellent rapport with the President; examples include Walter Heller with President Kennedy and Alan Greenspan with President Ford. CEAs have rarely disagreed with the President or his staff in public even though they have lost many battles. Often they do not even mention policies with which they disagree in their annual reports. If the disagreements are serious enough, members have preferred to quietly resign. A notable exception occurred when Martin Feldstein’s public feuding with White House staff over budgetary policy in 1983 and 1984 reduced the CEA’s influence; in 1984 Reagan’s White House staff considered terminating the CEA.

Over time, more departments and agencies have hired professional economists, thereby eroding the “monopoly” of economic expertise once held by the CEA in the White House and executive branch. Moreover, each administration adopts a different organization for its decision making and flow of information; these organizational differences may affect the CEA’s impact on the formulation of economic policies. For example, President Clinton established a National Economic Council (NEC) to coordinate economic policies within his administration. Laura Tyson, Clinton’s first CEA chair, resigned to become director of the NEC; some interpreted this move as indicating that the latter position was more influential in affecting economic policy. President Bush continued the NEC.

The CEA retains influence with its chief constituent – the President – because it does not represent a specific sector or department. It can focus on providing economic advice to promote the use of incentives to obtain economic efficiency and economic growth.

Further Reading

The CEA’s annual reports document changes in thinking in “mainstream economics.”

Presidential libraries contain many files from the CEA and its individual members. Many former members have written articles and books reflecting on their experiences. Much has been written about the ideas and politics involved in making specific economic policies. The works listed below constitute just a small part of a vast literature; I have chosen the literature that I have found most useful in understanding the role of the Council of Economic Advisers in advising the President about economic policies.

Bailey, Stephen. Congress Makes a Law: The Story Behind the Employment Act of 1946. New York: Columbia University Press, 1950. Bailey’s work remains the definitive study regarding the legislative debates that resulted in the Employment Act of 1946.

DeLong, J. Bradford. “Keynesianism, Pennsylvania Avenue Style: Some Economic Consequences of the Employment Act of 1946.” Journal of Economic Perspectives 10, no. 3 (1996): 41-53. DeLong places the CEA’s ideas and influence within a broader context of the profession’s changing views concerning economic stabilization.

Feldstein, Martin. “American Economic Policy in the 1980s: A Personal View.” In American Economic Policy in the 1980s, edited by Martin Feldstein, 1-79. Chicago: University of Chicago Press, 1994. Feldstein was CEA chair (1982-1984); he often clashed with other White House staff members.

Goodwin, Craufurd, editor. Exhortation and Controls: The Search for a Wage-Price Policy, 1945-1971. Washington: Brookings Institution, 1975. The authors of the essays extensively used documents in presidential libraries and interviews with many economists who participated in developing wage-price policies.

Hargrove, Edwin C. and Samuel A. Morley, editors. The President and the Council of Economic Advisers: Interviews with CEA Chairmen. Boulder: Westview Press, 1984. The editors interviewed nine of the first ten CEA chairs (Edwin Nourse had already died). In addition to the interviews, the editors included an introductory essay that summarized the major themes of the interviews.

Herren, Robert Stanley. “The Council of Economic Advisers’ View of the Full-Employment Unemployment Rate: 1962-1998.” Journal of Economics 24, no. 2 (1998): 49-62. This article discusses how various CEAs have viewed the “maximum employment” provision of the 1946 Employment Act.

Orszag, Jonathan M., Peter R. Orszag, and Laura D. Tyson. “The Process of Economic Policy-Making during the Clinton Administration.” In American Economic Policy in the 1990s, edited by Jeffrey Frankel and Peter Orszag, 983-1027. Cambridge, MA: MIT Press, 2002. Tyson was CEA chair (1993-1995). The authors briefly discuss attempts to coordinate economic policy prior to the Clinton administration. The authors emphasize activities of the National Economic Council and its interactions with the CEA.

Porter, Roger. “The Council of Economic Advisers.” In Executive Leadership in Anglo-American Systems, edited by Colin Campbell and Margaret Jane Wyszomirski, 171-193. Pittsburgh, PA: University of Pittsburgh Press, 1991. Porter provides a brief history of the evolving role and functions of the CEA.

Saulnier, Raymond. Constructive Years: The U.S. Economy under Eisenhower. Lanham, MD: University Press of America, 1991. Saulnier was CEA member (1955-1956) and chair (1956-1961). He provides his views about the economic ideas of Eisenhower’s CEA.

Schultze, Charles L. “The CEA: An Inside Voice for Mainstream Economics.” Journal of Economic Perspectives 10, no. 3 (1996): 23-39. Schultze was CEA chair (1977-1981).

Sobel, Robert and Bernard S. Katz, editors. Biographical Directory of the Council of Economic Advisers. New York: Greenwood Press, 1988. The essays emphasize the economic ideas and careers of the forty-five economists who served in the CEA from 1947 to 1985.

Stein, Herbert. Presidential Economics: The Making of Economic Policy from Roosevelt to Clinton. Third revised edition. Washington: American Enterprise Institute for Public Policy Research, 1994. Stein was CEA member (1969-1971) and chair (1972-1974). He focuses on the general context, including the advice of CEAs, in which Presidents formulated economic policies.

Stiglitz, Joseph E. The Roaring Nineties: A New History of the World’s Most Prosperous Decade. New York: W.W. Norton, 2003. Stiglitz was CEA member (1993-1995) and chair (1995-1997). He provides substantial information concerning the ideas that affected economic policy during President Clinton’s administration.

United States, President. The Economic Report of the President. Washington: United States Government Printing Office, 1947-2004. The reports since 1995 have been available on-line at http://www.gpoaccess.gov/eop. The most recent report, and other general information about the CEA, can be found at http://www.whitehouse.gov/cea/

Citation: Herren, Robert. “Council of Economic Advisers”. EH.Net Encyclopedia, edited by Robert Whaples. August 18, 2004. URL http://eh.net/encyclopedia/council-of-economic-advisers/

B. Zorina Khan, Bowdoin College

Introduction

Copyright is a form of intellectual property that provides legal protection against unauthorized copying of the producer’s original expression in products such as art, music, books, articles, and software. Economists have paid relatively little scholarly attention to copyrights, although recent debates about piracy and “the digital dilemma” (free use of digital property) have prompted closer attention to theoretical and historical issues. Like other forms of intellectual property, copyright is directed to the protection of cultural creations that are nonrivalrous and nonexcludable in nature. It is generally proposed that, in the absence of private or public forms of exclusion, prices would tend to be driven down to low or zero marginal cost, and the original producer would be unable to recover the initial investment.

Part of the debate about copyright exists because it is still not clear whether state enforcement is necessary to enable owners to gain returns, or whether the producers of copyrightable products respond significantly to financial incentives. Producers of these public goods might still be able to appropriate returns without copyright laws, or in the face of widespread infringement, through such strategies as encryption, cartelization, the provision of complementary products, private monitoring and enforcement, market segmentation, network externalities, first-mover effects, and product differentiation. Patronage, taxation, subsidies, or public provision might also serve as alternatives to copyright protection. In some instances “authors” (broadly defined) might be more concerned about nonfinancial rewards such as enhanced reputations or more extensive diffusion.

During the past three centuries great controversy has been associated with the grant of property rights to authors, with positions ranging from the notion that cultural creativity should be rewarded with perpetual rights to the complete rejection of any intellectual property rights at all for copyrightable commodities. Historically, however, the primary emphasis has been on the provision of copyright protection through the formal legal system. Europeans have generally tended to adopt the philosophical position that authorship embodies rights of personhood or moral rights that should be accorded strong protections. The American approach to copyright has been more utilitarian: policies were based on a comparison of costs and benefits, and the primary emphasis of early copyright policies was on the advancement of public welfare. However, the harmonization of international laws has created a melding of these two approaches. The tendency at present is toward stronger enforcement of copyrights, prompted by the lobbying of publishers and the globalization of culture and commerce. Technological change has always exerted an exogenous force for change in copyright laws, and modern innovations in particular provoke questions about the extent to which copyright systems can respond effectively to such challenges.

In the early years of printing, books and other written matter became part of the public domain when they were published. Like patents, book privileges originated in the Republic of Venice in the fifteenth century, a practice that soon became prevalent in a number of other European countries. Donatus Bossius, a Milanese author, petitioned the duke in 1492 for an exclusive privilege for his book, and successfully argued that he would be unjustly deprived of the benefits from his efforts if others were able to freely copy his work. He was given the privilege for a term of ten years. However, authorship was not required for the grant of a privilege, and printers and publishers obtained monopolies over existing books as well as new works. Since privileges were granted on a case-by-case basis, they varied in geographical scope, duration, and breadth of coverage, as well as in the attendant penalties for their violation. Grantors included religious orders and authorities, universities, political figures, and the representatives of the Crown.

The French privilege system was introduced in 1498 and was well-developed by the end of the sixteenth century. Privileges were granted under the auspices of the monarch, generally for a brief period of two to three years, although the term could be as much as ten years. Protection was granted to new books or translations, maps, type designs, engravings and artwork. Petitioners paid formal fees and informal gratuities to the officials concerned. Since applications could only be sealed if the King were present, petitions had to be carefully timed to take advantage of his route or his return from trips and campaigns. It became somewhat more convenient when the courts of appeal such as the Parlement de Paris began to issue grants that were privileges in all but name, although this could lead to conflicting rights if another authority had already allocated the monopoly elsewhere. The courts sometimes imposed limits on the rights conferred, in the form of stipulations about the prices that could be charged. Privileges were property that could be assigned or licensed to another party, and their infringement was punished by a fine and at times confiscation of all the output of “pirates.”

After 1566, the Edict of Moulins required that all new books be approved and licensed by the Crown. Favored parties were able to obtain renewals of their monopolies that also allowed them to lay claim to works that were already in the public domain. By the late eighteenth century an extensive administrative procedure was in place that was designed to restrict the number of presses and to engage in surveillance and censorship of the publishing industry. Manuscripts first had to be read by a censor, and only after a permit was requested and granted could the book be printed, although the permit could later be revoked if complaints were lodged by sufficiently influential individuals. Decrees in 1777 established that authors who did not alienate their property were entitled to exclusive rights in perpetuity. Since few authors had the will or resources to publish and distribute books, their privileges were likely to be sold outright to professional publishers. However, the law made a distinction in the rights accorded to publishers: if the right was sold, the privilege was accorded a limited duration of at least ten years, the exact term to be determined in accordance with the value of the work, and once the publisher’s term expired, the work passed into the public domain. The fee for a privilege was thirty-six livres. Approval to print a work, or a “permission simple,” which did not entail exclusive rights, could also be obtained after payment of a substantial fee. Between 1700 and 1789, a total of 2,586 petitions for exclusive privileges were filed, and about two-thirds were granted. The result was a system of “odious monopolies,” higher prices and greater scarcity, large transfers to officials of the Crown and their allies, and pervasive censorship. It likewise disadvantaged smaller book producers, provincial publishers, and the academic and broader community.

The French Revolutionary decrees of 1791 and 1793 replaced the idea of privilege with that of uniform statutory claims to literary property, based on the principle that “the most sacred, the most unassailable and the most personal of possessions is the fruit of a writer’s thought.” The subject matter of copyrights covered books, dramatic productions and the output of the “beaux arts,” including designs and sculpture. Authors were required to deposit two copies of their books with the Bibliothèque Nationale or risk losing their copyright. Some observers felt that copyrights in France were the least protected of all property rights, since they were enforced with care to protect the public domain and social welfare. Although France is associated with the author’s rights approach to copyright and proclamations of the “droit d’auteur,” these ideas evolved slowly and haltingly, mainly to serve the self-interest of the various members of the book trade. During the ancien régime, the rhetoric of authors’ rights had been promoted by French owners of book privileges as a way of deflecting criticism of monopoly grants and of protecting their profits, and by their critics as a means of attacking the same monopolies and profits. This language was retained in the statutes after the Revolution, so the changes in interpretation and enforcement may not have been universally evident.

By the middle of the nineteenth century, French jurisprudence and philosophy tended to explicate copyrights in terms of rights of personality, but the idea of the moral claim of authors to property rights was not incorporated in the law until early in the twentieth century. The droit d’auteur first appeared in a law of April 1910. In 1920 visual artists were granted a “droit de suite,” a claim to a portion of the revenues from resale of their works. Subsequent evolution of French copyright laws led to the recognition of the right of disclosure, the right of retraction, the right of attribution, and the right of integrity. These moral rights are (at least in theory) perpetual and inalienable, and thus can be bequeathed to the heirs of the author or artist, regardless of whether the work was sold to someone else. The self-interested rhetoric of the owners of monopoly privileges thus fully emerged as the keystone of the “French system of literary property” that would shape international copyright laws in the twentieth century.

England similarly experienced a period during which privileges were granted, such as a seven-year grant from the Chancellor of Oxford University for a work in 1518. In 1557, the Worshipful Company of Stationers, a publishers’ guild, was founded on the authority of a royal charter and controlled the book trade for the next one hundred and fifty years. This company created and controlled the right of its constituent members to make copies, so in effect their “copy right” was a private property right that existed in perpetuity, independently of state or statutory rights. Enforcement and regulation were carried out by the corporation itself through its Court of Assistants. The Stationers’ Company maintained a register of books, issued licenses, and sanctioned individuals who violated its regulations. Thus, in both England and France, copyright law began as a monopoly grant to benefit and regulate the printers’ guilds, and as a form of surveillance and censorship over public opinion on behalf of the Crown.

The English system of privileges was replaced in 1710 by a copyright statute (the “Statute of Anne” or “An Act for the Encouragement of Learning, by Vesting the Copies of Printed Books in the Authors or Purchasers of Such Copies, During the Times Therein Mentioned,” 1709-10, 8 Anne, ch. 19.) The statute was not directed toward the authors of books and their rights. Rather, its intent was to restrain the publishing industry and destroy its monopoly power. According to the law, the grant of copyright was available to anyone, not just to the Stationers. Instead of a perpetual right, the term was limited to fourteen years, with a right of renewal, after which the work would enter the public domain. The statute also permitted the importation of books in foreign languages.

Subsequent litigation and judicial interpretation added a new and fundamentally different dimension to copyright. In order to protect their perpetual copyright, publishers tried to promote the idea that copyright was based on the natural rights of authors or creative individuals and that, as the agent of the author, those rights devolved to the publisher. If indeed copyrights derived from these inherent principles, they represented property that existed independently of statutory provisions and could be protected under common law. The booksellers engaged in a series of strategic lawsuits that culminated in their defeat in the landmark case Donaldson v. Beckett [98 Eng. Rep. 257 (1774)]. The court ruled that authors had a common law right in their unpublished works, but that on publication this right was extinguished by the statute, whose provisions determined the nature and scope of any copyright claims. This transition from publishers’ rights to statutory authors’ rights implied that copyright had transmuted from a straightforward license to protect monopoly profits into an expanding property right whose boundaries would henceforth increase at the expense of the public domain.

Between 1735 and 1875 fourteen Acts of Parliament amended the copyright legislation. Copyrights extended to sheet music, maps, charts, books, sculptures, paintings, photographs, dramatic works and songs sung in a dramatic fashion, and lectures outside of educational institutions. Copyright owners had no remedies at law unless they complied with a number of stipulations, which included registration, the payment of fees, the delivery of free copies of every edition to the British Museum (delinquents were fined), and complimentary copies for four libraries, including the Bodleian and Trinity College. The ubiquitous Stationers’ Company administered registration, and the registrar personally benefited from the monetary fees: 5 shillings when the book was registered, an equal amount for each assignment and each copy of an entry, and one shilling for each entry searched. Foreigners could obtain copyrights only if they were present in a part of the British Empire at the time of publication. The book had to be published in the United Kingdom, and prior publication in a foreign country – even in a British colony – was an obstacle to copyright protection.

The term of the copyright in books was the longer of 42 years from publication or the lifetime of the author plus seven years, and after the death of the author a compulsory license could be issued to ensure that works of sufficient public benefit would be published. The “work for hire” doctrine was in force for books, reviews, newspapers, magazines and essays unless a distinct contractual clause specified that the copyright was to accrue to the author. Similarly, unauthorized use of a publication was permitted for purposes of “fair use.” Only the copyright holder and his agents were allowed to import the protected works into Britain.
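The term rule amounts to a simple calculation. As a minimal sketch (in Python, with dates invented for the example rather than drawn from any historical record), the expiry year implied by the statute would be:

    def british_term_expiry(pub_year, death_year):
        # The term ran for the longer of 42 years from publication
        # or the life of the author plus seven years.
        return max(pub_year + 42, death_year + 7)

    # Illustrative dates only: a book published in 1850 by an author who
    # died in 1880 remained in copyright until 1892, because 1850 + 42 = 1892
    # exceeds 1880 + 7 = 1887.
    print(british_term_expiry(1850, 1880))  # -> 1892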

The British Commission that reported on the state of the copyright system in 1878 felt that the laws were “obscure, arbitrary and piecemeal” and were compounded by the confused state of the common law. The numerous uncoordinated laws that were simultaneously in force led to conflicts and unintended defects in the system. The report discussed, but did not recommend, an alternative to the grant of copyrights in the form of a royalty system under which “any person would be entitled to copy or republish the work on paying or securing to the owner a remuneration, taking the form of royalty or definite sum prescribed by law.” The main benefit would accrue to the public in the form of early access to cheap editions, whereas the main cost would fall on publishers, whose risk and return would be negatively affected.

The Commission noted that the implications for the colonies were “anomalous and unsatisfactory.” The publishers in England practiced price discrimination, modifying the initial high prices for copyrighted material through discounts given to reading clubs, circulating libraries and the like, benefits which were not available in the colonies. In 1846 the Colonial Office acknowledged “the injurious effects produced upon our more distant colonists” and passed the Foreign Reprints Act in the following year. This allowed colonies that adopted the terms of British copyright legislation to import cheap reprints of British copyrighted material subject to a tariff of 12.5 percent, the proceeds of which were to be remitted to the copyright owners. However, enforcement of the tariff seems to have been less than vigorous, since between 1866 and 1876 only £1155 was received from the 19 colonies that took advantage of the legislation (£1084 of it from Canada, which benefited significantly from the American reprint trade). The Canadians argued that it was difficult to monitor imports, so it would be more effective to allow them to publish the reprints themselves and collect taxes for the benefit of the copyright owners. This proposal was rejected, but under the Canadian Copyright Act of 1875 British copyright owners could obtain Canadian copyrights for Canadian editions that were sold at much lower prices than in Britain or even in the United States.
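The remittance figures give a sense of how little trade was actually being declared. A rough back-of-the-envelope check, assuming (my assumption, not the Commission’s) that the 12.5 percent tariff was the only levy and was fully remitted:

    tariff_rate = 0.125        # Foreign Reprints Act tariff on reprints
    remitted_total = 1155      # pounds received from 19 colonies, 1866-1876
    remitted_canada = 1084     # pounds received from Canada alone

    implied_imports = remitted_total / tariff_rate
    print(round(implied_imports))                      # -> 9240 pounds of declared imports
    print(round(remitted_canada / remitted_total, 2))  # -> 0.94, Canada's share

Roughly £9,240 of declared imports across nineteen colonies over an entire decade, with Canada accounting for about 94 percent, supports the suspicion that the tariff was widely evaded.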

The Commission made two recommendations. First, the bigger colonies with domestic publishing facilities should be allowed to reprint copyrighted material on payment of a license to be set by law. Second, the benefits to the smaller colonies of access to British literature should take precedence over lobbies to repeal the Foreign Reprints Act, which should be better enforced rather than removed entirely. Some had argued that the public interest required that Britain should allow the importation of cheap colonial reprints since the high prices of books were “altogether prohibitory to the great mass of the reading public” but the Commission felt that this should only be adopted with the consent of the copyright owner. They also devoted a great deal of attention to what was termed “The American Question” but took the “highest public ground” and recommended against retaliatory policies.

In the period before the Declaration of Independence, individual American colonies recognized and promoted patenting activity, but copyright protection was not considered to be of equal importance, for a number of reasons. First, in a democracy the claims of the public and the wish to foster freedom of expression were paramount. Second, to a new colony pragmatic concerns were likely of greater importance than the arts, and the more substantial literary works were imported. Markets were sufficiently narrow that an individual could saturate the market with a first-run printing, and most local publishers produced ephemera such as newspapers, almanacs, and bills. Third, it was unclear that copyright protection was needed as an incentive for creativity, especially since a significant fraction of output was devoted to works such as medical treatises and religious tracts whose authors wished simply to maximize the number of readers rather than the amount of income they received.

In 1783, Connecticut became the first state to approve an “Act for the encouragement of literature and genius” because “it is perfectly agreeable to the principles of natural equity and justice, that every author should be secured in receiving the profits that may arise from the sale of his works, and such security may encourage men of learning and genius to publish their writings; which may do honor to their country, and service to mankind.” Although this preamble might seem to favor authors’ rights strongly, the statute also specified that books were to be offered at reasonable prices and in sufficient quantities, or else a compulsory license would issue.

Despite their common source in the intellectual property clause of the U.S. Constitution, copyright policies provided a marked contrast to the patent system. According to Wheaton v. Peters, 33 U.S. 591, 684 (1834): “It has been argued at the bar, that as the promotion of the progress of science and the useful arts is here united in the same clause in the constitution, the rights of the authors and inventors were considered as standing on the same footing; but this, I think, is a non sequitur, for when congress came to execute this power by legislation, the subjects are kept distinct, and very different provisions are made respecting them.”

The earliest federal statute to protect the product of authors was approved on May 31, 1790, “for the encouragement of learning, by securing the copies of maps, charts, and books to the authors and proprietors of such copies, during the times therein mentioned.” John Barry obtained the first federal copyright when he registered his spelling book in the District Court of Pennsylvania, and early grants reflected the same utilitarian character. Policy makers felt that copyright protection would serve to increase the flow of learning and information, and by encouraging publication would contribute to democratic principles of free speech. The diffusion of knowledge would also ensure broad-based access to the benefits of social and economic development. The copyright act required authors and proprietors to deposit a copy of the title of their work in the office of the district court in the area where they lived, for a nominal fee of sixty cents. Registration secured the right to print, publish and sell maps, charts and books for a term of fourteen years, with the possibility of an extension for another like term. Amendments to the original act extended protection to other works including musical compositions, plays and performances, engravings and photographs. Legislators refused to grant perpetual terms, but the length of protection was extended in the general revisions of the laws in 1831 and 1909.

In the case of patents, the rights of inventors, whether domestic or foreign, were widely viewed as coincident with public welfare. In stark contrast, policymakers showed from the very beginning an acute sensitivity to trade-offs between the rights of authors (or publishers) and social welfare. The protections provided to authors under copyrights were, as a result, much more limited than those provided by the laws based on moral rights that were applied in many European countries. Of relevance here are stipulations regarding first sale, work for hire, and fair use. Under a moral rights-based system, an artist or his heirs can claim remedies if subsequent owners alter or distort the work in a way that allegedly injures the artist’s honor or reputation. According to the first sale doctrine, the copyright holder lost all rights after the work was sold. In the American system, if the copyright holder’s welfare were enhanced by nonmonetary concerns, these individualized concerns could be addressed and enforced through contract law, rather than through a generic federal statutory clause that would affect all property holders. “Work for hire” doctrines similarly repudiated the right of personality, in favor of facilitating market transactions. For example, in 1895 Thomas Donaldson filed a complaint that Carroll D. Wright’s editing of Donaldson’s report for the Census Bureau was “damaging and injurious to the plaintiff, and to his reputation” as a scholar. The court rejected his claim and ruled that as a paid employee he had no rights in the bulletin; to rule otherwise would create problems in situations where employees were hired to prepare data and statistics.

This difficult quest for balance between private and public good was most evident in the copyright doctrine of “fair use,” which (unlike with patents) allowed unauthorized access to copyrighted works under certain conditions. Joseph Story ruled in Folsom v. Marsh [9 F. Cas. 342 (1841)]: “we must often, in deciding questions of this sort, look to the nature and objects of the selections made, the quantity and value of the materials used, and the degree in which the use may prejudice the sale, or diminish the profits, or supersede the objects, of the original work.” One of the striking features of the fair use doctrine is the extent to which property rights were defined in terms of market valuations, or the impact on sales and profits, as opposed to a clear holding of the exclusivity of property. Fair use doctrine thus illustrates the extent to which the early policy makers weighed the costs and benefits of private property rights against the rights of the public and the provisions for a democratic society. If copyrights were as strictly construed as patents, they would serve to reduce scholarship, prohibit public access for noncommercial purposes, increase transactions costs for potential users, and inhibit the learning that the statutes were meant to promote.

Nevertheless, like other forms of intellectual property, the copyright system evolved to encompass improvements in technology and changes in the marketplace. Technological changes in nineteenth-century printing included the use of stereotyping, which lowered the costs of reprints, improvements in paper-making machinery, and the advent of steam-powered printing presses. Graphic design also benefited from innovations, most notably the development of lithography and photography. The number of new products also expanded significantly, encompassing recorded music and moving pictures by the end of the nineteenth century, and commercial television, video recordings, audiotapes, and digital music in the twentieth century.

The subject matter, scope and duration of copyrights expanded over the course of the nineteenth century to include musical compositions, plays, engravings, sculpture, and photographs. By 1910 the original copyright holder had been granted derivative rights such as rights to translations of literary works into other languages, rights to performances, and rights to adapt musical works, among others. Congress also lengthened the term of copyright several times, although by 1890 the terms of copyright protection in Greece and the United States were the most abbreviated in the world. New technologies stimulated change by creating new subjects for copyright protection, and by lowering the costs of infringement of copyrighted works. In Edison v. Lubin, 122 F. 240 (1903), the lower court rejected Edison’s copyright of moving pictures under the statutory category of photographs. This decision was overturned by the appellate court: “[Congress] must have recognized there would be change and advance in making photographs, just as there has been in making books, printing chromos, and other subjects of copyright protection.” Copyright enforcement was largely the concern of commercial interests, not of the creative individual. The fraction of copyright plaintiffs who were authors (broadly defined) was initially quite low, and fell continuously during the nineteenth century. By 1900-1909, only 8.6 percent of all plaintiffs in copyright cases were the creators of the item that was the subject of the litigation; instead, the majority of parties bringing cases were publishers and other assignees of copyrights.

In 1909 Congress revised the copyright law and composers were given the right to make the first mechanical reproductions of their music. However, after the first recording, the statute permitted a compulsory license to issue for copyrighted musical compositions: that is to say, anyone could subsequently make their own recording of the composition on payment of a fee that was set by the statute at two cents per recording. In effect, the property right was transformed into a liability rule. The next major legislative change in 1976 similarly allowed compulsory licenses to issue for works that are broadcast on cable television. The prevalence of compulsory licenses for copyrighted material is worth noting for a number of reasons: they underline some of the statutory differences between patents and copyrights in the United States; they reflect economic reasons for such distinctions; and they are also the result of political compromises among the various interest groups that are affected.

Allied Rights

The debate about the scope of patents and copyrights often underestimates or ignores the importance of allied rights that are available through other forms of the law such as contract and unfair competition. A noticeable feature of the case law is the willingness of the judiciary in the nineteenth century to extend protection to noncopyrighted works under alternative doctrines in the common law. More than 10 percent of copyright cases dealt with issues of unfair competition, and 7.7 percent with contracts; a further 12 percent encompassed issues of right to privacy, trade secrets, and misappropriation. For instance, in Keene v. Wheatley et al., 14 F. Cas. 180 (1860), the plaintiff did not have a statutory copyright in the play that was infringed. However, she was awarded damages on the basis of her proprietary common law right in an unpublished work, and because the defendants had taken advantage of a breach of confidence by one of her former employees. Similarly, the courts offered protection against misappropriation of information, such as occurred when the defendants in Chamber of Commerce of Minneapolis v. Wells et al., 111 N.W. 157 (1907) surreptitiously obtained stock market information by peering in windows, eavesdropping, and spying.

Several other examples relate to the more traditional copyright subject of the book trade. E. P. Dutton & Company published a series of Christmas books which another publisher photographed and offered as a series with similar appearance and style but at lower prices. Dutton claimed to have been injured by a loss of profits and a loss of reputation as a maker of fine books. The firm did not have copyrights in the series, but it essentially claimed a right in the “look and feel” of the books. The court agreed: “the decisive fact is that the defendants are unfairly and fraudulently attempting to trade upon the reputation which plaintiff has built up for its books. The right to injunctive relief in such a case is too firmly established to require the citation of authorities.” In a case that will resonate with academics, a surgery professor at the University of Pennsylvania was held to have a common law property right in the lectures he presented, and a student could not publish them without his permission. Titles could not be copyrighted, but they were protected as trade marks and under unfair competition doctrines. In this way, in numerous lawsuits, G. & C. Merriam Co., the original publishers of Webster’s Dictionary, restrained the actions of competitors who published the dictionary once the copyrights had expired.

International Copyrights in the United States

The U.S. was long a net importer of literary and artistic works, especially from England, which implied that recognition of foreign copyrights would have led to a net deficit in international royalty payments. The Copyright Act recognized this when it specified that “nothing in this act shall be construed to extend to prohibit the importation or vending, reprinting or publishing within the United States, of any map, chart, book or books … by any person not a citizen of the United States.” Thus, the statutes explicitly authorized Americans to take free advantage of the cultural output of other countries. As a result, it was alleged that American publishers “indiscriminately reprinted books by foreign authors without even the pretence of acknowledgement.” The tendency to reprint foreign works was encouraged by the existence of tariffs on imported books that ranged as high as 25 percent.

The United States stood out in contrast to countries such as France, where Louis Napoleon’s Decree of 1852 prohibited counterfeiting of both foreign and domestic works. Other countries which were affected by American piracy retaliated by refusing to recognize American copyrights. Despite the lobbying of numerous authors and celebrities on both sides of the Atlantic, the American copyright statutes did not allow for copyright protection of foreign works for fully one century. As a result, American publishers and producers freely pirated foreign literature, art, and drama.

What were the effects of piracy? First, did the American industry suffer from cheaper foreign books being dumped on the domestic market? This does not seem to have been the case. After controlling for the type of work, the cost of the work, and other variables, the prices of American books were lower than the prices of foreign books. American book prices may have been lower to reflect lower perceived quality or other factors that caused imperfect substitutability between foreign and local products. As might be expected, prices were not exogenously and arbitrarily fixed, but varied with a publisher’s estimation of market factors such as the degree of competition and the responsiveness of demand. The reading public appears to have gained from the lack of copyright, which increased access to the superior products of the more developed markets in Europe, and in the long run this likely improved both the demand for and supply of domestic science and literature.

Second, according to observers, professional authorship in the United States was discouraged because it was difficult to compete with established authors such as Scott, Dickens and Tennyson. Whether native authors were deterred by foreign competition would depend on the extent to which foreign works prevailed in the American market. Early in American history the majority of books were reprints of foreign titles. However, nonfiction titles written by foreigners were less likely to be substitutable for nonfiction written by Americans; consequently, the supply of nonfiction soon tended to be provided by native authors. From an early period grammars, readers, and juvenile texts were also written by Americans. Geology, geography, history and similar works would have to be adapted or completely rewritten to be appropriate for an American market, which reduced their attractiveness as reprints. Thus, publishers of schoolbooks, medical volumes and other nonfiction did not feel that the reforms of 1891 were relevant to their undertakings. Academic and religious books were less likely to be written for monetary returns, and their authors probably benefited from the wider circulation that the lack of international copyright encouraged. However, the writers of these works declined in importance relative to writers of fiction, a category which grew from 6.4 percent before 1830 to 26.4 percent by the 1870s.

On the other hand, foreign authors dominated the field of fiction for much of the century. One study estimates that about fifty percent of all fiction best sellers in the antebellum period were pirated from foreign works. In 1895 American authors accounted for two of the top ten best sellers, but by 1910 nine of the top ten were written by Americans. This fall over time in the fraction of foreign authorship may have been due to a natural evolutionary process, as the development of the market for domestic literature encouraged specialization. The growth in the number of fiction authors was associated with an increase in the number of books per author over the same period. Improvements in transportation and the increase in the academic population probably played a large role in enabling individuals who lived outside the major publishing centers to become writers. As the market expanded, a larger fraction of writers could become professionals.

Although the lack of copyright protection may not have discouraged authors, this does not imply that intellectual property policy in this dimension had no costs. It is likely that the lack of foreign copyrights led to some misallocation of effort or resources, such as attempts to circumvent the rules. Authors changed their residence temporarily when books were about to be published in order to qualify for copyright. Others obtained copyrights by arranging to co-author with a foreign citizen. T. H. Huxley adopted this strategy, arranging to co-author with “a young Yankee friend … Otherwise the thing would be pillaged at once.” An American publisher suggested that Kipling should find “a hack writer, whose name would be of use simply on account of its carrying the copyright.” Harriet Beecher Stowe proposed a partnership with Elizabeth Gaskell, so they could “secure copyright mutually in our respective countries and divide the profits.”

It is widely acknowledged that copyrights in books tended to be the concern of publishers rather than of authors (although the two are naturally not independent of each other). As a result of the lack of legal copyrights in foreign works, publishers raced to be first on the market with the “new” pirated books, and the industry experienced several decades of intense, if not quite “ruinous,” competition. These were problems that publishers in England had faced before in the market for uncopyrighted books, such as those of Shakespeare and Fielding. Their solution was to collude in the form of strictly regulated cartels or “printing congers.” The congers created divisible property in books that they traded, such as a one hundred and sixtieth share in Johnson’s Dictionary that was sold for £23 in 1805. Cooperation resulted in risk sharing and a greater ability to cover expenses. The unstable races in the United States similarly settled down during the 1840s into collusive standards that were termed “trade custom” or “courtesy of the trade.”
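Such a share price implies a market valuation for the whole work; the computation below is mine and purely illustrative of how these divisible shares priced a book:

    share_fraction = 1 / 160   # a one hundred and sixtieth share
    share_price = 23           # pounds paid for the share in 1805

    implied_value = share_price / share_fraction
    print(round(implied_value))  # -> 3680 pounds for the whole work

An implied valuation of £3,680 for a dictionary that had long been outside statutory copyright suggests how valuable the congers’ privately enforced rights could be.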

The industry achieved relative stability because the dominant firms cooperated in establishing synthetic property rights in foreign-authored books. American publishers made payments (termed “copyrights”) to foreign authors to secure early sheets, and other firms recognized their exclusive property in the “authorized reprint.” Advance payments to foreign authors not only served to align publishers’ and authors’ interests; they were also recognized by “reputable” publishers as “copyrights.” These exclusive rights were tradable, and were enforced by threats of predatory pricing and retaliation. Such practices suggest that publishers were able to simulate the legal grant through private means.

However, private rights naturally did not confer property rights that could be enforced at law. The case of Sheldon v. Houghton, 21 F. Cas. 1239 (1865), illustrates that such a right was considered to be “very valuable, and is often made the subject of contracts, sales, and transfers, among booksellers and publishers.” The very fact that a firm would file a plea for the court to protect its claim indicates how vested a right it had become. The plaintiff argued that “such custom is a reasonable one, and tends to prevent injurious competition in business, and to the investment of capital in publishing enterprises that are of advantage to the reading public.” The courts rejected this claim, since synthetic rights differed from copyrights in the degree of security that was offered by the enforcement power of the courts. Nevertheless, these title-specific rights of exclusion decreased uncertainty, enabled publishers to recoup their fixed costs, and avoided the wasteful duplication of resources that would otherwise have occurred.

It was not until 1891 that the Chace Act granted copyright protection to selected foreign residents. Thus, after a century of lobbying by interested parties on both sides of the Atlantic, based on reasons that ranged from the economic to the moral, copyright laws changed only when the United States became more competitive in the international market for literary and artistic works. However, the act also included significant concessions to printers’ unions and printing establishments in the form of “manufacturing clauses.” First, a book had to be published in the United States before or at the same time as the publication date in its country of origin. Second, the work had to be printed in the United States, or printed from type set in the United States or from plates made from type set in the United States. Copyright protection still depended on conformity with stipulations such as formal registration of the work. These clauses resulted in the United States failing to qualify for admission to the international Berne Convention until 1988, more than one hundred years after the first Convention.

After the copyright reforms of 1891, both English and American authors were disappointed to find that the change in the law did not lead to significant gains. Foreign authors realized that they might even have benefited from the lack of copyright protection in the United States. Despite the cartelization of publishing, competition for synthetic copyrights had ensured that foreign authors were able to obtain the payments that American firms made to secure the right to be first on the market. It can also be argued that foreign authors were able to reap higher total returns from the expansion of the market through piracy. The lack of copyright protection may have functioned as a form of price discrimination, where the product was sold at a higher price in the developed country and at a lower or zero price in the poorer country. Returns under such circumstances may have been higher for goods with demand externalities or network effects, such as “bestsellers,” where consumer valuation of the book increased with the size of the market. For example, Charles Dickens, Anthony Trollope, and other foreign writers were able to gain considerable income from complementary lecture tours in the extensive United States market.

In view of the strong protection accorded to inventors under the U.S. patent system, to foreign observers its copyright policies appeared all the more reprehensible. The United States, the most liberal in its policies towards patentees, had led the movement for harmonization of patent laws. In marked contrast, throughout the history of the U.S. system, its copyright grants were in general more abridged than those of almost all other countries in the world. The term of copyright grants to American citizens was among the shortest in the world, the country applied the broadest interpretation of fair use doctrines, and the validity of the copyright depended on strict compliance with the requirements. U.S. failure to recognize the rights of foreign authors was also unique among the major industrial nations. Throughout the nineteenth century proposals to reform the law and to acknowledge foreign copyrights were repeatedly brought before Congress and rejected. Even the bill that finally recognized international copyrights almost failed, passing only at the last possible moment, and required longstanding exemptions in favor of workers and printing enterprises.

Just as the United States shaped the evolution of international patent law, France’s influence was evident in the subsequent evolution of international copyright laws. Other countries had long recognized the rights of foreign authors in national laws and bilateral treaties, but France stood out in its favorable treatment of domestic and foreign copyrights as “the foremost of all nations in the protection it accords to literary property.” This was especially true of its concessions to foreign authors and artists. For instance, France allowed copyrights to foreigners conditioned on manufacturing clauses in 1810, and granted foreign and domestic authors equal rights in 1852. In the following decade France entered into almost two dozen bilateral treaties, prompting a movement towards multilateral negotiations, such as the Congress on Literary and Artistic Property in 1858. The International Literary and Artistic Association, which the French novelist Victor Hugo helped to establish, conceived of and organized the Convention that first met in Berne in 1883.

The Berne Convention included a number of countries that wished to establish an “International Union for the Protection of Literary and Artistic Works.” The preamble declared their intent to “protect effectively, and in as uniform a manner as possible, the rights of authors over their literary and artistic works.” The actual Articles were more modest in scope, requiring national treatment of authors belonging to the Union and minimum protection for translation and public performance rights. The Convention authorized the establishment of a physical office in Switzerland, whose official language would be French. The rules were revised in 1908 to extend the duration of copyright and to include modern technologies. Perhaps the most significant aspect of the convention was not its specific provisions, but the underlying property rights philosophy which was decidedly from the natural rights school. Berne abolished compliance with formalities as a prerequisite for copyright protection since the creative act itself was regarded as the source of the property right. This measure had far-reaching consequences, because it implied that copyright was now the default, whereas additions to the public domain would have to be achieved through affirmative actions and by means of specific limited exemptions. In 1928 the Berne Convention followed the French precedent and acknowledged the moral rights of authors and artists.

Unlike its leadership in patent conventions, the United States declined an invitation to the pivotal copyright conference in Berne in 1883; it attended but refused to sign the 1886 agreement of the Berne Convention. Instead, the United States pursued international copyright policies in the context of the weaker Universal Copyright Convention (UCC), which was adopted in 1952 and formalized in 1955 as a complementary agreement to the Berne Convention. The UCC membership included many developing countries that did not wish to comply with the Berne Convention because they viewed its provisions as overly favorable to the developed world. The United States was among the last wave of entrants into the Berne Convention when it finally joined in 1988. In order to do so it removed prerequisites for copyright protection such as registration, and also lengthened the term of copyrights. However, it still has not introduced federal legislation in accordance with Article 6bis, which declares the moral rights of authors “independently of the author’s economic rights, and even after the transfer of the said rights.” Similarly, individual countries continue to differ in the extent to which multilateral provisions govern domestic legislation and practices.

The quest for harmonization of intellectual property laws resulted in a “race to the top,” directed by the efforts and self-interest of the countries with the strongest property rights. The movement to harmonize patents was driven by American efforts to ensure that its extraordinary patenting activity was remunerated beyond as well as within its borders. At the same time, the United States ignored international conventions to unify copyright legislation. Nevertheless, the harmonization of copyright laws proceeded, promoted by France and other civil law regimes, which urged stronger protection for authors based on their “natural rights” even as they infringed on the rights of foreign inventors. The net result was that international pressure was applied to developing countries in the twentieth century to establish both strong patents and strong copyrights, although no individual developed country had adhered to both simultaneously during its own early growth phase. This occurred even though theoretical models did not offer persuasive support for intellectual property harmonization, and indeed suggested that uniform policies might be detrimental even to some developed countries and to overall global welfare.

Conclusion

The past three centuries stand out in terms of the diversity across nations in intellectual property institutions, but the nineteenth century saw the origins of the movement towards the “harmonization” of laws that at present dominates global debates. Among the now-developed countries, the United States stood out for its conviction that broad access to intellectual property rules and standards was key to achieving economic development. Europeans were less concerned about enhancing mass literacy and public education, and viewed copyright owners as inherently meritorious and deserving of strong protection. European copyright regimes thus evolved in the direction of author’s rights, while the United States lagged behind the rest of the world in terms of both domestic and foreign copyright protection.

By design, American statutes differentiated between patents and copyrights in ways that seemed warranted if the objective was to increase social welfare. The patent system early on discriminated between nonresident and domestic inventors, but within a few decades changed to protect the right of any inventor who filed for an American patent regardless of nationality. The copyright statutes, in contrast, openly encouraged piracy of foreign goods on an astonishing scale for one hundred years, in defiance of the recriminations and pressures exerted by other countries. The American patent system required an initial search and examination that ensured the patentee was the “first and true” creator of the invention in the world, whereas copyrights were granted through mere registration. Patents were based on the assumption of novelty and held invalid if this assumption was violated, whereas essentially similar but independent creation was copyrightable. Copyright holders were granted the right to derivative works, whereas the patent holder was not. Unauthorized use of patented inventions was prohibited, whereas “fair use” of copyrighted material was permissible if certain conditions were met. Patented inventions involved greater initial investments, effort, and novelty than copyrighted products and tended to be more responsive to material incentives, whereas in many cases cultural goods would still have been produced, perhaps only slightly reduced in number, in the absence of such incentives. Fair use was not allowed in the case of patents because the disincentive effect was likely to be higher, while the costs of negotiation between the patentee and the narrower market of potential users would generally be lower. If copyrights had been as strongly enforced as patents, it would have benefited publishers and a small literary elite at the cost of social investments in learning and education.

The United States created a utilitarian market-based model of intellectual property grants which created incentives for invention, but always with the primary objective of increasing social welfare and protecting the public domain. The checks and balances of interest group lobbies, the legislature and the judiciary worked effectively as long as each institution was relatively well-matched in terms of size and influence. However, a number of legal and economic scholars are increasingly concerned that the political influence of corporate interests, the vast number of uncoordinated users over whom the social costs are spread, and international harmonization of laws have upset these counterchecks, leading to over-enforcement at both the private and public levels.

International harmonization with European doctrines introduced significant distortions in the fundamental principles of American copyright and its democratic provisions. One of the most significant of these changes was also one of the least debated: compliance with the precepts of the Berne Convention accorded automatic copyright protection to all creations on their fixation in tangible form. This rule reversed the relationship between copyright and the public domain that the U.S. Constitution stipulated. According to original U.S. copyright doctrines, the public domain was the default, and copyright merely comprised a limited exemption to the public domain; after the alignment with Berne, copyright became the default, and the rights of the public and of the public domain now merely comprise a limited exception to the primacy of copyright. The pervasive uncertainty that characterizes the intellectual property arena today leads risk-averse individuals and educational institutions to err on the side of abandoning their right to free access rather than invite potential challenges and costly litigation. A number of commentators are equally concerned about other dimensions of the globalization of intellectual property rights, such as the movement to emulate European grants of property rights in databases, which has the potential to inhibit diffusion and learning.

Copyright law and policy has always altered, and been altered by, social, economic and technological changes, in the United States and elsewhere. However, the one constant across the centuries is that copyright protection has turned on crucial political questions to a far greater extent than on its economic implications.

Additional Readings

Economic History

B. Zorina Khan. The Democratization of Invention: Patents and Copyrights in American Economic Development, 1790-1920. New York: Cambridge University Press, 2005.

Law and Economics

Besen, Stanley, and L. Raskind. “An Introduction to the Law and Economics of Intellectual Property.” Journal of Economic Perspectives 5 (1991): 3-27.

Breyer, Stephen. “The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies and Computer Programs.” Harvard Law Review 84 (1970): 281-351.

Gallini, Nancy and S. Scotchmer. “Intellectual Property: When Is It the Best Incentive System?” Innovation Policy and the Economy 2 (2002): 51-78.

Gordon, Wendy, and R. Watt, editors. The Economics of Copyright: Developments in Research and Analysis. Cheltenham, UK: Edward Elgar, 2002.

Hurt, Robert M., and Robert M. Shuchman. “The Economic Rationale of Copyright.” American Economic Review Papers and Proceedings 56 (1966): 421-32.

Johnson, William R. “The Economics of Copying.” Journal of Political Economy 93 (1985): 158-74.

Landes, William M., and Richard A. Posner. “An Economic Analysis of Copyright Law.” Journal of Legal Studies 18 (1989): 325-63.

Landes, William M., and Richard A. Posner. The Economic Structure of Intellectual Property Law. Cambridge, MA: Harvard University Press, 2003.

Liebowitz, S. J. “Copying and Indirect Appropriability: Photocopying of Journals.” Journal of Political Economy 93 (1985): 945-57.

Merges, Robert P. “Contracting into Liability Rules: Intellectual Property Rights and Collective Rights Organizations.” California Law Review 84, no. 5 (1996): 1293-1393.

Meurer, Michael J. “Copyright Law and Price Discrimination.” Cardozo Law Review 23 (2001): 55-148.

Novos, Ian E., and Michael Waldman. “The Effects of Increased Copyright Protection: An Analytic Approach.” Journal of Political Economy 92 (1984): 236-46.

Plant, Arnold. “The Economic Aspects of Copyright in Books.” Economica 1 (1934): 167-95.

Takeyama, L. “The Welfare Implications of Unauthorized Reproduction of Intellectual Property in the Presence of Demand Network Externalities.” Journal of Industrial Economics 42 (1994): 155–66.

Takeyama, L. “The Intertemporal Consequences of Unauthorized Reproduction of Intellectual Property.” Journal of Law and Economics 40 (1997): 511–22.

Varian, Hal. “Buying, Sharing and Renting Information Goods.” Journal of Industrial Economics 48, no. 4 (2000): 473–88.

Varian, Hal. “Copying and Copyright.” Journal of Economic Perspectives 19, no. 2 (2005): 121-38.

Watt, Richard. Copyright and Economic Theory: Friends or Foes? Cheltenham, UK: Edward Elgar, 2000.

History of Economic Thought

Hadfield, Gillian K. “The Economics of Copyright: A Historical Perspective.” Copyright Law Symposium (ASCAP) 38 (1992): 1-46.

History

Armstrong, Elizabeth. Before Copyright: The French Book-Privilege System, 1498-1526. Cambridge: Cambridge University Press, 1990.

Birn, Raymond. “The Profits of Ideas: Privileges en librairie in Eighteenth-century France.” Eighteenth-Century Studies 4, no. 2 (1970-71): 131-68.

Bugbee, Bruce. The Genesis of American Patent and Copyright Law. Washington, DC: Public Affairs Press, 1967.

Dawson, Robert L. The French Booktrade and the “Permission Simple” of 1777: Copyright and the Public Domain. Oxford: Voltaire Foundation, 1992.

Hackett, Alice P., and James Henry Burke. Eighty Years of Best Sellers, 1895-1975. New York: Bowker, 1977.

Nowell-Smith, Simon. International Copyright Law and the Publisher in the Reign of Queen Victoria. Oxford: Clarendon Press, 1968.

Patterson, Lyman. Copyright in Historical Perspective. Nashville: Vanderbilt University Press, 1968.

Rose, Mark. Authors and Owners: The Invention of Copyright. Cambridge: Harvard University Press, 1993.

Saunders, David. Authorship and Copyright. London: Routledge, 1992.

Citation: Khan, B. “An Economic History of Copyright in Europe and the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-copyright-in-europe-and-the-united-states/

Economic Interests and the Adoption of the United States Constitution

Robert A. McGuire, University of Akron

The adoption of the Constitution greatly strengthened the national government at the expense of the states. This article examines how our Founding Fathers designed the Constitution, reviewing findings on the political and economic factors behind the provisions included in the Constitution and its ratification. The article discusses the views of Charles Beard and his critics and focuses on recent quantitative findings that explain the making of the Constitution. These findings suggest that the personal interests of the Founding Fathers, as well as their constituents’ interests, played an important role in drafting the Constitution. They also suggest that economic and other interests played important roles at the ratifying conventions.

The Adoption of the Constitution

During the summer of 1787, fifty-five men attended the constitutional convention in Philadelphia that drafted the Constitution of the United States. Less than a year after the convention finished, New Hampshire, on June 21, 1788, became the ninth state to ratify the Constitution. Because ratification by only nine of the thirteen states was required for the Constitution to be considered adopted by the ratifying states, Congress declared the Constitution to be in force beginning March 4, 1789. The Constitution thus replaced the Articles of Confederation and Perpetual Union as the law of the land. Under the Articles, which had been in effect only since 1781, the American political system consisted of a loose confederation of largely independent states with a very weak central government. Under the Constitution, the Articles were replaced with a political system that consisted of a powerful central government with, ultimately, little state sovereignty.

Fiscal and Economic Problems under the Articles of Confederation

Under the Articles of Confederation, the central (federal) government had little or no power to raise revenues and had difficulty repaying its domestic and foreign debt. The fiscal problems under the Articles were twofold. First, the primary source of revenues to fund the federal government was requisitions asking the state governments to send state-collected tax revenues to the federal government. Yet the Articles did not include any enforcement mechanism to ensure that the state governments would send in the full amount of the funds requested of them, which they never did. Second, each state had a single vote in the federal Congress, and the unanimous consent of the thirteen states was required for the Congress to enact any federal taxes. A single state could thus block federal tax legislation. This de facto veto power on the part of each state created substantial decision-making costs for Congress and prevented proposed federal imposts (import duties) from being enacted under the Articles. The central government also lacked the legal power to enforce uniform commercial or trade regulations – either at home or abroad – that might have been conducive to the development of a common economic trading area. Likewise, the Confederation government possessed uncertain authority to deal with foreign powers. Its problems raising revenues and repaying existing debts created uncertainty about the financial viability of the federal government. Although state and local interference in trade was not a major problem at the time, many commercial interests apparently feared that local and state barriers to trade could develop in the future under the Articles of Confederation. Western landowners also were often impatient with the federal government because of its inability to establish order on the frontiers.

How the Constitution Strengthened the Power of the Central Government

Under the Constitution, the power to tax, along with the authority to settle past federal debts, was firmly delegated to the central (national) government, improving the central government’s financial prospects as well as capital markets (the markets for funds). The Constitution, unlike the Articles, required only a simple majority vote of the representatives in both chambers of the national Congress to enact tax legislation. There were, and are, checks on simple majority voting, though. The president can veto congressional legislation, and a two-thirds vote in Congress can override a presidential veto. But neither of these constraints on majority voting creates the magnitude of decision-making costs that unanimous voting under the Articles created. The assignment of the sole right “To coin money, [and] regulate the value thereof,” to the national government and the prohibition on states from emitting “bills of credit” (paper money) also were expected to improve capital markets. A national judiciary was created under the Constitution, and the power to make treaties with foreign nations was firmly delegated to the central government.

How a Strong Central Government Affected the Economy

With respect to interstate trade, Gary M. Walton and James F. Shepherd (1979) suggest “the possibility of such barriers [to interstate commerce] loomed as a threat until the Constitution specifically granted the regulation of interstate commerce to the federal government” (pp. 187-88). Walton and Shepherd conclude that the most important changes associated with the Constitution “were those changes that strengthened the framework for protection of private property and enforcement of contracts” (pp. 187-88). These changes were most important because they increased the benefits of exchange (the cornerstone of a market economy) and created incentives for individuals to specialize in economic activities in which they had a particular advantage and then engage in mutually advantageous exchange (trade) with individuals specializing in other economic activities. Specific provisions in the Constitution that helped to increase the benefits of exchange were those that prohibited the national and state governments from enacting ex-post-facto laws (retroactive laws) and a provision that prohibited the state governments from passing any “law impairing the obligation of contracts.” These prohibitions were important to the development of a market economy because they constrained governments from interfering in economic exchange, making the returns to economic activity more secure.

Because the economies of the thirteen states were not highly interconnected in the 1780s, the immediate consequences for the nation of adopting the Constitution were not at all large. But the change in our fundamental political institution was ultimately to have a profound influence on our nation’s history, because the Constitution over time became the foundation of the supremacy of the national government in the United States.

The “Important Question”: How Did Constitutional Change Come About?

How did this fundamental change come about? Why did our nation’s Founding Fathers replace the Articles of Confederation, our first “constitution,” with the United States Constitution? In defending the Constitution in late 1787, Alexander Hamilton observed “It has been frequently remarked that it seems to have been reserved to the people of this country . . . to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force” (Hamilton, Jay and Madison, 1937, No. 1, p. 3). To paraphrase Hamilton: How did “this country” decide “the important question”?

Since the middle of the nineteenth century, hundreds of scholars have studied and debated the possible explanations for such an important change in the fundamental political institution of our nation. Many historians have concluded that the Constitution was drafted and adopted as a result of a consensus that the Articles of Confederation were fatally flawed. Other scholars have argued that the limitations of the Articles could have been eliminated without fundamentally altering the balance of power between the states and the central government. Others have suggested that the adoption of the Constitution was the product of conflict between various economic and financial interests within the nation, a conflict between those who, because of their interests, wanted a strengthened, more powerful national government and those who, because of their interests, did not.

Charles Beard’s “Economic” Interpretation

In 1913, Charles A. Beard (1913 [1935]) consolidated various scholarly views of the Constitution and, in the process, offered what became identified as “the” economic interpretation of the Constitution. Beard (pp. 16-18) argued that the formation of the Constitution was a conflict based upon competing economic interests – interests of both the proponents and opponents. In his view, the Federalists, the founders who supported a strong, centralized government and favored the Constitution during its drafting and ratification, were individuals whose primary economic interests were tied to personal property. They were mainly merchants, shippers, bankers, speculators, and private and public securities holders, according to Beard (pp. 31-51). The Anti-federalists, the opponents of the Constitution and supporters of a more decentralized government, were individuals whose primary economic interests were tied to real property. Beard (pp. 26-30) contended these opponents consisted primarily of more isolated, less-commercial farmers, who often were also debtors, and northern manorial planters along the Hudson River. However, Beard (pp. 29-30) maintained that many southern slaveowning planters, who held much of their wealth in personal property, had much in common with northern merchants and financiers, and should be included as supporters of the Constitution.

Beard (pp. 31-51) claimed that support for his argument could be found in the economic conditions prevailing during the 1780s. As a result, he suggested that the primary beneficiaries under the Constitution would have been individuals with commercial and financial interests – particularly, those with public securities holdings who, according to Beard, had a clause included in the Constitution requiring the assumption of existing federal debt by the new national government. Commercial and financial interests also would benefit because of more certainty in the rules of commerce, trade, and credit markets under the Constitution. More isolated less-commercial farmers, debtors, paper money advocates, and the northern planters along the Hudson would be the primary beneficiaries under the status quo. They would have had greater ability at the state level with decentralized government to avoid heavy land taxation – levied to pay off the public debt – and to promote paper money and debt moratorium issues that advanced their interests. Consequently, they opposed the Constitution.

Criticisms of Beard’s View: Brown and McDonald

Beard’s thesis soon emerged as the standard historical interpretation and remained so until the 1950s, when it began to face serious scholarly challenges. The most influential and lasting of the challenges were those by Robert E. Brown (1956) and Forrest McDonald (1958). Robert E. Brown’s (1956) critique dismisses an economic interpretation as utterly without merit, attacking Beard’s conclusions in their entirety. Brown counters Beard’s views that eighteenth-century America was not very democratic, that the wealthy were strong supporters of the Constitution, and that those without personal property generally opposed the Constitution. Brown examines the support for the Constitution among various economic and social classes, the democratic nature of the nation, and the franchise within the states in eighteenth-century America. He maintains that Beard was simply wrong: eighteenth-century America was democratic, the franchise was common, and there was widespread support for the Constitution.

In contrast, Forrest McDonald’s (1958) study empirically examines the wealth, economic interests, and the votes of the delegates to the constitutional convention in Philadelphia that drafted the Constitution in 1787 and of the delegates to the thirteen ratifying conventions that considered its adoption afterward. McDonald’s primary interest is in testing Charles A. Beard’s thesis. Based on his evidence collected from the Philadelphia convention, McDonald (1958, p. 110) concludes, “anyone wishing to rewrite the history of those proceedings largely or exclusively in terms of the economic interests represented there would find the facts to be insurmountable obstacles.” With respect to the ratification of the Constitution, McDonald (1958, p. 357) likewise concludes, “On all counts, then, Beard’s thesis is entirely incompatible with the facts.”

Neither Brown nor McDonald, however, offered any modern rigor (no formal or statistical analysis of any type) in testing the behavior of the Founding Fathers during the drafting or ratification of the Constitution. Yet Brown and McDonald are still credited by many with delivering the fatal blows to Beard’s economic interpretation of the Constitution. (Examples of economists, historians, political scientists, and legal scholars who credit Brown or McDonald, or both, with proving Beard incorrect include Buchanan and Tullock (1962), Wood (1969), Riker (1987), and Ackerman (1991).)

The New Quantitative Approach

Recently economic historians have begun to reexamine the behavior of our Founding Fathers concerning the Constitution. This reexamination, which employs formal economics and modern statistical techniques, involves the application of an economic model of voting behavior during the drafting and ratification processes and the collection and processing of large amounts of data on the economic and financial interests and other characteristics of the men who drafted and ratified the Constitution. The findings of this reexamination, which have become the accepted view among quantitative economic historians today (Robert Whaples, 1995), provide answers to many heretofore-unresolved issues involving the adoption of the Constitution.

What factors explain the behavior of George Washington, James Madison, Alexander Hamilton, and the other Founding Fathers regarding the Constitution? Why did they include a prohibition on state paper-money issues in the Constitution? Why did they decide to allow for duties (taxes) on imports but not on exports? Why did they fail to adopt a clause giving the national government an absolute veto over state laws? Were the economic, financial, and other interests of the founders significant factors in their support for the Constitution, or their support for specific clauses in it, or their support for ratification? Were, for example, the slaveholdings of the founders a significant factor in their behavior? Were the founders’ commercial activities significant factors? Were their private or public securities holdings significant factors?

The Rational Choice Model

The critical reexamination of the adoption of the Constitution, which began in the mid-1980s (Robert A. McGuire and Robert L. Ohsfeldt, 1984), offers an economic model of the founders that is based on rational choice and methodological individualism, and employs formal statistical techniques. Methodologically, such an approach analyzes the choices of the individuals involved in the drafting and ratification of the Constitution. The object of analysis is the behavior of the individual Founding Fathers, not the behavior of some social class or group. The economic model presumes that a founder was motivated by self-interest to maximize the satisfaction he received from the choices he made at the constitutional convention he attended. But neither self-interest nor economic rationality implies that a founder was concerned only with his financial or material well-being. The economic model indicates that a founder weighed the benefits (the satisfaction) and the costs (the sacrifice) to himself of his actions, making those choices that were in his self-interest, broadly defined to include any pecuniary and non-pecuniary benefits and costs of his choices. This is the presumption of rational choice.

Personal and Constituent Interests

More precisely, the economic model is that a founder acted individually to maximize the net benefit he received from his votes. A founder would have voted in favor of a particular issue at Philadelphia, or in favor of ratification, if he expected the net benefit he would receive to be greater if the issue, or the Constitution, was adopted. Because a founder was from a particular state or locality, he represented the citizens (the constituents) of the state or locality in which he resided as well as his own personal interests at Philadelphia or a ratifying convention. The benefit of a founder’s vote was affected directly by the anticipated impact of his vote on his personal interests and indirectly by the anticipated impact of his vote on his constituents’ interests. A founder’s personal interests depended on his own economic interests and ideology, and his constituent interests depended on the economic interests and ideologies of his constituents. The interests may have been purely economic (pecuniary interests, such as the ownership or value of specific economic assets) or ideological (non-pecuniary interests, such as beliefs about the moral correctness of a particular form of government). The potential effect of personal interests on a founder’s vote is straightforward; the founder would have benefited or been harmed directly. The potential effect of constituents’ interests on a founder’s vote operates through the impact of his vote on his prospects for maintaining his decision-making authority – that is, for continuing to represent his constituents.
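This net-benefit framing maps directly onto the statistical model described in the next section. As a sketch (the notation here is illustrative, not the studies’ exact specification): let x_i collect the measured personal and constituent interests of founder i, let β be their weights, and let ε_i capture unmeasured idiosyncratic factors. Then

```latex
\[
\text{Vote}_i = 1 \;\Longleftrightarrow\; NB_i = x_i'\beta + \varepsilon_i > 0,
\qquad
\Pr(\text{Vote}_i = 1 \mid x_i) = \frac{1}{1 + e^{-x_i'\beta}}
\]
```

where the second expression follows when ε_i has a logistic distribution – which yields precisely the logit model estimated in the statistical tests below.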

Statistical Tests

To quantitatively test the economic model, the founders’ observed votes on a particular issue at Philadelphia or on ratification are statistically related to measures of the economic interests and ideologies of the founders and their constituents. The statistical technique employed is called multivariate logistic regression. Estimation of a logistic regression model is designed to determine the marginal or incremental impact of each explanatory variable – the measures of the economic interests and ideologies – on the dependent variable – the “yes” or “no” votes on a particular issue at Philadelphia or ratification. The estimated logistic regression produces for each explanatory variable an estimated coefficient that captures the influence (its direction and magnitude) of the explanatory variable on the probability of a founder voting in favor of the issue being estimated, holding the influence of all other explanatory variables constant. The benefit of this approach is that each potential factor, each explanatory variable, affecting a vote is examined separately from the influence of the other factors, while at the same time, controlling for the influence of the other factors. This reduces to a minimum the incidence of spurious relationships between any particular factor and a vote. For example, if the relationship between the vote on an issue and the founders’ slaveholdings is examined in isolation, a positive correlation may be indicated. But if other interests are taken into account (for example, the founders’ public securities holdings), the correlation with slaveholdings could change and, in fact, be negative.
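To make the procedure concrete, the following is a minimal sketch of such a multivariate logistic regression in Python, using statsmodels on fabricated delegate data. The variable names (“slaves,” “public_securities,” “merchant”) and all magnitudes are hypothetical stand-ins, not the studies’ actual data.

```python
# A minimal sketch of the logistic regression described above, fitted to
# fabricated "delegate" data; all names and magnitudes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical number of delegates

df = pd.DataFrame({
    "slaves": rng.poisson(5, n),                 # slaveholdings (head count)
    "public_securities": rng.integers(0, 2, n),  # holds public debt? (0/1)
    "merchant": rng.integers(0, 2, n),           # merchant interests? (0/1)
})

# Simulate yes/no votes so the example is self-contained.
index = -0.5 - 0.15 * df["slaves"] + 1.2 * df["public_securities"] + 0.8 * df["merchant"]
df["vote_yes"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-index)))

# Each estimated coefficient gives the direction and magnitude of that
# variable's influence on the probability of a yes vote, holding the
# other explanatory variables constant.
X = sm.add_constant(df[["slaves", "public_securities", "merchant"]])
result = sm.Logit(df["vote_yes"], X).fit(disp=False)
print(result.summary())
```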

The modern economic history of the Constitution indicates that Charles Beard’s economic interpretation has not yet been refuted. The issues, in fact, have not been heretofore tested. Earlier historical studies did not have the benefit of modern economic methodology and systematic statistical analysis. As such, their conclusions cannot pass scientific scrutiny. Major advances in both economic thinking about political behavior and statistical techniques have taken place in the last thirty or so years. These modern methods allow for a systematic quantitative analysis of the voting behavior of the founders employing, among other data and evidence, the types of non-quantitative data about the founders that historians collected decades ago but never systematically analyzed. They failed to systematically analyze such data and evidence because the necessary techniques did not exist and because they generally were not trained in quantitative analysis.

Findings of the Quantitative Approach: A New Economic Interpretation of the Constitution

One unambiguous conclusion can be drawn from the recent quantitative studies: There is a valid economic interpretation of the Constitution. The idea of self-interest can explain the design and adoption of the Constitution. This does not mean that either the framers or the ratifiers of the Constitution were motivated by a greedy desire to “line their own pockets” or by some dialectic concept of “class interests.” Nor does it mean that some “conspiracy among the founders” or some fatalistic concept of “economic determinism” explains the Constitution. Nor does it mean that the founders were completely selfish in a purely financial or material sense. It does mean that the pursuit of one’s “interests” both in a narrow, pecuniary (financial) sense and a broader, non-pecuniary sense can explain the drafting and ratification of the Constitution. (See McGuire (2001).)

The recent quantitative studies contend that the Constitution was neither drafted nor ratified by a group of disinterested and nonpartisan demigods motivated only, or even primarily, by high-minded political principles to promote the nation’s interest. The fifty-five delegates to the Philadelphia convention that drafted the Constitution during the summer of 1787 were motivated by self-interest, in a broad sense, in choosing its design. Quantitative research suggests that these framers of the Constitution can be seen as rational individuals who were making choices in designing the fundamental rules of governance for the nation. In doing so, they rationally weighed the expected costs and benefits of each clause they considered. They included a particular clause in the Constitution only if they expected the benefits from its inclusion to exceed the costs they expected to result from inclusion. Likewise, the more than 1,600 delegates who participated in the thirteen state ratifying conventions, which took place between 1787 and 1790 to consider adopting the Constitution, can be viewed as rational individuals who were making the choice to adopt the set of rules embodied in the Constitution as drafted at the Philadelphia Constitutional Convention. In doing so, they rationally weighed the expected costs and benefits of their decision to ratify. They voted to ratify only if the benefits they expected from adoption of the set of rules embodied in the Constitution exceeded the costs they expected to result from that set of rules. If not, they voted against ratification.

Contrary to earlier views that the founders’ specific economic or financial interests cannot be principally identified with one side or the other of an issue, the modern evidence indicates that their economic and financial interests can be so identified. When specific issues arose at the Philadelphia convention that had a direct impact on important economic interests of the founders, their economic interests, even narrowly defined, significantly influenced the specific design of the Constitution, and the magnitudes of the influences were often quite large. The types of economic interests that mattered for specific issues were those likely to have accounted for a substantial portion of a founder’s overall wealth or to have represented his primary livelihood.

Even when the founders were deciding on the general issue of the basic design of the Constitution to strengthen the national government, economic and other interests significantly influenced them. In terms used in constitutional political economics, even when the founders were making fundamental “constitutional” choices rather than more specific-interest “operational” choices, the modern evidence indicates their choices were still consistent with self-interested and partisan behavior. In terms used among legal scholars, even when the founders were involved in the “higher lawmaking” of the “constitutional founding,” they were still self-interested and partisan. Partisan behavior explains even this “constitutional moment.” However, the modern evidence does indicate that fewer economic and financial interests mattered for the basic design of the Constitution than for specific-interest aspects of it.

Specific Empirical Findings from the Constitutional Convention and the Ratifying Conventions

Financial Securities

The financial securities holdings of the founders often had a significant and large influence on their behavior, and founders with such financial assets were often aligned with each other on the same issue. These findings are in contrast to a strongly held view among many historical scholars that the founders’ financial securities holdings had little or no influence on their behavior or that these founders were not aligned on common issues. For a small number of the issues considered at the Philadelphia convention, the founders’ financial securities holdings mattered. Moreover, during the ratification process, financial securities holdings had a major influence. Specifically, delegates with private securities holdings (private creditors) or public securities holdings (public creditors), and especially delegates with large amounts of public securities holdings (generally, Revolutionary War debt), were significantly more likely to vote in favor of ratification.

This does not mean that all securities-holding delegates voted together at the constitutional conventions. What it does mean is that the holdings of financial securities, controlling for other influences, significantly increased the probability of supporting some of the issues at the Philadelphia convention, particularly those issues that strengthened the central government (or weakened the state governments). For example, one issue that the securities holders were more likely to have supported was a proposal to absolutely prohibit state governments from issuing paper money. This means that the securities holders (creditors) at the convention desired to constrain the states’ ability to inflate away the value of their financial holdings through expansion of the supply of state paper money. Not surprisingly, the twelve founders at Philadelphia with private securities holdings voted unanimously in favor of the prohibition. Likewise, those with public securities holdings were significantly more likely to have favored it. The evidence indicates that a founder at Philadelphia with any public securities holdings, who at the same time possessed the average values of all other interests represented at the convention, was 26.5 percent more likely to vote yes than was an otherwise average delegate with no public securities holdings. With respect to the ratification process, a delegate’s financial securities holdings, controlling for other influences, significantly increased his probability of voting in favor of ratification at his state convention. An implication that can be drawn from this evidence is that to the extent some delegates with financial securities holdings did not support strengthening the central government, or did not vote for ratification, it was the effects of their other interests that influenced them to vote “no.”
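Statements of the form “26.5 percent more likely to vote yes” are differences in predicted probabilities: the fitted logit is evaluated with the securities-holding indicator switched off and then on, holding every other variable at its convention-wide average. A self-contained sketch, with made-up coefficients and averages:

```python
# Illustration of a marginal effect computed from a fitted logit; the
# coefficients and averages below are made up for the example.
import numpy as np

def p_yes(x, beta):
    """Predicted probability of a yes vote: P = 1 / (1 + exp(-x.beta))."""
    return 1.0 / (1.0 + np.exp(-x @ beta))

# Hypothetical fitted coefficients: [constant, slaves, public_securities, merchant]
beta = np.array([-0.5, -0.15, 1.2, 0.8])
# Averages of the other interests (also hypothetical); leading 1.0 is the constant.
x_avg = np.array([1.0, 5.0, 0.0, 0.5])

x_without = x_avg.copy()           # no public securities holdings
x_with = x_avg.copy()
x_with[2] = 1.0                    # switch the holdings indicator on

gap = p_yes(x_with, beta) - p_yes(x_without, beta)
print(f"P(yes) with holdings:    {p_yes(x_with, beta):.3f}")
print(f"P(yes) without holdings: {p_yes(x_without, beta):.3f}")
print(f"Difference:              {gap:.3f}")   # about 0.29 with these numbers
```

With these fabricated inputs the gap is about 29 percentage points; the 26.5 percent figure reported above comes from the actual estimated coefficients and convention averages.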

Slaveowners

The view of many historical scholars is that delegates who were slaveowners and those who represented slave areas generally supported strengthening the central government and supported ratifying the Constitution. While this may be correct as far as it goes, the issue of the influence of slaveholdings on the behavior of the Founding Fathers, as with the influence of any factor, is actually more complex. The quantitative evidence indicates that, although a majority of the slaveowners and a majority of the delegates from slave areas may in fact have voted for issues strengthening the central government or voted for ratification, the actual influence of slaveholdings, or of representing slave areas per se, was to significantly decrease a delegate’s likelihood of voting for strengthening the central government or voting for ratification.

As with the findings for financial securities holdings, this does not mean that all slaveholding delegates or all delegates from slave areas voted together at the various constitutional conventions. What it does mean for the Philadelphia constitutional convention is that slaveholdings, controlling for other influences, decreased the probability of voting at the convention for issues that would have strengthened the central government. For example, one issue that slaveholders at Philadelphia were less likely to have supported was a proposal that would have given the national legislature an absolute veto over state laws, which would have greatly strengthened the central government. Had the national veto been put into the Constitution at Philadelphia, which it was not, the national Congress, especially if it had a majority of non-slaveholding representatives, could have vetoed state laws concerning slavery. This would have given the national Congress the power to limit the economic viability of slavery, if it so chose. Not surprisingly, the evidence suggests that a delegate at Philadelphia who owned the most slaves at the convention, and who had average values of all other interests, was one-twelfth as likely to have voted yes on the national veto as an otherwise average delegate with no slaveholdings. Likewise, during the ratification process, slaveholdings, controlling for other influences, significantly decreased the probability of voting in favor of ratification at the state ratifying conventions. An implication from this evidence is that in the case of the slaveholding delegates and the delegates from slave areas who did vote to strengthen the central government or did vote for ratification, it was the effects of their other interests that influenced them to vote “yes.”

Commercial Interests

The modern evidence confirms that the framers and the ratifiers of the Constitution, who were from the more commercial areas of their states, were likely to have voted differently from individuals from the less commercial areas. Delegates who were from the more commercial areas were significantly more likely to have voted for clauses in the Constitution that strengthened the central government and were significantly more likely to have voted for ratification in the ratifying conventions. The Founding Fathers who were from the more isolated, less commercial areas of their states were significantly less likely to support strengthening the central government and significantly less likely to vote for ratification.

Local and State Office Holders

But surprisingly, the findings for the ratification of the Constitution strongly conflict with the nearly unanimous prevailing scholarly view that the localism and parochialism of local and state officeholders were major factors in the opposition to the Constitution’s ratification. The modern quantitative evidence, in fact, indicates that there were no significant relationships whatsoever between any measure of local or state office holding and the ratification vote in any ratifying convention for which the data on officeholders were collected.

The Founders Mattered: How the Constitution Would Have Been Different If Men with Different Interests Had Written It

One of the more important findings of the modern approach to the adoption of the Constitution is that it makes evident how much historical outcomes depend on the specific individuals involved in the historical process. The modern evidence attests to the paramount importance of the specific political actors involved in the American constitutional founding. The estimated magnitudes of the influences of many of the economic, financial, and other interests on the founders’ behavior are large enough to suggest that the product of the constitutional founding most likely would have been dramatically different had men with dramatically different interests been involved.

For example, had all the founders at Philadelphia represented a state with a population the size of the most populous state, and possessed the average values of all other interests represented at Philadelphia, the Constitution most certainly would have contained a clause giving the national government an absolute veto over all state laws. If the national veto had been put into the Constitution, which it was not, and representation in the national Congress was based on the population of a state, which it was and is in the House of Representatives, rather than each state possessing an equal vote as under the Articles, representatives from the most populous states could have controlled legislative outcomes. This would have given “large” states potential control over the “small” states. As might be expected, the modern findings indicate that the predicted probability of voting yes on the national veto for a founder at Philadelphia who represented the most populous state and possessed the average values of all other interests is 0.837. But the predicted probability for an “average” delegate, one with the average values of all measured interests including state population, is only 0.379.

Or, had all the founders at Philadelphia represented a state with the heaviest concentration of slaves of all states, and possessed the average values of all other interests, the Constitution likely would have contained a clause requiring a two-thirds majority of the national legislature to enact any commercial laws. If the two-thirds majority requirement had been put into the Constitution, which it was not, it would have been more difficult to enact commercial laws, laws that could have regulated the slave-based export economies of the southern states. The two-thirds requirement would have made it much more difficult for a future northern majority to impact negatively on the southern economy through commercial regulation. Again, as might be expected, the modern findings indicate that the predicted probability of a yes vote on the two-thirds issue for an otherwise “average” founder who represented a state with the heaviest concentration of slaves is 0.914; but it is only 0.206 for an “average” founder. The Constitution also might not have contained a clause prohibiting the national legislature from enacting export duties (taxes) had there been no delegates with merchant interests at the Philadelphia convention; there might have been only a fifty-fifty chance of passing the prohibition. The predicted probability of a yes vote to prohibit national-level export duties for an otherwise “average” delegate without merchant interests is 0.505. But it is 0.790 for an otherwise “average” delegate with merchant interests, and nine of the Founding Fathers at the Philadelphia convention had merchant interests.

Interests of the Ratifiers Mattered

With respect to ratification, the quantitative evidence indicates that the magnitudes of the influences of the economic and other interests on the ratification votes were even more considerable than for the Philadelphia convention. The outcome of ratification appears to have depended even more on the specific individuals involved. The estimated influences were considerable enough that they suggest the outcome of ratification almost certainly would have been different had men with different interests attended the ratifying conventions. Had there been, among the ratifiers, fewer merchants, more debtors, more slaveowners, more delegates from the less-commercial areas, or more delegates belonging to dissenting religions, there would have been no ratification of the Constitution, at least no ratification as the Constitution was written. For example, at the Massachusetts ratifying convention, the predicted probability of a yes vote on ratification for an otherwise “average” delegate who was a debtor is only 0.175 but if the same delegate was not a debtor it is 0.624. For an otherwise “average” Baptist, the predicted probability of a yes vote is only 0.162 but if the Massachusetts delegate was not a Baptist it is 0.657. At the North Carolina ratifying convention, the predicted probability of a yes vote for an otherwise “average” delegate who was not a merchant is 0.175 but if the same delegate was a merchant it is 0.924. For an otherwise “average” North Carolina delegate from the least commercial areas in the state, the predicted probability of a yes vote is a trivial 0.002 but if the delegate was from the most commercial areas in the state it is 0.753. At the Virginia ratifying convention, the predicted probability of a yes vote for an otherwise “average” slaveowner is 0.451 but if the otherwise “average” delegate was not a slaveowner it is 0.837. Differences of these magnitudes suggest that ratification of the Constitution strongly depended on the specific economic, financial, and other interests of the specific individuals who attended the state conventions.

Broader Implications for Constitution Making

Overall, the modern approach to explaining the design and adoption of the Constitution suggests that it is unlikely that any real-world constitution would ever be drafted or ratified through a disinterested and nonpartisan process. Because actual constitutional settings will always involve political actors who possess partisan interests and who likely will be able to predict the consequences of their decisions, partisan interests will influence constitutional choice. The economic history of the drafting and ratification of our nation’s Constitution makes it hard to envision any actual constitutional setting, including any setting to reform existing constitutions, in which self-interested and partisan behavior would not dominate. The modern evidence suggests that constitutions are the products of the interests of those who design and adopt them.

The Statistical Approach versus the Traditional Approach

Much of the difference between the modern evidence and the evidence found in the traditional historical literature is a matter of the approach taken, as well as the questions asked, rather than a matter of arriving at fundamentally different answers to identical questions. Many studies in the traditional literature question an economic interpretation of the Constitution because they question whether the Constitution is strictly an economic document designed solely to promote specific economic interests. Of course, it was not designed merely to promote economic interests. Many others question an economic interpretation because they question whether the founders were really attempting solely, or even principally, to enhance their personal wealth, or the wealth of those they represented, as a result of adopting the Constitution. Of course, the founders were not. Others question an economic interpretation because they question whether the founders were really involved in a conspiracy to promote specific economic interests. Of course, they were not. Others question an economic interpretation because they question whether political principles, philosophies, and beliefs can be ignored in an attempt to understand the design of the Constitution. Of course, they cannot. In contrast, the modern economic history of the Constitution does not take any of these positions.

Yet the conclusions drawn from the modern evidence on the role of the economic, financial, and other interests of the founders are fundamentally different from the conclusions found in the traditional literature. The primary reason is that the statistical technique employed in the modern reexamination yields estimates of the separate influence of a particular economic interest or other factor on the founders’ behavior (how they voted) taking into account, and controlling for, the influence of other interests and factors on the founders’ behavior. The traditional literature nearly always draws conclusions about how the majority of the delegates with a particular interest – for example, how the majority of public securities holding delegates – voted on a particular issue, without regard to the influence of other interests and factors on behavior and without any formal statistical analysis. Prior studies, consequently, do not control for the confounding influences of other factors when drawing conclusions about any particular factor. As a result, the modern reexamination and the prior studies will often reach different conclusions about the influence of the same economic interest or other factor on the founders’ behavior. The conclusions differ because in a sense the studies are asking different questions. The modern economic history of the Constitution asks: How did a particular economic interest (for example, slaveholdings) per se influence the founders’ voting behavior taking into account all the influences of other factors on those founders’ voting behavior (for example, the slaveholding founders)? Prior historical studies more simply ask: How many of the founders with a particular economic interest (for example, founders with slaveholdings) voted the same on a particular issue?
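The difference between the two questions can be shown with a toy simulation (all data fabricated): when slaveholding is positively correlated with another interest that strongly favors a yes vote, a simple tabulation can show slaveholders voting yes more often, even though the effect of slaveholding per se, once the other interest is controlled for, is negative.

```python
# Toy illustration (fabricated data) of a raw tabulation and a controlled
# logit estimate giving opposite answers about the same interest.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

merchant = rng.integers(0, 2, n)
# Slaveholding is positively correlated with merchant interests...
slaveholder = rng.binomial(1, np.where(merchant == 1, 0.8, 0.2))
# ...but its direct effect on a yes vote is negative, merchant's strongly positive.
p = 1.0 / (1.0 + np.exp(-(-0.2 - 1.0 * slaveholder + 3.0 * merchant)))
vote_yes = rng.binomial(1, p)

df = pd.DataFrame({"vote_yes": vote_yes, "slaveholder": slaveholder, "merchant": merchant})

# Traditional question: how often did slaveholders vote yes? Here they
# vote yes MORE often than non-slaveholders.
print(df.groupby("slaveholder")["vote_yes"].mean())

# Modern question: what is the effect of slaveholding, controlling for
# merchant interests? The slaveholder coefficient is negative.
X = sm.add_constant(df[["slaveholder", "merchant"]])
print(sm.Logit(df["vote_yes"], X).fit(disp=False).params)
```

The raw group means answer the traditional question; the negative controlled coefficient answers the modern one.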

The modern approach to the adoption of the Constitution may be disquieting to individuals of all political persuasions. It may be personally difficult for many to embrace. The evidence suggests motivating factors and intent on the part of our Founding Fathers that may be distasteful to conservatives, moderates, and liberals alike, to those on the left, in the middle, and on the right. The methodology employed, rational choice and methodological individualism, will be acceptable to some. But methodological individualism and a presumption of rational choice are likely to be troublesome to others. Some may have difficulty because an economic approach to the adoption of the Constitution appears “too calculating.” To some, it may appear “too deterministic” or “too economic.” Yet it actually is a dispassionate, almost antiseptic, view of the founders. It does not offer a special approach to the behavior of the founders because of the unique position reserved for them in our nation’s history. It treats them as it would any political actor. The modern approach represents an impartial, disinterested explanation of the behavior of our Founding Fathers, employing what are today commonly accepted techniques of economic and statistical analysis. Yet many individuals tend to look at our Founding Fathers through rose-colored glasses. They often place the founders on a pedestal and treat them as demigods. Many contend that the founders were motivated primarily, if not solely, by high-minded political principles “To Form a More Perfect Union.” The modern approach takes a broader view.

Annotated References

Ackerman, Bruce. We the People, two volumes. Cambridge, MA: The Belknap Press of Harvard University Press, 1991.

A view of the American constitutional founding by an eminent legal scholar. Ackerman offers a “dualist” theory of the founders’ politics in an attempt to recover the “true” revolutionary character of the founders, contending they were “dualist democrats.” Given this dualism, it is claimed that the founders behaved differently during “constitutional politics” than during “normal politics.” The founders thus were able to suspend their self-interests during the framing of the Constitution and promote instead the “rights of citizens and the permanent interests of the community.” Dismisses an economic interpretation as not serious. Indicates how a modern legal scholar thinks about the issues. Not a quantitative study.

Beard, Charles A. An Economic Interpretation of the Constitution of the United States. New York, NY: Macmillan Publishing Company, 1913 (1935).

A must read. The classic study of economics and the Constitution. Beard consolidated existing scholarly views and, in the process, his study became identified as “the” economic interpretation of the Constitution. Argues that the adoption of the Constitution was based on a conflict among competing economic interests. Contends that the founders who supported the strong, centralized government in the Constitution were merchants, shippers, bankers, land speculators, or private and/or public securities holders. Contends that the opponents, who supported a more decentralized government, represented agrarian interests and were less-commercial farmers, who often were also debtors, and/or northern planters along the Hudson. Contains little empirical evidence. Offers no formal or quantitative analysis.

Brown, Robert E. Charles Beard and the Constitution: A Critical Analysis of An Economic Interpretation of the Constitution. Princeton, NJ: Princeton University Press, 1956.

The first significant blow to Beard after nearly a half-century of acceptance. Dismisses an economic interpretation as utterly without merit, attacking its conclusions in their entirety. Brown maintains that eighteenth-century America was democratic, the franchise was common, and there was widespread support for the Constitution, claiming that his evidence counters Beard’s contention about the lack of democracy and the narrow support for the Constitution. Brown accuses Beard of taking the Philadelphia debates out of context, falsely editing The Federalist, and misstating facts. Not an empirical study per se. Offers no formal or quantitative analysis of the economic or financial interests of the founders.

Buchanan, James M., and Gordon Tullock. The Calculus of Consent: Logical Foundations of Constitutional Democracy. Ann Arbor, MI: University of Michigan Press, 1962.

An important read. The first modern attempt by economists to develop an economic theory of constitutions. The premise is that citizens rationally devise constitutions, which contain the fundamental rules of governance to be used for future collective decisions in a society. As constitutions specify the constraints placed on governments and individuals, they establish the incentive structure for the future. Buchanan and Tullock maintain that it is in the self-interest of rational citizens to adopt a constitution that contains economically “efficient” rules that promote the interests of the society as a whole rather than the interests of any particular group. Suggests that the theory is applicable to the American founding. No empirical evidence is presented, however.

Elliot, Jonathan, editor. The Debates in the Several State Conventions on the Adoption of the Federal Constitution as Recommended by the General Convention at Philadelphia, in 1787, 5 volumes. Philadelphia, PA: J. B. Lippincott, 1836 (1888).

Worth perusing. Contains a record of the speeches and debates during the ratification process at most of the state ratifying conventions, as well as numerous other documents and correspondence pertaining to the Constitution’s ratification and drafting. The original source of information on what was said at the constitutional conventions. Elliot’s “Debates” are a most illuminating source of information concerning the views of both the supporters and opponents of the Constitution. Contains a record of the debates over ratification in the ratifying conventions in Massachusetts, New York, Pennsylvania, Virginia, South Carolina, and North Carolina. Contains only small fragments of the debates in the ratifying conventions in Connecticut, New Hampshire, and Maryland. No debates from the other four state ratifying conventions are included.

Farrand, Max, editor. The Records of the Federal Convention of 1787, 3 volumes. New Haven, CT: Yale University Press, 1911.

Worth perusing. Reputedly the best source of information concerning what took place at the Philadelphia Constitutional Convention in 1787. Contains copies of the official journal of the convention; James Madison's highly respected notes of the entire proceedings; the diaries, notes, and memoranda of seven others (Alexander Hamilton, Rufus King, George Mason, James McHenry, William Pierce, William Paterson, and Robert Yates); the Virginia and the New Jersey plans of government presented at the convention; several documents recording the work of the Committee of Detail that wrote the first draft of the Constitution; a list of the framers, their attendance records, whether they signed the Constitution, and, for thirteen of the sixteen non-signing framers, whether the debates indicated they favored or opposed the Constitution; and hundreds of letters and correspondence of many of the framers and their contemporaries.

Hamilton, Alexander, John Jay, and James Madison. The Federalist: A Commentary on the Constitution of the United States, Being a Collection of Essays written in Support of the Constitution agreed upon September 17, 1787, by the Federal Convention. New York, NY: The Modern Library, 1937.

A must read to understand the arguments put forth by the contemporary supporters of the Constitution. Commonly referred to today as The Federalist Papers, this is a collection of eighty-five essays written between October 1787 and May 1788, under the pseudonym "Publius," in support of the Constitution during the ratification debate in New York; seventy-seven of the essays originally appeared in the New York press. They appeared in book form in the spring of 1788, and it was soon afterward revealed that Alexander Hamilton, James Madison, and John Jay had collectively written them. Given that the "Papers" were part of a political campaign to win ratification, they should not be considered unbiased interpretations of the Constitution. Yet because Hamilton and, especially, Madison, the "Father" of the Constitution, were both at the Philadelphia convention that drafted the Constitution, and Jay was a renowned lawyer, The Federalist soon became the authoritative interpretation of the intention of the framers as well as of the meaning of the Constitution. It is still viewed as such by many today, although some scholars readily acknowledge the biased political nature of its conception.

Jensen, Merrill. The Making of the Constitution. New York, NY: Van Nostrand, 1964.

A culmination of more than two decades of scholarship on constitutional history and the Confederation period. Presents an interesting view of the issues. Concludes that many of the framers “who agreed on ultimate goals differed as to the means of achieving them, and they tended to reflect the interests of their states and their sections when those seemed in conflict with such goals.” Suggests that throughout the Philadelphia convention the framers expressed their common belief that men conducting public business must be restrained from using their influence to further their private interests. Jensen’s conclusion about the controversy over Charles Beard is especially revealing, as he maintains that the founders would have been bewildered because they “took for granted the existence of a direct relationship between the economic life of a state or nation and its government.” Not a study of economic interests, however.

Jillson, Calvin C. Constitution Making: Conflict and Consensus in the Federal Convention of 1787. New York, NY: Agathon Press, 1988.

An argument for the importance of economic and other interests by a respected political scientist. Employs modern statistical techniques to describe the voting alignments among the states at the Philadelphia convention. The findings indicate that many of the long-recognized voting alignments existed across many of the issues considered at Philadelphia. Concludes that issues of basic constitutional design were decided on the basis of principle, whereas specific economic and political interests decided votes on more specific issues. The study is limited, though, because it does not use explicit data to measure economic or other interests; it employs the historical literature to categorize the interests of the states represented at the convention and then tests whether the states voted together on particular issues, concluding that when they did, economic or political interests mattered. Employs fairly sophisticated statistical techniques. Concerns issues of interest mainly to political scientists: voting alignments and coalition formation.

McDonald, Forrest. We the People: The Economic Origins of the Constitution. Chicago, IL: University of Chicago Press, 1958.

An important read to understand the scholarly opinion among many of an "economic interpretation of the Constitution." The most important and lasting blow to Beard after nearly a half-century of acceptance. Empirically examines the wealth and economic interests of the framers of the Constitution and of the ratifiers at the thirteen state conventions. Several economic interests are reported for nearly 1,300 (about three-quarters) of the founders. The votes on several issues at the Philadelphia convention and the votes at the ratifying conventions are also reported. Concludes that, for both the Philadelphia convention and the ratifying conventions, the facts do not support an interpretation of the Constitution based on the economic interests represented. Further concludes that there is no measurable relationship between specific economic interests and voting at the Philadelphia convention, nor, in general, between specific economic interests and the votes at most of the ratifying conventions. Argues that an economic interpretation is more complex than that offered by Beard. Contains much empirical evidence but offers no formal or quantitative analysis. Many of its conclusions are overturned in McGuire's To Form A More Perfect Union.

McGuire, Robert A. To Form A More Perfect Union: A New Economic Interpretation of the United States Constitution. New York, NY: Oxford University Press, (2002, in press).

Should be read by anyone interested in the modern “economic interpretation of the Constitution” and what the evidence indicates formally. The culmination of more than a decade and a half of modern research critically reexamining the adoption of the Constitution that seriously challenges the prevailing interpretation of our constitutional founding. Based on large amounts of new data on the economic, financial, and other interests of the Founding Fathers, an economic model of their voting behavior, and formal statistical analysis. The votes of the founders on selected issues at the Philadelphia convention and the votes during ratification are statistically related to measures of the founders and their constituents’ interests. The findings indicate that the economic and other interests significantly influenced the drafting and ratification of the Constitution. The magnitudes of the influences are shown to be substantial in many cases. Indicates how the Constitution would have been different had different interests been present at Philadelphia and how ratification would have been different had different interests been represented at the ratifying conventions. Attests to the importance of the specific individuals involved in historical events to historical outcomes.

McGuire, Robert A., and Robert L. Ohsfeldt. “Economic Interests and the American Constitution: A Quantitative Rehabilitation of Charles A. Beard.” Journal of Economic History 44 (1984): 509-519.

Quite readable. A useful preliminary study, reexamining the adoption of the Constitution employing the methods of modern economic history. Discusses the issues in a straightforward fashion with a minimum of technical jargon. Develops an economic model of the behavior of the Founding Fathers, discusses the data and evidence collected on their economic and other interests, and reports preliminary statistical findings on the role of economic interests in the drafting and ratification of the Constitution. The findings are dated, though, because of their preliminary nature, and have been superseded by those reported in McGuire's To Form A More Perfect Union.

Riker, William H. “The Lessons of 1787.” Public Choice 55 (1987): 5-34.

Quite readable. Written with a minimum of technical jargon by an eminent political scientist and constitutional expert. While emphasizing a rational choice view of the founders, it places little weight on the importance of economic interests per se. Riker maintains that military threats to the status quo during the 1780s explain the adoption of a strengthened central government. Presumes the framers of the Constitution were different from modern-day politicians: their achievements could not be duplicated today because, according to Riker, they were not constrained, as so many contemporary politicians are, by the foolish views of their constituencies. Maintains that the framers were less partisan and more disinterested than politicians are today. The approach presumes there was near unanimity among the framers. Indicates how an important political scientist thinks about the issues. Not a quantitative study.

Rossiter, Clinton. 1787: The Grand Convention. New York, NY: Macmillan Publishing Company, 1966.

An influential study of the Philadelphia convention that maintains economic interests motivated the founders throughout their deliberations. Contends, however, that the founders were essentially “like-minded gentlemen” whose interests and political ideologies were similar. Openly rejects an economic interpretation during ratification, claiming that “Virginia ratified the Constitution . . . because of a whole series of accidents and incidents that mock the crudely economic interpretation of the Great Happening of 1787-1788.” Further concludes “the evidence we now have leads most historians to conclude that no sharp economic or social line can be drawn on a nationwide basis.” Offers no formal or quantitative analysis of the role of any economic, financial, or other interests, however.

Storing, Herbert J. The Complete Anti-Federalist, volumes 1 through 7. Chicago, IL: University of Chicago Press, 1981.

A must read for anyone seriously interested in our nation's founding. Places the essays in The Federalist in perspective. It is not at all necessary to read the volumes in their entirety. The seven volumes are the magnum opus of the arguments of the contemporary opponents of the Constitution. Given the success of the supporters of the Constitution and the esteem given their arguments presented in The Federalist, the opponents have often been denigrated and ignored. Yet many prominent Americans in the 1780s did oppose the Constitution. Among the better-known Anti-Federalists, and opponents of the Constitution, are Patrick Henry and George Mason of Virginia and Melancton Smith of New York. The Complete Anti-Federalist is a superb attempt, in Storing's words, "to make available for the first time all of the substantial Anti-Federal writings in their complete original form and in an accurate text, together with appropriate annotation." See, especially, the introduction, contained in volume one, which gives valuable coherence to Anti-Federalist thought.

Whaples, Robert. “Where Is There Consensus among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History, 55 (1995): 139-154.

The title of this article says it all. Whaples surveyed economists and historians whose specialty is American economic history to determine whether, and where, there is consensus among economic historians on forty important historical issues concerning the American economy. Reports the findings of the survey so that they indicate whether there are differences in the consensus on various issues among scholars trained in economics versus scholars trained in history.

Wood, Gordon S. The Creation of the American Republic 1776-1787. Chapel Hill, NC: University of North Carolina Press, 1969.

An important read. A widely acclaimed, and monumentally influential, study of the American founding by an eminent historian. Contends it is nearly impossible to identify the supporters or opponents of the Constitution with specific economic interests. Argues that the founding can be better understood in terms of the fundamental social forces underlying the ideological positions of the founders. Wood maintains the Constitution was founded on these larger sociological and ideological forces, which are the primary interests of the book. Concludes, “The quarrel was fundamentally one between aristocracy and democracy.” Offers no formal or quantitative analysis of the role of any economic, financial, or other interests.

Walton, Gary M., and James F. Shepherd. The Economic Rise of Early America. New York, NY: Cambridge University Press, 1979.

Quite readable. A concise presentation of the economic history of early America from the colonial period through the early national period by two eminent economic historians of early America. In addition to the material on the colonial period, contains a discussion of general economic conditions in the United States in the 1780s, a discussion of the Articles of Confederation, and the immediate and longer-term influences on the American economy brought about by the adoption of the Constitution. A nice starting point for a general understanding of the economic history of early America. It is somewhat dated, though, as there has been new scholarship on the early American economy in the last twenty years.

Money and Finance in the Confederate States of America

Marc Weidenmier, Claremont McKenna College

The Nobel Laureate Milton Friedman once noted that wars have provided laboratories to examine the behavior of money, prices, and income (Friedman, 1952). The Confederate experience during the American Civil War is no exception. Between January 1861 and April 1865, the Lerner Commodity Price Index of leading cities in the Eastern Confederacy increased from 100 to over 9000. Price inflation in the South during the Civil War ranks second only to the American Revolution in U.S. history.

Confederate Revenue Sources during the War

There are three sources of government revenue: taxation, borrowing, and printing money. Given that the Confederate States of America was established on the principle of states' rights, many Southerners were suspicious of granting the central government the power to impose and collect taxes. Governor Moore of Alabama summarized this position: "The collection of this tax, by the state would be an onerous and unpleasant duty as it imposes upon the state the necessity of enforcing the laws of the Confederate government against her own citizens" (quoted in Lerner, 1956, p. 165, and Weidenmier, 1999a). With opposition from the general public as well as leading political figures, it is not surprising that the Confederate government collected only about 8.2% of its total revenue from taxes (Ball, 1991). Tariffs, another potential source of tax revenue, were hampered by the Union blockade of Southern ports.

The Confederacy then turned to debt issue as a means of war finance. The South successfully sold some long-term government securities during the early stages of the war. Bond issues proved a limited source of war financing, however, as Southern prospects diminished: investors increasingly shied away from purchasing securities offered by a government with little or no tax base and a deteriorating military situation. The government resorted to money financing as its primary source of revenue. Overall, debt issue and the printing press accounted for nearly 32 and 60 percent, respectively, of the South's total real revenues during the war (Ball, 1991). In the following section, I will briefly analyze the economic effects of the Confederacy's reliance on note issue as a source of war finance.

The Confederate Inflation

Lerner (1954, 1955, 1956) used the quantity theory of money to analyze the Confederate inflation. The quantity theory of money can be described by the following equation:

M = K*(P*Y), (1)

where P is the price level, Y is real (i.e., inflation-adjusted) output, and M is money. Equation (1) assumes that people hold some fraction, K, of their nominal income, P*Y, in the form of money. For example, if your income was $10,000 per year and K=1/5, then you would hold $2,000 in the form of money. To study inflation, it is useful to express equation (1) in growth rates, using equation (2):

p = m - y - k, (2)

where lowercase letters denote growth rates: p is the inflation rate, m is the growth rate of the money stock, y is the growth rate of real output, and k is the growth rate of K.

Lerner decomposed the influence of changes in money, velocity, and real output on the inflation rate. Velocity is the number of times a dollar bill turns over in a year; mathematically, velocity is the inverse of K. Lerner showed that the Confederate money supply increased 11.5 times between January 1861 and October 1864, while commodity prices increased 28 times over the same period (also see Godfrey, 1978). Rising velocity contributed to the runaway price level as people reduced their holdings of money balances and purchased commodities and non-monetary assets. Lerner also inferred from periodic Treasury reports that the South experienced a forty percent fall in real output during the war.
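
To see the arithmetic behind this decomposition, the following minimal Python sketch plugs the magnitudes reported above into the growth-rate identity p = m + v - y (where velocity growth v equals -k). The periods covered by the three inputs do not line up exactly, so the implied velocity figure is only a rough illustration, not a replication of Lerner's calculations.

```python
# Back-of-the-envelope use of the quantity-theory identity p = m + v - y
# (in log growth rates), using the magnitudes reported in the text above.
# The periods covered by the three inputs differ slightly, so the implied
# velocity change is illustrative only.
import math

m = math.log(11.5)        # money stock rose 11.5x, Jan. 1861 - Oct. 1864
p = math.log(28.0)        # commodity prices rose 28x over roughly the same period
y = math.log(1.0 - 0.40)  # real output fell about 40 percent during the war

v = p + y - m             # implied log change in velocity
print(f"implied velocity change: {math.exp(v):.2f}x")  # about 1.46x
```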

More recent contributions, notably by Burdekin and Langdana (1993) and McCandless (1996), have emphasized the importance of war news and fiscal confidence in price level determination (also see Schwab, 1901). The basic idea is that Confederate citizens were forward-looking and incorporated all available information in forming their expectations of the price level. After a Confederate defeat, for example, people forecast higher government spending and money growth in the future and bid up prices immediately. Mounting Confederate defeats also drove up prices as people grew unsure about the fate of the fledgling nation. War news, a measure of fiscal confidence, was an important determinant of the Confederate price level and helps to explain the low correlation between the money stock and the price level.

Monetary Reforms

Pecquet (1987) and Burdekin and Weidenmier (2001) have attempted to disentangle the effects of war news and money supply changes on Confederate prices. They compare fluctuations in the price of gold in Confederate money, the Grayback, in the leading gold markets of the eastern (Richmond) and western (Galveston/Houston) Confederacy. The two studies focus on the Confederate Currency Reform Act of 1864, which repudiated one-third of the Confederate money supply. The monetary reform took effect April 1, 1864 east of the Mississippi River, but not until July 1, 1864 in the Trans-Mississippi Confederacy. For practical purposes, the reform did not take effect until January 1865 in the west because of difficulties in transporting the new currency across the enemy-controlled Mississippi River. As shown in Figure 2, there was a large divergence between the prices of new money trading in Richmond and old money in Houston. Even after the currency reform took effect in the west, there was a fifty percent difference in the value of the same currency in Richmond and Houston. The results strongly suggest that war news alone cannot explain the behavior of Confederate prices during the war.
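
The east-west comparison reduces to simple arithmetic on gold quotes. The sketch below, which uses hypothetical placeholder quotes rather than the historical series, shows how a gap in the Grayback price of a gold dollar between Richmond and Houston translates into a percentage difference in the value of the same currency.

```python
# Sketch of the Richmond-vs-Houston comparison. A higher Grayback price of
# a gold dollar means a cheaper Grayback, so the currency's gold value is
# the reciprocal of the quote. The quotes below are hypothetical
# placeholders, not the actual biweekly series.

def value_gap(richmond_quote: float, houston_quote: float) -> float:
    """Proportional difference in the Grayback's gold value, east vs. west."""
    value_east = 1.0 / richmond_quote  # gold dollars per Grayback in Richmond
    value_west = 1.0 / houston_quote   # gold dollars per Grayback in Houston
    return (value_east - value_west) / value_west

# New-issue money in Richmond at 20 Graybacks per gold dollar vs. old-issue
# money in Houston at 30 implies a fifty percent difference in value.
print(f"{value_gap(20.0, 30.0):.0%}")  # -> 50%
```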

Davis and Pecquet (1990) and Burdekin and Weidenmier (2002b) have focused on different aspects of the Confederacy's fiscal and monetary policies. Davis and Pecquet (1990) argue that the Confederacy fixed nominal interest rates through a special monetary instrument, call certificates, to reduce the cost of debt service. Burdekin and Weidenmier (2002b) examine the effects of three monetary reforms on Confederate asset and commodity prices. Each reform authorized that currency be exchanged for bonds by a certain date; after the funding date, noteholders could exchange their money only for lower-yielding bonds, or in some cases their notes would no longer be convertible at all. As shown in Figure 3, the monetary reforms in early 1863 and 1864 led to a sharp rise in the ratio of commodity to currency prices as Confederate citizens unloaded their money balances and purchased goods before the funding date. Currency prices temporarily stabilized as the money stock was reduced through the forced funding of notes into bonds. Only the August 1863 reform did not have a noticeable effect on the ratio of commodity to currency prices. This exception can be explained in part by the much smaller quantity of notes exchanged for bonds and by the fact that the reform occurred in the aftermath of the Confederate defeats at Gettysburg and Vicksburg.

Event Studies and War News

How did contemporaries view the American Civil War? This question has traditionally been answered by reading diaries, newspapers, and first-hand accounts of the conflict. An alternative method is to examine how financial market participants reacted to war, fiscal, and political events (see Willard, Guinnane, and Rosen, 1996). To investigate this issue, we need a financial instrument that accurately reflected Confederate victory prospects. Fortunately for our purposes, the South issued Grayback money to purchase goods and services during the war. The value of Confederate money depended on victory or at least on a negotiated peace settlement with the United States. Contemporaries of the Civil War noted that “financial matters fluctuated under the successes and reverses of the war like ebb and tide” (Richmond Examiner, July 9, 1863, p. 1, quoted in Weidenmier, 2002a).

Figure 4 plots the Grayback price of a gold dollar during the Civil War. Large movements in Grayback prices are labeled and matched with important military, fiscal, and political developments to identify the events that mattered to contemporaries of the Civil War. Grayback prices depreciated following the battle defeats at Antietam and Gettysburg/Vicksburg. The gold premium also rose following the passage of the US Conscription/Finance Bill, which increased the North's ability to finance the war and draft soldiers. A final breakpoint occurred in late spring 1864, when the Confederate government repudiated one-third of the money supply with the currency reform act. The legislation's positive effect on currency prices was short-lived, however, as the Confederacy cranked up the printing press again in the fall of 1864. Graybacks renewed their depreciation and continued to trade actively until early February 1865, when many Richmond bankers and gold traders packed their wagons and left the besieged capital (Weidenmier, 2002a).
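
As a rough illustration of the event-study logic, the sketch below flags unusually large moves in a gold-price series as candidate turning points to be matched against war news. The price series and the one-standard-deviation threshold are hypothetical illustrations, not the data or method of the studies cited.

```python
# Flag unusually large moves in the Grayback price of gold as candidate
# "turning points," in the spirit of the event-study approach described
# above. The price series and threshold are hypothetical illustrations.
import math
import statistics

gold_price = [3.0, 3.1, 3.3, 4.9, 5.0, 5.2, 7.8, 7.6, 7.9, 11.9]  # Graybacks per gold dollar

# Log returns between consecutive observations.
returns = [math.log(b / a) for a, b in zip(gold_price, gold_price[1:])]

mu = statistics.mean(returns)
sigma = statistics.stdev(returns)

# Moves more than one standard deviation from the mean are flagged as
# candidate breakpoints to check against military and political events.
for i, r in enumerate(returns, start=1):
    if abs(r - mu) > sigma:
        print(f"observation {i}: log return {r:+.2f} -> candidate turning point")
```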

Confederate Debt Operations Abroad

The Confederacy floated two small loans in Europe during the American Civil War: cotton bonds in London and unbacked, high-risk “junk bonds” in Amsterdam. These two loans accounted for less than one percent of Confederate military expenditures during the war. Funds from the two loans were used to build ships in Europe, purchase supplies, and finance overseas operations (Gentry, 1970). Weidenmier (2002b) also argues that the South floated the cotton bonds for political gain. The Confederacy believed the war debt could open the way for sovereign recognition or aid from England or France.

The cotton bonds were unique in that they were denominated in British currency (the pound sterling) and could be converted into cotton on demand. The only catch was that investors had to take delivery of the cotton in the Confederacy. The South honored the provisions of the debt issue for the entire war. The bonds were oversubscribed three times when they initially came to market in March 1863. Holders of the bonds allegedly included several members of the British Parliament and the Right Honourable William Gladstone, Chancellor of the Exchequer and future British Prime Minister.

The junk bonds, on the other hand, were floated in Amsterdam in the summer of 1863, weeks before news of Confederate defeats at Gettysburg and Vicksburg reached European markets. The Dutch bonds contained a default clause that allowed the Confederacy to postpone interest payments on the debt issue until after the war. Indeed, the South never serviced the junk bonds and pursued a debt management policy of selective default (Weidenmier, 2002b).

Figure 5 plots cotton and junk bond prices. The cotton bonds retained a large premium over the junk bonds for the entire war. The discrepancy was largest during 1864, when the cotton bonds rose from a price of 36 pounds sterling in December 1863 to nearly 90 in late August 1864. The junk bonds, in contrast, lost nearly 50 percent of their value during this period. Weidenmier (2001) and Brown and Burdekin (2000) examine the large runup in Confederate bond prices during this period. Brown and Burdekin (2000) attribute the rise to the belief that George McClellan would be elected President of the United States on a peace party platform in November 1864. Weidenmier (2001) traces the large increase in bond prices to an increase in the underlying collateral of the war bonds, New Orleans cotton prices. Since a win for the peace party should have strengthened the price of the junk bonds as well as the cotton bonds, the decline in junk bond prices during this period suggests that the rise in cotton bond prices was largely due to the runup in cotton prices.
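
The collateral argument can be made concrete with a small sketch: because each cotton bond could be exchanged for a fixed quantity of cotton, the market value of that cotton puts a floor under the bond's price, so a runup in cotton prices mechanically lifts cotton-bond values. The conversion quantity and prices below are hypothetical placeholders, not the actual terms of the loan.

```python
# Why rising cotton prices lift a cotton-convertible bond: the sterling
# value of the cotton obtainable by conversion is a floor under the bond's
# market price. Quantity and prices are hypothetical placeholders.

def conversion_floor(lbs_of_cotton_per_bond: float, price_per_lb_sterling: float) -> float:
    """Sterling value of the cotton one bond converts into."""
    return lbs_of_cotton_per_bond * price_per_lb_sterling

# If the cotton price doubles, the conversion floor doubles with it.
print(conversion_floor(4_000, 0.01))  # 40.0 pounds sterling
print(conversion_floor(4_000, 0.02))  # 80.0 pounds sterling
```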

Confederate debt prices in Europe plummeted after news of the South's defeat at Atlanta reached European markets in late September 1864. By June 1865, the cotton bonds were trading at about 6 pounds sterling and the junk bonds at 1 to 2 pounds. The small positive price for the cotton bonds can perhaps be explained by the (unlikely) possibility that England might pressure the United States, or individual Southern states, to pay off the bonds. As for the junk bonds, the Dutch firm underwriting the issue offered investors a small credit for exchanging their defaulted Confederate debt for "good bonds" (Weidenmier, 2002b). The investment house's reputation was apparently tarnished by its dealings in Southern war debt.

Counterfeit Money and the Yankee Scoundrel

Economic and monetary historians have largely ignored the role of counterfeit money in the Confederate inflation because data are not available on the amount of bogus notes. Nevertheless, contemporaries and scholars of the Civil War have noted that counterfeit money posed a serious problem for the Confederacy (Hughes, 1992). More money chasing the same number of goods created inflation, reducing the central government's take from the inflation tax. The Confederacy was unable to curtail counterfeiting because it lacked the resources and equipment to produce high-quality money. Counterfeiting was such a widespread problem that people sometimes joked that fake money was of higher quality than government-issued currency.

Weidenmier (1999b) studied the effects of counterfeit money on the Confederate price level by examining the history of the war's most famous counterfeiter, Sam Upham. The Philadelphia lithographer printed nearly 15 million dollars of bogus Confederate notes during the war. Upham claimed that he originally printed the "rebel" notes as souvenirs. Although this may have been true initially, Upham certainly became aware that smugglers were using his notes to buy cotton in the South. He expanded his operation to include mail orders and placed advertisements for his bogus notes in leading Northern cities, including Louisville and St. Louis. Upham's venture was so successful that Confederate Treasury Secretary Memminger made the following comments about the "Yankee scoundrel" in June 1862:

"Organized plans seem to be in operation for introducing counterfeiting among us by means of prisoners and traitors, and printed advertisements have been found stating that the counterfeit notes, in any quantity, will be forwarded by mail from Chestnut Street [Sam Upham's address], in Philadelphia to the order of any purchaser." (Secretary of the Treasury Memminger to Confederate Speaker of the House of Representatives Thomas Bocock, quoted in Todd, 1954, p. 101)

President Jefferson Davis and the Confederate government placed a $10,000 bounty on Upham. The bogus money maker was never caught and some have suggested that the U.S. government protected the businessman with secret service agents.

Weidenmier (1999b) attempted to quantify Upham's effect on the Confederate price level by making different assumptions about the proportion of Upham's notes that ended up in the South. Weidenmier estimates that Upham printed between 1.0 and 2.5 percent of the Confederate money supply between June 1862 and August 1863. Upham stopped printing bogus notes once Confederate money had depreciated so much that it was no longer accepted as a medium of exchange in cotton smuggling. Given that the Philadelphia businessman was one of many counterfeiters, it is probably safe to assume that bogus money makers, by adding substantially to the money stock, had a large impact on the Confederate price level and fueled the inflation.
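
The counterfeit-share estimate is, at bottom, a ratio computed under alternative assumptions. The sketch below reproduces that style of calculation with a hypothetical money-stock figure (Weidenmier, 1999b, works from the actual series); only the 15 million dollar figure for Upham's output comes from the text above.

```python
# Rough arithmetic behind a counterfeit-share estimate: Upham's bogus
# notes as a fraction of the Confederate money stock, under different
# assumptions about how much of his output reached the South. The money
# stock figure is a hypothetical placeholder, not the historical series.

upham_notes = 15_000_000   # dollars of bogus notes printed (from the text)
money_stock = 600_000_000  # assumed Confederate money stock (hypothetical)

for share_reaching_south in (0.4, 0.7, 1.0):
    circulating = upham_notes * share_reaching_south
    print(f"{share_reaching_south:.0%} reaching the South -> "
          f"{circulating / money_stock:.1%} of the money stock")
```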

Interest-Bearing Money

There are very few instances in monetary history where governments have issued currency that pays interest. Governments tend to issue interest-bearing money during crises to increase the demand for money: it is more attractive to hold money if it pays interest. Twenty percent of the South's money supply paid interest, providing an opportunity to test theories of interest-bearing and non-interest-bearing money. Legal restrictions theory, for example, predicts that interest-bearing money will drive non-interest-bearing money out of circulation unless there are laws, or legal restrictions, that prevent this from happening. Burdekin and Weidenmier (2002a) examine why non-interest-bearing money remained the predominant medium of exchange in the Southern Confederacy despite the existence of large quantities of interest-bearing money. They find that state and Confederate governments forced banks to accept both types of money through de facto legal restrictions: the governments threatened to resume specie convertibility if banks did not accept interest-bearing and non-interest-bearing Grayback liabilities at par. Banks went along with this policy because they feared losing their gold stocks under specie resumption. In turn, banks held a disproportionate amount of their reserves in interest-bearing money, and non-interest-bearing money enjoyed greater circulation in the Confederate economy as banks rationally chose to hold the money that paid interest (Makinen and Woodward, 1999).

Weekly data on Confederate cotton bond prices in London and junk bond prices in Amsterdam, and biweekly data on the Confederate Grayback price of a gold dollar in Richmond and Houston, accompany the online version of this article as downloadable spreadsheet files.

References

Ball, Douglas B. Financial Failure and Confederate Defeat. Urbana: University of Illinois Press, 1991.

Brown, William O., and Richard C. K. Burdekin. "Turning Points in the U.S. Civil War: A British Perspective." Journal of Economic History 60, no. 1 (2000): 216-231.

Burdekin, Richard C. K., and Farokh K. Langdana. "War Finance in the Southern Confederacy." Explorations in Economic History 30, no. 1 (1993): 352-376.

Burdekin, Richard C. K., and Marc D. Weidenmier. "Inflation Is Always and Everywhere a Monetary Phenomenon: Richmond vs. Houston in 1864." American Economic Review 91, no. 5 (2001): 1621-1630.

Burdekin, Richard C. K., and Marc D. Weidenmier. "Legal Restrictions Theory and Interest-Bearing Money: Lessons from the Southern Confederacy." Cato Journal, in press, 2002a.

Burdekin, Richard C. K., and Marc D. Weidenmier. "Suppressing Asset Price Inflation: The Confederate Experience, 1861-1865." Economic Inquiry, in press, 2002b.

Davis, George K., and Gary M. Pecquet. "Interest Rates in the Civil War South." Journal of Economic History 50, no. 1 (1990): 133-148.

Friedman, Milton. "Prices, Income, and Monetary Changes in Three Wartime Periods." American Economic Review 42, no. 2 (1952): 157-197.

Gentry, Judith F. "A Confederate Success in Europe: The Erlanger Loan." Journal of Southern History 36, no. 2 (1970): 157-188.

Godfrey, John M. Monetary Expansion in the Confederacy. New York: Arno Press, 1978.

Hughes, B. "Sam Upham: Storekeeper and Yankee Scoundrel." Civil War (1992): 32-39.

Lerner, Eugene M. "The Monetary and Fiscal Programs of the Confederacy, 1861-1865." Journal of Political Economy 62, no. 6 (1954): 506-522.

Lerner, Eugene M. "Money, Prices, and Wages in the Confederacy, 1861-1865." Journal of Political Economy 63, no. 1 (1955): 20-40.

Lerner, Eugene M. "Inflation in the Confederacy, 1861-1865." In Studies in the Quantity Theory of Money, edited by Milton Friedman, 163-175. Chicago: University of Chicago Press, 1956.

Makinen, Gail E., and Thomas G. Woodward. "Use of Interest-Bearing Currency in the Civil War: The Experience below the Mason-Dixon Line." Journal of Money, Credit, and Banking 31, no. 1 (1999): 121-129.

McCandless, George T. "Money, Expectations, and the U.S. Civil War." American Economic Review 86, no. 3 (1996): 661-671.

Pecquet, Gary M. "Money in the Trans-Mississippi Confederacy and the Confederate Currency Reform Act of 1864." Explorations in Economic History 24, no. 2 (1987): 218-243.

Schwab, John C. The Confederate States of America. New York: Charles Scribner's Sons, 1901.

Todd, Richard C. Confederate Finance. Athens: University of Georgia Press, 1954.

Weidenmier, Marc D. "Financial Aspects of the American Civil War: War News, Price Risk, and the Processing of Information." Ph.D. dissertation, University of Illinois at Urbana-Champaign, 1999a.

Weidenmier, Marc D. "Bogus Money Matters: Sam Upham and His Confederate Counterfeiting Business." Business and Economic History 28, no. 2 (1999b): 313-324.

Weidenmier, Marc D. "The Market for Confederate Cotton Bonds." Explorations in Economic History 37, no. 1 (2001): 76-97.

Weidenmier, Marc D. "Turning Points in the US Civil War: Views from the Grayback Market." Southern Economic Journal 68, no. 4 (2002a): 875-890.

Weidenmier, Marc D. "Understanding the Costs of Sovereign Default: The Foreign Debts of the Southern Confederacy." Claremont McKenna College Working Paper, 2002b.

Willard, Kristen, Timothy W. Guinnane, and Harvey Rosen. "Turning Points in the Civil War: Views from the Greenback Market." American Economic Review 86, no. 4 (1996): 1001-1018.
