Saturday, April 28, 2012

A Singular Empire

Review


The National Interest - April 25, 2012

Greg Woolf, Rome: An Empire’s Story (New York: Oxford University Press, 2012), 384 pp., $29.95.

THE ROMAN Empire casts a long shadow. It may not have been the largest empire ever to exist, but it was one of the largest, and few if any can match its longevity. The Romans ruled Italy by the end of the fourth century BC, dominated the entire Mediterranean world by the middle of the second century BC and in the years to come ruled from the Atlantic coast in the west to the Euphrates in the east, from what is now Scotland in the north to the Sahara desert in the south. This achievement was all the more remarkable in an age when no person or idea could travel faster than a ship could sail or a horse gallop. The last emperor to rule in the western provinces was deposed in 476. The last to rule from Constantinople—established in the fourth century AD as a new Rome, with its own seven hills and senate—lost his city to the Turks in 1453.

Greg Woolf, a professor at the University of St. Andrews, opens his excellent new book with the statement, “All histories of Rome are histories of empire.” This states a truth that should be obvious, although it is surprisingly often neglected by some scholars working on the period. That is not the case here. Woolf’s focus “is empire itself,” and within that he explores many of the great questions—how and why the empire was created, how it changed the world, how the empire itself changed and, ultimately, why it failed. This is a vast topic, and any of his chapters could readily be expanded into a book in its own right. His survey spans some fifteen hundred years, from the creation of the republic to the end of the sixth century AD, when the eastern empire made an ultimately unsuccessful bid to regain Italy and some of the western provinces. Moderns usually refer to the eastern emperors as Byzantine, but they saw themselves as Romans. Nevertheless, by this time the eastern empire was sufficiently different to make this a good stopping point. In the seventh and eighth centuries, the Arab conquests would strip the eastern empire of the bulk of its territory, leaving it merely one state among many and no longer by any stretch of the imagination the dominant superpower in the world.

Traces of the western empire survived. Among scholars, the orthodox view now stresses these continuities and speaks of the creation of the barbarian kingdoms that replaced the western provinces as transforming the ancient into the early medieval world—a label giving a more positive spin to what used to be called the Dark Ages. At times, this is accompanied by a somewhat naive tendency to play down the violence and upheaval of this process. However, even these commentators don’t pretend that change did not occur. An immense and coherent empire was replaced by many smaller kingdoms and states, and the world became less stable, much less literate, technologically less sophisticated and far more local in its focus. The Roman Empire had been powerful and was gone, even if traces remained in law or the structure of a Catholic Church centered on Rome. These remain to this day, without altering the basic fact that the Roman Empire is long gone.

THE SUCCESS of Rome’s empire—the republic was already an imperial power even before the emperors—is obvious from its size and longevity. Its sophistication and apparent modernity impress us, as do its legacies. Christianity began under Rome’s rule, and together with the Greco-Roman culture of that empire, the Judeo-Christian tradition provides the main bedrock of Western culture. Roman monuments still inspire awe, even in their ruined state. Woolf notes that if the Pantheon or the Baths of Caracalla in Rome were still covered in their original marble and decoration, they would rival the Taj Mahal in beauty as well as in sheer grandeur.

The bathhouse was one of the most sophisticated pieces of engineering ever created by the Romans, and it is significant that so much ingenuity was devoted simply to making life more comfortable. A society has to be wealthy to afford such luxury—and even small communities and modest army bases had their public bathhouses, for this was not simply an indulgence for the rich. The vast cost of the grand public entertainments and the amphitheaters and circuses built to stage them is a similar indicator of priorities. The cruelty of gladiatorial games and beast fights shocks us as something utterly alien to modern sensibilities, but the simple fact that a society could afford to lavish money on such spectacles is a sign of its wealth. (Gladiators still make good box-office material, and in Hollywood’s Rome, all roads still lead to the arena.) Modern analyses of samples taken from the polar ice caps also appear to bring the Roman world close to our own, for we now know that industrial activity during the first and second centuries AD generated levels of pollution not seen again until the Industrial Revolution.

Rome—successful and sophisticated for a very long time (if also at times appallingly cruel)—offers a dream of power and success. Roman symbols—the eagle, wide and straight roads, columns and triumphal arches, laurel wreaths, the title of caesar or kaiser or tsar, and the fasces that gave their name to Mussolini’s party—have often been invoked by ambitious leaders and states. Eighteenth-century education drew heavily on the classical past, and America’s Founding Fathers looked to Roman models as they sought to craft a better version of Rome’s republic that would not decay into monarchy. For Rome had suffered several serious crises during its long history, and the one that tore its republic apart in political violence and civil war was so grim that by the end many Romans were eager to accept the rule of an emperor instead of elected magistrates as long as it brought peace.

The empire flourished in the first two centuries AD, the period when the vast majority of its great monuments were built. It also survived subsequent crises, but ultimately it still collapsed. The dream of Rome’s success cannot avoid the nightmare of its fall—or almost inevitably its “decline and fall,” for the title of Edward Gibbon’s great work is firmly established in our minds. Whether called the Dark Ages or the early medieval period, the world that followed was a lot less sophisticated. The lesson appears to be that progress is not inevitable and success rarely permanent. Yet that has not stopped successive generations from looking to Rome in the hope of matching its success and avoiding its failure.

Woolf’s book is not about learning lessons for the modern world or comparing our society and states to Rome. Instead, this is history as it should be written and studied—looking at the past on its own terms in an attempt to understand it. This is a worthwhile end in itself, but it is also essential before making any attempt to draw lessons for the modern world. Drawing hasty analogies from the past often reveals no more than the author’s own preoccupations.

For Woolf, there is little point in comparing the Roman experience to that of more recent empires, and parallels are only drawn with other preindustrial empires. The approach is both refreshing and illuminating. Julius Caesar was granted honors very close to divinity during his dictatorship and was formally deified two years after his assassination. His heir—the future emperor Augustus—then became divi filius, the “son of the god.” He was worshipped in his own right in some provinces during his lifetime and deified when he died in AD 14. Only the mad and bad emperors tried to be gods while they were alive, and only such unpopular rulers were not made gods by the Senate after their deaths. The process became so routine that Emperor Vespasian’s final words were a grim joke—“I think I am becoming a god.” By looking at the divine or semidivine rulers of other ancient empires, Woolf shows that this should not surprise us. It would have been odd if the Romans had not thought of their rulers as more than mortal.

It is a considerable challenge to survey such a broad period and extensive topic as Rome and its empire. To do it well requires both a familiarity with the details and a capacity to stand back and ask big questions, as well as the ability to connect the two. Woolf succeeds at both. He mixes discussion of the underlying ecology and geology of what became the Roman world with consideration of telling details. He explains that “there are no chickens in the Iliad, but Socrates’ last words were that he owed a cockerel to the god Asclepius.” Woolf notes that chickens appeared in the Mediterranean world sometime in the middle of the last millennium BC. Quick to breed and relatively easy to maintain, they provided eggs and a source of conveniently small quantities of meat—an important attribute in a world without refrigeration. The same is true in much of Africa today, where you will often see roadside signs advertising “live chickens” for sale. Such small points help build a picture of everyday life. Neither the little details nor the new approaches to understanding the past are allowed to dominate.

MANY OF Woolf’s questions are very old ones, which does not mean they are easy to answer. No one has yet come up with a satisfactory explanation of why Rome expanded as it did. This question isn’t as widely addressed in popular works as the cause of Rome’s fall, but in both instances, it is easier to explain how it happened than to explain why. Thankfully, no one suggests that the pre-Roman world simply transformed itself into the Roman Empire, but a good deal of work has been done to show that some regions were already changing in ways that made it easier for them to plug into the Roman system. In Gaul, for instance, some tribes were developing into something not too dissimilar from the city-states of the Mediterranean world, and many were producing a substantial agricultural surplus, encouraging trade while incidentally providing sufficient food for the conquering legions when they did arrive. Yet the Romans also conquered areas where this was not true, and it is too facile to claim that the empire stopped expanding when it reached communities too undeveloped to be absorbed.

Trade usually long predated military contact, as Rome’s flourishing economy influenced markets far beyond the provinces it physically controlled. Roman merchants were active in most areas long before the legions arrived. Caesar found them in the towns of Gaul, but more often, we only hear of their presence when they were massacred by the locals. Under the republic, Roman senators were forbidden by law from investing in large-scale trade or the companies fulfilling government contracts—the publicani or publicans of the King James Bible. They got around this in various ways, mainly through using freed slaves as agents. Former owners had considerable legal and social control over their freedmen and freedwomen, and many aristocrats were involved in numerous projects to spread the risks they took—the closest the Romans came to the idea of limited companies.

Yet, unlike more recent empires, trade did not in itself drive Roman conquest. Glory and plunder were at least as important, feeding political competition, and the leaders in Rome’s wars became fabulously wealthy and increasingly powerful as the empire emerged. In the longer run, this intensified the competitive nature of the republic’s public life and helped to make it violent. Men like Pompey and Caesar conquered great swathes of territory but also doomed the republican system with their civil wars. Emperors feared internal rivals far more than foreign enemies and were reluctant to let senators gain too much glory or win the loyalty of the legions. With a few exceptions, the rule of the emperors brought expansion to a standstill, at least after a final surge under Augustus.

The frontiers were static, but merchants still traveled far beyond them. They brought back amber from the Baltic and exploited the monsoon winds to sail to India and back. Some may even have reached China during the second century AD; the two great empires were each dimly aware of the other’s existence, even if most contact was through intermediaries. Roman goods often turn up far beyond the frontiers. Indeed, more Roman swords have been found outside the empire than within its borders, the bulk of them in Scandinavia. Some of this resulted from simple trade and some from open warfare. A few finds were clearly seized in raids on the empire. Other contact was a mixture of commerce and diplomacy. Spectacular finds of silver, gold and glass ornaments suggest gifts to tribal leaders, quite possibly sponsored by the state to keep them peaceful. Rome’s influence stretched far beyond its frontiers.

The empire’s great market had, at times, a drastic impact on peoples outside of Rome’s confines, and intertribal warfare may well have increased simply because of the proximity to the empire. Raiding intensified to provide slaves for sale within the empire in return for luxury goods—just as some African communities turned to warfare to supply the demands of Arab and European slavers in later centuries. Gifts of money and weapons to friendly leaders led to some carving out great kingdoms for themselves. A few became so powerful that they were seen as threats to the empire. Changes in Roman subsidies—or equally the movement away of frontier garrisons that had formerly provided a market—could create hardship and desperation in external communities and might well prompt them to raid the empire instead. Frontier relations were a delicate balancing act.

ROMAN WARFARE was always accompanied by diplomacy, and the formal submission of an enemy was as glorious a success as beating him by force. Yet ultimately military force made the empire possible. The Romans often went to war, but then so did almost every people and state in the ancient world. Greek city-states almost seem to have considered hostility a natural condition of interstate relations. The Romans liked to see all of their wars as just, defending themselves or their allies from real or threatened attack. Sometimes the allies had been acquired by militant Roman leaders eager for glory only a very short time before the war commenced.

Julius Caesar took great pains in his Commentaries on the Gallic Wars to show that all of his campaigns in Gaul—and across the Rhine into Germany and over the sea to Britain—were for the good of the republic, and by Roman standards they probably were. He casually talked of pacifying—the Latin verb is pacare—the tribes of northwestern Gaul who had little prior contact with Rome but who had not treated his envoys with suitable respect. The Romans did not grant other peoples any rights even vaguely equal to their own, although Caesar would remark that it was natural for all men to fight for freedom. Rome’s freedom and advantage simply trumped the interests of others.

Yet probably none of this thinking was unique. We are simply better informed about Roman attitudes than about those of other ancient peoples. As Woolf points out, the Romans considered pietas—a far stronger word than our piety—to be a characteristically Roman virtue. The Romans took care to worship the gods correctly and reliably, on the whole respecting and even adopting foreign deities. There was a formal rite performed outside a besieged city to invite the gods of that community to leave and come to new homes prepared by the Romans. Rome’s success was seen as coming from this divine approval, but Woolf shows that this does not explain Roman expansion, and there was certainly no sense of a crusade. Virgil’s Jupiter announced that it was the Romans’ destiny to “spare the conquered and overcome the proud in war”—parcere subiectis et debellare superbos. This neatly divided the world into those who had already submitted and acknowledged Roman might, thus deserving a degree of mercy, and those yet to be defeated. Yet this, like the promised dream of imperium sine fine, or “power without limit,” did not produce constant or consistent expansion.

The Romans certainly took warfare seriously, almost personally, and in this, they were unusual. The Roman Republic devoted itself to war in a way unmatched by any of the rival great powers, whether the mercantile empire of Carthage or the kingdoms of the Greek world. The fleet that won the final battle in the twenty-three-year First Punic War was paid for by voluntary contributions made by individual Roman aristocrats. The Romans accepted staggeringly high casualties—a third of their three hundred senators between 218 and 216 BC died at the hands of Hannibal, and fifty thousand soldiers were killed and twenty thousand captured in a single day at the Battle of Cannae. Yet Rome would not concede defeat in the face of these grim totals. The Romans did not give in, kept fighting and learned from their mistakes. In the end, it was the enemy who gave up and sought peace. Rome had the resources of manpower to absorb such appalling losses, and that was rare. But the determination to persist in a conflict until either the Roman state was destroyed or the enemy threat was permanently removed—by becoming a clearly subordinate ally or ceasing to exist as a political entity—was unique. The Romans expected victory to be permanent. A piece of graffiti scratched onto a cave wall in Jordan reads: “The Romans always win. I, Lauricius write this, Zeno.” The context is as unclear as the identities of the two men, but it was not a bad summary of Roman warfare.

The Romans were good at winning wars and even better at creating lasting peace on their own terms—the losers permanently pacified. Rome had the manpower to survive Hannibal’s onslaught because by this time, it had absorbed almost all of Italy and its various cities and peoples. Some became citizens, and all were allies who willingly fought for Rome and had a share in the rewards of victory. It was rare at any period for a Roman army to consist of more than 50 percent Roman citizens, and often the percentage was much lower. The bulk of the population throughout the empire was made up of the descendants of the peoples conquered by Rome, many of whom became citizens and adopted Roman lifestyles. The senator and historian Tacitus wrote of British aristocrats wearing togas, learning Latin, and building basilicas and villas, calling such things “‘civilization,’ when in fact they were only a feature of their enslavement.” The cynicism veils the truth that the Romans excelled at making conquered people into Romans.

Cultural identity could still be complex, and being Roman did not mean abandoning all prior connections and loyalties. Woolf cites an incident that the author of the book of Acts clearly did not expect his readers to find strange, which provides an interesting insight into the cultural mix. After disturbances in the Temple, Paul—a Jew from Tarsus educated in Jerusalem and also a Roman citizen—was arrested by Roman troops. He asked (in Greek) for permission to speak to the angry crowd, was given this and addressed it (in Aramaic, more common than Hebrew for everyday conversation in Judaea in this period). Afterwards, the centurion decided to have him beaten as a disturber of the peace. Paul protested that he was a Roman citizen and so exempt from such a casual and demeaning punishment. There were no equivalents of passports or identity cards, and it is clear that it could be difficult to get some legal rights recognized. A person’s status might well not be obvious from their dress or ethnicity. Not wanting to make a mistake, the centurion summoned the tribune in charge of the cohort, who spoke to Paul in Greek. (There is no direct evidence that Paul spoke Latin. It is possible that he did, although since so many educated Romans also spoke Greek it would not have been essential.) The tribune was a former slave who had bought his freedom—a common enough process, as some slaves were allowed to earn a wage or run a business on their owner’s behalf, saving until they could purchase their freedom. The slave of a citizen who was freed became a citizen, if one with slightly reduced rights and continued obligations to the former owner. That freed slave’s children were citizens with exactly the same legal status as anyone else. The stigma of slavery was another matter. In Acts, Paul emphasized that, in contrast to the Roman army officer, he had been born a citizen.

In AD 212, the Emperor Caracalla extended citizenship to almost every free person in the empire, most probably to make them liable to certain imperial taxes. Over time, the privileges of citizenship were eroded, and legally they were divided into the better-off honestiores, or “more honest men,” and the disadvantaged humiliores, or “more humble men.” Being Roman was in itself no longer quite such an advantage, but perhaps the most striking sign of Rome’s success is that scarcely anyone wanted to be anything else. Within a few generations of Roman occupation, little sense of a strong identity predating Roman rule tended to remain. The Jews were an exception, although even in their case, after Hadrian’s reign there were no more rebellions aimed at creating an independent Jewish state, as had existed briefly in AD 66–70 and 132–135. Elsewhere, even when the empire crumbled in the late fourth and fifth centuries, there were no regional or national independence movements. People in Spain or Syria or Britain did not want to free themselves from Rome and be Spanish, Syrian or British. There were no Washingtons or BolĂ­vars in the fifth century AD. Instead, each province wanted to remain Roman and simply have an emperor who dealt with its problems and rewarded local leaders with honors and posts in the imperial administration. They rebelled to proclaim a new emperor but not to overthrow the system. Even the “barbarian” warlords who carved up the western provinces into new kingdoms wanted to be part of the Roman system. Many had served in the Roman army—including Alaric, the Goth who sacked Rome itself in AD 410.

Augustine wrote his City of God in the aftermath of this shocking event, for even Christians struggled to imagine a world without Rome. The rise of Christianity from Jewish sect, via a distinct and sometimes persecuted religion, to the faith of emperors and the empire as a whole is one of the most dramatic stories from the Roman era and surely its most profound legacy. Gibbon is often seen as blaming the collapse of the empire on Christianity, although for all his acidic comments about many church leaders, he did not actually argue this. Woolf sees the consequences of Constantine’s conversion to Christianity as politically mixed. On the one hand, it helped to bolster imperial power. But the degree of unity it brought to the empire was constantly challenged by the repeated doctrinal schisms within the church.

Gibbon felt that

the decline of Rome was the natural and inevitable effect of immoderate greatness. Prosperity ripened the principle of decay; the causes of destruction multiplied with the extent of conquest; and as soon as time or accident had removed the artificial supports, the stupendous fabric yielded to the pressure of its own weight. The story of its ruin is simple and obvious; and instead of enquiring why the Roman empire was destroyed, we should rather be surprised that it had subsisted so long.

Woolf makes no attempt to give a simple answer to why the Roman Empire eventually collapsed, but in some ways his attitude is similar. Comparisons with other ancient empires show that it is the Romans who were peculiar simply because their empire lasted so long and was not supplanted by another rival power. Throughout, Woolf emphasizes change in so many aspects of institutions and life. Little remained unchanged in fifteen hundred years and over such a wide area. The striking thing is that there were such recognizable links between the society and state at different stages in this great sweep of history.

IF INSTEAD of asking why Rome ultimately failed we ask how it managed to survive so long, the answer proves no easier to find. It is difficult to believe that the creation and long survival of the empire were merely matters of chance and of an environment favorable to such an immense state—uniquely favorable, since it has not been repeated. Nor does chance explain variations in the Roman experience itself. In the first and second centuries AD, there were only two civil wars. In the century before and the ones that followed, civil war was endemic. In the later period, such conflict was so common that scholars rarely think about it and merely accept it as background noise. Explanations tend to invoke increased external pressure, although the evidence for this is actually poor, or, still less satisfactorily, to imply that it was simply chance that led to some two hundred years of stability and prosperity. We still have far more questions than convincing answers.

Understanding the history of Rome is not a simple task, and so much remains uncertain, for the Roman experience is not neatly comparable to the rise and fall of any other empire. Learning lessons from the past is always a precarious undertaking. That does not mean that we shall ever stop trying or that Roman history will not continue to fascinate us. For those who already have such an interest, Woolf’s book will be a joy to read. For those not yet intrigued by Rome, it may well set them on that path.

Thursday, April 26, 2012

The Global Power Shift from West to East

The National Interest - April 25, 2012

WHEN GREAT powers begin to experience erosion in their global standing, their leaders inevitably strike a pose of denial. At the dawn of the twentieth century, as British leaders dimly discerned such an erosion in their country’s global dominance, the great diplomat Lord Salisbury issued a gloomy rumination that captured at once both the inevitability of decline and the denial of it. “Whatever happens will be for the worse,” he declared. “Therefore it is our interest that as little should happen as possible.” Of course, one element of decline was the country’s diminishing ability to influence how much or how little actually happened.

We are seeing a similar phenomenon today in America, where the topic of decline stirs discomfort in national leaders. In September 2010, Secretary of State Hillary Clinton proclaimed a “new American Moment” that would “lay the foundations for lasting American leadership for decades to come.” A year and a half later, President Obama declared in his State of the Union speech: “Anyone who tells you that America is in decline . . . doesn’t know what they’re talking about.” A position paper from Republican presidential candidate Mitt Romney stated flatly that he “rejects the philosophy of decline in all of its variants.” And former U.S. ambassador to China and one-time GOP presidential candidate Jon Huntsman pronounced decline to be simply “un-American.”

Such protestations, however, cannot forestall real-world developments that collectively are challenging the post-1945 international order, often called Pax Americana, in which the United States employed its overwhelming power to shape and direct global events. That era of American dominance is drawing to a close as the country’s relative power declines, along with its ability to manage global economics and security.

This does not mean the United States will go the way of Great Britain during the first half of the twentieth century. As Harvard’s Stephen Walt wrote in this magazine last year, it is more accurate to say the “American Era” is nearing its end. For now, and for some time to come, the United States will remain primus inter pares—the strongest of the major world powers—though it is uncertain whether it can maintain that position over the next twenty years. Regardless, America’s power and influence over the international political system will diminish markedly from what it was at the apogee of Pax Americana. That was the Old Order, forged through the momentous events of World War I, the Great Depression and World War II. Now that Old Order of nearly seven decades’ duration is fading from the scene. It is natural that U.S. leaders would want to deny it—or feel they must finesse it when talking to the American people. But the real questions for America and its leaders are: What will replace the Old Order? How can Washington protect its interests in the new global era? And how much international disruption will attend the transition from the old to the new?

The signs of the emerging new world order are many. First, there is China’s astonishingly rapid rise to great-power status, both militarily and economically. In the economic realm, the International Monetary Fund forecasts that China’s share of world GDP (15 percent) will draw nearly even with the U.S. share (18 percent) by 2014. (The U.S. share at the end of World War II was nearly 50 percent.) This is particularly startling given that China’s share of world GDP was only 2 percent in 1980 and 6 percent as recently as 1995. Moreover, China is on course to overtake the United States as the world’s largest economy (measured by market exchange rate) sometime this decade. And, as argued by economists like Arvind Subramanian, measured by purchasing-power parity, China’s GDP may already be greater than that of the United States.

Until the late 1960s, the United States was the world’s dominant manufacturing power. Today, it has become essentially a rentier economy, while China is the world’s leading manufacturing nation. A study recently reported in the Financial Times indicates that 58 percent of total income in America now comes from dividends and interest payments.

Since the Cold War’s end, America’s military superiority has functioned as an entry barrier designed to prevent emerging powers from challenging the United States where its interests are paramount. But the country’s ability to maintain this barrier faces resistance at both ends. First, the deepening financial crisis will compel retrenchment, and the United States will be increasingly less able to invest in its military. Second, as ascending powers such as China become wealthier, their military expenditures will expand. The Economist recently projected that China’s defense spending will equal that of the United States by 2025.

Thus, over the next decade or so a feedback loop will be at work, whereby internal constraints on U.S. global activity will help fuel a shift in the distribution of power, and this in turn will magnify the effects of America’s fiscal and strategic overstretch. With interests throughout Asia, the Middle East, Africa, Europe and the Caucasus—not to mention the role of guarding the world’s sea-lanes and protecting U.S. citizens from Islamist terrorists—a strategically overextended United States inevitably will need to retrench.

Further, there is a critical linkage between a great power’s military and economic standing, on the one hand, and its prestige, soft power and agenda-setting capacity, on the other. As the hard-power foundations of Pax Americana erode, so too will the U.S. capacity to shape the international order through influence, example and largesse. This is particularly true of America in the wake of the 2008 financial crisis and the subsequent Great Recession. At the zenith of its military and economic power after World War II, the United States possessed the material capacity to furnish the international system with abundant financial assistance designed to maintain economic and political stability. Now, this capacity is much diminished.

All of this will unleash growing challenges to the Old Order from ambitious regional powers such as China, Brazil, India, Russia, Turkey and Indonesia. Given America’s relative loss of standing, emerging powers will feel increasingly emboldened to test and probe the current order with an eye toward reshaping the international system in ways that reflect their own interests, norms and values. This is particularly true of China, which has emerged from its “century of humiliation” at the hands of the West to finally achieve great-power status. It is a leap to think that Beijing will now embrace a role as “responsible stakeholder” in an international order built by the United States and designed to privilege American interests, norms and values.

These profound developments raise big questions about where the world is headed and America’s role in the transition and beyond. Managing the transition will be the paramount strategic challenge for the United States over the next two decades. In thinking about where we might be headed, it is helpful to take a look backward—not just over the past seventy years but far back into the past. That is because the transition in progress represents more than just the end of the post-1945 era of American global dominance. It also represents the end of the era of Western dominance over world events that began roughly five hundred years ago. During this half millennium of world history, the West’s global position remained secure, and most big, global developments were represented by intracivilizational power shifts. Now, however, as the international system’s economic and geopolitical center of gravity migrates from the Euro-Atlantic world to Asia, we are seeing the beginnings of an intercivilizational power shift. The significance of this development cannot be overemphasized.

THE IMPENDING end of the Old Order—both Pax Americana and the period of Western ascendancy—heralds a fraught transition to a new and uncertain constellation of power in international politics. Within the ascendant West, the era of American dominance emerged out of the ashes of the previous international order, Pax Britannica. It signified Europe’s displacement by the United States as the locus of global power. But it took the twentieth century’s two world wars and the global depression to forge the transition between these international orders.

Following the end of the Napoleonic wars in 1815, at the dawn of the Industrial Revolution, Britain quickly outstripped all of its rivals in building up its industrial might and used its financial muscle to construct an open, international economic system. The cornerstones of this Pax Britannica were London’s role as the global financial center and the Royal Navy’s unchallenged supremacy around the world. Over time, however, the British-sponsored international system of free trade began to undermine London’s global standing by facilitating the diffusion of capital, technology, innovation and managerial expertise to emerging new centers of power. This helped fuel the rise of economic and geopolitical rivals.

Between 1870 and 1900, the United States, Germany and Japan emerged onto the international scene more or less simultaneously, and both the European and global power balances began to change in ways that ultimately would doom Pax Britannica. By the beginning of the twentieth century, it had become increasingly difficult for Britain to cope with the growing number of threats to its strategic interests and to compete with the dynamic economies of the United States and Germany.

The Boer War of 1899–1902 dramatized the high cost of policing the empire and served as both harbinger and accelerant of British decline. Perceptions grew of an ever-widening gap between Britain’s strategic commitments and the resources available to maintain them. Also, the rest of the world became less and less willing to submit to British influence and power. The empire’s strategic isolation was captured in the plaintive words of Spenser Wilkinson, military correspondent for the Times: “We have no friends, and no nation loves us.”

Imperially overstretched and confronting a deteriorating strategic environment, London was forced to adjust its grand strategy and jettison its nineteenth-century policy of “splendid isolation” from entanglements with other countries. Another consideration was the rising threat of Germany, growing in economic dynamism, military might and population. By 1900, Germany had passed Britain in economic power and was beginning to threaten London’s naval supremacy in its home waters by building a large, modern and powerful battle fleet. To concentrate its forces against the German danger, Britain allied with Japan and employed Tokyo to contain German and Russian expansionism in East Asia. It also removed America as a potential rival by ceding to Washington supremacy over the Americas and the Caribbean. Finally, it settled its differences with France and Russia, then formed fateful de facto alliances with each against Germany.

World War I marked the end of Pax Britannica—and the beginning of the end of Europe’s geopolitical dominance. The key event was American entry into the war. It was Woodrow Wilson who called the power of the New World “into existence to redress the balance of the Old” (in the words of the early nineteenth-century British statesman George Canning). American economic and military power was crucial in securing Germany’s defeat. Wilson took the United States to war in 1917 with the intent of using American power to impose his vision of international order on both the Germans and the Allies. The peace treaties that ended World War I—the “Versailles system”—proved to be flawed, however. Wilson could not persuade his own countrymen to join his cherished League of Nations, and European realpolitik prevailed over his vision of the postwar order.

Although the historical wisdom is that America retreated into isolationism following Wilson’s second term and Warren Harding’s return to “normalcy,” that is not true. The United States convened the Washington Naval Conference and helped foster the Washington naval treaties, which averted a U.S. naval arms race with Britain and Japan and dampened prospects for increased great-power competition over influence in China. America also played a key role in trying to restore economic, and hence political, stability in war-ravaged Europe. It promoted Germany’s economic reconstruction and political reintegration into Europe through the Dawes and Young plans that addressed the troublesome issue of German reparations. The aim was to help get Europe back on its feet so it could once again become a vibrant market for American goods.

Then came the Great Depression. In both Europe and Asia, the economic cataclysm had profound geopolitical consequences. As E. H. Carr brilliantly detailed in his classic work The Twenty Years’ Crisis, 1919–1939, the Versailles system cracked because of the growing gap between the order it represented and the actual distribution of power in Europe. Even during the 1920s, Germany’s latent power raised the prospect that eventually Berlin would renew its bid for continental hegemony. When Adolf Hitler assumed the chancellorship in 1933, he unleashed Germany’s military power, suppressed during the 1920s, and ultimately France and Britain lacked the material capacity to enforce the postwar settlement. The Depression also exacerbated deep social, class and ideological cleavages that roiled domestic politics throughout Europe.

In East Asia, the Depression served to discredit the liberal foreign and economic policies that Japan had pursued during the 1920s. The expansionist elements of the Japanese army gained sway in Tokyo and pushed their country into military adventurism in Manchuria. In response to the economic dislocation, all great powers, including the United States, abandoned international economic openness and free trade in favor of economic nationalism, protectionism and mercantilism.

The crisis of the 1930s culminated in what historian John Lukacs called “the last European war.” But it didn’t remain a European war. Germany’s defeat could be secured only with American military and economic power and the heroic exertions of the Soviet Union. Meanwhile, the war quickly spread to the Pacific, where Western colonial redoubts had come under intense military pressure from Japan.

World War II reshaped international politics in three fundamental ways. First, it resulted in what historian Hajo Holborn termed “the political collapse of Europe,” which brought down the final curtain on the Eurocentric epoch of international politics. Now an economically prostrate Western Europe was unable to defend itself or revive itself economically without American assistance. Second, the wartime defeats of the British, French and Dutch in Asia—particularly the humiliating 1942 British capitulation in Singapore—shattered the myth of European invincibility and thus set in motion a rising nationalist tide that within two decades would result in the liquidation of Europe’s colonies in Asia. This laid the foundation for Asia’s economic rise that began gathering momentum in the 1970s. Finally, the war created the geopolitical and economic conditions that enabled the United States to construct the postwar international order and establish itself as the world’s dominant power, first in the bipolar era of competition with the Soviet Union and later as the globe’s sole superpower following the 1991 Soviet collapse.

Thus do we see the emergence of the new world order of 1945, which now represents the Old Order that is under its own global strains. But we also see the long, agonizing death of Pax Britannica, which had maintained relative global stability for a century before succumbing to the fires of the two world wars and the Great Depression. This tells us that periods of global transition can be chaotic, unpredictable, long and bloody. Whether the current transitional phase will unfold with greater smoothness and calm is an open question—and one of the great imponderables facing the world today.

AS THE United States emerged as the world’s leading power, it sought to establish its postwar dominance in the three regions deemed most important to its interests: Western Europe, East Asia and the Middle East/Persian Gulf. It also fostered an open international-trading regime and assumed the role of the global financial system’s manager, much as Britain had done in the nineteenth century. The 1944 Bretton Woods agreement established the dollar as the international reserve currency. The World Bank, International Monetary Fund, and the General Agreement on Tariffs and Trade fostered international commerce. The United Nations was created, and a network of American-led alliances established, most notably NATO.

It is tempting to look back on the Cold War years as a time of heroic American initiatives. After all, geopolitically, Washington accomplished a remarkable double play: while avoiding great-power war, containment—as George F. Kennan foresaw in 1946—helped bring about the eventual implosion of the Soviet Union from its own internal contradictions. In Europe, American power resolved the German problem, paved the way for Franco-German reconciliation and was the springboard for Western Europe’s economic integration. In Asia, the United States helped rebuild a stable and democratic Japan from the ashes of its World War II defeat. For the trilateral world of Pax Americana—centered on the United States, Western Europe and Japan—the twenty-five years following World War II marked an era of unprecedented peace and prosperity. These were remarkable accomplishments and are justly celebrated as such. Nevertheless, it is far from clear that the reality of the Cold War era measures up to the nostalgic glow in which it has been bathed. Different policies might have brought about the Cold War’s end at a far lower cost to the United States.

The Cold War was costly in treasure and in blood (the most obvious examples being the wars in Korea and Vietnam). America bears significant responsibility for heightening postwar tensions with the Soviet Union and transforming what ought to have been a traditional great-power rivalry based on mutual recognition of spheres of influence into the intense ideological rivalry it became. During the Cold War, U.S. leaders engaged in threat inflation and overhyped Soviet power. Some leading policy makers and commentators at the time—notably Kennan and prominent journalist Walter Lippmann—warned against the increasingly global and militarized nature of America’s containment strategy, fearing that the United States would become overextended if it attempted to parry Soviet or communist probes everywhere. President Dwight Eisenhower also was concerned about the Cold War’s costs, the burden it imposed on the U.S. economy and the threat it posed to the very system of government that the United States was supposed to be defending. Belief in the universality of American values and ideals was at the heart of U.S. containment strategy during most of the Cold War, and the determination to vindicate its model of political, economic and social development is what caused the United States to stumble into the disastrous Vietnam War.

Whatever questions could have been raised about the wisdom of America’s Cold War policies faded rapidly after the Soviet Union’s collapse, which triggered a wave of euphoric triumphalism in the United States. Analysts celebrated America’s “unipolar moment” and perceived an “end of history” characterized by a decisive triumph of Western-style democracy as an end point in human civic development. Almost by definition, such thinking ruled out the prospect that this triumph could prove fleeting.

But even during the Cold War’s last two decades, the seeds of American decline had already been sown. In a prescient—but premature—analysis, President Richard Nixon and Secretary of State Henry Kissinger believed that the bipolar Cold War system would give way to a pentagonal multipolar system composed of the United States, Soviet Union, Europe, China and Japan. Nixon also confronted America’s declining international financial power in 1971 when he took the dollar off the Bretton Woods gold standard in response to currency pressures. Later, in 1987, Yale’s Paul Kennedy published his brilliant Rise and Fall of the Great Powers, which raised questions about the structural, fiscal and economic weaknesses in America that, over time, could nibble away at the foundations of U.S. power. With America’s subsequent Cold War triumph—and the bursting of Japan’s economic bubble—Kennedy’s thesis was widely dismissed.

Now, in the wake of the 2008 financial meltdown and ensuing recession, it is clear that Kennedy and other “declinists” were right all along. The same causes of decline they pointed to are at the center of today’s debate about America’s economic prospects: too much consumption and not enough savings; persistent trade and current-account deficits; deindustrialization; sluggish economic growth; and chronic federal-budget deficits fueling an ominously rising national debt.

Indeed, looking forward a decade, the two biggest domestic threats to U.S. power are the country’s bleak fiscal outlook and deepening doubts about the dollar’s future role as the international economy’s reserve currency. Economists regard a 100 percent debt-to-GDP ratio as a flashing warning light that a country is at risk of defaulting on its financial obligations. The nonpartisan Congressional Budget Office (CBO) has warned that the U.S. debt-to-GDP ratio could exceed that level by 2020—and swell to 190 percent by 2035. Worse, the CBO recently warned of the possibility of a “sudden credit event” triggered by foreign investors’ loss of confidence in U.S. fiscal probity. In such an event, foreign investors could reduce their purchases of Treasury bonds, which would force the United States to borrow at higher interest rates. This, in turn, would drive up the national debt even more. America’s geopolitical preeminence hinges on the dollar’s role as reserve currency. If the dollar loses that status, U.S. primacy would be literally unaffordable. There are reasons to be concerned about the dollar’s fate over the next two decades. U.S. political gridlock casts doubt on the nation’s ability to address its fiscal woes; China is beginning to internationalize the renminbi, thus laying the foundation for it to challenge the dollar in the future; and history suggests that the dominant international currency is that of the nation with the largest economy. (In his piece on the global financial structure in this issue, Christopher Whalen offers a contending perspective, acknowledging the dangers posed to the dollar as reserve currency but suggesting such a change in the dollar’s status is remote in the current global environment.)

Leaving aside the fate of the dollar, however, it is clear the United States must address its financial challenge and restore the nation’s fiscal health in order to reassure foreign lenders that their investments remain sound. This will require some combination of budget cuts, entitlement reductions, tax increases and interest-rate hikes. That, in turn, will surely curtail the amount of spending available for defense and national security—further eroding America’s ability to play its traditional, post–World War II global role.

Beyond the U.S. financial challenge, the world is percolating with emerging nations bent on exploiting the power shift away from the West and toward states that long have been confined to subordinate status in the global power game. (Parag Khanna explores this phenomenon at length further in this issue.) By far the biggest test for the United States will be its relationship with China, which views itself as effecting a restoration of its former glory, before the First Opium War of 1839–1842 and its subsequent “century of humiliation.” After all, China and India were the world’s two largest economies in 1700, and as late as 1820 China’s economy was larger than the combined economies of all of Europe. The question of why the West emerged as the world’s most powerful civilization beginning in the sixteenth century, and thus was able to impose its will on China and India, has been widely debated. Essentially, the answer is firepower. As the late Samuel P. Huntington put it, “The West won the world not by the superiority of its ideas or values or religion . . . but rather by its superiority in applying organized violence. Westerners often forget this fact; non-Westerners never do.”

Certainly, the Chinese have not forgotten. Now Beijing aims to dominate its own East and Southeast Asian backyard, just as a rising America sought to dominate the Western Hemisphere a century and a half ago. The United States and China now are competing for supremacy in East and Southeast Asia. Washington has been the incumbent hegemon there since World War II, and many in the American foreign-policy establishment view China’s quest for regional hegemony as a threat that must be resisted. This contest for regional dominance is fueling escalating tensions and possibly could lead to war. In geopolitics, two great powers cannot simultaneously be hegemonic in the same region. Unless one of them abandons its aspirations, there is a high probability of hostilities. Flashpoints that could spark a Sino-American conflict include the unstable Korean Peninsula; the disputed status of Taiwan; competition for control of oil and other natural resources; and the burgeoning naval rivalry between the two powers.

These rising tensions were underscored by a recent Brookings study by Peking University’s Wang Jisi and Kenneth Lieberthal, national-security director for Asia during the Clinton administration, based on their conversations with high-level officials in the American and Chinese governments. Wang found that beneath the facade of “mutual cooperation” that both countries project, the Chinese believe they are likely to replace the United States as the world’s leading power but that Washington is working to prevent such a rise. Similarly, Lieberthal related that many American officials believe their Chinese counterparts see the U.S.-Chinese relationship in terms of a zero-sum game in the struggle for global hegemony.

An instructive historical antecedent is the Anglo-German rivalry of the early twentieth century. The key lesson of that rivalry is that such great-power competition can end in one of three ways: accommodation of the rising challenger by the dominant power; retreat of the challenger; or war. The famous 1907 memo exchange between two key British Foreign Office officials—Sir Eyre Crowe and Lord Thomas Sanderson—outlined these stark choices. Crowe argued that London must uphold the Pax Britannica status quo at all costs. Either Germany would accept its place in a British-dominated world order, he averred, or Britain would have to contain Germany’s rising power, even at the risk of war. Sanderson replied that London’s refusal to accommodate the reality of Germany’s rising power was both unwise and dangerous. He suggested Germany’s leaders must view Britain “in the light of some huge giant sprawling over the globe, with gouty fingers and toes stretching in every direction, which cannot be approached without eliciting a scream.” In Beijing’s eyes today, the United States must appear as the unapproachable, globally sprawling giant.

IN MODERN history, there have been two liberal international orders: Pax Britannica and Pax Americana. In building their respective international structures, Britain and the United States wielded their power to advance their own economic and geopolitical interests. But they also bestowed important benefits—public goods—on the international system as a whole. Militarily, the hegemon took responsibility for stabilizing key regions and safeguarding the lines of communication and trade routes upon which an open international economy depends. Economically, the public goods included rules for the international economic order, a welcome domestic market for other states’ exports, liquidity for the global economy and a reserve currency.

As U.S. power wanes over the next decade or so, the United States will find itself increasingly challenged in discharging these hegemonic tasks. This could have profound implications for international politics. The erosion of Pax Britannica in the late nineteenth and early twentieth centuries was an important cause of World War I. During the interwar years, no great power exercised geopolitical or economic leadership, and this proved to be a major cause of the Great Depression and its consequences, including the fragmentation of the international economy into regional trade blocs and the beggar-thy-neighbor economic nationalism that spilled over into the geopolitical rivalries of the 1930s. This, in turn, contributed greatly to World War II. The unwinding of Pax Americana could have similar consequences. Since no great power, including China, is likely to supplant the United States as a true global hegemon, the world could see a serious fragmentation of power. This could spawn pockets of instability around the world and even general global instability.

The United States has a legacy commitment to global stability, and that poses a particular challenge to the waning hegemon as it seeks to fulfill its commitment with dwindling resources. The fundamental challenge for the United States as it faces the future is closing the “Lippmann gap,” named for journalist Walter Lippmann. This means bringing America’s commitments into balance with the resources available to support them while creating a surplus of power in reserve. To do this, the country will need to establish new strategic priorities and accept the inevitability that some commitments will need to be reduced because it no longer can afford them.

These national imperatives will force the United States to craft some kind of foreign-policy approach that falls under the rubric of “offshore balancing”—directing American power and influence toward maintaining a balance of power in key strategic regions of the world. This concept—first articulated by this writer in a 1997 article in the journal International Security—has gained increasing attention over the past decade or so as other prominent geopolitical scholars, including John Mearsheimer, Stephen Walt, Robert Pape, Barry Posen and Andrew Bacevich, have embraced this approach.

Although there are shades of difference among proponents of offshore balancing in terms of how they define the strategy, all of their formulations share certain core concepts. First, the strategy assumes the United States will have to reduce its presence in some regions and develop commitment priorities. Europe and the Middle East are viewed as less important than they once were, with East Asia rising in strategic concern. Second, as the United States scales back its military presence abroad, other states need to step up to the challenge of maintaining stability in key regions. Offshore balancing, thus, is a strategy of devolving security responsibilities to others. Its goal is burden shifting, not burden sharing. Only when the United States makes clear that it will do less—in Europe, for example—will others do more to foster stability in their own regions.

Third, the concept relies on naval and air power while eschewing land power as much as possible. This is designed to maximize America’s comparative strategic advantages—standoff, precision-strike weapons; command-and-control capabilities; and superiority in intelligence, reconnaissance and surveillance. After all, fighting land wars in Eurasia is not what the United States does best. Fourth, the concept avoids Wilsonian crusades in foreign policy, “nation-building” initiatives and imperial impulses. Not only does Washington have a long record of failure in such adventures, but they are also expensive. In an age of domestic austerity, the United States cannot afford the luxury of participating in overseas engagements that contribute little to its security and can actually pose added security problems. Finally, offshore balancing would reduce the heavy American geopolitical footprint caused by U.S. boots on the ground in the Middle East—the backlash effect of which is to fuel Islamic extremism. An over-the-horizon U.S. military posture in the region thus would reduce the terrorist threat while still safeguarding the flow of Persian Gulf oil.

During the next two decades, the United States will face some difficult choices between bad outcomes and worse ones. Such decisions could determine whether America manages a graceful decline that conserves as much power and global stability as possible, or suffers a more ominous fate: a precipitous collapse that reduces U.S. global influence dramatically. In any event, Americans will have to adjust to the new order, accepting the loss of some elements of national life they had taken for granted. In an age of austerity, national resources will be limited, and competition for them will be intense. If the country wants to do more at home, it will have to do less abroad. It may have to choose between attempting to preserve American hegemony, on the one hand, and repairing the U.S. economy and maintaining the country’s social safety net on the other.

THE CONSTELLATION of world power is changing, and U.S. grand strategy will have to change with it. American elites must come to grips with the fact that the West does not enjoy a predestined supremacy in international politics that is locked into the future for an indeterminate period of time. The Euro-Atlantic world had a long run of global dominance, but it is coming to an end. The future is more likely to be shaped by the East.

At the same time, Pax Americana also is winding down. The United States can manage this relative decline effectively over the next couple of decades only if it first acknowledges the fundamental reality of decline. The problem is that many Americans, particularly among the elites, have embraced the notion of American exceptionalism with such fervor that they can’t discern the world transformation occurring before their eyes.

But history moves forward with an inexorable force, and it does not stop to grant special exemptions to nations based on past good works or the restrained exercise of power during times of hegemony. So it is with the United States. The world has changed since those heady days following World War II, when the United States picked up the mantle of world leadership and fashioned a world system durable enough to last nearly seventy years. It has also changed significantly since those remarkable years from 1989 to 1991, when the Soviet Union imploded and its collapse filled the American consciousness with powerful notions of national exceptionalism and of an everlasting unipolar moment of U.S. hegemony.

But most discerning Americans know that history never ends, that change is always inevitable, that nations and civilizations rise and fall, that no era can last forever. Now it can be seen that the post–World War II era, romanticized as it has been in the minds of so many Americans, is the Old Order—and it is an Old Order in crisis, which means it is nearing its end. History, as always, is moving forward.

U.S. Debt Culture and the Dollar's Fate



The National Interest - May 25, 2012

IN OUR common narrative, the modern era of global finance—what we call the Old Order—begins with the Great Depression and New Deal of the 1930s. The economic model put in place by President Franklin D. Roosevelt and others at the end of World War II is seen as a political as well as economic break point. But arbitrarily selected demarcation points in any human timeline can be misleading. The purpose of narrative, after all, is to simplify the complex and, over time, to remake the past in today’s terms. As we approach any discussion of the Old Order, we must acknowledge that the image of intelligent design in public policy is largely an illusion.

There is no question that the world after 1950 was a reflection of the wants and needs of the United States, the victor in war and thus the designer of the peacetime system of commerce and finance that followed. Just as the Roman, Mongol and British empires did centuries earlier, America made the post–World War II peace in its own image. The U.S.-centric model enjoyed enormous success due to factors such as relatively low inflation, financial transactions that respect anonymity, an open court system and a relatively enlightened foreign policy—all unique attributes of the American system.

But the framework of the global financial system in the twentieth century and its U.S.-centric design were the end results of a series of terrible wars—starting, in the case of America, with the Civil War. The roots of the U.S.-centric financial order that arose at the end of World War II extend back into the nineteenth century and reflect the political response of a very young nation to acute problems of employment and economic growth—problems that remain unresolved today.

From an American perspective, the modern era of what we describe as the global financial system based upon the U.S. dollar begins with Abraham Lincoln, the great emancipator who took office in March 1861 as the American republic stood on the verge of dissolution. In those days, “money,” as understood by Americans, comprised gold and silver coins, foreign currency and notes issued by state-chartered banks that were convertible into metal, in that order of qualitative ranking.

The state-chartered banks of that era relied upon gold coin or specie as a store of value and means of exchange with other banks. Going back to Andrew Jackson’s epic campaign to extinguish the Second Bank of the United States in the 1830s, state-chartered banks large and small were suspicious of Washington and would not finance Lincoln’s war. Bankers in New York, Boston and London, for example, would have been happy to see the North and South separate without a war, with slavery continuing, so as not to disturb the cotton trade.

Lincoln tasked Secretary of the Treasury Salmon P. Chase to sell Treasury bonds and had Congress create national banks to buy the debt. The Treasury Department suspended convertibility and issued large quantities of “greenbacks” in the form of paper dollars to pay for the immediate cost of fighting the war. By the end of the Civil War, the greenback traded down to a fifth of face value when measured in gold. Yet Lincoln won the war, even in death, because the Union outspent the Confederacy using the credit afforded by paper money created by government fiat. And the Civil War set the precedent for Washington to engage in massive currency inflation in times of exigency and also to develop by fiat new platforms for creating financial leverage to meet national needs. As Wall Street speculator Ray Dalio wrote a century and a half later, “Virtually all of what they call money is credit (i.e., promises to deliver money) rather than money itself.” Lincoln used that fact to win the Civil War.

Even without a central bank, from the end of the Civil War to the start of World War I in 1914, the United States saw powerful economic growth. So strong was demand for a means of exchange that the much-abused greenback dollar traded back to par value against gold by the time President Ulysses S. Grant officially restored convertibility at the end of his term. The public remained highly skeptical of paper money or other promissory notes, with good reason. As Mark Twain immortalized in Pudd’nhead Wilson, Roxy lost the savings accumulated from eight years of labor as a riverboat servant when her bank failed. Roxy concluded that “hard work and economy” were insufficient to make her secure and independent.

By the turn of the twentieth century, many Americans had adopted a view similar to Roxy’s, which differed significantly from the rugged, self-reliant individualism of American pioneer mythology. Decades of financial crises tempered the independent, hard-money views of Americans. Growing urban populations worried about jobs and opportunity, while farmers, businesses and even conservative state-chartered banks fretted about access to credit. The solution that emerged was not the free market but increasingly the collective credit of the federal government in Washington.

By 1913, when the banking industry and progressive forces in Congress created the Federal Reserve System, America had been through several more financial and economic crashes, leaving bankers even more disposed to a government-sponsored entity (GSE) rescuing them from the harsh discipline of “market forces,” to recall the words of South Carolina’s Democratic senator Ernest Hollings. The private clearinghouse system developed in major U.S. cities during the nineteenth century was inadequate to provide liquidity in times of market upheaval. Thus twelve Federal Reserve banks were created to support the liquidity needs of banks and, in a big-picture sense, provide another layer of financial leverage on top of national banks to support the funding needs of the U.S. economy.

Yet, even after the Fed’s establishment, the U.S. economy continued to labor under the weight of deflation and slack demand. In that sense, the First World War was the first true watershed for America’s economic narrative; it forced Americans, as a single nation, to look outward for growth and financial interaction with foreign states. Public sentiment was split between sympathy for the British, French and Belgian forces, on the one hand, and for the Central powers led by Germany on the other. But all Americans welcomed the vast demand for goods and services as a relief from years of price deflation on the farm and slack job growth in urban markets driven by the adoption of new technology and imports from Europe. Allan Nevins and Henry Steele Commager wrote, “Economic considerations re-enforced sentimental and political ones.”

The gradual American economic mobilization to support the Allies in World War I not only marked a growing willingness of Americans to engage in foreign intervention overseas but also saw a vast transfer of wealth from Europe to the former colonies as a large portion of the Continent’s gold was sent to America to pay for the war. U.S. banks and eventually the Fed also provided financing for Allied purchases, which grew to a great torrent of raw materials and finished goods. So vast were the financial flows in the early days of World War I that J.P. Morgan could not manage the dollar-sterling transactions. At first, commercial paper issued by British banks could not be discounted with the Fed. The Federal Reserve Bank of New York stepped in, however, and effectively subsidized the British pound exchange rate when the Bank of England exhausted its gold reserves.

Victory bonds were sold widely to Americans to finance the war and manage domestic demand, thereby also socializing the idea of investing by individuals. A number of new GSEs were created to fund and manage the American war effort via the issuance of debt. Compared to the neoliberal orthodoxy of today, there was little fretting over market forces during World War I. Jobs and inflation were the top issues. Washington employed wage and price controls and other authoritarian mechanisms without apology to hold down costs even as wages were constrained. The American farm sector recovered from years in the doldrums as global demand for cotton, grains and meat soared, pushing up domestic prices as well. By 1917, when the United States entered the war militarily on the Allied side, the American economy was running better than it had in many decades. The following year, however, when the debt available to the Allies dried up and exports to Europe slowed, the U.S. economy quickly faded as well. By 1919, the United States was entering a serious economic slump.

After World War I, America descended into a period of isolation and uneven economic circumstances presided over by the dominant Republican Party, which rejected U.S. involvement in world affairs and promptly raised tariffs to protect Americans from cheap imports. When Congress enacted Smoot-Hawley eight years later, the increase in tariffs was marginal compared with earlier increases in import taxes. But no amount of tariff protection could shield U.S. workers and industries from the impact of technological changes such as electricity and the growing use of machines.

The return to “normalcy” promised by President Warren Harding meant an environment where large corporations and banks prowled the economic landscape unhindered, and the federal government largely withdrew from the economy, compared with the policies of Teddy Roosevelt and Woodrow Wilson. The Fed played a relatively marginal role in the post–World War I period and did little to alleviate the economic stagnation that affected much of America’s rural areas. Urban workers had employment, but wages remained stagnant even as the concentration of wealth in the United States increased dramatically.

While a large part of the real economy suffered during the post–World War I period, speculation in real estate and on Wall Street grew through the 1920s. With it came financial fraud. The party ended, though, with the landmark 1925 Supreme Court decision written by Louis Brandeis in Benedict v. Ratner, which set a new standard for collateralized borrowing. The Brandeis decision, which ruled that the failure to specify the collateral was “fraud on its face,” arguably helped cause the great crash of 1929 because it effectively shut down the Wall Street sausage machine, cutting liquidity to the market.

The great Wall Street crash of 1929 completed the process of speculative boom and bust that made the market collapses and currency crises of the previous half century pale by comparison. John Kenneth Galbraith noted in The Great Crash of 1929 that Americans displayed “an inordinate desire to get rich quickly with a minimum of physical effort.” As a chronicler of the Great Depression, Galbraith describes the run-up to the Wall Street crash, including the real-estate mania in Florida in the mid-1920s. Few today recall that the precursor to the Great Depression was a real-estate bubble in the mid-1920s, an eerie parallel to the real-estate boom and bust of the 2000s. But in each case, it was the supply of credit in the form of debt that drove the boom and eventual bust in the economy.

IN THE wake of the financial and social catastrophe that followed the 1929 crash, the Franklin D. Roosevelt administration responded with government and more government. Whatever the laissez-faire excesses of the era of Republican rule in the 1920s, the New Deal Democrats lurched in the opposite direction. Historian Arthur M. Schlesinger Jr. noted that “whether revolution was a real possibility or not, faith in a free system was plainly waning.”

Roosevelt launched a campaign of vilification and intimidation against private business, a terrible but probably deliberate blunder that worsened the Depression and drove the formation of private debt capital in the United States to zero by the mid-1930s. Economist Irving Fisher notes in his celebrated 1933 essay, “The Debt-Deflation Theory of Great Depressions,” that FDR’s reflation efforts did help to avoid catastrophic price deflation, but he also blames Roosevelt for prolonging the Depression. The man Milton Friedman called America’s greatest economist wrote:
In fact, under President Hoover, recovery was apparently well started by the Federal Reserve open-market purchases, which revived prices and business from May to September 1932. The efforts were not kept up and recovery was stopped by various circumstances, including the political “campaign of fear.”

The Second World War and the new debt used to fund it ultimately rescued the United States from FDR’s economic mismanagement. The mobilization to meet the needs of the conflict quickly soaked up the excess workforce, whether through conscription or employment in war industries, which were organized in a centralized fashion, as had been the case in World War I under production czar Herbert Hoover. The Fed played a secondary role in financing the New Deal and America’s military effort in World War II. By contrast, the Reconstruction Finance Corporation (RFC) under Jesse Jones took the lead as the government’s merchant bank and provided the financial muscle to fund government programs by issuing its own debt.

At the end of World War II, Britain was broke, and its leaders worried openly that the United States would take advantage of its parlous financial position in the postwar era. In geopolitical terms, the war was the handoff of imperial responsibility from London to Washington. During World War II, Britain liquidated $1 billion in overseas investments and took on another $3 billion in debt, much of which would be rescheduled and eventually forgiven. But when the British, Americans and other Allies met at Bretton Woods at the war’s end, the objective was to stimulate growth and thereby avoid another global war. The key decision taken at that meeting, which set the pattern for the post–World War II financial order, was equating the fiat paper dollar with gold.

When FDR confiscated public gold holdings in 1933 and devalued the dollar, the RFC and not the Fed was the instrument of government action. Jones took delight in having the Federal Reserve Bank of New York execute open-market purchases of gold on behalf of the RFC. Together with giants like Leo Crowley—who organized the Federal Deposit Insurance Corporation (FDIC), ran the “lend-lease” operation in World War II and managed two reelection campaigns for FDR—Jones restructured the American economy and then financed the war’s industrial effort with massive amounts of debt.

Besides the RFC, many other parastatal entities, modeled after the experiments of fascist European nations, were created before, during and after the Depression and war years. These included the Federal Housing Administration, the Federal Home Loan Banks, Fannie Mae, the Export-Import Bank, the FDIC, the World Bank and the International Monetary Fund. All of these GSEs were designed to support economic growth via the issuance of debt atop a small foundation of capital—capital that was not in the form of gold but in the form of fiat greenback dollars and U.S. government debt.

Most industrial nations had backed away from gold convertibility by the 1950s, but the metal was still the symbolic unit of account in terms of national payments and private commercial transactions. By stating explicitly that the dollar was essentially interchangeable with gold, Bretton Woods vastly increased the global monetary base and created political possibilities for promoting economic growth that would not have been otherwise possible. Just as Lincoln used a vast expansion of the money supply and the issuance of debt to fund the Civil War, the cost of which approximated the U.S. gross national product of that era, the United States and Allied victors after World War II built the foundation of prosperity on old-fashioned money (gold) and debt (paper dollars). Civil War–era greenbacks originally bore interest to help make these “notes,” which were not backed by gold, more attractive to the public. But by 1945, the paper dollar had become de facto money for the entire world—one of many legacies of war.

Multilateral GSEs such as the World Bank and IMF fueled growth in the emerging world, while U.S. domestic growth in defense spending and later housing was driven by a growing number of domestic GSEs. “Created to rebuild Western Europe, the World Bank soon was eclipsed by the Marshall Plan and its appendages as West European capital markets recovered,” notes author and journalist Sol Sanders. “Looking for new fields to conquer, it turned to what then were unambiguously called undeveloped countries, entering its golden age under Eugene Black (1949–1963), a former Wall Street bond salesman.”

Carried by the demographic tsunami known as the baby boom, created when the “greatest generation” returned from the war, the U.S. economy fueled the rebuilding of European and Asian nations. The Marshall Plan supported growth in Europe while loans from the World Bank and IMF supported nations around the globe with everything from infrastructure loans to social-welfare schemes to explicit balance-of-payments financing—the latter something John Maynard Keynes would have condemned in a loud voice. Hardly a free trader, Keynes wrote in 1933:

I sympathize, therefore, with those who would minimize, rather than with those who would maximize, economic entanglement among nations. Ideas, knowledge, science, hospitality, travel—these are the things which should of their nature be international. But let goods be homespun whenever it is reasonably and conveniently possible, and, above all, let finance be primarily national.

With the dollar as the common currency of the free world, the United States led a marathon of economic stamina against the Warsaw Pact nations during the era known as the Cold War. Loans to nations of all sizes and descriptions fueled global growth and also supported the geopolitical objective of blocking the military expansion of the Soviet Union. Developing nations such as Mexico, Brazil and India became clients of the World Bank and IMF through large loans, causing periodic political and economic crises and currency devaluations by the 1970s.

When the Berlin Wall fell in 1989, it was not from force of arms by the NATO alliance but from the weight of spending and debt by the U.S. defense-industrial and multilateral-aid complex. As in World War II, the ability of America to outmatch the foe in terms of logistics and sheer weight of money—that is, credit—won the day over often-superior weapons and military forces. But while the United States won the Cold War in a geostrategic sense, the economic cost mounted enormously in terms of decades of debt issuance, accommodative monetary policy and extremely generous free-trade policies. Consumers felt the wasting effect of steady inflation, and the impact on American free-market values was corrosive in the extreme. To recall the allegory of George Orwell’s Animal Farm, all the politicians in Washington, regardless of affiliation, became pigs. In the 1970s, when Washington tried to manage the economy via price controls, “this initiative was not the handiwork of left-wing liberals but of the administration of Richard Nixon,” wrote Daniel Yergin and Joseph Stanislaw, “a moderately conservative Republican who was a critic of government intervention in the economy.”

Through the 1970s and 1980s, as core industries were stripped out of the United States and moved offshore, lost jobs were replaced with domestic-oriented service industries. Chief among these was housing, a necessary and popular area of economic activity that supports employment but does not create any national wealth. The first surge in real-estate prices, which was again driven by the demographic force of the baby boom, ended with the savings-and-loan crisis of the late 1980s. Several of the largest U.S. banks tottered on the brink of failure in the early 1990s. But these crises only presaged the subprime meltdown of the 2000s.

As domestic growth slowed and inflation reared its ugly head, Americans for the first time since the years following World War II began to feel constrained by debt and a lack of opportunity. But instead of succumbing to the constraints of current income, Americans substituted ever-increasing amounts of debt in order to maintain national living standards. Through the 1990s and 2000s, the United States used a toxic combination of debt and easy-money policy to maintain growth levels while a politically cowed, “independent” central bank pushed interest rates lower and lower to pursue the twin goals of full employment and price stability. Under Chairman Alan Greenspan, the Fed kept the party going in terms of nominal growth, even as American consumers actually lost ground to inflation in real wages, proof that the Fed’s dual mandate to foster both employment and stable prices is impossibly conflicted.

The use of debt to bid up the prices of residential real estate from the late 1990s through 2007 is yet another example of the determinative impact of demographics on the economic narrative. Federal spending financed with debt started to grow dramatically in the 1980s, while mandates for future social-welfare benefits likewise began to soar. Domestic industries continued to lose ground to imports, which were encouraged through now-institutionalized free-trade policies to preserve the myth of low domestic inflation for consumers.

As the debt dependence of the United States grew from the 1980s onward, the rest of the world benefited from the steady demand for goods and services needed to satiate American consumers. So long as America was willing to incur debt to buy foreign goods, the global financial system functioned via a transfer of wealth from the now-developed U.S. economy to the less developed nations of the world. And to a large extent, the model worked. Today, India, Mexico and Brazil have all repaid their once-problematic foreign debts, leaving agencies such as the World Bank and IMF seemingly out of a job. The question remains how to turn the success of the new world as an export-oriented platform into a stable, competitive marketplace among global industries and nations.

IN A December 2011 comment in Project Syndicate, Mohamed El-Erian of PIMCO wrote:

A new economic order is taking shape before our eyes, and it is one that includes accelerated convergence between the old Western powers and the emerging world’s major new players. But the forces driving this convergence have little to do with what generations of economists envisaged when they pointed out the inadequacy of the old order; and these forces’ implications may be equally unsettling.

El-Erian points to a most troubling aspect of the Old Order in global finance—namely, that much of it was a function of war, demographics and other factors far removed from the minds of today’s world leaders. Whereas after World War II there was a strong international consensus behind coordinated government planning when it came to global finance, today the resurgence of neoliberal thinking makes such concerted action unlikely. At the time of Bretton Woods, respected icons of the Old Order like Henry Morgenthau called publicly for government control of the financial markets; today, such views would be ridiculed as retrograde.

Yet even now, the blessed age of globalization—including support for free markets and free trade—may be receding after decades of torrid economic expansion around the globe driven by easy money and debt. “The aging of the baby boom will redirect spending toward domestically provided services and away from foreign supplied gadgetry,” one senior U.S. official said in comments for this article. “The same is true in other industrial countries. Export-led growth is overrated.”

Through the subprime-mortgage crisis in the United States since 2007 and the subsequent descent of the EU nations into financial crisis, the dollar has remained the only currency in the world that investors trust as a means of exchange, despite America’s massive public debt. Even though the Old Order built around the dollar is in the process of disintegrating, there is simply no obvious alternative to the greenback as a means of exchange in the global economy, at least for now. As my friend and mentor David Kotok of Cumberland Advisors likes to say, “Being the reserve currency is not a job you ask for. It finds you.”

In any event, asking whether the dollar will remain the global reserve currency may be the wrong question. In practical terms, neither the euro nor any other currency is large enough to handle even a small fraction of global payments. The global energy market, for example, is too large for any currency other than the dollar to handle.

Furthermore, there are strong political reasons for the dollar’s preeminence. Far more solvent but also authoritarian nations such as Russia and China just don’t have the right combination of attributes to make their currencies a globally accepted means of exchange, much less a store of value. This fact still makes America the most attractive venue in the world for global commerce—and, yes, capital flight, albeit not a long-term store of value. But in order for the dollar to retain this privileged position, a great deal depends upon the United States turning away from years of ever-expanding government, ever-expanding debt and an ever-expanding money supply.

One of the great fallacies after World War II was that government needed to continue spending and borrowing in order to save the Allies from economic disaster, an offshoot of the Keynesian line of reasoning regarding state intervention in the economy. While choosing to rebuild the productive capacity of the world’s industrial states following the war was clearly the right policy up to a point, the resulting governmental expansion in all aspects of the U.S. domestic economy has sapped the long-term prospects of the world’s greatest market—and hence the global financial system.

Now the price has come due. Keynes and the other leading thinkers of the post–World War II era championed this leading role of government in economic affairs, but all ignored the fundamental truth that production and purchasing power are two entirely different things. Keynes believed, falsely, that purchasing power had to be kept high via government spending to support real production. But, as American economist Benjamin M. Anderson noted, “The prevailing view among economists . . . has long been that purchasing power grows out of production.”

Jobs created via productive economic activity increase the overall pool of wealth, but artificially augmenting consumer activity via government spending or monetary expansion merely slices the existing economic pie into ever-smaller pieces. Governments can use fiscal and monetary policy to encourage growth on the margins, but substituting debt-fueled public-sector spending or easy-money policies for basic economic activity is dishonest politically and madness in economic terms. Yet this is precisely the path championed by Keynes and recommended by most economists today. “We do not have to keep pouring more money into the spending stream through endless Government deficits,” argued economist and writer Henry Hazlitt in a 1945 editorial in the New York Times. “That is not the way to sound prosperity, but the way to uncontrolled inflation.” After living through almost a century of Keynesian-fueled boom and bust, the admonition of Hazlitt and other members of the free-market school is one that we would do well to heed today.

But it won’t be easy. As Friedrich Hayek wrote on this subject:

I do not think it an exaggeration to say that it is wholly impossible for a central bank subject to political control, or even exposed to serious political pressure, to regulate the quantity of money in a way conducive to a smoothly functioning market order. A good money, like good law, must operate without regard to the effects that decisions of the issuer will have on known groups or individuals. A benevolent dictator might conceivably disregard these effects; no democratic government dependent on a number of special interests can possibly do so.

Hayek’s observation really gets to the fundamental issue facing Americans—namely, that changing course after almost seven decades of economic indulgence following WWII will be a domestic political challenge of the first order. Limiting public spending and monetary policy may ultimately force a political change in America in much the same way that Germany is now imposing fiscal austerity on the peripheral states of the EU via entirely nondemocratic means.

IF AMERICA can restrain its libertine impulses and get its fiscal house in order, the reality of an open, free-market, democratic system will continue to make the dollar among the most desirable asset classes in the world. But perhaps the real question is whether America will remain a free, open and democratic society in an environment of lower economic growth and expectations. After seven decades of using debt and inflation to pull future jobs and growth into the present, the prospect of less opportunity raises the specter of domestic political turmoil in the United States and other nations. Internationally, the result could be turmoil and war. This is not merely a short-run political challenge for Washington but ultimately threatens to challenge the self-image of American society. How will Americans react to seeing their children facing declining prospects for employment and home ownership?

That in turn raises a question of whether declining living standards in the United States could eventually force a geopolitical withdrawal by Americans from the world stage. Allied nations from the UK to Israel to South Korea and Japan may soon see an end to unconditional American military and economic support.

America remains a very young, fluid country that is still trying to figure out its place in the world. While one hopes that the ethic of open borders and open markets that helped the world recover from World War II continues, Americans will be under great pressure in coming years to turn inward and may eventually revisit protectionist and interventionist policies if economic pressures become sufficiently acute. It has happened before.

But there is still plenty of room for hope and perhaps even optimism about the shape of things to come. One key component of the new international order may be a mechanism to help overly indebted, mature societies in the United States and EU manage the adjustment process, in the same way that the emerging debtor nations of the 1980s have become engines of growth today. By giving the other nations of the world greater responsibility in managing the global financial system, we may be able to hasten the day when all nations trade in a global clearinghouse system based on the competitive position of each. The notion of a global currency is attractive in theory and goes back to some of the ideas of Keynes and others at Bretton Woods. But it remains to be seen whether investors want to embrace an ersatz global currency that is not connected to a dominant nation.

The reason that the dollar is the currency of choice in the free world is because of the American political system, not just economic or foreign-policy considerations. If that open system remains intact, the role of the dollar in the global financial system is unlikely to change. As I wrote in my 2010 book Inflated, if Americans gradually deal with the explosion of government in the post–New Deal era and steer the economic course back toward a more responsible fiscal formulation focused on individual rights and responsibilities, our future is quite bright. In that event, the dollar is likely to remain the center of the global financial system for some time to come.

But should America’s political leaders continue to embrace the policies of borrow and spend championed by Paul Krugman and other mainstream economists, the likelihood is that Washington will not be able to preserve the dollar’s special role. Just as nations cannot substitute inflation and debt for true production and employment without slowly destroying the underlying purchasing power of their people, America cannot continue to play a leading geopolitical role in the world if its domestic economy falters. And there seem to be few alternatives to the United States.

As America comes to accept that there are real limits on its economic and military power, the leading role of the dollar in the global economy eventually may have to end. In that event, the world will face a future with no single nation acting as the guarantor of global security and economic stability. Instead, we may see a world with many roughly equal nations competing for a finite supply of global trade and economic resources, precisely the situation that prevailed prior to World War I. The choice facing all societies going back to the Greeks, Romans, the British Empire and now America seems to lie between using inflation and debt to stimulate economic growth when real production proves inadequate and turning to war to create growth at the expense of others. Finding a way to avoid these two extremes is now the chief concern.
