Sunday, December 01, 2013
Why Western Thinkers Struggle With the Subcontinent
Foreign Affairs - November/December 2013 Issue
The Indian Ideology. By Perry Anderson. Verso, 2013, 192 pp. $19.95.
According to Perry Anderson’s new book, The Indian Ideology, India’s democracy -- routinely celebrated as the world’s largest -- is actually a sham. It is fatally compromised by its origins in an anticolonial struggle led by the “monolithically Hindu” Congress party, which Anderson holds largely responsible for the bloodiness of the partition of the British-ruled subcontinent in 1947. Anderson describes India’s most famous leader, Mahatma (“Great Soul”) Gandhi, as a crank and a “stranger” to “real intellectual exchange.” Jawaharlal Nehru, Gandhi’s political disciple and India’s first prime minister, was a mediocrity. And both of these upper-caste maladroits were considerably inferior to their sharpest critic, B. R. Ambedkar, the leader of the Dalits (low-caste Hindus) and the main framer of India’s constitution.
In Anderson’s telling, Nehru, who inherited the colonial “machinery of administration and coercion,” entrenched dynastic rule, thus blighting India’s political progress and failing to make an effort “to meet even quite modest requirements of social equality or justice” for the Indian poor. The much-vaunted secularism that Nehru bequeathed to India was nothing more than a cover for “Hindu confessionalism,” which is enforced to this day in the Muslim-majority valley of Kashmir, where Indian troops and paramilitaries enjoy a “license to murder” that is even broader than the one their British predecessors exercised during colonial times. Yet despite these compound flaws, liberal Indian intellectuals continue to “fall over themselves in tributes to their native land,” exalting what Anderson deems to be fabricated notions of its diversity, unity, secularism, and democracy.
Posted by MBI Munshi at 12/01/2013 07:37:00 am
Friday, November 15, 2013
London Review of Books – November 21, 2013
The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White and the Making of a New World Order by Benn Steil. Princeton, 449 pp, £19.95, February, ISBN 978 0 691 14909 7
When an ailing John Maynard Keynes travelled to the American South in March 1946, he was delighted by what he found. The ‘balmy air and bright azalean colour’ of Savannah offered a welcome reprieve from the cold and damp of London, he wrote on arriving, and the children in the streets were livelier company than the ‘irritable’ and ‘exceedingly tired’ citizens of postwar Britain. Keynes was in Savannah for the inaugural session of the board of governors of the International Monetary Fund and the World Bank, two institutions he had helped found at the Bretton Woods Conference of July 1944. He was desperate to persuade the Americans not to place the headquarters of the two institutions in Washington, where he feared they would function more as appendages of the American state than as truly international bodies, but their location in the American capital was all but a fait accompli, requiring only a handful of votes from the odd array of allies the US had assembled at the meeting. Keynes’s last effort to check the growth of American power had failed. He died six weeks later.
At the end of the Second World War, many thought that a lasting peace would be possible only if we learned to manage the world economy. The fact that the worst war in history had followed shortly on the heels of the worst economic crisis seemed to confirm that international political crisis and economic instability went hand in hand. In the 1940s, this was a relatively new way of thinking about interstate relations. Negotiations for the peace settlement after the First World War had largely skirted economic questions in favour of political and legal ones – settling territorial borders, for example, or the rights of national minorities. When Keynes criticised the peacemakers in 1919 for ignoring Europe’s economic troubles, and for thinking of money only in terms of booty for the victors, he was ahead of his time: ‘It is an extraordinary fact that the fundamental economic problems of a Europe starving and disintegrating before their eyes, was the one question in which it was impossible to arouse the interest of the Four,’ Keynes wrote in The Economic Consequences of the Peace, referring to the quartet of national leaders who shaped the Treaty of Versailles. Their indifference wasn’t much of a surprise: national leaders at the time had little direct experience in managing economic affairs beyond their own borders. The worldwide commercial system that had sprung up in the decades before the war had been facilitated largely through the efforts of private business and finance; the gold standard set the rules of exchange, but states mostly stayed out of the way, except when lowering trade barriers or enforcing contracts. When things went badly, they didn’t try to intervene.
This began to change after 1918. As Europe lurched from one economic crisis to another, it became obvious that more durable solutions needed to be found. The newly formed League of Nations took up the charge, fulfilling economic and financial functions well beyond what its founders had envisaged: rescuing Central and Eastern European states from postwar hyperinflation; collecting and interpreting statistical data; and calling a series of (largely unsuccessful) international conferences on trade and finance, most famously in London in 1933. By this point, Europe’s economies had begun to spiral out of control, as the Depression’s contagion spread quickly across the Continent and into Britain, destroying the banking systems of nearly every state it touched. Investors moved their money from country to country, leaving a trail of bank runs and currency crises. States raced to devalue their currencies in an escalating competition to achieve an advantage, and erected new barriers to trade and exchange. Global commerce collapsed, states turned inward, and empires contracted: the new German Reich and the Japanese Empire built regional economic blocs in the name of the fashionable ideal of national self-sufficiency, while Britain established a system of exclusionary and preferential trade with its colonies and dominions.
In the early years of the Second World War, when Allied and Axis planners both began to imagine what the postwar world might look like, the question of how to prevent a return to the economic chaos of the 1930s was uppermost in their minds. In a series of negotiations that began in 1941 and culminated in the Bretton Woods Conference of July 1944, British and American officials debated how to re-create a stable and open capitalist world economy in a way that would moderate the excesses of capitalism and give states more leeway to pursue national economic policies. What was obvious was the need for a new form of international governance: a permanent system, regulated by laws and overseen by international bodies, to manage the interaction of national economies. Nothing like it had ever existed before. The new, interventionist model of organised capitalism – that of the American New Deal state – was to be scaled up to the entire world.
This was no easy task: it required the rules of international finance to be completely rewritten. But when British and American officials opened discussions, they turned first to the politically charged question of trade. It was an inauspicious start. In the 1930s, the US State Department had been taken over by the zealous free trade philosophy of the then secretary of state Cordell Hull, and as the US edged closer to outright support for the British war effort in 1941 – and a bankrupt and war-weary Britain grew more desperate for US aid – an opportunity seemed to arise to put Hull’s laissez-faire visions into practice. Article VII of the Lend-Lease Agreement stipulated that, in exchange for American aid, Britain would agree to forego all future discrimination against US imports. In essence, this meant the abolition of Britain’s system of imperial preference. (One Conservative peer referred to the demand as the ‘Boston Tea Party in reverse’.) Many saw imperial preference not only as the economic backbone of empire, but also as a means of mitigating what promised to be Britain’s very weak postwar position: giving up its system of trade barriers and exchange controls threatened to expose Britain to cheap foreign competitors, and leave it fatally attached to the US if the latter went into recession after the war (as many predicted). Keynes reflected a common British feeling when he remarked in March 1941 that US officials were treating Britain ‘worse than we have ever ourselves thought it proper to treat the humblest and least responsible Balkan country’.
Were the British being bullied? Many Conservatives and imperialists believed so, and it’s one of the core assertions of Benn Steil’s new history of the negotiations leading up to Bretton Woods. On Steil’s view, Britain was pressured into making a ‘Faustian bargain’: give up the one thing that seemed to guarantee its continued existence as an empire in exchange for enough money and goods to make it through the war. According to Steil, the US pursued wartime economic negotiations in such a way as to guarantee its postwar dominance. ‘At every step to Bretton Woods,’ Steil writes, ‘the Americans had reminded [the British], in as brutal a manner as necessary, that there was no room in the new order for the remnants of British imperial glory.’
This is going a bit far, and Steil’s focus on imperial preference leads him to downplay the consensus British and American negotiators managed to achieve on the fundamental aims and priorities (if not the institutional details) of the new economic system. When the Anglo-American conversation shifted away from trade and towards the seemingly technical issues of currency and finance, progress towards a deal proceeded more smoothly. In August 1941, Keynes, now adviser to the chancellor and leading postwar economic planning, returned from negotiations over Lend-Lease in Washington to draft plans for a new international monetary regime. Over the course of several meetings from the summer of 1942, Keynes and his American counterpart, the economist and US Treasury official Harry Dexter White, traded blows over how to rewrite the monetary rules of the international economy. They made curious sparring partners: Keynes, the world-famous economist and public intellectual, pitted against White, an obscure technocrat and late-blooming academic born to working-class Jewish immigrants from Lithuania and plucked by the US Treasury from his post at a small Wisconsin university. Neither seemed to enjoy the company of the other: Keynes was disdainful of what he saw as the inferior intellect and gruff manners of the ‘aesthetically oppressive’ White, whose ‘harsh rasping voice’ proved a particular annoyance. Keynes, meanwhile, was the archetype of the haughty English lord; as White remarked to the British economist Lionel Robbins, ‘your Baron Keynes sure pees perfume.’
Squabbles aside, the two men ended up largely in agreement about the basic aims of the new international monetary system: to stabilise exchange rates; facilitate international financial co-operation; prohibit competitive currency depreciations and arbitrary alterations of currency values; and restrict the international flow of capital to prevent the short-term, speculative investments widely believed to have destabilised the interwar monetary system. They also agreed on the need to establish a new international institution to provide financial assistance to states experiencing exchange imbalances and to enforce rules about currency values (what would become the International Monetary Fund), and another to provide capital for postwar reconstruction (the future World Bank). A closely managed and regulated international financial system would replace the unco-ordinated and competitive system of the interwar years. And with currencies stabilised – so they hoped – world trade could be resumed.
The main disagreement between Keynes and White had to do with the remit of the new international institutions. Keynes’s plans, which reflected Britain’s interests and needs as a debtor, called for the establishment of an International Clearing Union – basically, a world central bank. This would issue an artificial international currency called ‘bancor’ to settle payments imbalances between states, and allocate funds from creditor countries (like the US) to debtor countries (like Britain) to facilitate this process. White’s proposed Stabilisation Fund would issue loans to countries towards a similar end, but would make fewer demands on creditor countries – and would be designed in a way that allowed the US a greater say in how it operated. In the autumn of 1943, when British negotiators agreed under American pressure to give up Keynes’s idea of a clearing union, the outlines of a final deal fell quickly into place. By April 1944, a compromise – one that reflected White’s plans more closely than Keynes’s – was on the table. The following month, Roosevelt’s government formally invited most of the non-Axis states to an international conference in the United States to hammer out details and design the new institutions.
One of the most innovative aspects of the Anglo-American deal was the fact that it prioritised the need for full employment and social insurance policies at the national level over thoroughgoing international economic integration. To this extent, it was more Keynesian than not – and it represented a dramatic departure from older assumptions about the way the world’s financial system should function. Under the gold standard, which had facilitated a period of financial and commercial globalisation in the late 19th and early 20th centuries, governments had possessed few means of responding to an economic downturn beyond cutting spending and raising interest rates in the hope that prices and wages would drop so low that the economy would right itself. Populations simply had to ride out periods of deflation and mass unemployment, as the state couldn’t do much to help them: pursuing expansionary fiscal or monetary measures (what states tend to do today) would jeopardise the convertibility of the state’s currency into gold. For these reasons, the gold standard was well suited to a 19th-century world in which there were few organised workers’ parties and labour unions, but not so well suited to a messy world of mass democracy. The Keynesian revolution in economic governance gave the state a set of powerful new tools for responding to domestic economic distress – but they wouldn’t work as long as the gold standard called the shots.
What was needed, and what both Keynes and White wanted to establish, was a system of fixed but adjustable exchange rates, which would allow states to make domestic policy without worrying too much about how it would affect their international economic position. Along with capital controls, this system would work to stabilise currencies, as the gold standard had done, but in a way that gave states more breathing space to pursue the interventionist and welfarist techniques of national economic management that had recently come into vogue across the Atlantic world. The compromise that Keynes and White reached was based on this fundamental insight, and reflected what had become a new (if fleeting) consensus: that the state owed its citizens basic economic security. By telling the story of the road to Bretton Woods as one primarily of conflict and competition, Steil gives short shrift to this larger story, and misses how remarkable it was that these two imperial powers, in the middle of the Second World War, agreed to rewrite the rules of global capitalism to make the world safe for the interventionist Keynesian state.
The convocation of the United Nations Monetary and Financial Conference, where the Keynes-White deal was to be approved, was scheduled for 1 July 1944, three weeks after the Allied invasion of Normandy. Besides the obvious challenges this date posed for transatlantic travel, Washington had a reputation for its unforgiving summer weather. Keynes, already in poor health, insisted that the meeting take place somewhere cooler, preferably in the Rocky Mountains. On the advice of the State Department, the US Treasury secretary Henry Morgenthau selected the lavish Mount Washington Hotel in the small New Hampshire village of Bretton Woods. The hotel, the largest building in the state, was theoretically well suited to house the 730 delegates, representing 44 countries, that Roosevelt had invited to the conference: it boasted its own power plant and post office, an 18-hole golf course, a church, beauty parlour and barber shop, a bowling alley and two cinemas. The Mount Washington had been shuttered for two years of the war, and was in complete disarray when the guests began to arrive; an undermanned hotel staff was forced to ask for help from a group of Boy Scouts and military personnel. (Last-minute preparations had been so anxiety-inducing for the hotel manager that he locked himself in a room with a case of whiskey.)
When the delegates showed up on 1 July, a senior manager of the Mount Washington was reportedly shocked to find himself among a ‘gathering of Colombians, Poles, Liberians, Chinese, Ethiopians, Russians, Filipinos, Icelanders and other spectacular people’. Getting to the US hadn’t been easy: security precautions in the Atlantic for the D-Day invasion had slowed the journey of the British and European representatives, and wartime conditions had kept many states from sending their official economic ministers. (The representative of Guatemala, Manuel Noriega Morales, was a graduate student in economics at Harvard.) On 30 June, special sleeper trains (referred to as ‘the Tower of Babel on wheels’) departed from Atlantic City and Washington, carrying many of the world’s most powerful finance ministers, economic experts, lawyers and politicians north to the White Mountains.
At the opening of the conference, the main attraction seemed to have been Keynes himself. According to Lionel Robbins, he ‘was photographed from at least 50 angles … Lord Keynes standing up; Lord Keynes sitting down; Lord Keynes in plan; Lord Keynes in elevation; and so on and so forth.’ When the actual work began, its pace was feverish (too much so for Keynes, who suffered a minor heart attack two weeks in). At the top of the agenda was the establishment of the two new international bodies. There was significant dissent from some of the non-Western countries about the way global hierarchies would be determined within the new system. The most controversial issue was the voting power each country would have at the IMF – crucial for determining who got to select its directors and perceived as a matter of prestige. Original transcripts of the conference’s proceedings show how passionately some of the poorer countries, such as China and India, fought to increase their representation. As one Indian delegate in the IMF Commission argued (ultimately unsuccessfully),
It is not merely the size of India; it is not merely the population of India – and I may say that one out of every four of the people represented at this conference is an Indian – it is that on purely objective economic criteria, India feels that she is an extremely important part of the world and will probably be an even more important part in the years to come.
Plans for international economic development were also controversial, as delegates from India and elsewhere fought for the IMF to commit itself to an ambitious postwar programme of development for ‘economically backward countries’. To little avail: these new institutions were designed with American and European interests in mind.
American dominance over the system was guaranteed by another crucial fact: in 1944, the US dollar was the only currency available widely enough to facilitate international exchange under the new ‘gold exchange standard’. This was intended to be a modified version of the gold standard which, in practice, would allow states to adjust their currency values against the dollar as they saw fit (depending on whether they prioritised economic growth, for example, or controlling inflation), with the value of the dollar convertible into gold at a fixed rate of $35 an ounce. What this meant was that, after the end of the war, the US dollar would effectively become the world’s currency of reserve – which it remains to this day (although it’s no longer pegged to gold). This arrangement would give the US the privilege of being indebted to the world ‘free of charge’, as Charles de Gaulle later put it, but would work only as long as the US saw maintaining gold convertibility as working in its national interest. Harry Dexter White apparently hadn’t envisaged a scenario in which it wouldn’t, but this eventually happened in the 1970s, when deficits from financing the Vietnam War piled so high that the US began to face a run on its gold reserves. In 1971, Richard Nixon removed the dollar’s peg to gold – effectively bringing Bretton Woods to an end – rather than raising interest rates to staunch the outflow of gold, which would probably have caused a recession (with an election on the horizon). Before this, the track record of the gold exchange standard had been pretty good: the years of its operation had seen stable exchange rates, unprecedented global economic growth, the rebirth of world trade and relatively low unemployment. This period also saw the emergence of many different models of the welfare state – in Europe, the United States and Japan – just as the economists behind Bretton Woods had intended.
What should we make of the fact that one of these economists was a Soviet mole? Along with his more famous colleagues Alger Hiss and Whittaker Chambers, White was responsible for passing sensitive American documents to the Soviets during the war, mostly concerning information he thought would help the Soviet Union secure a sizable postwar loan. White’s actions landed him before the House Un-American Activities Committee in 1948, but the extent of and reasons for his collaboration remain unclear. Most historical accounts see him as having played a relatively minor role, his activities driven less by ideology than by the desire to aid a wartime ally. If postwar peace depended on continued US-Soviet co-operation, the thinking went, it was better to keep the Soviets on side. Steil sees something more sinister at work, and much of his book dwells on White’s Soviet connections. He claims to have found new evidence linking White’s public activities on behalf of the American state with his private support for the Soviet Union: an undated and unpublished note from White’s papers at Princeton, in which he praised the Soviet model of socialist planning and suggested the US continue to pursue a strong military alliance with Stalin’s state. But the vague sympathy White expressed for Soviet planning was not uncommon among the American political elite during the 1930s and 1940s, before the orthodoxies of the Cold War hardened. And, as Steil himself admits, the actual plans White developed showed no influence of Soviet economic ideas, and his courting of the Soviets, who ultimately refused to sign up to the Bretton Woods system, ‘meant little in the end’. If that’s the case, why all the redbaiting?
Steil’s fixation on White’s Soviet sympathies is in keeping with the general tenor of the book and its focus on the conflictual origins of Bretton Woods. Steil makes it clear that he’s not a fan of Keynes and White’s creation, and seems to belong to what he refers to as the ‘small but passionate constituency’ that believes in a return ‘to some form of global gold standard’. Bretton Woods, on this view, was a foolhardy attempt to replace the strict rules of the deterritorialised and apolitical gold standard with a world economic system that relied on the discretion of national policy-makers. The state got involved where it had been rightly absent, and was given more sovereignty over national economic policy than it had ever had. Such nostalgia for the gold standard is a position outside the mainstream of contemporary economics, and tends to be found on the US libertarian right: Ron Paul made a return to the gold standard part of the platform of his 2008 and 2012 presidential runs. Steil cites the economists Friedrich Hayek and Jacques Rueff as his authorities on the matter. What neoliberals like these two admired about the gold standard was the discipline it imposed on the state: following its rules promised to keep inflation and public debt in check, and to facilitate a nation’s integration into a global commercial system. But it did so at the cost of preventing the state from doing much at all in response to the economic distress of its citizens. As the great American Keynesian Alvin Hansen described life under this kind of arrangement, ‘If it gave us good times, we were thankful. If it gave us bad times, we accepted this as an inevitable concomitant of a system of free enterprise.’ When it failed to satisfy human needs, Hansen wrote, ‘we accepted the result with a stern, ascetic fatalism.’ The designers of Bretton Woods decided that the modern nation state could do better than this – and designed a system intended to buffer its citizens from the turbulence of world economic conditions.
Late 20th-century globalisation was set in motion just as the Keynesian consensus was falling apart. States were pressured to put on what Thomas Friedman has referred to as the ‘golden straitjacket’ of market-friendly policies (one size fits all) to maintain national competitiveness in a liberalising world economy. Apologists for Keynes and White’s creation argue that a new Bretton Woods-style regulatory system could soften the sharper edges of globalisation in the interests of the state and national welfarism. Steil’s critique of the Bretton Woods system is intended, in part, to dampen their enthusiasm: there will always be an irreconcilable tension between national discretion over economic policy, he writes, and adherence to multinational rules. This is what doomed the Bretton Woods system and what will doom any successor. A durable system of international monetary co-operation, he argues, can’t exist without some form of a gold standard that prevents states from fiddling with their currencies. (He suggests that a digital analogue to gold could serve this function in the future.)
But Bretton Woods was also a system designed to guarantee that one power, the US, would have a disproportionate say in dictating the rules by which the postwar world economy would be managed. The American dollar was enthroned as the global currency of reserve, and the Bretton Woods institutions built in such a way as to ensure the US would have the greatest say in how they operated. Keynes was right in Savannah in 1946 to worry that these bodies would come to function less as internationalist institutions than as tools for ensuring US dominance. But things didn’t play out exactly as he’d envisaged. It was only after the end of the Bretton Woods system in the 1970s that the IMF and World Bank took on the roles for which they’d become infamous by the century’s end – as strict enforcers of the harsh rules of a liberalised international economy directed from Washington. From this point on, states in receipt of IMF assistance were pressured to follow a standard set of disciplinary and liberalising policy prescriptions: remove capital controls and tariffs, privatise, deregulate, break up unions and rein in public debt. Neoliberal orthodoxies replaced the statist and Keynesian ideas which had originally governed these institutions. In the 1980s and 1990s, the IMF became the handmaiden of the Washington Consensus, insisting that the world’s diverse national economies be reshaped according to its austere rules. This was not what Keynes and White had in mind.
Posted by MBI Munshi at 11/15/2013 06:10:00 pm
Friday, November 08, 2013
Carnegie Endowment for International Peace - NOVEMBER 7, 2013
The downfall of Egypt’s elected Islamist president, Mohamed Morsi, in July 2013 has not resulted in the separation of religion and state in the country. Indeed, something quite different seems to be occurring: religion is being nationalized. Under the leadership of al-Azhar—a complex of Islamic schools, university faculties, and research institutes—the country’s religious establishment appears to be coalescing internally, aligning itself firmly with the post-Morsi road map, and asserting its leadership of religious life throughout Egypt.
That will be good news to many who view Ahmed el-Tayeb, the current grand sheikh of al-Azhar, as an enlightened figure, but it is already causing controversy among Salafists, the Muslim Brotherhood, and others in Egypt. In the past, al-Azhar’s struggles for centrality in Egypt’s religious life and for autonomy from the state have sometimes worked against each other. Over the years, pushing for greater influence has embroiled al-Azhar in political clashes that have affected its coherence and independence. Now, el-Tayeb’s dexterity, combined with emergent institutional changes in the religious establishment, offers the possibility that al-Azhar can finally pursue all its goals simultaneously.
Given these developments, it seems clear that the result of Egypt’s post-Morsi political reconstruction will be a state that weaves religious structures into its bureaucratic fabric every bit as much as it did in the past. Islam will hardly be excluded from public life. But the vision of Islam that is emerging is more coherent and more susceptible to guidance by al-Azhar’s senior leadership.
AL-AZHAR AFTER MUBARAK
Since the time of Muhammad Ali in the 1800s, Egypt’s leaders have regarded al-Azhar as an influential tool in shaping and promoting the government’s domestic and foreign policies. Accordingly, they have gradually extended their control over the institution.
Then-president Gamal Abdel Nasser moved ambitiously to reorganize al-Azhar through Law 103 of 1961, which placed the entire institution and its endowments under the formal jurisdiction of the Ministry of Religious Endowments. The same law also made the appointment of the grand sheikh the prerogative of the Egyptian president, just like the appointment of any other state official. In subsequent years, the regime worked to ensure that al-Azhar would act as a strong counterbalance to the growing religious influence of both internal forces such as the Muslim Brotherhood and Salafists and external forces like Saudi Arabia’s Wahhabism.
The social and political vacuum in Egypt that followed the fall of then-president Hosni Mubarak in February 2011 created space for al-Azhar to escape this tight control. Although al-Azhar was in some ways above day-to-day politics, it was still part of the Egyptian state. And it took advantage of the new political context to push for greater autonomy.
Al-Azhar presented a contrast to rising Islamist political groupings, including the Muslim Brotherhood’s Freedom and Justice Party and Salafist groups like the Nour Party. Unlike the Islamists, al-Azhar was scholarly and not mired in politics. Unlike the Salafists, its approach to religion could be presented as more consistent with the needs of a twenty-first-century society. And unlike both, Ahmed el-Tayeb posed as a promoter of consensus, leading national dialogues and issuing widely supported statements and documents to guide the tumultuous political process.
In 2012, al-Azhar’s break from state control was formalized to a degree. The Supreme Council of the Armed Forces, the body that governed Egypt after Mubarak’s fall, made a hasty move days before the first meeting of the parliament that had been elected in late 2011 and early 2012. It unilaterally promulgated amendments to the 1961 law that effectively granted al-Azhar quasi-independent status. This status was then reinforced by the 2012 constitution, which stipulates that al-Azhar is “an encompassing independent Islamic institution with exclusive autonomy over its own affairs.” The amendments to the 1961 law restored the Council of Senior Scholars of al-Azhar and the council’s right to elect the grand sheikh and nominate the mufti. The 2012 constitution mandated that the council be consulted on matters of Islamic law.
These changes offered more than autonomy; they also cemented el-Tayeb’s position within the institution. He was allowed monopoly control over the initial composition of the Council of Senior Scholars. Because the council was given the right to nominate the mufti, a traditional rival position was effectively brought within al-Azhar’s orbit. Moreover, the amendments confirmed that the law was intended to keep the existing leadership of al-Azhar in place and to empower el-Tayeb to act in the institution’s best interests without further internal consultation, save with those the grand sheikh himself designated.
Despite the major steps made toward autonomy, there is still some distance to travel. The amendments failed to address the issue of financial independence—al-Azhar remains dependent on the government in this area. Other possible reforms within al-Azhar—such as long-expected attempts to develop its educational curriculum—have also had to wait.
THE RISE AND FALL OF BROTHERHOOD RULE
Morsi’s election as president in June 2012 represented a possible challenge to the new arrangements concerning al-Azhar’s relationship to the Egyptian state. Morsi was backed by the Muslim Brotherhood. So his rise meant that a religious movement formerly independent of the state—the Brotherhood—was taking the political reins of power through the Freedom and Justice Party. The Brotherhood also had some support among the faculty and especially the student body of al-Azhar.
The differences between the Brotherhood and al-Azhar were not necessarily doctrinal—the Brotherhood, after all, claimed to be a centrist movement and had long called for al-Azhar’s independence, the resurrection of the Council of Senior Scholars, and a restoration of al-Azhar’s prestige. But many al-Azhar leaders, in particular el-Tayeb, clearly regarded the Brotherhood as more of a political movement than a religious one. They were suspicious that the Brotherhood would gradually place its own figures in key positions in the state religious establishment. And indeed, while charges that Morsi was “Brotherhoodizing” the Egyptian state were often exaggerated, there does seem to have been some attempt by the Ministry of Religious Endowments to fill state ranks with Brotherhood figures.
But as Morsi settled into the presidency, the leadership of al-Azhar avoided a full confrontation. Indeed, it maintained a generally cordial public relationship with the presidency. Some tiffs occurred—such as a perceived snub of el-Tayeb at Morsi’s inauguration—but no clashes followed. On some occasions, when the Muslim Brotherhood issued a statement on women’s rights, for example, or when the upper house of parliament passed a law on Islamic financial instruments, al-Azhar’s leadership used the opportunities to present its independent voice. But it never did so in a tone that suggested a direct challenge.
Yet by the end of June 2013, as public discontent with Morsi’s rule ran high, the grand sheikh apparently felt he could no longer stand above the brewing confrontation in Egyptian politics. On July 3, Minister of Defense Abdel Fattah el-Sisi announced that Morsi had been deposed and the 2012 constitution suspended. El-Tayeb (along with the Coptic pope) was by el-Sisi’s side, clearly endorsing the move. Al-Azhar’s position as a symbol of national unity and consensus was a critical part of the military’s pitching of the ouster as a broad public rejection of Brotherhood rule rather than a military coup.
Al-Azhar strove to make the move appear nonpolitical and a continuation, rather than a repudiation, of its position of standing above the fray. In el-Tayeb’s telling, the institution faced a choice between two harmful courses and took the less damaging one: endorsing Morsi’s removal from power and supporting the Egyptian military’s transition plan. Yet, despite the enduring respect for al-Azhar among the majority of Egyptians, its credibility and neutrality were tainted, at least in the eyes of Morsi and Brotherhood supporters.
Since Morsi’s ouster, al-Azhar has called for a comprehensive and inclusive national dialogue to shape the transition’s political agenda and build consensus around it. However, these calls have not produced any concrete results, in part because the Brotherhood’s Freedom and Justice Party boycotted the dialogue, blaming al-Azhar for siding with the coup leaders. The other reason for the dialogue’s failure is the lack of interest from the military and the interim government in making any concessions or even discussing elements of the transition plan. In addition, almost immediately after Morsi’s ouster, the interim government and media began championing a “war against terrorism” campaign against the Muslim Brotherhood, which has made it even harder to bring the differing parties together.
In response, an alliance of the Muslim Brotherhood and some Islamists has been calling for a restoration of the 2012 constitution and decrying the July 3 regime change. These forces have been leading weekly rallies and protests to condemn the coup leaders and the interim government.
Last August, large public demonstrations were forcibly dispersed, and they have since subsided. But campus protests have become a daily occurrence since the beginning of the academic year.
Interestingly, they have been most marked at al-Azhar University, an institution whose students are far less hostile to Islamists than those on other campuses. Since classes began on October 19, student demonstrators have called for el-Tayeb to be dismissed and have denounced the coup against Morsi. University officials have dealt uncertainly with the protesters, whom they decry as an unrepresentative minority of students. But the close proximity of the al-Azhar campus to the Rabaa al-Adawiya mosque, where pro-Morsi protesters camped out before being forcefully dispersed, fuels the students’ motivation to march from the campus and attempt to stage sit-ins. Despite warnings from the university’s president against politics on campus, protests escalated to the extent that the university called in the police on October 30.
Whether it wanted to or not, al-Azhar has thus been caught up in the post-2011 political upheavals. And it has been divided internally as well, as the student demonstrations illustrate most forcefully.
UNITING THE RELIGIOUS SECTOR AFTER MORSI
Despite the short-term tumult, over the longer term, the institution may reap handsome rewards from the post–July 3 environment.
In the months since Morsi’s overthrow, there have been two notable developments concerning the place of religious institutions in Egyptian public life: more unified leadership in the main state religious institutions and the promise of a greater role for these establishments in public life. These developments have primarily occurred not within al-Azhar but in the structure, personnel, and scope of the Ministry of Religious Endowments. Yet they offer a leading role for al-Azhar in shaping religious life in Egypt.
Long regarded as a foe of al-Azhar’s authority, the Ministry of Religious Endowments is now working with al-Azhar to implement regulations to recruit preachers, bring mosques under the ministry’s jurisdiction, and regulate the content of sermons and the issuing of fatwas. Last month, the minister of endowments, Mohamed Mokhtar Gomaa, declared that prayers would be allowed only in mosques controlled by the ministry and that only al-Azhar-qualified imams would be allowed to preach in mosques. The minister also stripped thousands of imams of their preaching licenses, closed mosques smaller than 80 square meters (860 square feet), which are often led by independent imams, and banned the collection in mosques of donations that “go to those who do not fear God.” What’s more, Gomaa ordered that the boards overseeing state-owned mosques be reformed.
As reasoning for the moves, he cited misuse of mosques during Morsi’s rule, incitement of violence and apostasy, and the recent clashes inside mosques because of the polarized political situation. Moreover, according to a source within the ministry, the minister before Gomaa had appointed Muslim Brotherhood members to high-level positions. Gomaa dismissed these Brotherhood appointees, and afterward some observers called for second- and third-tier ministry personnel to be removed as well on the basis of their membership in the Brotherhood and other Islamist groups.
These decisions stirred up controversy among preachers and religious groups. The Salafist Nour Party criticized the ministry’s move, calling for preachers to be chosen according to “scientific criteria, not loyalty to the authorities or security considerations.”
While the Ministry of Religious Endowments’ steps may seem simply an attempt to dismantle and weaken the Muslim Brotherhood and other Islamist groups, the situation is in reality more complicated. The regulations in fact suggest a plan to nationalize religious practice in Egypt.
But the plan may not be feasible. This is hardly the first time the Egyptian government has tried to extend its supervision over the country’s mosques; past attempts have foundered because the task is immense. How can the ministry ensure that the mosques are following the preaching guidelines or that Friday prayers are only taking place in mosques bigger than 80 square meters? How can the ministry deal with the “popular” preachers who do not have a degree from al-Azhar? Is the ministry going to seek help from the police to implement policies and arrest the violators? Another concern is the backlash these policies might invite from groups that have long been building, teaching, and preaching in mosques all over Egypt.
In a recent interview, an al-Azhar official acknowledged the challenges of carrying out these policies. The institution fully supports the changes pursued by the new minister, viewing them as long-needed reforms that were delayed by the indifference of previous governments and of the former religious leadership at al-Azhar and the ministry. But al-Azhar also recognizes the magnitude of the task.
Al-Azhar’s intention is still modest: not to control the religious apparatus, only to regulate and promote its centrist interpretation of Islam. As a first step, the ministry and al-Azhar decided to establish a Supreme Council for Preaching under the leadership of the grand sheikh. The council would be responsible for training imams and preachers and overseeing all matters related to preaching. Gomaa, who was a member of the grand sheikh’s technical office until his appointment, has been leading efforts to send al-Azhar-educated preachers into remote villages and communities, like Upper Egypt and North Sinai, to overcome the influence of other extreme visions of Islam.
Nonetheless, the most visible forum for al-Azhar’s new predominant role is the Committee of 50, which is currently working on a comprehensive revision of Egypt’s constitution. Al-Azhar has three representatives on the committee (the mufti and two others), and those representatives are in a far more powerful position than their predecessors were in the 2012 drafting body.
In 2012, al-Azhar was in more of a defensive and reactive position, seeking to defend the institution’s interests and vision while non-Islamist, Salafist, and Brotherhood committee members battled over various religious clauses. The result was a document that gave the institution more than it might have wished for. One clause gave al-Azhar a consultative role over issues relating to the Islamic sharia principles, while another adopted al-Azhar’s definition of those principles.
Such a powerful role seems a bit more formal than what the body’s current senior leaders desire. They seek supreme moral authority, not definitive and codified political authority. In the 2013 constitution, the clauses in question are likely to be watered down or eliminated, hardly reducing al-Azhar’s influence but making it less a matter of constitutional text.
AL-AZHAR AND THE NEW ORDER
However gradual its efforts and however modest its stated vision, al-Azhar is now leading Egypt’s religious establishment into a new era. Traditional rival institutions have been brought into far tighter coordination, and the grand sheikh and the Council of Senior Scholars stand at the head of the more unified apparatus. The mufti and the Ministry of Religious Endowments work more closely with al-Azhar, and all seem to acknowledge the moral authority of al-Azhar’s approach to Islam.
Not all dissident voices have been silenced—the demonstrations by al-Azhar students and the continued presence of Salafists in public debates make that clear. But the position of the grand sheikh and al-Azhar remains secure, with dissident voices losing some of their force and influence without being suppressed.
Perhaps the most telling symbol of the new order is Yusuf al-Qaradawi, one of the most prominent Islamic scholars active today. Although Egyptian and trained by al-Azhar, al-Qaradawi is based in Qatar. He has championed his own centrist approach to Islam but makes no secret of his support for the Muslim Brotherhood. Al-Qaradawi, characteristically unrestrained, was openly critical of Morsi’s overthrow, and he extended his criticism to the position taken by el-Tayeb. But al-Qaradawi also sits on al-Azhar’s Council of Senior Scholars. Though the council reacted with outrage to his comments and even met to discuss how to respond to them, in the end, al-Qaradawi retained his seat. He shows some signs of having tamed his voice but remains isolated within that body.
Not only has al-Azhar gained coherence; it has also been able to enhance its prestige and centrality. El-Tayeb’s appearance at el-Sisi’s July 3 announcement may have appeared to be a political move, but it also cemented al-Azhar’s position as the conscience of the Egyptian nation.
For Egyptians of a wide variety of stripes, al-Azhar represents the true and best face of Islam as it is understood and practiced in Egypt. Those opposed to Islamist rule have rallied around al-Azhar as an alternative and have reacted positively to al-Azhar’s enhanced post–July 3 voice as a repudiation of the Brotherhood. The Brotherhood bears considerable resentment toward el-Tayeb, but it still claims to support al-Azhar as an institution.
The current moment is one of tremendous opportunity for al-Azhar. The institution seems to be on the brink of achieving more autonomy and influence than it has ever had in the modern era.
Posted by MBI Munshi at 11/08/2013 11:55:00 am
Thursday, October 17, 2013
By Robert D. Kaplan
STRATFOR - OCTOBER 16, 2013
What is a dictator, or an authoritarian? I'll bet you think you know. But perhaps you don't. Sure, Adolf Hitler, Joseph Stalin, and Mao Zedong were dictators. So were Saddam Hussein and both Hafez and Bashar al Assad. But in many cases the situation is not that simple and stark. In many cases the reality -- and the morality -- of the situation is far more complex.
Deng Xiaoping was a dictator, right? After all, he was the Communist Party boss of China from 1978 to 1992. He was not elected. He ruled through fear. He approved the massacre of protesters at Tiananmen Square in Beijing in 1989. But he also led China in the direction of a market economy that raised the standard of living and the degree of personal freedoms for more people in a shorter period of time than perhaps ever before in recorded economic history. For that achievement, one could arguably rate Deng as one of the greatest men of the 20th century, on par with Winston Churchill and Franklin D. Roosevelt.
So is it fair to put Deng in the same category as Saddam Hussein, or even Hosni Mubarak, the leader of Egypt, whose sterile rule did little to prepare his people for a more open society? After all, none of the three men were ever elected. And they all ruled through fear. So why not put them all in the same category?
Or what about Lee Kuan Yew and Zine El Abidine Ben Ali? During the early phases of Lee's rule in Singapore he certainly behaved in an authoritarian style, as did Ben Ali throughout his entire rule in Tunisia. So don't they both deserve to be called authoritarians? Yet Lee raised the standard of living and quality of life in Singapore from the equivalent of some of the poorest African countries in the 1960s to that of the wealthiest countries in the West by the early 1990s. He also instituted meritocracy, good governance, and world-class urban planning. Lee's two-volume memoir reads like the pages in Plutarch's Lives of the Noble Grecians and Romans. Ben Ali, by contrast, was merely a security service thug who combined brutality and extreme levels of corruption, and whose rule was largely absent of reform. Like Mubarak, he offered stability but little else.
You get the point. Dividing the world in black-and-white terms between dictators and democrats completely misses the political and moral complexity of the situation on the ground in many dozens of countries. The twin categories of democrats and dictators are simply too broad for an adequate understanding of many places and their rulers -- and thus for an adequate understanding of geopolitics. There is surely a virtue in blunt, simple thinking and pronouncements. Simplifying complex patterns allows people to see underlying critical truths they might otherwise have missed. But because reality is by its very nature complex, too much simplification leads to an unsophisticated view of the world. One of the strong suits of the best intellectuals and geopoliticians is their embrace of complex thinking and their attendant ability to draw fine distinctions.
Fine distinctions should be what geopolitics and political science are about. That means recognizing a world in which, just as there are bad democrats, there are good dictators. World leaders in many cases should not be classified in black-and-white terms, but in many indeterminate shades, covering the spectrum from black to white.
Nawaz Sharif and his rival, the late Benazir Bhutto, who alternately ruled Pakistan in the 1990s, were terrible administrators. They were both elected by voters, but each governed in a thoroughly corrupt, undisciplined, and unwise manner that made their country less stable and laid the foundation for military rule. They were democrats, but illiberal ones.
The late King Hussein of Jordan and the late Park Chung Hee of South Korea were both dictators, but their dynamic, enlightened rules took unstable pieces of geography and provided them with development and consequent relative stability. They were dictators, but liberal ones.
Amid this political and moral complexity that spans disparate regions of the Earth, some patterns do emerge. On the whole, Asian dictators have performed better than Middle Eastern ones. Deng of China, Lee of Singapore, Park of South Korea, Mahathir bin Mohamad of Malaysia, and Chiang Kai-shek of Taiwan were all authoritarians to one degree or another. But their autocracies led to economic and technological development, to better governance, and to an improved quality of life. Most important, their rules, however imperfect, have overall better positioned their societies for democratic reforms later on. All of these men, including the Muslim Mahathir, were influenced, however indirectly and vaguely, by a body of values known as Confucianism: respect for hierarchy, elders, and, in general, ethical living in the here-and-now of this world.
Contrast that with Arab dictators such as Ben Ali of Tunisia, Mubarak of Egypt, Saddam of Iraq, and the al Assads of Syria. Ben Ali and Mubarak, it is true, were far less repressive than Saddam and the elder Assad. Moreover, Ben Ali and Mubarak did encourage some development of a middle class in their countries. But they were not ethical reformers by any means. Of course, Saddam and al Assad were altogether brutal. They ran states so suffocating in their levels of repression that they replicated prison yards. Rather than Confucianism, Saddam and al Assad were motivated by Baathism, a half-baked Arab socialism so viciously opposed to Western colonialism that it created a far worse tyranny of its own.
Beyond the Middle East and Asia there is the case of Russia. In the 1990s, Russia was ruled by Boris Yeltsin, a man lauded in the West for being a democrat. But his undisciplined rule led to sheer economic and social chaos. Vladimir Putin, on the other hand, is much closer to an authoritarian -- and is increasingly so -- and is consequently despised in the West. But, helped by energy prices, he has restored Russia to some measure of stability, and thus dramatically improved the quality of life of average Russians. And he has done this without resorting to the level of authoritarianism -- with the mass disappearances and constellation of Siberian labor camps -- of the czars and commissars of old.
Finally, there is the most morally vexing case of all: that of the late Chilean dictator Augusto Pinochet. In the 1970s and 1980s, Pinochet created more than a million new jobs, reduced the poverty rate from a third of the population to as low as a tenth, and the infant mortality rate from 78 per 1,000 to 18. Pinochet's Chile was one of the few non-Asian countries in the world to experience double-digit Asian levels of economic growth at the time. Pinochet prepared his country well for eventual democracy, even as his economic policy became a model for the developing and post-Communist worlds. But Pinochet is also rightly the object of intense hatred among liberals and humanitarians the world over for perpetrating years of systematic torture against tens of thousands of victims. So where does he fall on the spectrum from black to white?
Not only is the world of international affairs one of many indeterminate shades, but it is also one in which, sometimes, it is impossible to know just where to locate someone on that spectrum. The question of whether ends justify means should not only be answered by metaphysical doctrine, but also by empirical observation -- sometimes ends do justify means, sometimes they don't. Sometimes the means are unconnected to the ends, and are therefore to be condemned, as is the case with Chile. Such is the intricacy of the political and moral universe. Complexity and fine distinctions are things to be embraced; otherwise geopolitics, political science, and related disciplines distort rather than illuminate.
Posted by MBI Munshi at 10/17/2013 05:35:00 pm
Sunday, October 06, 2013
Thomas Jefferson didn't just own a Quran -- he engaged with Islam and fought to ensure the rights of Muslims
SALON – October 6, 2013
At a time when most Americans were uninformed, misinformed, or simply afraid of Islam, Thomas Jefferson imagined Muslims as future citizens of his new nation. His engagement with the faith began with the purchase of a Qur’an eleven years before he wrote the Declaration of Independence. Jefferson’s Qur’an survives still in the Library of Congress, serving as a symbol of his and early America’s complex relationship with Islam and its adherents. That relationship remains of signal importance to this day.
That he owned a Qur’an reveals Jefferson’s interest in the Islamic religion, but it does not explain his support for the rights of Muslims. Jefferson first read about Muslim “civil rights” in the work of one of his intellectual heroes: the seventeenth-century English philosopher John Locke. Locke had advocated the toleration of Muslims—and Jews—following in the footsteps of a few others in Europe who had considered the matter for more than a century before him. Jefferson’s ideas about Muslim rights must be understood within this older context, a complex set of transatlantic ideas that would continue to evolve most markedly from the sixteenth through the nineteenth centuries.
Amid the interdenominational Christian violence in Europe, some Christians, beginning in the sixteenth century, chose Muslims as the test case for the demarcation of the theoretical boundaries of their toleration for all believers. Because of these European precedents, Muslims also became a part of American debates about religion and the limits of citizenship. As they set about creating a new government in the United States, the American Founders, Protestants all, frequently referred to the adherents of Islam as they contemplated the proper scope of religious freedom and individual rights among the nation’s present and potential inhabitants. The founding generation debated whether the United States should be exclusively Protestant or a religiously plural polity. And if the latter, whether political equality—the full rights of citizenship, including access to the highest office—should extend to non-Protestants. The mention, then, of Muslims as potential citizens of the United States forced the Protestant majority to imagine the parameters of their new society beyond toleration. It obliged them to interrogate the nature of religious freedom: the issue of a “religious test” in the Constitution, like the ones that would exist at the state level into the nineteenth century; the question of “an establishment of religion,” potentially of Protestant Christianity; and the meaning and extent of a separation of religion from government.
Resistance to the idea of Muslim citizenship was predictable in the eighteenth century. Americans had inherited from Europe almost a millennium of negative distortions of the faith’s theological and political character. Given the dominance and popularity of these anti-Islamic representations, it was startling that a few notable Americans not only refused to exclude Muslims, but even imagined a day when they would be citizens of the United States, with full and equal rights. This surprising, uniquely American egalitarian defense of Muslim rights was the logical extension of European precedents already mentioned. Still, on both sides of the Atlantic, such ideas were marginal at best. How, then, did the idea of the Muslim as a citizen with rights survive despite powerful opposition from the outset? And what is the fate of that ideal in the twenty-first century?
This book provides a new history of the founding era, one that explains how and why Thomas Jefferson and a handful of others adopted and then moved beyond European ideas about the toleration of Muslims. It should be said at the outset that these exceptional men were not motivated by any inherent appreciation for Islam as a religion. Muslims, for most American Protestants, remained beyond the outer limit of those possessing acceptable beliefs, but they nevertheless became emblems of two competing conceptions of the nation’s identity: one essentially preserving the Protestant status quo, and the other fully realizing the pluralism implied in the Revolutionary rhetoric of inalienable and universal rights. Thus while some fought to exclude a group whose inclusion they feared would ultimately portend the undoing of the nation’s Protestant character, a pivotal minority, also Protestant, perceiving the ultimate benefit and justice of a religiously plural America, set about defending the rights of future Muslim citizens.
They did so, however, not for the sake of actual Muslims, because none were known at the time to live in America. Instead, Jefferson and others defended Muslim rights for the sake of “imagined Muslims,” the promotion of whose theoretical citizenship would prove the true universality of American rights. Indeed, this defense of imagined Muslims would also create political room to consider the rights of other despised minorities whose numbers in America, though small, were quite real, namely Jews and Catholics. Although it was Muslims who embodied the ideal of inclusion, Jews and Catholics were often linked to them in early American debates, as Jefferson and others fought for the rights of all non-Protestants.
In 1783, the year of the nation’s official independence from Great Britain, George Washington wrote to recent Irish Catholic immigrants in New York City. The American Catholic minority of roughly twenty-five thousand then had few legal protections in any state and, because of their faith, no right to hold political office in New York. Washington insisted that “the bosom of America” was “open to receive . . . the oppressed and the persecuted of all Nations and Religions; whom we shall welcome to a participation of all our rights and privileges.” He would also write similar missives to Jewish communities, whose total population numbered only about two thousand at this time.
One year later, in 1784, Washington theoretically enfolded Muslims into his private world at Mount Vernon. In a letter to a friend seeking a carpenter and bricklayer to help at his Virginia home, he explained that the workers’ beliefs—or lack thereof—mattered not at all: “If they are good workmen, they may be of Asia, Africa, or Europe. They may be Mahometans [Muslims], Jews or Christian of an[y] Sect, or they may be Atheists.” Clearly, Muslims were part of Washington’s understanding of religious pluralism—at least in theory. But he would not have actually expected any Muslim applicants.
Although we have since learned that there were in fact Muslims resident in eighteenth-century America, this book demonstrates that the Founders and their generational peers never knew it. Thus their Muslim constituency remained an imagined, future one. But the fact that both Washington and Jefferson attached to it such symbolic significance is not accidental. Both men were heir to the same pair of opposing European traditions.
The first, which predominated, depicted Islam as the antithesis of the “true faith” of Protestant Christianity, as well as the source of tyrannical governments abroad. To tolerate Muslims—to accept them as part of a majority Protestant Christian society—was to welcome people who professed a faith most eighteenth-century Europeans and Americans believed false, foreign, and threatening. Catholics would be similarly characterized in American Protestant founding discourse. Indeed, their faith, like Islam, would be deemed a source of tyranny and thus antithetical to American ideas of liberty.
In order to counter such fears, Jefferson and other supporters of non-Protestant citizenship drew upon a second, less popular but crucial stream of European thought, one that posited the toleration of Muslims as well as Jews and Catholics. Those few Europeans, both Catholic and Protestant, who first espoused such ideas in the sixteenth century often died for them. In the seventeenth century, those who advocated universal religious toleration frequently suffered death or imprisonment, banishment or exile, the elites and common folk alike. The ranks of these so-called heretics in Europe included Catholic and Protestant peasants, Protestant scholars of religion and political theory, and fervid Protestant dissenters, such as the first English Baptists—but no people of political power or prominence. Despite not being organized, this minority consistently opposed their coreligionists by defending theoretical Muslims from persecution in Christian-majority states.
As a member of the eighteenth-century Anglican establishment and a prominent political leader in Virginia, Jefferson represented a different sort of proponent for ideas that had long been the hallmark of dissident victims of persecution and exile. Because of his elite status, his own endorsement of Muslim citizenship demanded serious consideration in Virginia—and the new nation. Together with a handful of like-minded American Protestants, he advanced a new, previously unthinkable national blueprint. Thus did ideas long on the fringe of European thought flow into the mainstream of American political discourse at its inception.
Not that these ideas found universal welcome. Even a man of Jefferson’s national reputation would be attacked by his political opponents for his insistence that the rights of all believers should be protected from government interference and persecution. But he drew support from a broad range of constituencies, including Anglicans (or Episcopalians), as well as dissenting Presbyterians and Baptists, who suffered persecution perpetrated by fellow Protestants. No denomination had a unanimously positive view of non-Protestants as full American citizens, yet support for Muslim rights was expressed by some members of each.
What the supporters of Muslim rights were proposing was extraordinary even at a purely theoretical level in the eighteenth century. American citizenship—which had embraced only free, white, male Protestants—was in effect to be abstracted from religion. Race and gender would continue as barriers, but not so faith. Legislation in Virginia would be just the beginning, the First Amendment far from the end of the story; in fact, Jefferson, Washington, and James Madison would work toward this ideal of separation throughout their entire political lives, ultimately leaving it to others to carry on and finish the job. This book documents, for the first time, how Jefferson and others, despite their negative, often incorrect understandings of Islam, pursued that ideal by advocating the rights of Muslims and all non-Protestants.
A decade before George Washington signaled openness to Muslim laborers in 1784, he had listed two slave women from West Africa among his taxable property. “Fatimer” and “Little Fatimer” were a mother and daughter—both indubitably named after the Prophet Muhammad’s daughter Fatima (d. 632). Washington advocated Muslim rights, never realizing that as a slaveholder he was denying Muslims in his own midst any rights at all, including the right to practice their faith. This tragic irony may well have also recurred on the plantations of Jefferson and Madison, although proof of their slaves’ religion remains less than definitive. Nevertheless, having been seized and transported from West Africa, the first American Muslims may have numbered in the tens of thousands, a population certainly greater than that of the resident Jews and possibly even the Catholics. Although some have speculated that a few former Muslim slaves may have served in the Continental Army, there is little direct evidence that any practiced Islam and none that these individuals were known to the Founders. In any case, they had no influence on later political debates about Muslim citizenship.
The insuperable facts of race and slavery rendered invisible the very believers whose freedoms men like Jefferson, Washington, and Madison defended, and whose ancestors had resided in America since the seventeenth century, as long as Protestants had. Indeed, when the Founders imagined future Muslim citizens, they presumably imagined them as white, because by the 1790s “full American citizenship could be claimed by any free, white immigrant, regardless of ethnicity or religious beliefs.”
The two actual Muslims Jefferson would wittingly meet during his lifetime were not black West African slaves but North African ambassadors of Turkish descent. They may have appeared to him to have more melanin than he did, but he never commented on their complexions or race. (Other observers either failed to mention it or simply affirmed that the ambassador in question was not black.) But then Jefferson was interested in neither diplomat for reasons of religion or race; he engaged them because of their political power. (They were, of course, also free.)
But even earlier in his political life—as an ambassador, secretary of state, and vice president—Jefferson had never perceived a predominantly religious dimension to the conflict with North African Muslim powers, whose pirates threatened American shipping in the Mediterranean and eastern Atlantic. As this book demonstrates, Jefferson as president would insist to the rulers of Tripoli and Tunis that his nation harbored no anti-Islamic bias, even going so far as to express the extraordinary claim of believing in the same God as those men.
The equality of believers that Jefferson sought at home was the same one he professed abroad, in both contexts attempting to divorce religion from politics, or so it seemed. In fact, Jefferson’s limited but unique appreciation for Islam appears as a minor but active element in his presidential foreign policy with North Africa—and his most personal Deist and Unitarian beliefs. The two were quite possibly entwined, with their source Jefferson’s unsophisticated yet effective understanding of the Qur’an he owned.
Still, as a man of his time, Jefferson was not immune to negative feelings about Islam. He would even use some of the most popular anti-Islamic images inherited from Europe to drive his early political arguments about the separation of religion from government in Virginia. Yet ultimately Jefferson and others not as well known were still able to divorce the idea of Muslim citizenship from their dislike of Islam, as they forged an “imagined political community,” inclusive beyond all precedent.
The clash between principle and prejudice that Jefferson himself overcame in the eighteenth and nineteenth centuries remains a test for the nation in the twenty-first. Since the late nineteenth century, the United States has in fact become home to a diverse and dynamic American Muslim citizenry, but this population has never been fully welcomed. Whereas in Jefferson’s time organized prejudice against Muslims was exercised against an exclusively foreign and imaginary nonresident population, today political attacks target real, resident American Muslim citizens. Particularly in the wake of 9/11 and the so-called War on Terror, a public discourse of anti-Muslim bigotry has arisen to justify depriving American Muslim citizens of the full and equal exercise of their civil rights.
For example, recent anti-Islamic slurs used to deny the legitimacy of a presidential candidacy contained eerie echoes of founding precedents. The legal possibility of a Muslim president was first discussed with vitriol during debates involving America’s Founders. Thomas Jefferson would be the first in the history of American politics to suffer the false charge of being a Muslim, an accusation considered the ultimate Protestant slur in the eighteenth century. That a presidential candidate in the twenty-first century should have been subject to much the same false attack, still presumed to be politically damning to any real American Muslim candidate’s prospects for elected office, demonstrates the importance of examining how the multiple images of Islam and Muslims first entered American consciousness and how the rights of Muslims first came to be accepted as national ideals. Ultimately, the status of Muslim citizenship in America today cannot be properly appreciated without establishing the historical context of its eighteenth-century origins.
Muslim American rights became a theoretical reality early on, but as a practical one they have been much slower to evolve. In fact, they are being tested daily. Recently, John Esposito, a distinguished historian of Islam in contemporary America, observed, “Muslims are led to wonder: What are the limits of this Western pluralism?” Thomas Jefferson’s Qur’an documents the origins of such pluralism in the United States in order to illuminate where, when, and how Muslims were first included in American ideals.
Until now, most historians have proposed that Muslims represented nothing more than the incarnated antithesis of American values. These same voices also insist that Protestant Americans always and uniformly defined both the religion of Islam and its practitioners as inherently un-American. Indeed, most historians posit that the emergence of the United States as an ideological and political phenomenon occurred in opposition to eighteenth-century concepts about Islam as a false religion and a source of despotic government. There is certainly evidence for these assumptions in early American religious polemic, domestic politics, foreign policy, and literary sources. There are, however, also numerous observations about Islam and Muslims that cast both in a more affirmative light, including key references to Muslims as future American citizens in important founding debates about rights. These sources show that American Protestants did not monolithically view Islam as “a thoroughly foreign religion.”
This book documents the counterassertion that Muslims, far from being definitively un-American, were deeply embedded in the concept of citizenship in the United States since the country’s inception, even if these inclusive ideas were not then accepted by the majority of Americans. While focusing on Jefferson’s views of Islam, Muslims, and the Islamic world, it also analyzes the perspectives of John Adams and James Madison. Nor is it limited to these key Founders. The cast of those who took part in the contest concerning the rights of Muslims, imagined and real, is not confined to famous political elites but includes Presbyterian and Baptist protestors against Virginia’s religious establishment; the Anglican lawyers James Iredell and Samuel Johnston in North Carolina, who argued for the rights of Muslims in their state’s constitutional ratifying convention; and John Leland, an evangelical Baptist preacher and ally of Jefferson and Madison in Virginia, who agitated in Connecticut and Massachusetts in support of Muslim equality, the Constitution, the First Amendment, and the end of established religion at the state level.
The lives of two American Muslim slaves of West African origin, Ibrahima Abd al-Rahman and Omar ibn Said, also intersect with this narrative. Both were literate in Arabic, the latter writing his autobiography in that language. They remind us of the presence of tens of thousands of Muslim slaves who had no rights, no voice, and no hope of American citizenship in the midst of these early discussions about religious and political equality for future, free practitioners of Islam.
Imagined Muslims, along with real Jews and Catholics, were the consummate outsiders in much of America’s political discourse at the founding. Jews and Catholics would struggle into the twentieth century to gain in practice the equal rights assured them in theory, although even this process would not entirely eradicate prejudice against either group. Nevertheless, from among the original triad of religious outsiders in the United States, only Muslims remain the objects of a substantial civic discourse of derision and marginalization, still being perceived in many quarters as not fully American. This book writes Muslims back into our founding narrative in the hope of clarifying the importance of critical historical precedents at a time when the idea of the Muslim as citizen is, once more, hotly contested.
Posted by MBI Munshi at 10/06/2013 08:53:00 am