Emperor Constantine the Great authorised Christianity across the Roman Empire in 313, but it was Theodosius I, nearly seven decades later, who put the brute force of the imperial state behind the faith.
Policy had vacillated through the fourth century. The emperor Julian had been a pagan, hostile to Christianity. Julian’s successor, Jovian, restored the faith and ruled for toleration, including for paganism, but not magic rites.
But what did it mean to be Christian? Most believers followed the rulings of the Council of Nicaea in 325 and accepted the Trinity; the followers of Arius, a Libyan theologian, did not. And there was no guarantee what kind of Christian would occupy the throne: Emperor Valens, who died in battle in the summer of 378 after 14 years in power, had been Arian.
So it was that in February 380, 33-year-old Theodosius, born into the faith but only recently baptised after serious illness, issued the Edict of Thessalonica threatening both divine punishment and imperial retribution for those who rejected the Nicene creed.
Five years later Priscillian, Bishop of Avila, would be the first Christian to face judicial execution by other Christians for his beliefs. ‘What is truth?’ Pilate had asked. After Theodosius, that was a matter for the state.
A growing body of opinion holds that, in the global car industry, now facing a period of extraordinary change, history will repeat itself. The electric revolution appears poised on the brink of transforming the world of mobility. Yet, while companies such as Elon Musk’s US-based Tesla are using and developing cutting-edge technologies, they are also turning to tried and trusted business models, which would have been familiar to Henry Ford and Alfred Sloan.
This is unsurprising: it is hard to think of a more defining image in automotive history than the Model T motor car being produced on moving assembly lines at Ford’s gigantic Highland Park factory in Detroit. Ford organised production as a continuous flow, epitomised by the mechanically powered conveyor belt, which, in 1913, became the basis for mass car assembly, copied by corporations around the world. But what ultimately mattered was the scale he achieved. Ford broke record after record, passing in 1920 a factory output of a million cars in just one year.
A new century
More than a century on, and the world watches Tesla to see if it is able to do for battery electric technology what Ford did with the gasoline-fuelled internal combustion engine. The company is working to extend into its battery plants the Ford principle of production without stops and starts, even as it steps up its investments for manufacture at scale. Tesla’s immediate target is a little under half Ford’s output of a century ago: it claims to have produced half a million cars in 2020.
This has generated understandable excitement among industry observers. It suggests a resurgent industry that can still be managed in familiar ways, but which can also meet the existential threat of global warming head-on by mass building electric cars to eliminate CO2-emitting engines. But there is a hitch.
The flipside of having cars powered by gasoline is that the history of the car industry is in many ways the story of oil. Verging on an industrial symbiosis, this relationship was every bit as important as innovations in the machine shop and factory line. Ford’s breakthroughs followed a great national oil boom, which began some years earlier. After oil was struck at Spindletop in Beaumont, Texas in 1901, the United States quickly became the world’s largest oil producer. It drilled with an extraordinary intensity, echoed more recently by the speed with which its fracking industry has developed. The profligacy implied by the popular image of Texan oil pioneers dancing under the spray of a gusher is belied by the endeavour which spread a dense network of wells across the oil-rich state. The success of Henry Ford was foreshadowed by the riches of J.D. Rockefeller, the mogul who founded Standard Oil as far back as 1870.
The car industry expanded in tandem with the need for gasoline derived from crude oil. But as fields opened beyond the US, oil became a geopolitical reference point, implicated in numerous wars, invasions, coups, briberies and underhand manipulations. The story of oil is also one of spillage disasters, whether caused by accidents involving tankers, such as the 1989 Exxon Valdez spill in Alaska, or oil rig catastrophes, including BP’s Deepwater Horizon in the Gulf of Mexico in 2010. Providing the fuel for the vehicles that warm the planet was another nail in the coffin for oil’s reputation.
The story of oil and its exploitation also shows how technologies associated with progress can prompt the bitterest abuses of indigenous populations unlucky enough to have critical resources on their lands. In the US, the huckstering of members of the Osage Nation in Oklahoma (having already been forced off their reservation in Kansas), whose territorial settlements gave them rights in oil, degenerated into a series of murders over a 20-year period, peaking in a five-year ‘reign of terror’ from 1921. It took them until 2011 to settle resource management with the federal government.
But the most severe example of exploitation of indigenous peoples took place in the Congo, where the first expansions of the car industry fed the growing market for what became known as red rubber. The Congo Free State was established by King Leopold II of Belgium in 1885. European adventurers, backed by Leopold’s private army (the Free State was his personal possession), moved on from harvesting the ivory of elephants to forcing the Congolese into the forests to tap the wild rubber tree. The Belgian government assumed control in 1908, but it is estimated that by 1920 up to half the Congolese population had been lost to disease, famine and slaughter. There is no direct connection between the sufferings of the Congolese and the spirit of invention that led to pneumatic tyres and vulcanised rubber. But the tubes, tyres, hoses, raincoats and gloves which demanded rubber for their mass production are also part of this unsavoury legacy.
Back to the future
And history repeats itself. While the electric car does promise to liberate the world from CO2 and oil spills, the new battery technologies rely on critical materials, such as lithium, cobalt and nickel. These fresh dependencies mean new points of political tension and environmental degradation, remarkably similar to those produced by the industrial exploitation of oil and rubber.
Demand for the critical materials to feed electric car production is already pushing up against supply. In September 2020 Elon Musk held the Tesla Battery Day Event, in which he announced a slew of proposed advancements to shareholders. Little more than a week later, President Donald Trump signed an Executive Order declaring American dependency on imports of rare earth minerals, many essential for electronics, a national emergency. China, whose dominance in lithium processing and lithium-ion batteries also makes it integral to electric car manufacture in the United States, is described as a foreign adversary. The obvious alternative source for lithium is Bolivia, a country that has been in political turmoil. With the world’s most substantial lithium reserves, Bolivia may be emerging as a focal point in a developing international struggle for access to what is now being called ‘white petroleum’.
In an especially troubling echo of the past, in the Democratic Republic of Congo, Congolese labour is being deployed in cobalt extraction operations in unsafe and contaminated mines. Reports from Amnesty International have monitored the widespread use of child labour and found multiple abuses.
In the same way that the Dunlop tyre and improvements to the waterproofed raincoat created new demands for rubber, the smartphone and the personal computer create demand for cobalt. Congo supplies over 60 per cent of the world’s cobalt. The electric car is part of this. While phasing out cobalt has been an aspiration for some years, it remains to be seen whether, and how quickly, this will actually happen.
Mining operations on the scale necessary to support new mass car production will inevitably disturb, divert and pollute scarce land and water resources. The case of Bolivia, and the freedom of its population to control its resources without external pressures or interference, is one to watch.
Cars for all
There are other lessons from history. Alfred Sloan, another towering figure from the glory days of gasoline, worked with his team at General Motors during the 1920s to improve Ford’s business model. There were five brands to manage, running from the mass-market Chevrolet up to the luxury Cadillac models, each with its particular pricing points. Where Ford aimed to change the world by making cars accessible to buyers of modest means, Sloan’s aim was to pitch cars at ‘every purse and purpose’.
While successful, this helped feed the American ambition to own a bigger and better car than the last one bought. Although hard to believe now, the earliest commercial car makers in the United States built on more modest lines than their European counterparts. But abundant gasoline, sold cheaply, together with public investments to improve on mud roads, made big cars viable. When price hikes by the OPEC oil cartel in the 1970s left many predicting a bright future for smaller cars, the reality was that as oil prices fell again the quest for the perfect big car continued. Japanese manufacturers, who began by selling smaller cars to Americans looking for fuel economy, made their cars larger. Ford and General Motors accommodated this new competition by greatly expanding the market for expensive – and large – SUVs.
The problem now is that, while bets are still being taken on whether Tesla succeeds with an electric car for the masses, practically every car manufacturer with a stake in the game is working on battery electric designs for solvent aficionados of larger cars, sportier cars and luxury cars. This will only ramp up the immediate pressure to mine for metals.
Many tech-led manufacturing businesses, including Tesla, are redeveloping their resourcing and supply chains. Ford and Sloan could not have predicted the downsides of what they saw as their landmark contributions to civilisation – whether environmental or geopolitical. But history matters because it reveals such unexpected consequences. Looking only at oil-dependent technology, the electric car offers a greener and more equitable future. When appraising the realities of its own dependencies, it does not.
Dan Coffey researches the global car industry at Leeds University Business School.
In modern western society, time for oneself, alone and in private, is taken for granted. Since the late 19th century, access to solitude has been central to understandings of privacy, which was defined in an influential article from the 1890 Harvard Law Review as ‘the right to be let alone’. But this was not always the case.
In medieval Europe, when life was a far more communal experience than it is today, solitude was considered ‘the worst form of poverty’. The Latin word solitudo, from which the modern English word solitude derives, implied a negative, uncivilised condition. The word lonely, which appeared in English in the early 17th century, was barely distinguishable from solitary for a further two centuries. Solitude was certainly familiar to monks, nuns, hermits and anchorites, but this was a particularly pious sort of isolation, designed to facilitate communion with God. It was not something that ordinary people should have access to.
The asceticism that some of these religious lifestyles embraced also reflected something fundamental about medieval views of solitude – it was a punishment. In the first century AD, Seneca argued that ‘solitude persuades us to every kind of evil … everyone is better off in the company of somebody or other no matter who, than in his own company’. His views were cited throughout the Middle Ages and are emblematic of the suspicion with which medieval authorities regarded even temporary isolation.
Despite this overarching suspicion of being alone, it does seem that medieval people experienced the desire to be by themselves, at least some of the time. This is best reflected in the trope of the lovesick protagonist who flees to a chamber in order to indulge (often very dramatically) in their emotions. Troilus in Chaucer’s Troilus and Criseyde speeds ‘un-to his chambre … faste allone … every dore he shette’ from whence he proceeds to ‘[smyte] his brest ay with his festes’. But even Troilus, a Trojan prince, cannot take this privacy for granted and has to dismiss ‘a man of his or two’ from his chamber to obtain his solitude.
This reminds us of a problem which has faced people for much of history. Finding space to be alone was a challenge for rich and poor alike. Larger households would be filled with staff, while in the houses of those lower down the social scale, there was simply not enough room. The lack of privacy caused by all these bodies jostling for space was compounded by the nature of premodern architecture. Until corridors came into fashion during the 18th century (which in itself affected only the wealthiest households), houses were designed en enfilade, with rooms running onto each other. Household traffic was not contained within corridors, but rather moved through rooms, meaning that doors could (and did) swing open at compromising moments.
As the early modern period progressed, medieval suspicions around the concept of solitude diminished. Historians associate this transition with rising literacy and an expanding print culture, which facilitated silent reading. The Protestant imperative to private reflection also normalised temporary solitude (albeit for the purposes of communion with God and nothing else). There were, still, many in the 16th and 17th centuries who were nervous about it; 16th-century Protestants, for example, fretted about which lustful sins the devil might tempt the solitary individual with. Yet overall, there is evidence that a transition to idealising rather than fearing solitude had been made by the end of the 17th century.
In William Congreve’s The Way of the World (1700) a husband and wife negotiate her right to solitude. The wife argues that she should have the right to:
dine in my Dressing room when I’m out of Humour, without giving a reason. To have my closet inviolate; to be sole empress of my tea table … and lastly … you shall always knock at the door before you come in.
But private rooms were, again, a luxury reserved for the wealthy and, even though architecture started to favour specialised rooms for sleeping, eating and entertaining, in the 17th and 18th centuries most people still found securing true solitude a difficult task.
It was in the 19th century that the celebration of solitude reached its apogee, encapsulated by the work of the Romantic poets, such as Byron and Wordsworth. The Romantics rejected the Enlightenment commitment to sociability, arguing that prolonged solitude fostered rather than shackled creativity. When Wordsworth ‘wandered lonely as a cloud’ he was celebrating solitude as a means of connecting with the self, finding inspiration in nature. Yet the Romantic focus on the great outdoors reminds us that, for many, going outside was often the only way of securing personal privacy. Even in 19th-century middle-class residences, to have one’s own bedroom was unusual. In 1911 three quarters of the English population still lived in one- or two-roomed dwellings. It was only as the family shrank throughout the 20th century that fewer bodies in the house made the separating of boys from girls and children from parents more achievable.
The history of solitude reminds us that even the things we take for granted as universals have a past and a context. In the case of solitude, this context is both physical and ideological. For centuries people struggled to carve out space for themselves in crowded dwellings, just as they wrestled with how solitude affected the human condition. Medieval philosophers, accepting Aristotle’s philosophy that ‘whosoever is delighted in solitude is either a wild beast or a god’, could never have understood Virginia Woolf as she extolled the virtues of being alone: ‘How much better is silence … How much better to sit by myself like the solitary sea-bird … Let me sit here for ever … myself being myself.’
Martha Bailey is a historian specialising in the history of ideas in medieval and early modern Europe.
‘In my own village’, the film-maker Luis Buñuel said of his birthplace in rural Spain, ‘the Middle Ages lasted until the First World War.’ Buñuel would escape the dead hand of the past through surrealism. But the Italian writer Filippo Tommaso Marinetti went one better: he invented futurism, launched like a political movement through a manifesto on the front page of Le Figaro on 20 February 1909.
At first, Marinetti was futurism’s sole adherent. But his words willed the movement into being. Art became life. Other manifestos would follow, covering everything from architecture, cinema and menswear to music hall, syntax and lust.
Futurism was the idea that modernity and its rapid scientific advances changed what it meant to be human. For Marinetti, this new human sensibility required the creative destruction of all institutions. Museums, libraries, academies: all were varieties of graveyard. Youth, speed and technology were the thing now; futurist artist Giacomo Balla was so enamoured of the latter he named his daughters Elica and Luce – Propeller and Light.
Marinetti understood the mass media and the value of generating outrage. He was delighted to be prosecuted for obscenity for his 1910 novel Mafarka futurista, the eponymous hero of which has an 11-metre penis. Futurist evenings mixed poetry recitals and manifesto readings with extended invectives against the audience. Their success was measured, he said, in abuse rather than applause.
Futurism also celebrated violence as a moral virtue. In this Marinetti followed the kind of anarchist thought expressed by the French poet Laurent Tailhade: ‘What do the victims matter, if the gesture be beautiful?’
Marinetti would ultimately be co-opted by fascism; he saw no contradiction between anarchism and nationalism; the exultant self-expression that anarchism allowed the individual, nationalism allowed a people. But he didn’t live to see the ‘beautiful gesture’ Italians made with Mussolini’s corpse at the end of the war; he died of a heart attack in the fascist rump state of Salò in December 1944.
The broad outlines of Japan’s historical encounters with western culture are well known. They began with the arrival of Jesuit priests from Portugal in the 16th century, but after 1639 contact all but ceased under the Tokugawa shogunate’s new policy of seclusion (sakoku), with limited trade with the Chinese and the Dutch being the only exception. The policy of isolation ended in 1854 and the Meiji Restoration of imperial rule in 1868 completed Japan’s opening to the West. Already in 1867, artefacts of Japanese culture were on display for the first time at the Exposition Universelle in Paris.
More cultural exchanges followed. Western artists began to incorporate Japanese prints and costumes into their work, as in Vincent van Gogh’s Portrait of Père Tanguy (1887), Claude Monet’s La Japonaise (1876) and Alfred Stevens’ La parisienne japonaise (1872). Japonisme – a term coined by the French critic Philippe Burty in 1872 – was born and it was not limited to painting and the decorative arts. The composer Claude Debussy became fascinated by Japan (the first edition of the score of La mer featured a Hokusai print on its cover) and, to move from the sublime to the ridiculous, Gilbert and Sullivan’s operetta The Mikado was first staged in 1885.
Most of these influences were superficial. They might enrich a painting’s surface colours and suggest various exotic allusions, but engaged little with complex Japanese culture and its social character, one that was itself slowly changing under the influence of contact with the West. One work did, however, starkly reveal the underlying dissonances between eastern and western cultures. An opera based on a Japanese subject, Giacomo Puccini’s Madam Butterfly was first performed at La Scala, Milan in 1904. Eventually, the opera also became known in Japan, where Puccini’s vision was seen as forging (in the sense of dishonestly manufacturing) a view of the Japanese, while subsequent Japanese adaptations and productions of such western music played a part in forging (in the sense of fashioning out of raw material) the seeds of a new westernised Japan.
That the tale of Butterfly found its way to the West itself was a result of Japan’s new ‘openness’. The opera was based on a short story by the American writer John Luther Long, written in 1898 and based on the recollections of Long’s sister, who had recently been able to spend time in Japan as the wife of a missionary. Long’s story also borrowed from a novel of 1887 by Pierre Loti, Madame Chrysanthème, which drew on a liaison Loti had with a geisha in Nagasaki in 1885. Both sources, therefore, contained elements of an ‘authentic’ Japan. Long’s story was turned into a play by the American playwright David Belasco, which Puccini saw in London in 1900.
In many ways the plot is a typical colonial story, with all that implies about oppression, exploitation and the imbalance of power. This should have offered a warning about the likely reception of the opera in Japan itself. The story concerns a US naval officer, Pinkerton, stationed in Nagasaki, who marries Cio-Cio san (Butterfly) but has no intention of a long-term commitment. He sets about re-educating the ex-geisha, who speaks only pidgin English, and she falls in love with him. Eventually Pinkerton leaves Japan and the now impoverished Butterfly gives birth to their son. When Pinkerton returns, he is accompanied by his American wife. Butterfly then ‘dies with honour’ by committing harakiri, a traditional suicide method employed by the honourable samurai class. This ending, found only in Belasco’s play, suited Puccini well, as did the drama’s concentration on Butterfly’s character and emotions during her final day. Whether such a portrayal was in keeping with the true nature of Japanese womanhood is an issue that has arisen more than once in the critical afterlife of the work.
Butterfly travels east
The reception of Madam Butterfly in Japan should be considered alongside the history of the country’s assimilation of western music. This in itself is a remarkable story, since indigenous Japanese music, with its own instruments, scales and genres, is so different from its western counterpart. Although music is often dubbed a ‘universal language’, among the arts it is one of the most stylistically differentiated globally.
The earliest known critical Japanese reactions to Madam Butterfly are from 1906, when Henry Savage’s Grand Opera Company gave a performance (in English) at the Garden Theatre, New York. In the audience were two Japanese spectators: Seiichirō Mori, a businessman and writer, and Shūichi Takaori, a music critic and theatre director. Takaori was delighted with the music, which incorporated some Japanese tunes, and he thought such a mixture had much to offer the future of his native country’s music. Mori was more concerned with authenticity: ‘This work may be good for Japanese women as it introduces to the Americans their chastity and virtue, but the staging was just ludicrous: flowers were all in full bloom regardless of their respective seasons; a Shintoist shrine gate led towards a Buddhist temple; and the costumes were just like those worn by prostitutes.’
Puccini’s Madam Butterfly had premiered just a few days after Japan had declared war on Russia. Japan was aware that it needed to modernise, take on progressive elements from western society and establish itself politically. At the start of the Russo-Japanese War, Japan was still regarded as something of an ‘adolescent boy’ by the international community; a cartoon in the Dutch magazine the Amsterdammer published in 1903 shows John Bull and Uncle Sam coercing a Japanese child into stealing a Cossack’s chestnuts. Yet the unexpected victory against Russia altered the West’s view of Japan; it was now mature enough to be potentially threatening, a ‘yellow peril’. Takaori and Mori’s presence at the opera in New York was another sign of change: Japanese men now went to the opera, wore western clothes, learnt about other cultures and were more than willing to defend their own – in this case, the self-effacing and submissive attributes of their women. (Japanese women did not obtain the vote until a new constitution was imposed on the country by the United States after the Second World War – another instance of the forging of Japanese society.)
The ‘westernisation’ of arts and culture in Japan proceeded rapidly. The Tokyo Music School, the country’s first conservatoire of western-style music, opened in 1890. Another major development took place in 1911 with the inauguration of the Imperial Theatre (Teikoku Gekijō). It was built in the western style, equipped with an orchestra pit, and was affiliated with western music training centres run by German musicians. Unlike other Japanese theatres, which tended to be little more than simple tents with rugs on the ground, it had proscenium seating. Moreover, it was in this theatre that Shūichi Takaori (the critic and director who saw Madam Butterfly in New York), presented the Japanese premiere of excerpts from the opera in 1914.
The title role was sung by Takaori’s wife, Sumiko, who claimed to have studied with Geraldine Farrar in America. But the performance was not a success. The audience objected to the representation of Japanese women, who, one audience member noted, were presented in a manner ‘similar to those of ill-repute found in places like Yokohama. Why did they dare to present something like this in the Imperial Theatre?’
It was not the ‘alien’ music that disturbed the Japanese audience, but the threat to traditional hierarchies between men and women. Later, in the 1930s, feminist writers such as Ichiko Kamichika and Akiko Yosano criticised the opera for promoting a ‘victim’ like Butterfly as something of a Japanese ‘paragon’. Somewhat ironically, Butterfly thus proved to be an effective catalyst for the emergence of a new model of womanhood in Japan. Moreover, the Japanese themselves gradually began to find Madam Butterfly exotic and alien. The opera represented a past Japan, not the modernised version which was already beginning to emulate and even surpass the West.
As Japan was becoming disenchanted with aspects of Madam Butterfly, the West gained an enthusiasm for what it perceived as its cultural authenticity. The San Carlo Opera Company, based in Boston, routinely contracted Japanese sopranos such as Tamaki Miura, Nobuko Hara and Hisako Koike to sing the role of Butterfly. For many years, this was the only way for Japanese female singers to make their debut on the western stage. Only relatively recently have Far Eastern women begun to become prominent on the world stage as soloists across the whole range of western music.
In 1948, shortly after Japan’s surrender in the Second World War, the US authorities occupying Japan requested that the Fujiwara Opera Company perform Madam Butterfly in the Imperial Theatre, in what they fondly believed would be ‘the most authentic manner’. The same company was also invited to perform a ‘bilingual’ Madam Butterfly (where the Japanese sang in their own language) at the New York City Opera in 1952-53. Puccini, of course, would not have recognised such performances as an authentic production of his opera. His Madam Butterfly contained a number of misunderstandings and misrepresentations of Japanese culture, falling foul of what we would now recognise as cultural appropriation.
Unsurprisingly, the total number of Madam Butterfly performances in Japan to date remains remarkably small compared with those of other canonical works such as Verdi’s La Traviata, Mozart’s Magic Flute or Bizet’s Carmen. Interestingly, the popularity of Carmen in Japan owed much to the fascination of the Japanese audience with the titular character, a woman of loose morals who, they assumed, was typical of western society. They failed to understand that Carmen, as a gypsy, is an exotic outsider, far from the western norm. Just as the West had focused on a distorted and simplistic notion of Japanese womanhood embodied in the ‘character-cypher’ of Butterfly, so the Japanese did the same in reverse with their understanding of Carmen.
Such ironies and misunderstandings of cultural appropriation and stereotyping would eventually be dissolved as Japan became fully integrated into the global community of nations. And Puccini’s Madam Butterfly played a crucial part in that process, a part out of all proportion to its status as a three-act opera.
Naomi Matsumoto is Lecturer in Music at Goldsmiths, University of London.
We all know, or think we do, that Russians have ‘empire’ lodged deep in their genes. The Russian Empire is said to have expanded faster and further than any other in history. Russia today may occupy an area smaller than at any time since the 17th century, but many believe that the country, led by Putin, is itching to recover its empire by force, fraud and subversion.
Many of these certainties are little more than myths. Kees Boterbloem’s Russia as Empire musters the facts into a coherent narrative, which is a model of clarity, brevity and cool common sense. He traces Russian history from the founding of Kiev in the ninth century through to the present (Ukrainians claim Kiev for themselves and deny Russians’ right to do the same). Despite its growing pretensions, it was not until the beginning of the 18th century, when Peter the Great successfully forced his neighbours to accept Russia as one of the five Great Powers, that it formally became an empire.
Russia had no geographically defined borders to its east and south and even in the 18th century it was exposed to endless attacks by raiders on horseback. For most of its history its neighbours to its west had superior military technology and were at least as keen on territorial expansion. Russia lost the semblance of organised statehood when it was conquered by the Mongols in the 13th century and again by the Poles in the 17th. Its existence was seriously challenged by Napoleon in the 19th century and by the Germans in the 20th. It fell apart during the Revolution and Civil War of 1917-22. The brutal and rickety system devised by Stalin won the war against Germany but was unable to compete with America and its allies. In 1991 Russia collapsed again into poverty, incoherence and international irrelevance. This was hardly a record of unbroken imperial triumph.
Like many other empires in Europe and elsewhere, Boterbloem argues, the Russian Empire began largely as a defensive enterprise. In this it differed from the maritime empire of the British, whose driving impulse was to promote their trade in distant countries by using superior military technology to overwhelm their competitors. That empire, on which the sun never set, grew faster and further than the Russian Empire and ruled over far more people.
Land-based empires usually start with the fight for defensible – or as they would say ‘legitimate’ – borders. They go on to rob their neighbours of territory and riches and often of people as well. Some seek to impose their beliefs on foreigners who profoundly disagree. Yesterday’s predators become today’s victims. Russia has been both a predator and a victim. It has inflicted dreadful wounds on others and it has suffered terribly at their hands. Europe’s horrible record of bloody internecine slaughter is a source of pride to no one.
All nations that once had empires feel a nostalgia for past greatness. It motivated the British to free themselves of ‘Europe’ and become ‘Great’ again. The Russians’ imperial nostalgia and their sense of victimhood were not invented by Putin. Both will still be there when he is gone. Boterbloem concludes that it is unclear whether we face the return of an ‘imperial’ rather than an ‘imperious’ Russia. The historian Rana Mitter recently said that we should be objective, but that we do not need to be neutral. That is the distinction we should consider as we seek ways of dealing constructively with Russia where it is profitable and blocking it where it threatens our interests or those of our friends.
Russia as Empire: Past and Present Kees Boterbloem Reaktion 248pp £20
Rodric Braithwaite was British Ambassador to the Soviet Union (1988-91) and is the author of Armageddon and Paranoia: the Nuclear Confrontation (Profile, 2017).