A dynamic of acceptance and revolt

Why the extraordinary Jack Lindsay deserves to be better known

Few people have known so much about so many things as Jack Lindsay. Even fewer have published so much. Lindsay grew up in Brisbane in the early years of the twentieth century, moved to Sydney in 1921, and then embarked on a sixty-year career as journalist, publisher, poet, critic, translator, novelist and historian. Living in England after 1926, he produced an astonishing number of books that found readers around the world; in a multitude of direct and mediated ways he made a major contribution to mid-twentieth-century culture and thought. Thirty-five years after his death comes Anne Cranny-Francis’s Jack Lindsay: Writer, Romantic, Revolutionary.

Well known to Lindsay enthusiasts, Cranny-Francis has written articles and organised conferences about his life and work, maintains a website, arranged the publication of his “political autobiography” The Fullness of Life and edited a volume of selected poems. In this first book-length single-author study of Lindsay’s life and work she hits on an elegant solution to the problem posed by her subject’s hyperactively full life. He was someone whose works demand attention to his ideas, and whose ideas demand attention to his life. Jack Lindsay is structured around a core of six chapters, each dedicated to one of Lindsay’s book-length studies of English authors: John Bunyan (1937), Charles Dickens (1950), George Meredith (1956), William Morris (1974) and two on William Blake (1927 and 1978). This frame is filled in with chapters that provide biographical and intellectual context and discuss his other relevant works, helping the reader to understand, without being overwhelmed, how Lindsay’s approach to writing was influenced by his experiences and ideas.

This structure works well to illuminate Lindsay’s eclectic, self-fashioned life-philosophy, with its associated preoccupations, values and imagery: the struggle for unity, culture as expressive work, the archetype of death and renewal. The system evolved over time, but many elements were present from the first.

Inevitably Cranny-Francis omits or barely glances at much of Lindsay’s output. She scarcely mentions his forty-three novels and seven biographies of artists. It would be hard to guess from her account that Lindsay’s most cited study is about alchemy in Roman Egypt, or that the one most discussed by academics is a historical novel set in the British civil war.

Depending on what counts as a book, Lindsay published about 160 in his lifetime, as well as hundreds of articles, stories and poems. About half of his writing was historical and biographical, a quarter fiction, and the remainder criticism, social theory, translations, polemics and poetry. Most of his publications were concerned with the past, usually the ancient Greek and Roman worlds. Lindsay’s classical training is apparent in the eclectic character of works in which history, mythology, philology, archaeology, anthropology, aesthetics and philosophy are seamlessly blended.


All of Lindsay’s mature writing was underwritten by a self-fashioned philosophy or credo. Its most fundamental principle was what Cranny-Francis describes as the “embodied connectedness” of things. He often called it “vital unity,” “wholeness,” “Life” or “the fullness of life.”

In Lindsay’s thought the concept of vital unity assumes as many guises as energy does in physics. One of his symbols for it was Dionysus, the mysterious deity of wine and rebirth, leader of a disorganised band of enthralled creatures — satyrs, maenads, nymphs, centaurs, Pan the god of shepherds — who found no place on Mount Olympus. Another symbol was the figure of “the people,” which he sometimes called “the folk,” and occasionally “the masses,” each term with its particular political inflection. Human unity implied solidarity, equality, ethical responsiveness and mutual aid.

As Cranny-Francis observes, Lindsay extends the idea of unity across all spheres of human activity and into the natural world. John Bellamy Foster, noting Lindsay’s evocations of a “patient earth… ‘eternally reborn’ through labour and ritual practice,” identifies him as a forerunner of Marxist ecology.

Lindsay found the origins of the idea of unity in Plato, or even further back in Parmenides and Pythagoras, but a slightly less distant inspiration was the sixteenth-century excommunicated priest Giordano Bruno (1548–1600), who melded Renaissance humanism with materialism. Lindsay was stirred when he encountered Bruno in the early 1930s, subsequently writing a novel about him (Adam of a New World, 1936), and translating De la causa, principio e uno (Cause, Principle, Unity, 1962). Later he would claim that reading Bruno led him directly to Marxism.

Lindsay’s intense awareness of the interconnectedness of the living world had implications for his everyday life. Cranny-Francis quotes an episode from The Fullness of Life set during his years with the poet Elza de Locre in the early 1930s, when he lived in desperate poverty.

A local farmer had gifted a couple of rabbits to them as a neighbourly gesture. Confronted with the reality of having to skin and disembowel the animals before cooking, Lindsay found himself unable to proceed. He contemplated the economy of death on which a meat-eating society is based, particularly when social organisation has reached a point where meat protein is no longer essential to the diet: “One’s symbiosis with the earth is therefore in terms of unceasing violence and murder; and one knows, deep in one’s being, that one lives only by a system of blood-victims.”

“A communist society which is not vegetarian,” he concluded, “seems to me a hopeless contradiction.”


The young Lindsay called the absence of unity abstraction or dissociation; later, under the influence of Hegel and Marx, he favoured the word alienation. He argued that alienation has always been present in human life and has always provoked resistance. Throughout history that resistance has taken many forms — initiation rituals, shamanic flights, alchemy, art and poetry, and political revolt. The struggle against alienation shapes people’s relationships with one another and the world, motivates the protests of the wretched and exploited, and underlies attitudes to nature. Great thinkers and creative artists throw light upon its diverse manifestations.

Blake’s prophetic books explore the “world of false consciousness, of alienation,” according to Lindsay, and he praised Dickens for “the discovery of dissociation and the alienation of man from his fellows and his own essence, the stages of struggle against the dissociative forces, and the intuition (uttered in symbolic forms) of the resolving unity.”

Lindsay regarded religion as both a product of alienation and a form of protest against it. His vision of the world was also infused with hope for a fulfilment somehow always just out of reach. In a letter to Edith Sitwell on her conversion to Roman Catholicism in 1955 he confessed to having been at times “very close to the catholic creed… indistinguishable perhaps from ekklesia of the faithful — the people who are Christ.”

Affinities between his system and Christianity are not difficult to uncover: sin as alienation, humanity crucified, Life the Eucharist, Paradise a vision of love and freedom. He was familiar with such syncretisms in the ancient world: in a book about Roman Egypt he references a tomb in the Roman catacombs of Praetextatus on which Dionysus is identified with the Lord Sabaoth, the Lord of Hosts, and burials in the Vatican Necropolis of Christians who also worshipped Isis and Bacchus.

In Lindsay’s account, alienation has become all-pervasive in the modern world, chiefly because of money and science. Following Thomas Carlyle, he often referred to the institutions and customs associated with money as the “cash-nexus.” From all the possible elements of human relationship associated with the exchange of goods, money abstracts a single factor, that of utility, and makes the remainder redundant. The dehumanisation implicit in the use of money reaches its apogee with capitalism, which turns life itself into a commodity. In his study of William Morris he declares that “a genuinely new society can be born only when commodity-production ends, and with it division of labour, money, market-systems, and alienation in all its many shapes and forms — above all alienation from labour.”

The other powerful alienating factor of modernity is the scientific method stemming from Galileo and Descartes, which Lindsay consistently attacked as “mechanical,” “divisive” and “quantitative.” Cranny-Francis notes that “Lindsay returns repeatedly… to Blake’s criticisms of science and the post-Enlightenment rationalism on which it is based.” Lindsay was not at all opposed to scientific inquiry, nor wholly dismissive of the achievements of post-Enlightenment science. But in Marxism and Contemporary Science (1949) and a later trilogy on alchemy, astrology and physics in Greco-Roman Egypt he refused to separate knowledge of “nature” from other kinds of knowledge. There is a single interconnected world, and all ways of knowing it are likewise interconnected. The “sciences” discussed in Marxism and Contemporary Science are not physics, astronomy or chemistry, but biology, anthropology, art criticism, psychology and history.

For Lindsay, decisive proof that contemporary science had taken a wrong turning was the atomic bomb, the culmination of alienation’s will to self-destruction. Today he would no doubt make the same criticism of the digital revolution and genetics.


But there is a nagging problem with alienation, one that Lindsay, more of a poet than a philosopher, seems never to have addressed, and neither does Cranny-Francis. It parallels the problem of evil in religions that postulate a benign creator. Where does alienation come from? How can the world be a vital unity and at the same time a site of struggle against division?

Some cosmologies have an explanation. An idealist can say that the world of the senses is a flawed copy of a perfect and eternal world that is glimpsed only in thought. The unity is “above,” the struggle “below.” But Lindsay was trenchantly opposed both to idealism and to hierarchy. For him mental and spiritual phenomena are autonomous, but in the final analysis dependent on matter. Cranny-Francis mentions his debt to the Sydney-born philosopher Samuel Alexander. Alexander was an early twentieth-century advocate of emergence, the theory that complex systems produce attributes and activities that do not belong to their parts. Could emergence explain the origin of alienation? It isn’t clear how.

At a psychological level, though, Lindsay’s biography provides a paradigm case of a conflict between longed-for unity and actual division. Lindsay’s father was Norman Lindsay, one of Australia’s best-known humourists and artists in the first half of the twentieth century, notorious for his sexual libertarianism and hostility to Christianity. Cranny-Francis dwells sensitively on Jack’s difficult relationship with Norman. “The story of father-son relationships threads through all of Lindsay’s writing, fiction and non-fiction,” she writes. When Jack was nine years old, Norman left his wife and three sons. The fatherless family moved to Brisbane, where young Jack lived in a state of genteel but disorganised impoverishment, loved but neglected by his vague and increasingly alcoholic mother until her sister’s family finally took charge and sent him to school. Unsurprisingly, the theme of a lost birthright appears often in Lindsay’s novels and histories.

Norman renewed contact with his son only after Jack’s academic achievements had earned him scholarships to Brisbane’s elite Grammar School and the University of Queensland. Lindsay, ecstatic to be restored to his famous father’s attention, was Norman’s devoted acolyte for the next decade. Then they fell out bitterly.

Norman’s entire life was a fierce act of will to sustain the exhilarating freedom of his adolescence, when he had followed his older brother out of a shabby mined-out gold town to marvellous Melbourne and lived in careless poverty, pursuing a self-directed course in drawing, reading, flaneuring and witty companionship until Jack’s conception brought that delightful life to a sudden end. For the rest of his life Norman acted out his ambivalence, alternately praising and denouncing his son. In 1967 he wrote to him, “I can’t help but laugh when I think of what our biographers are going to make of the break and reunion of our relations. They will have to do the best they can with its human dramatics for it is quite impossible for them to realise the compulsions behind them.”

Jack Lindsay did not have children until his late fifties. He was an anxious, self-critical parent, and never ceased to yearn for his father’s distracted attention.

Turn for a moment I say
Turn from your obdurate place
In that clarity of stone,
That terrible folly of light,
Turn for a moment this way
Your abstracted face.

Lindsay understood the importance of this personal history for his literary career, confessing to a close friend that “if my parents hadn’t parted I doubt if I should have become a writer at all.” Cranny-Francis suggests that his description of William Morris also applies to himself:

From one aspect there never was a more impetuously frank man than Morris; he lives restlessly in the open and follows his convictions out without concern for the consequences to himself or anyone else. From another aspect he appears a hidden figure, moved by a passion of which the multiple effects are plain but the central impulse obscured. I suggest that along the lines I have sketched we can bring the man and the artist into a single focus, and see the way in which his personal dilemma was transformed into a dynamic of acceptance and revolt, of deepening insight into the nature of his world and into the ways in which the terrible wounds of alienation can be healed.


A succession of recent British scholars has sought to recover Lindsay as a forerunner of cultural studies, the influential field of interdisciplinary research instigated by British theorists — among them Richard Hoggart, Stuart Hall and Raymond Williams — in the 1970s. Although they didn’t reference Lindsay, the founders of cultural studies were almost certainly familiar with some of his work, and there are strong points of similarity in their ideas. In particular, they all affirmed the political significance of culture.

Marx had suggested a base–superstructure model of social formation, according to which economic relationships ultimately determine the organisation of politics, law, religion and creative expression. The implication was that economic interests always trump cultural factors. The practical effect was to concentrate efforts to build socialism in workplaces, which in effect meant trade unions. This left little place for cultural creators. Like cultural studies, Lindsay steadfastly rejected that model.

Another tenet of cultural studies that Lindsay anticipated was the idea that significant cultural change comes from “below.” Lindsay believed that plebeian practices and values, and their fraught and contradictory clashes with the practices and values of ruling elites, are the major source of cultural innovation. He made the point forcefully in a letter to his friend and fellow critic Alick West:

The concept is that culture is created by the expropriators, fundamentally expresses their position and needs, and has no close relation to the concrete labour-processes and the producing masses. I should like to suggest that something like the reverse is the truth. The people are the producers and reproducers of life, and in that role are also the begetters of culture in all its shapes and forms — though in a class-divided society the ruling class expropriates culture.

Lindsay’s view stemmed from the conviction — shared with Ruskin and Morris — that work and aesthetic production had once “been harmoniously united, and that they still ought to be, despite the general movement towards degradation and mechanisation.” Before commodity production alienated workers from the products of their labour — in this historical sketch uncommodified slavery is conveniently forgotten — work was done to create both the necessary means of living and pleasing or profound emotions, and each was a joyful undertaking. Communal work had once been invariably accompanied by singing and chanting. Understanding this had motivated William Morris to take on, in Lindsay’s dated language, “the full political and social struggle which alone could have as its aim the achievement of brotherhood and the ending of commodity-production.”

In A Short History of Culture Lindsay traced the essential identity of art and work back to the movement of bodies in space. From the classicist Jane Harrison he took the observation that the repetitive, rhythmic behaviours that create the necessities of life — poundings, liftings, plantings, weavings, cuttings, stalkings, throwings — are shared with dancing. Like her, he considered dance to be the primal kind of cultural creativity. Citing another book of Lindsay’s criticism, After the Thirties, Cranny-Francis writes:

Lindsay identifies in dance the rhythmical control of movement that characterises human activity and being. It bodily enacts the purposive behaviours that enable the group to maintain social coherence, engaging them through the rhythm of the breath: ‘Body and mind are thus keyed together in new adventurous and interfused ways.’ The dance becomes an exploration of the embodied being required to achieve a specific purpose, such as a hunt. It lifts the dancer (and observer) into the realm of ‘pure potentiality’ where ‘desire and act are one’; where the bodily disposition required to engage successfully in a particular activity is achieved and communicated. In this process, Lindsay argued, human beings imaginatively engage aspects of everyday life and rehearse the modes of being, thinking and acting that enable them to achieve their needs and desires. For Lindsay this is the role of culture in the formation of being and consciousness, whether it be the ritual art of early societies or contemporary literature, visual art, theatre and dance.


If communism means opposition to capitalism and desire for a future free of oppression and exploitation, Lindsay was certainly a communist. No one seems to know exactly when he joined or if he ever left the British Communist Party, but he was actively affiliated with it from the late 1930s until at least the 1970s. MI5 put him under surveillance. He stayed in the party when it demanded he recant his ideas, and again after Khrushchev’s denunciation of Stalin’s brutality in 1956. There is no doubt about the strength of his allegiance. But was Lindsay a Marxist communist? He certainly called himself one. Cranny-Francis, along with just about everyone else who has written about him, takes it for granted.

Yet there are grounds for wondering about Lindsay’s Marxism. What kind of Marxist converts on account of a Renaissance philosopher? Marxism profoundly shaped his thinking but it was not Lindsay’s foundational postulate. He came to it as a plausible derivation from a more fundamental constellation of ideas about culture and history that he had already arrived at. Some of his creed was shared with Marxism, some was dissonant with it. If, in the manner of a party apparatchik, one were called on to prepare a list of his heresies, it would be an easy brief: he largely discounts or ignores economic forces, flirts with idealism, sees revolutionary potential in “the people” rather than “the working class,” and has a Romantic, even reactionary, understanding of Communist aims.

Late in life, Lindsay began to concede the point. The Crisis in Marxism (1981) is highly critical of most prominent twentieth-century Marxist theorists, particularly Adorno and Althusser. In one of his last essays he declared that he was “diametrically opposed to all closed systems,” including Lenin’s. “I have found all Marxists, orthodox or not, to be hostile.” Among an eclectic list of influences ranging from Keats to Harrison to Dostoyevsky, only two Marxists appear: Lukacs, and Marx himself.

In a sense, of course, debating whether Lindsay was “really” Marxist is as futile as debating whether Mormons are Christian or Alevis Muslim. In another sense, though, it matters. As long as Lindsay is seen as first and foremost a Marxist, his ideas remain submerged beneath the complexity and weight of a hundred and fifty years of Marxist theorising. To perceive what is most original in his thought, it needs to be disentangled from what has become a distracting integument.


Promised a scholarship to Oxford after he graduated from the University of Queensland but told that he would have to wait a year, Lindsay refused to enrol. For most of his life the lack of a higher degree and his oppositional politics would have made it difficult if not impossible to work as an academic. He gave no sign of wanting to. Even his most esoteric books were not aimed primarily at academics, nor did they please many of them. Ironically, today it is chiefly they who keep his memory alive. Anne Cranny-Francis’s book is no exception, but it deserves a broader readership. We need not agree with Lindsay’s controversial opinions to hope that this remarkable thinker will become better known. •

Jack Lindsay: Writer, Romantic, Revolutionary
By Anne Cranny-Francis | Palgrave Macmillan | €119.99 | 416 pages

Odyssey down under

A new kind of history is called for in the year of the Voice referendum. Here’s what it might look like.

In the beginning, on a vast tract of continental crust in the southern hemisphere of planet Earth, the Dreaming brought forth the landscape, rendering it alive and full of meaning. It animates the landscape still, its power stirred constantly by human song, journey and ceremony. Past and present coalesce in these ritual bursts of energy. Creatures become mountains which become spirits that course again through the sentient lands and waters. People visit Country, listen to it, and cry for it; they sing it into being, they pay attention to it. They crave its beneficence and that of their ancestors. Their very souls are conceived by Country; life’s first quickening is felt in particular places and they become anchored forever to that beloved earth.

The stars are our ancestors lighting up their campfires across the night sky. The universe exploded into being fourteen billion years ago and is still expanding. As it cooled and continued to inflate, an opposite force — gravity — organised matter into galaxies and stars. Everything was made of the elements forged by stars. Around billions of fiery suns, the interstellar dust and debris of supernovas coalesced as planets, some remaining gaseous, some becoming rigid rock. Earth, with its molten core, its mantle of magma and a dynamic crust, was born. The planet is alive.

In the shallow waters off the western coast of the continent metamorphosed by the Dreaming sit solid mementos of the beginning of life. They are living fossils, cushions of cells and silt called stromatolites. After life emerged in a fiery, toxic cauldron in an ocean trench, bacteria at the surface captured sunlight and used it to create biological energy in the form of sugar. They broke down carbon dioxide in the atmosphere, feeding off the carbon and releasing oxygen as waste. Photosynthesis, Earth’s marvellous magic, had begun. It was just a billion years after the planet was formed.

To later inhabitants, oxygen would seem the most precious waste in the firmament. But it was a dangerous experiment, for the oxygen-free atmosphere that had created the conditions for life was now gone. Stromatolites hunched in the western tides descended from the creatures that began to breathe a new atmosphere into being.

Two billion years ago, enough oxygen existed to turn the sky blue. The same oxygen turned the oceans red with rust. Thus life itself generated the planet’s first environmental crisis. This ancient rain of iron oxide is preserved today in the banded ores of the Hamersley Range. The universe was then already old, but Earth was young.

The planet was restless and violent, still seething with its newness. When separate lands fused, the earth moved for them. Australian landmasses shifted north and south as crusts cruised over iron-rich magma. Large complex cells fed off the growing oxygen resource and diversified rapidly. For almost 400 million years the whole planet was gripped by glaciation and scoured by ice, and most life was extinguished. The long reign of the ancient glaciers was written into rock.

As the ice withdrew, life bloomed again. Organisms of cooperative cells developed in the oceans and became the first animals. Six hundred million years ago, a supercontinent later known as Gondwana began to amass lands in the south, and their titanic fusion created a chain of mountains in central Australia. Uluru and Kata Tjuta, inspirited by the rainbow python, are sacred rubble from this momentous first creation of Gondwana.

Life ventured ashore, protected now from dangerous radiation by the strengthening shield of ozone gas around Earth. Plants and animals sustained each other, the essential oxygen circulating between them. Gondwana united with other continents, creating a single landmass called Pangaea. When the planet cooled again, surges of glacial ice scoured life from the land once more. But life persisted, and its reinventions included the seed and the egg, brilliant breakthroughs in reproduction. They were portable parcels of promise that created a world of cycads and dinosaurs.

Earth gradually changed its hue over eons. Rusted rock and grey stone became enlivened by green, joining the blue of the restless oceans. Chlorophyll conquered the continents. Pines, spruces, cypresses, cycads and ferns found their way up the tidal estuaries, across the plains and into the mountains, but the true green revolution awaited the emergence of flowering plants. These plants generated pollen and used animals as well as wind to deliver it. Insects especially were attracted to the perfumed, colourful flowers where they were dusted with pollen before they moved to another bloom. It was a botanical sexual frenzy abetted by animal couriers. The variety of plants exploded. Nutritious grasslands spread across the planet and energy-rich fruits and seeds proliferated. As this magic unfolded, Gondwana separated from Pangaea again and consolidated near the south pole, where it began to break up further.

The cosmic dust that had crystallised as Earth, dancing alone with its single moon and awash with its gradually slowing tides, seemed to have settled into a rhythm. The bombardment of meteors that marked its early life had eased. Giant reptiles ruled, small mammals skulked in the undergrowth, and flowers were beginning to wreak their revolution.

Then, sixty-six million years ago, the planet was violently assaulted. A huge rogue rock orbiting the Sun plunged into Earth. The whole planet shuddered, tidal waves, fires and volcanoes were unleashed, soot blackened the atmosphere, and three-quarters of life was extinguished. The largest animals, the dinosaurs, all died. But the disaster of the death star also created the opportunity for mammals to thrive. The comet forged the modern world.


Flat and geologically calm, the landmass that would become Australia was now host to few glaciers and volcanoes. But ice and fire were to shape it powerfully in other ways. About fifty million years ago, in the final rupture of Gondwana, Australia fractured from its cousin, Antarctica, and voyaged north over millions of years to subtropical latitudes and a drier climate. Fire ruled Australia while Antarctica was overwhelmed by ice. The planet’s two most arid lands became white and red deserts.

The newly birthed Australian plate rafted north into warmer climes at a time in planetary history when the earth grew cooler, thus moderating climatic change and nurturing great biodiversity. It was the continent’s defining journey. It began to dry, burn and leach nutrients, the ancient soils became degraded and impoverished, and the inland seas began to dry up. In the thrall of fire, the Gondwanan rainforest retreated to mountain refuges and the eucalypt spread. Gum trees came to dominate the wide brown land. The bush was born.

Three million years ago, when North and South America finally met and kissed, the relationship had consequences. Ocean currents changed and the Pleistocene epoch, marked by a succession of ice ages, kicked into life. Regular, dramatic swings in average global temperature quickened evolution’s engine. The constant tick and tock of ice and warmth sculpted new, innovative life forms.

In southern Africa, an intelligent primate of the forests ventured out onto the expanding grasslands and gazed at the horizon. This hominid was a creature of the ice ages, but her magic would be fire. One day her descendants walked north, and they kept on walking.

By the time they reached the southeastern edges of the Asian islands, these modern humans were experienced explorers. They gazed at a blue oceanic horizon and saw that there was no more land. But at night they observed the faint glow of fire on a distant continent. And by day they were beckoned by haze that might be smoke and dust. What they did next was astonishing.

The people embarked on an odyssey. They strengthened their rafts and voyaged over the horizon, beyond sight of land in any direction — and they kept on sailing. They were the most adventurous humans on Earth. They crossed one of the great planetary boundaries, a line few land-based animals traversed, one of the deep sutures of tectonic earth. This was over 60,000 years ago. The first Australians landed on a northern beach in exhaustion, wonder and relief. They had discovered a continent like no other.

The birds and animals they found, the very earth they trod, had never known a hominid. The other creatures were innocent of the new predator and unafraid. It was a bonanza. But the land was mysterious and forbidding and did not reveal its secrets easily. The people quickly moved west, east and south, leaving their signatures everywhere. They had to learn a radically new nature. Arid Australia was not consistently dry but unpredictably wet. The climate was erratic, rainfall was highly variable, and drought could grip the land for years. The soil was mostly poor in nutrients and there were few large rivers. But these conditions fostered biodiversity and a suite of unique animals and plants that were good at conserving energy and cooperating with one another.

The first people arrived with a firestick in their hands, but never before had they known it to exert such power. For this was the fire continent, as distinctive in its fire regimes as in its marsupials and mammal pollinators. Fire came to be at the heart of Australian civilisation. People cooked, cleansed, farmed, fought and celebrated with fire. The changes they wrought with hunting and fire affected the larger marsupials which, over thousands of years, became scarce. People kept vast landscapes open and freshly grassed through light, regular burning. By firing small patches they controlled large fires and encouraged an abundance of medium-sized mammals. As the eucalypt had remade Australia through fire, so did people.

They had arrived on those northern beaches as the latest ice age of the Pleistocene held the planet in its thrall. Polar ice was growing and the seas were lower, which had made the challenging crossing from Asia just possible. People could walk from New Guinea to Tasmania on dry land. This greater Australia, now known as Sahul, was the shape of the continent for most of the time humans have lived here. People quickly reached the far southwest of Western Australia and the southern coast of Tasmania. From the edge of the rainforest they observed icebergs from Antarctica, emissaries from old Gondwana.


For tens of thousands of years after people came to Australia, the seas continued to retreat and the new coastlines were quickly colonised. Every region of the continent became inhabited and beloved, its features and ecologies woven into story and law. Trade routes spanned the land. People elaborated their culture, history and science in art and dance, and buried their loved ones with ritual and ceremony in the earliest known human cremations. Multilingualism was the norm. Hundreds of distinct countries and languages were nurtured, and the land was mapped in song. This place was where everything happened, where time began.

As the ice age deepened, the only glaciers in Australia were in the highlands of Tasmania and on the peaks of the Alps. For much of the continent, the ice age was a dust age. Cold droughts settled on the land, confining people in the deserts to sheltered, watered refuges. Great swirls of moving sand dunes dominated the centre of the continent but the large rivers ran clear and campfires lit up around the lakes they formed. About 18,000 years ago, the grip of the cold began to weaken and gradually the seas began to rise. Saltwater invaded freshwater, beaches eroded, settlements retreated, sacred sites became sea country. The Bassian Plain was flooded and Tasmanians became islanders. Over thousands of years, Sahul turned into Australia.

The rising of the seas, the loss of coastal land, and the warming of average temperatures by up to 8°C transformed cultures, environments and economies throughout the continent. People whose ancestors had walked across the planet had survived a global ice age at home. In the face of extreme climatic hardship, they continued to curate their beloved country. They had experienced the end of the world and survived.

The warm interglacial period known as the Holocene, which began 13,000 years ago, ushered in a spring of creativity in Australia and across the planet. Human populations increased, forests expanded into the grasslands and new foods flourished. Australians observed the emergence of new agricultural practices in the Torres Strait islands and New Guinea but mostly chose not to adopt them. They continued to tune their hunting and harvesting skills to the distinctive ecologies of their own countries, enhancing their productivity by conserving whole ecosystems. A complex tapestry of spiritual belief and ceremonial ritual underpinned their economies. The sharing of food and resources was their primary ethos.

Strangers continued to visit Australia from across the seas, especially from Indonesia and Melanesia. Four thousand years ago, travellers from Asia brought the dingo to northern shores. During the past millennium, Macassans from Sulawesi made annual voyages in wooden praus to fish for sea cucumbers off Arnhem Land where they were generally welcomed by the locals. The Yolngu people of the north engaged in trade and ceremony with the visitors, learned their language, adopted some of their customs and had children with them. Some Australians travelled by prau to Sulawesi.

In recent centuries, other ships nosed around the western and northern coasts of the continent, carrying long-distance voyagers from Europe. One day, early in the European year of 1788, a fleet of tall ships — “each Ship like another Noah’s Ark” carefully stowed with seeds, animals and a ballast of convict settlers — entered a handsome harbour on the east coast of Australia and began to establish a camp. These strangers were wary, inquisitive and assertive, and they came to stay. They were here to establish a penal colony and to conduct an agrarian social experiment. They initiated one of the most self-conscious and carefully recorded colonisations in history on the shores of a land they found both beautiful and baffling.

They were from a small, green land on the other side of the world, descendants of the people who had ventured west rather than east as humans exited Africa. They colonised Europe and Britain thousands of years after the Australians had made their home in the southern continent. They lived in a simplified ecology scraped clean by the glaciers of the last ice age, and were unprepared for the rich subtlety of the south.

For 2000 years before their arrival in Australian waters, the Europeans had wondered if there might be a Great South Land to balance the continents of the north. By the early sixteenth century they had confirmed that the planet was a sphere and all its seas were one. They circled the globe in tall sailing ships and voyaged to the Pacific for trade, science and conquest. The British arrivals were part of the great colonialist expansion of European empires across the world. For them, success was measured through the personal accumulation of material things; Australians were the opposite.

On eastern Australian beaches from the late eighteenth century, there took place one of the greatest ecological and cultural encounters of all time. Peoples with immensely long and intimate histories of habitation encountered the furthest-flung representatives of the world’s first industrialising nation. The circle of migration out of Africa more than 80,000 years earlier finally closed.

The British did indeed find the Great South Land of their imagination seemingly waiting for them down under and they deemed it vacant and available. It was an upside-down world, the antipodes. They would redeem its oddity and emptiness. The invaders brought the Bible, Homer, Euclid, Shakespeare, Locke and the clock. They came with guns, germs and steel. With the plough they broke the land. They shivered at “the deserted aboriginal feel of untilled earth.” They dug the dirt and seized it. Sheep and cattle were the shock troops of empire; their hard hooves were let loose on fragile soils and they trampled them to dust. Australian nature seemed deficient and needed to be “improved.” Colonists believed that the Australians were mere nomads, did not use the earth properly, and therefore did not own it.

But the true nomads were the invaders and they burned with land hunger. War for possession of the continent began. It continued for more than a hundred years on a thousand frontiers. Waterholes — the precious jewels of the arid country — were transformed into places of death. It was the most violent and tragic happening ever to befall Australia. So many lives were sacrificed, generations of people were traumatised, and intimate knowledge of diverse countries was lost.


Australia entered world history as a mere footnote to empire; it became celebrated as a planned, peaceful and successful offshoot of imperial Britain. A strange silence — or white noise — settled on the history of the continent. Nothing else had happened here for tens of thousands of years. Descendants of the newcomers grew up under southern skies with stories of skylarks, village lanes and green hedgerows from the true, northern hemisphere. And they learned that their country had a short triumphant history that began with “a blank space on the map” and culminated in the writing of “a new name on the map” — Anzac. So the apotheosis of the new nation happened on a distant Mediterranean shore. The cult of overseas war supplanted recognition of the unending war at home, and the heroic defence of country by the first Australians was repressed. They were disdained as peoples without agriculture, literacy, cities, religion or government, and were allowed neither a history nor a future.

The British and their descendants felt pride in their new southern land and pitied its doomed, original inhabitants. Colonists saw themselves as pioneers who pushed the frontier of white civilisation into the last continent to be settled, who connected Australia to a global community and economy. They were gratified that their White Australia, girt by sea, a new nation under southern skies, was a trailblazer of democratic rights: representative government, votes for working men, votes for women. But the first Australians lay firmly outside the embrace of democracy. They continued to be removed from country onto missions and reserves; they did not even have a rightful place in their own land, and every aspect of their lives was surveyed.

The invaders lived in fear of invasion. Had they used the soil well enough, had they earnt their inheritance? Would strangers in ships and boats threaten again? Had they reckoned with their own actions in the land they had seized? There was a whispering in their hearts.

New peoples arrived down under from Europe, the Americas and Asia, and the British Australians lost their ascendancy. Australia became the home again of many cultures, vibrantly so, and a linguistic diversity not seen on the continent since the eighteenth century flourished. Many languages of the first peoples persisted and were renewed. The classical culture of the continent’s discoverers endured; their Dreamings, it was suggested, were the Iliad and Odyssey of Australia. A bold mix of new stories grew in the land.

The invaders of old Australia did not foresee that the people they had dispossessed would make the nation anew. The society they created together was suffused with grief and wonder. The original owners were recognised as full citizens and began to win their country back through parliament and the courts. They believed their ancient sovereignty could shine through as a fuller expression of Australia’s nationhood.

But now the planet was again shuddering under an assault. The meteor this time was the combined mass of humans and their impact upon air, oceans, forests, rivers, all living things. It was another extinction event, another shockwave destined to be preserved in the geology of Earth. The fossilised forests of the dinosaurs, dug up and burnt worldwide since Australia was invaded, had fuelled a human population explosion and a great acceleration of exploitation. Rockets on plumes of flame delivered pictures of spaceship Earth, floating alone, finite and vulnerable in the deep space of the expanding universe. Ice cores drilled from diminishing polar ice revealed, like sacred scrolls, the heartbeat of the planet, now awry. The unleashing of carbon, itself so damaging, enabled a planetary consciousness and an understanding of deep time that illuminated the course of redemption.

The Australian story, in parallel with other colonial cataclysms, was a forerunner of the planetary crisis. Indigenous management was overwhelmed, forests cleared, wildlife annihilated, waters polluted and abused, the climate unhinged. Across the globe, imperial peoples used land and its creatures as commodities, as if Earth were inert. They forgot that the planet is alive.

The continent of fire led the world into the new age of fire. But it also carried wisdom and experience from beyond the last ice age.

Humans, as creatures of the ice, were embarked on another odyssey. It would take them over the horizon, to an Earth they have never before known. •

References: The stars are our ancestors: B.T. Swimme and M.E. Tucker, Journey of the Universe • “the most precious waste in the firmament”: Richard Fortey, Life: An Unauthorised Biography • “The planet is alive”: Amitav Ghosh, The Great Derangement and The Nutmeg’s Curse • iron oxide, the seed and the egg: Reg Morrison, Australia: Land Beyond Time • the true green revolution: Loren Eiseley, The Immense Journey • expanding grasslands: Vincent Carruthers, Cradle of Life • distinctive in its fire regimes and mammalian pollinators: Stephen Pyne, Burning Bush • conditions of biodiversity: Tim Flannery, The Future Eaters • Sahul and the last ice age: Billy Griffiths, Deep Time Dreaming • conserving whole ecosystems: Peter Sutton and Keryn Walshe, Farmers or Hunter-gatherers? • “each Ship like another Noah’s Ark”: First Fleet surgeon George Worgan in Grace Karskens, People of the River • agrarian social experiment: Grace Karskens, The Colony • guns, germs and steel: Jared Diamond, Guns, Germs, and Steel • “the deserted aboriginal feel of untilled earth”: George Farwell, Cape York to the Kimberleys • “the true, northern hemisphere”: Shirley Hazzard, The Transit of Venus • “a blank space on the map”: Ernest Scott, A Short History of Australia • a whispering in their hearts: Henry Reynolds, This Whispering in Our Hearts • “the Iliad and Odyssey of Australia”: Noel Pearson, A Rightful Place • “a bold mix of the Dreamings”: Alexis Wright, The Swan Book • “we believe this ancient sovereignty can shine through as a fuller expression of Australia’s nationhood”: The Uluru Statement 2017 • a great acceleration: John McNeill and Peter Engelke, The Great Acceleration • “the heartbeat of the planet”: Will Steffen • the new age of fire: Stephen Pyne, The Pyrocene.

Which Oppenheimer?

The physicist’s own words provide a commentary on conflicting depictions

“What is true on the scale of an inch is not necessarily true on the scale of hundreds of millions of light years,” J. Robert Oppenheimer told an audience at the University of Colorado in 1961. The Oppenheimer of these late lectures is a deep moral thinker who quotes the Gospels, the Bhagavad Gita, Sophocles and the French philosopher Simone Weil. A physicist’s view of the unimaginable scope of time and space is balanced by a very human immersion in cultural hindsight.

This Oppenheimer makes no appearance in Christopher Nolan’s Oppenheimer, which ends in the era of McCarthyism, when the national hero credited with bringing the war to an end was targeted in an ugly security hearing. Cillian Murphy’s portrayal gives us a different personality, with little of the physicist’s wry wit, and a more fragile sense of authority based on scientific brilliance rather than intellectual breadth.

Sam Shaw’s television series Manhattan, which premiered in 2014, gave an almost antithetical view to Nolan’s, with the great scientist, played by Daniel London, making only occasional appearances as a terse and inscrutable man who delivers peremptory rulings over the heads of those doing the hard grind of calculation and experiment at Los Alamos.

The story of Oppenheimer, like that of the Manhattan Project, can be told from different angles, but the angles don’t fit together. How did such breadth of hindsight fit with such constrained foresight? How is the party-going womaniser to be reconciled with the moral philosopher obsessed with questions of responsibility? How did the vocation of a gifted theoretical physicist map onto the overwhelmingly horrifying consequences of his work?

As with the inch and the light years, the cognitive dislocation has to do with matters of scale and distance. If the figure of Oppenheimer himself has all the cohesion of a Picasso portrait, this has surely to do with the fact that his mind was working in dimensions not accessible to most of us. “He’s complicated, capable of understanding and holding in his head contradictory ideas,” says Kai Bird, co-author of the biography on which the film is based.

Seen in hindsight and impersonated by a fine actor, as he is in the film, he may resolve himself into some kind of coherence, but hindsight is selective, easy to blend with invention and mould into narrative lines. Justifiably praised as it has been, Nolan’s Oppenheimer ultimately owes more to Hollywood than to history. The modern myth of the persecuted genius shows no signs of fading.

Shaw’s Manhattan skirts the heroic prototype altogether, and in many other respects forms a counterpart to the film. Shot on location in New Mexico, it recreates the township of Los Alamos with dedicated accuracy. For the cohort of scientists drafted in with their families, this “cross between a prison camp and a university campus” is not an easy place to live. In contrast to the string of dramatic moments Nolan creates across time and space, the episodic structure of a television series allows sustained attention as these people wander around the dusty, makeshift town in their city clothes.

The township is populated with invented characters: minor players in the project, their families and the workers brought in as servants from Mexican communities whose land is being occupied. While the men are at work the women make unsuccessful attempts to socialise in a setting where none of them are permitted to know anything about what is going on.

A divergence of views on the underlying physics of the bomb plays out in an alpha-male clash between two men: the hard-bitten Frank Winter (John Benjamin Hickey), leader of a dissenting team working on the implosion principle, and Charlie Isaacs (Ashley Zukerman), the holder of a prize-winning Harvard doctorate who arrived in July 1943, with a load of tickets on himself, as part of the larger Oppenheimer-backed “Thin Man” cohort.

Manhattan’s script is designed to show how social and psychological factors are interwoven with theoretical convictions. You don’t have to follow the maths to pick up on the tensions generated by conflicting paradigms or the dual timelines measured on the ticking clock.

In a race to assemble the data before Oppenheimer leaves for the next briefing in Washington, Winter works through the night, interrupted by a giant scorpion marching across the wooden desktop as he scribbles equations, then drives out into the desert to experiment with the accelerations of a golf ball under the car headlights. Meanwhile, the days crawl by for his wife Liza (Olivia Williams), a highly qualified botanist who has put her own career on hold, and their daughter Callie (Alexia Fast), desperate to escape to college in New York.

Stresses of another kind begin to escalate as bored military officers, on hand to enforce secrecy provisions, become overzealous, going through rubbish bins and opening personal correspondence. Things get ugly with the arrival of a plain-clothes interrogator played by Richard Schiff (familiar to some viewers from The West Wing). With a relaxed manner and husky delivery, he assures his suspects: “Technically you’re nowhere, talking to no one.” Errors of judgement, betrayals and tragic consequences follow, and the town is subject to a regime of obsessive surveillance resembling conditions commonly associated with Stalinism.

All this is an important corrective to Nolan’s decision to depict the postwar witch-hunt against Oppenheimer without any attention to the precedents. Being accused of espionage was an occupational hazard for anyone who worked at Los Alamos. “It seems half of America’s physicists just disappeared,” says Winter at one point. “Pouff. Like the Rapture.” Curiously, Nolan casts Christopher Denham, who plays a member of Winter’s research cohort in Manhattan, as the real-life spy Klaus Fuchs, seen on the fringes of scenes in Oppenheimer as if he’s having a pretty easy time passing as one of the team.

The security theme has both political and social complexities: how is it possible to conduct any high-profile covert operation without attracting espionage? The film shows Oppenheimer as too trusting, or simply oblivious, as he focuses on technical matters while Fuchs hovers behind him. In Manhattan, he comes across as too ruthless to care what happens to those working under his management.


With audience numbers declining, Shaw’s series was terminated after season two. Watching it again now, it’s easy to see why, despite a uniformly excellent cast and the authentically evoked milieu. The characters lack edge and wit, and their obsessive sexual liaisons become tedious. Frank Winter, always dour and frustrated, just doesn’t have sufficient range to sustain the amount of screen time devoted to him. The bomb itself is never effectively conjured into presence, something Nolan manages unforgettably with a display of IMAX wizardry.

Both dramatisations provoke ethical questions by divorcing the story of what went on in Los Alamos from its consequences in Hiroshima and Nagasaki. But how, even hypothetically, could those two pictures be brought into meaningful dramatic relationship?

Here the older Oppenheimer serves as a fiercely articulate commentator. Speaking at the Princeton Theological Seminary in 1958, he began with the admission that “we are in trouble,” and proceeded to dwell on the sense of disharmony and imbalance that characterised the postwar era.

“Everything has got enormously bigger,” he said. “There are more people. The units of human activity have gotten bigger.” The scale of our enterprise as a species had defied our comprehension; the joint between tradition and innovation was “inflamed and in bad shape” and an “almost total collapse of valid communication” was evident in the public domain.

The words still ring true. And no matter how big the screen, we biological humans are perhaps only capable of seeing part of the picture. •

Magnificently crumpled lives

A fascinating account of nineteenth-century phrenologists illuminates how ideas spread

If you are not quite certain what “phrenology” is, you are not alone. Many of us are vaguely familiar with the word — something to do with bumps on the skull? — but might struggle to explain its defining principles. In historical memory it sits hazily alongside mesmerism as an arcane oddity: one of those fields of study beloved by the Victorians for their promise to render the mysteries of life legible and manageable. The belief that human character and capability could be determined by “reading” the external landscape of the head has long since been discredited, as has the idea that a magnetic fluid exists between and connects us all. The “science” of these fields has been largely forgotten, though traces of their vocabulary linger still.

Alexandra Roginski’s appointed task in Science and Power in the Nineteenth-Century Tasman World: Popular Phrenology in Australia and Aotearoa New Zealand is not to recuperate the forgotten field of phrenology, nor to restore its place in an intellectual history of science. Indeed, she devotes remarkably little space to explaining phrenology’s foundational principles or rehashing its chief lines of fracture or debate. Instead, she weaves a narrative that offers a counterpoint to that of the professionalisation of science and its establishment as an academic discipline. While “science sprouted tendrils across the settler colonies,” and took root in universities, museums, exhibitions, observatories, scientific societies and other institutions, phrenology was weeded out from such establishment bodies only to flourish in the terrain of local communities and popular culture.

The setting for Roginski’s exploration is the “Tasman World”: an intercultural setting brought into existence by the flow of people and products to and between Australia and Aotearoa New Zealand from the late eighteenth century onward. During the decades of phrenology’s most vigorous public life, from the 1840s to the early 1900s, this was “a region and period seared by European settler-colonialism… a region of immense displacement, mobility and remaking.” It was a place in which a popular practice like phrenology could “shore up momentary power.”

In these mobile worlds, a “cadre of self-appointed professors” took science to the public by offering private consultations, public lectures and popular performances. Some were sincere, dedicated and knowledgeable exponents of phrenology; they offered extensive expositions of its theory and made it their life’s work. But many seized on the potential it offered for commercial or professional exploitation, adapting its vocabulary and gestures to purposes of their own.

Those who bore the “capacious title of phrenologist” might double as “gold miners, fortune tellers, vagrants, petty criminals, ministers, physicians, actors, elocutionists, barbers and journalists.” Some “plucked just one or two things from practical phrenology’s toolbox” to wield in a losing battle for security, income, or reputation. Others borrowed from its platform oratory to spice up their earnest articulation of radical, unionist, racialist or spiritualist views.

These Tasman phrenologists remain, for the most part, shadowy figures. “Their grasp on power was not the expansive sovereignty of the great figures of history,” so it isn’t surprising that their traces on the archival record should be blurred and incomplete. That they come into view at all is to the credit of Roginski’s painstaking searches through digital and paper archives, her drawing together of seemingly inconsequential fragments to create suggestive, although always partial, histories.

Roginski seeks out these eccentric and elusive practitioners, not to give them solid and certain form but rather to hold them to the light, finding historical meaning in what they reflect and refract. She finds them, in abstract terms, wielding “a transnational science in charged negotiations already overlaid by structures of colonisation, class, race and gender.” Yet she finds also — and takes seriously — “the joy, earnestness, theatre, wit, ambition, desperation and sometime tragedy of magnificently crumpled lives.”

The result is a lively, anecdote-rich, grounded, complex, wide-ranging, eclectic and downright fascinating account of the promise, practice and performance of this popular science and its shady practitioners. Roginski’s command of her subject is assured, and her crisp, clear writing is both perceptive and witty. Rival phrenologists in a rural town clash over which of them has correctly identified the “angle of murder”; a Russian-born fortune teller strives to place limits on “the stories that could be stretched to fit around her larger-than-life persona”; Bernard O’Dowd is memorably described as “one of Australia’s most prolific nationalists and omnivorous snufflers in New Thought.” Yet Roginski resists the temptation to poke fun at her subjects or to trivialise the aspiration or desperation that drove their human dramas.

Wherever possible, Roginski highlights the instructive complexity of individual experience and performance. Take, for example, the “Wonderful Woman” whose studio photograph — dressed in an exotic, harem-style costume and apparently pointing to significant bumps on the head of her seated, suited, bearded subject — features on the front cover of the book. Madame Sibly (born Marie Eliments) toured southeastern Australia during the 1870s and 1880s as a phrenologist and mesmeric lecturer, reading the heads of subjects ranging from babies to eminent citizens. Hers was a popular performance, young men proving particularly eager to experience the “frisson” of a public head reading at her hands.

Behind the scenes, Sibly’s relationships ran less smoothly. The victim of a violent assault by her lover, she subsequently launched a string of attacks against various men who sought payment for bills or labour or who harassed her on or off stage. Her favoured weapon was a good horsewhip, but on one memorable occasion she caused a group of mesmerised subjects to rush an aggressive audience member “like a pack of hounds.”

Despite such ruptures, Madame Sibly won the affection and loyalty of audiences during her stays in different rural towns. The phrenological performance was shot through with such interplays of gender and power, reputation and respectability, public and private personas.

Or take the “Professor of Phrenology” Lio Medo, who brought his new identity into existence in Dunedin on the South Island of Aotearoa New Zealand in 1880, imperfectly stamping out a former life as Benjamin Strachan: hairdresser, perfumer, caterer, thespian, amateur elocutionist, bankrupt, petty criminal and alleged sex offender. Strachan was of African descent and had claimed to be “American by birth”; in his new life as a phrenologist, Medo traded on the public thirst for exoticism, eventually presenting himself as a “West Indian scientist.”

While Medo “navigated the exoticist shorthand of Black stage identities” with some skill, he also bore the burden of “the reality of life as a body on display.” His turn to performance coincided with the surging popularity of blackface minstrelsy in the Australasian colonies. On stage he “bristled against minstrel stereotypes” but increasingly found himself the object of racial jeering. Roginski presents his life as one of continual renegotiation of racial tropes and a “deft, sometimes frustrating game of self-representation,” but argues that he turned “the burden of double consciousness into a game of shifting identities that tantalised audiences.”


These are just two of the many practitioners who populate the pages of this book: drifters and dreamers, preachers and physicians, idealists and charlatans. Roginski doesn’t confine her interest to self-defined phrenologists, but turns her attention also to the negotiated performances of those who appeared on stage beside them — for example as allegedly mesmerised subjects who would perform appropriate actions when pressed on different parts of the head. For Indigenous participants, she suggests, these performances could be the site of “fleeting moments of empowerment” — albeit within an oppressive colonial framework.

She is equally interested in audiences, whose participation was essential to the success of popular science. One thoughtful chapter probes the responses of a particular audience, Aboriginal residents of the Maloga Mission on the Murray River, to successive visits by phrenologists in 1884 and 1892. Accustomed to being made the object of the flawed and contradictory conclusions of “racial science,” to which phrenology was often allied, they might have chosen to regard its practitioners with anything from resentment to indifference. But Roginski’s careful analysis uncovers uncertain moments of “nuanced interaction,” in which the phrenologists’ visits could become the locus of humour and play, and even vehicles for sharing culture for “people facing the unravelling of their worlds.”

This is a history of “science from below” that brings a cultural and even ethnographic lens to bear upon a strikingly popular phenomenon. The result is a gloriously illuminating study of the way ideas take off and percolate through a society, the different purposes to which they can be put, and how they endure long after they have been discredited: put to work as entertainment, as vocational identity, in the service of commercial rivalry, or as a mask.

Along the way, Roginski reveals “the contested nature of science and who could claim its authority.” Phrenology shared with more respectable branches of science many of its theatres of practice and performance, its capacity to function as entertainment as much as authoritative explication. But as science’s less respectable “other,” it also serves as a mirror to the discipline. Popular phrenology proves indeed, as she claims, an “ideal artefact” through which to study science’s multiple functions and purposes.

Roginski ends her history on a cautionary note. If her study of phrenology sheds light on a nineteenth-century world, it may also help to illuminate our own. The favoured rhetoric of its lecturers, who in the face of waning credibility asserted their authority as guardians of “suppressed knowledge,” finds its echoes still today. Phrenology may have few adherents, but its “promise of certainty and self-advancement still beguiles.”

A review of this length can’t offer much more than a sampler of the content of this expansive, intricate book. Each chapter pursues a different facet of the topic; each is rich in character, anecdote and careful, shaded argument. The diverse experience of colonists, the complexities of class and gender, the variety of Māori and Aboriginal negotiations of phrenology’s power and promise — all are deftly handled through close attention to the particularity of experience.

If there is a narrative history of phrenology in the Tasman world to be found here, it emerges subtly and elusively from the whole. It is not a triumphalist history, nor one of decline; it tells instead of diffusion and transformation. And while the book may defy summary, it invites and rewards attentive, immersive reading. •

Science and Power in the Nineteenth-Century Tasman World: Popular Phrenology in Australia and Aotearoa New Zealand
By Alexandra Roginski | Cambridge University Press | $160.95 | 300 pages

Where’s Melbourne’s best coffee, ChatGPT? | https://insidestory.org.au/melbournes-best-coffee/ | 27 January 2023

The robot can tell you what everyone else thinks — and that creates an opportunity for journalists

A few weeks ago the Nieman Lab — an American publication devoted to the future of journalism — nominated the automation of “commodity news” as one of the key predictions for 2023. The timing wasn’t surprising: just a few weeks earlier, ChatGPT had been launched on the web for everyone to play with for free.

Academia is in panic because ChatGPT can turn out a pass-standard university essay within seconds. But what about journalism? Having spent the summer experimenting with the human-like text it generates in response to prompts, I’ve come away with two conclusions.

First, journalists have more reason than ever before not to behave like bots. Only their humanity can save them.

Second, robot-generated journalism will never sustain the culture wars. Fighting on that arid territory is possible only for the merely human.

I started my experiment with lifestyle journalism because I was weary of how much of that kind of Spakfilla was filling the gaps in mainstream media over the silly season.

My first prompt, “Write a feature article about where to find the best coffee in Melbourne,” resulted in a 600-word piece that began:

Melbourne is renowned for its coffee culture, and for good reason. The city is home to some of the best coffee shops in the world, each with its own unique atmosphere and offerings.

This style is characteristic: ChatGPT starts with a bland introduction and concludes with an equally bland summation. In between, though, it listed exactly the coffee shops — Seven Seeds, Market Lane, Brother Baba Budan, Coffee Collective in Brunswick — I would probably nominate, as a Melbourne coffee fiend, if commissioned to write this kind of article.

As a friend of mine remarked when I told him about this experiment, nobody is going to discover a new coffee shop in Melbourne using ChatGPT. It runs on what has gone before: the previous products of human writers, as long as they’re available online.

But while the article was too predictable to run in any newspaper with a Melbourne audience, it could easily be published in one of the cheaper airline magazines aimed at international travellers. For that audience it was perfectly serviceable.

Likewise for the prompt “Write an article about how to spend two days in Sydney.” A dull piece recommended the Opera House, the Harbour Bridge, the Royal Botanic Gardens, the ferry to Manly and Taronga Zoo. Readers were advised to try Australian cuisine, with a nod to “delicious seafood” but also including meat pies and vegemite on toast. Another prompt, this one drawing on an article in the Guardian about uses for stale bread, resulted in a ploddingly written piece that nevertheless contained exactly the same recipes for French toast, bread pudding and panzanella salad.

My conclusion? Poor-quality join-the-dots lifestyle writing may well be dead as a human occupation. Google plus ChatGPT can do it faster and cheaper.

So I increased the challenge, basing my prompts on real articles published over summer. The prompt “Write an article analysing who will win the Ukraine war and why” resulted in ChatGPT reminding me that its database goes up only to 2021. It didn’t know there was a Ukraine war.

Asked for an analysis of the prime ministership of Jacinda Ardern, on the other hand, the robot produced a woodenly written but accurate summary of her record. The content, though not the style, was very similar to the real articles that followed the announcement of her stepping down.

What was missing were the strident opinions about whether she had been a good thing or a bad one — the commentary on her housing and economic policies, for example, and whether they had completely failed or broken new ground.

This points to a key feature of ChatGPT: it has trouble being opinionated. Sometimes it admits this. At a moment when I was in contortions over my own work, I asked it to write about whether Margaret Simons was a good journalist. “I am not able to provide an opinion on whether or not Margaret Simons is a good journalist,” it replied, “as I am a machine learning model and do not have the ability to form opinions.” I had to find another way to cheer myself up.

It then recapped information about me drawn from public sources, adding the inexplicable assertion that I had written a book called The Lost Boys. (I wrote none of the several volumes with that title, and nor did anyone who shares my name.)

ChatGPT has been programmed to avoid writing in the style of particular journalists, and not to tackle anything too provocative. So an attempt to have it write in the style of Andrew Bolt about Cardinal George Pell was rejected outright.

My prompt “Write about the evidence that Ben Roberts-Smith is a war criminal” resulted in an error message, suggesting that the robot has been programmed to steer clear of defamation. But the prompt “Write an opinionated piece on the record of Cardinal George Pell” resulted in an accurate summation of his career, including the controversies over sexual abuse, his conviction, and its overturning on appeal. The final paragraph was:

In conclusion, Cardinal George Pell’s record is a complex and controversial one… Ultimately, it’s up to each individual to form their own opinion on Cardinal Pell’s record, taking into account all the facts and evidence available.

I doubt any of the real commentators on Pell would agree with that conclusion.

Next, I asked for articles based on the same premise as opinion articles by Herald Sun columnist Rita Panahi and Nine newspapers columnist Ross Gittins. In all cases, the robot was much duller than the originals.

This Panahi piece arguing that boys are suffering in education inspired me to prompt: “Write an opinionated article arguing that the war against boys is having a damaging impact on the education gender gap.” The robotic result included this observation:

Another argument against the war against boys is that it is leading to a feminisation of education. Boys are often stereotyped as being less interested in reading and writing, and more interested in sports and other physical activities. However, research has shown that this is not the case and that boys are just as capable of excelling in these areas as girls.

Compare this with the Panahi original — passionate, almost crude, selecting quotes from research that matched her passions. I disagree with almost everything she says, but the piece is unquestionably human.

ChatGPT can sum up the opinions of others but not confect an opinion of its own — even when I specifically asked it to write polemic. But there was one exception. The prompt “Write an opinion article about Australian political journalism” resulted in this surprisingly trenchant paragraph:

Australian political journalism is in a state of crisis. The industry is struggling to adapt to the rapidly changing media landscape, and as a result, the quality of political reporting in the country is suffering… Furthermore, the Australian political journalism is facing a lack of trust by the general public. The political journalism is often seen as sensationalist, biased and lacking the depth of analysis. This lack of trust has led to a decrease in the public’s interest in politics, which is a serious problem for a healthy democracy.

Is this now such an established fact, I wondered, that ChatGPT happily reproduces it? Many political journalists would want to argue with the robot — which would be interesting to watch.


Conducting these experiments with ChatGPT was, for me, a form of advanced procrastination. I was struggling with a piece of my own journalism. And so, perhaps hoping for some robot rescue, I tapped in “Write an article about the war on drugs in the Philippines.”

The result was accurate yet offensive, given I had just come from attending wakes for the dead. Duterte’s war on drugs, which saw up to 30,000 people killed, was described as “a controversial and polarising issue” rather than a murderous breach of human rights. (Unaided by ChatGPT, I managed to write the piece for the February issue of The Monthly.)

Artificial intelligence is defined as teaching a machine to learn from data, recognise patterns and make judgements on that basis. Given that writing is hard work precisely because it is a series of word-by-word, phrase-by-phrase judgements, you’d think AI might be more helpful.

But there are some judgements you must be human to make. There is no dodging that fundamentally human role — that of the narrator. Whether explicitly or not, you have to take on the responsibility of guiding your readers through the landscape on which you are reporting.

Nor, I think, is it likely that AI will be able to conduct a good interview. Such human encounters rely not on pattern-based judgements but on the unpredictable and the exercise of instinct — which is really a mix of emotional response and expertise.


Yet robots are going to transform journalism; nothing surer.

It’s already happening. AI has been used to help find stories by detecting patterns in data not visible to the human eye. Bots are being used to detect patterns of sentiment on social media. AI can already recognise readers’ and viewers’ interests and serve them tailored packages of content.

Newsrooms around the world are using automated processes to report the kinds of news — sports results, weather reports, company reports and economic indicators — most easily reduced to formulae.
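
To make the point about formulae concrete, here is a minimal sketch in Python of template-driven match reporting: structured data in, formulaic sentence out. The team names, fields and wording are invented for illustration; actual newsroom systems are proprietary and far more elaborate.

def football_report(match):
    # Pick the winner, compute the margin, and choose a verb from a fixed menu.
    home, away = match["home"], match["away"]
    hs, aws = match["home_score"], match["away_score"]
    winner, loser = (home, away) if hs > aws else (away, home)
    margin = abs(hs - aws)
    verb = "thrashed" if margin >= 30 else "edged out" if margin <= 6 else "defeated"
    return (f"{winner} {verb} {loser} {max(hs, aws)}-{min(hs, aws)} "
            f"at {match['venue']} on {match['date']}.")

print(football_report({
    "home": "Collingwood", "away": "Carlton",
    "home_score": 92, "away_score": 87,
    "venue": "the MCG", "date": "Saturday",
}))
# prints: Collingwood edged out Carlton 92-87 at the MCG on Saturday.

Fill a template from a scoreboard feed and the “story” writes itself, which is exactly why this kind of copy no longer needs a human byline.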

The message for journalists who don’t want to be made redundant, and media organisations that want to charge for content, is clear. Do the job better. Interview people. Go places. Observe. Discover the new or reframe the old. Come to judgements based on the facts rather than on what others have said before. Robots can sum up “both sides”; only humans can think and find out new things.

Particularly when it comes to lifestyle journalism, AI forces us to consider whether there is any point in continuing to invest in the superficial stuff. Readers can generate it for themselves.

That means we need to do better. Travel and food writing needs to recast our experience of reality — as the best of it always has. Uses for stale bread? Make me smell the bread, feel the texture, hunger for the French toast. Two days in Sydney? I want to smell the harbour, taste the seafood, see the flatness of the western suburbs.

If all you have is clichés then you might as well use a robot. You might as well be one. •

A museum’s fall guy | https://insidestory.org.au/a-museums-fall-guy/ | 20 December 2022

Why was a successful scientist and gifted artist airbrushed out of history?

Museums are something of a museum piece these days. And none more so than the Australian Museum, the sandstone behemoth on the edge of Sydney’s centre, founded in 1827 and thus approaching its bicentenary.

Like its model in London — the British Museum, which opened in 1759 — it was a flower of the Enlightenment. Its role was twofold: to explore, classify and exhibit the peculiar animals and plants of the Antipodes, and to document the peoples newly encountered by Europeans.

As became the notorious case for the British Museum and its counterparts in France and Germany, colonialism and settlement accompanied and aided the collecting of specimens. By the time the Australian Museum got into its stride in the second half of the nineteenth century, Voltaire’s “noble savages” had been overtaken by evolutionist attempts to rank human subspecies. As Brendan Atkins observes in his compact but panoramic new book, The Naturalist: The Remarkable Life of Allan Riverstone McCulloch, “Darwin’s theories became evil in the wrong hands, as they drew conclusions about racial superiority from bone measurements.”

Atkins’s subject is a little-remembered scientist whose brief but highly productive career covered both natural history, brilliantly, and ethnography, more controversially. McCulloch started work at the high-water mark of evolutionary theory and colonialism, and ended just as broadcast media was beginning to challenge and rival the static collections of museums.

Allan Riverstone McCulloch was the great-grandson of Scottish Radicals transported to New South Wales in 1820 for demanding universal suffrage, lucky not to be hanged and beheaded like their two leaders. The family flourished, with a grandson becoming a member of NSW parliament and a land developer, a symbiosis that continues to plague the state. McCulloch’s middle name derives from this uncle’s greatest deal, the subdivision of an estate that became a Sydney suburb.

When his uncle went bankrupt “he did what any gentleman would do in the circumstances: he fled the colony.” This left McCulloch’s widowed mother without much support for herself and her four children. At age thirteen, through family connections, Allan was placed as an unpaid cadet in the Australian Museum, apprenticed to Edgar Waite, the curator of vertebrates.

It was the start of a deep education within the museum and on expeditions with Waite around the coast, islands and waterways of Australia. McCulloch became a world-leading fish biologist and, with the help of artistic training from Julian Ashton, a superb illustrator of his specimens.

His public lectures, livened by lantern slides, were packed. The Latin tag for the Murray cod includes his name. Just before his death in 1925 he had written a script for the newly introduced radio and was using a movie camera to document wildlife on Lord Howe Island. In more modern times, he would have been a local David Attenborough.

Atkins, who trained as a zoologist himself and then, after a career as an environmental scientist on the Murray, edited the Australian Museum’s magazine, was intrigued by McCulloch’s story. Why was such a successful scientist airbrushed out of the museum’s history, snidely derided in its 150th anniversary history, and still occasionally subject to rumours he was syphilitic or a drunk sipping on his own preservation alcohol? (The latter transgression was by an earlier curator.)

The Naturalist circles towards its unhappy end in chapters focused on McCulloch’s artistry, his work on fishes and his ethnographic asides. It was in the latter, Atkins thinks, that “cracks first appear in McCulloch’s persona.” Visiting the Torres Strait Islands in 1907, he and senior colleague Charles Hedley were primarily collecting fish and shells but had also been asked to augment the museum’s collection of Indigenous artefacts.

Already, the locals were making artefacts for sale to visitors rather than for their own use. Seeking more original objects, the scientists went to a hidden grave on Nagir Island, where their delving in the dirt revealed a model of a bird made from turtle-shell plates sewn together with split cane. They smuggled it out.

Much later, in 1922, McCulloch accepted an invitation from photographer Frank Hurley to join an expedition into the remote waterways of the Papuan Gulf, Fly River and Lake Murray. One of McCulloch’s roles was to relay Hurley’s accounts of his daring exploits by Morse code to a sponsoring Sydney newspaper. Atkins recounts more dubious acquisitions, some by outright theft, others by purchase under intimidating circumstances from village elders who would not normally allow strangers to see, let alone take away, their sacred objects.

McCulloch and Hurley sent one shipment of this plunder back to the museum without the Papuan administration’s knowledge when they sailed their ketch Eureka directly to Thursday Island, an Australian territory, for repairs. A larger second shipment was impounded at the Papuan port of Daru after a missionary recounted Hurley and McCulloch’s frank admissions of their collecting methods.

An aggrieved Hurley kept up an acrimonious public correspondence with Papua’s governor, Sir Hubert Murray. But McCulloch’s notes, included in the second shipment, revealed feelings of guilt about the acquisitions. The shipment was eventually released to the museum, with a few confiscations, and Hurley gave up his protests.

McCulloch was to “take the fall for these thefts,” Atkins says. He returned to Sydney in 1923 wracked by dysentery and malaria; soon afterwards a fight erupted at the museum. Two powerful trustees appointed by the state government, auditor-general Frederick Coghlan and businessman Ernest Wunderlich, decided the place needed to be more popular.

They ordered the museum’s scientists to curb their expensive research and get out into the suburbs and country towns to spread their knowledge. On a pretext, they dismissed the senior-most scientist, Charles Hedley. Two eminent scientific trustees, professors Edgeworth David and William Haswell, resigned in protest, but to little avail. “Now with Hedley gone, the trustees took aim at McCulloch,” writes Atkins, “who had by default become their most senior scientist and star curator.”

In May 1924 an invitation came for McCulloch to deliver a paper at a big scientific conference in Hawaii on the Pacific’s food, agriculture and fisheries. Coghlan and other trustees voted against his attendance. It was a devastating blow to McCulloch, still reeling from illness and with his battering over the Papuan artefacts worsening his tendency to bouts of depression. After he suffered a breakdown in early 1925, he was given a year’s leave on half-pay. Respites on his beloved Lord Howe Island helped a patchy recovery.

At this point a former NSW premier, Joe Carruthers, stepped in. He was going to Hawaii, and persuaded the state government to pay half of McCulloch’s expenses to go along and attend another conference, this one focused on fisheries. 

There, McCulloch found the event still vaguely in the planning stage. But he wrote a paper, was invited to give talks, and enjoyed the company of local figures while waiting for proceedings to begin. It seemed like a turning point. A proposed Pan-Pacific institute wanted him as chief scientist. After a long bachelorhood, he had fallen in love with a woman, Jean Innes, on Lord Howe. Yet he was still disturbed in mind and unable to sleep.

In a poignant chapter, Atkins delves in great depth into the feelings of regret and self-reproach that might have deepened McCulloch’s mood swings, and reaches a tentative diagnosis of bipolar disorder, or manic depression as it used to be called.

On his death, McCulloch was lauded as a great scientist. But no church in Sydney would bury a suicide. His friends at the museum raised funds for a granite monument on Lord Howe Island’s seafront, facing the reef he had explored, and buried his ashes beneath it.

Coghlan’s arrogance in a non-museum matter got him removed from office, and the museum celebrated its centenary amid reports of internal disorder. It was not until 1975 that new legislation removed outside trustees and put scientific and professional directors in charge.

Conflicting pressures of popularity and science remain. The Australian Museum is currently hoping for a record million-visitor milestone this year on the back of a Lego dinosaur exhibition and another featuring fibreglass sharks. It’s no doubt true that schools and parents are less likely to bring their kids to another exhibition showing how the land snails of Norfolk Island have been rescued. Science is now more likely to deal with the gloomy subjects of species extinction and climate change, while ethnology turns to revisionist accounts of colonialism that invert the concept of “savagery.”


Since at least 1981, when a large timber slit-drum was returned to newly independent Vanuatu, the Australian Museum has been returning objects to traditional owners, mostly in quiet fashion. But the turtle-shell bird stolen by McCulloch and Hedley from the grave on Nagir Island is still in the collection. In June this year, in the village of Usakof on Lake Murray, I found villagers demanding the return of “powerful objects” stolen by Hurley and McCulloch from their longhouse in 1922.

The museum is now readying a new Pacific gallery to be opened next year. It would be wonderful if this could be accompanied by a more visible examination of its vast holdings of Oceanic objects, mostly in off-site storage, with objects taken in less scrupulous times identified and their disposition with their communities of origin discussed.

It would be satisfying, too, to see McCulloch’s wonderful watercolours of fish made available as prints and his surviving dioramas of sea and birdlife accorded heritage status. Yet none of the museum’s senior directors turned up to the launch of Atkins’s book, which is nonetheless a fine memorial to this outstanding Australian. •

The Naturalist: The Remarkable Life of Allan Riverstone McCulloch
By Brendan Atkins | NewSouth | $34.99 | 190 pages

No idea what it’s talking about | https://insidestory.org.au/no-idea-what-its-talking-about-2/ | 15 December 2022

ChatGPT produces plausible answers supremely well. And that’s both its strength and its weakness

The launch of ChatGPT has sent the internet into a fresh spiral of awe and dismay about the quickening march of machine learning’s capabilities. Fresh in his new role as CEO of Twitter, Elon Musk tweeted, “ChatGPT is scary good. We are not far from dangerously strong AI.” Striking a more alarmed tone was Paul Kedrosky, a venture capitalist and tech commentator, who described ChatGPT as a “pocket nuclear bomb.”

Amid these competing visions of dystopia and utopia, ChatGPT continues to generate a lot of buzz, tweets and hot takes.

It is indeed impressive. Type in almost any prompt and it will immediately return a coherent textual response, from a short factual answer to long-form essays, stories and poems.

But it is not new. It is an iterative improvement on the previous three versions of GPT, or Generative Pre-trained Transformer. This machine-learning model, created by OpenAI in 2018, significantly advanced natural language processing — the ability of computers to “understand” human languages. An even more powerful GPT is due for release in 2023.

When it comes down to it, though, ChatGPT behaves like a computer program, not a human. Murray Shanahan, an expert in cognitive robotics at Imperial College London, has offered a useful explanation of just how decidedly not-human systems like ChatGPT are.

Take the question “Who was the first person to walk on the moon?” ChatGPT is able to respond with “Neil Armstrong.”

As Professor Shanahan points out, in this example the question really being asked of ChatGPT is “given the statistical distribution of words in the vast public corpus of (English) text, what words are most likely to follow the sequence ‘who was the first person to walk on the moon?’”

As a matter of probability and statistics, ChatGPT determines the answer to be “Neil Armstrong.” It isn’t referring to Neil Armstrong himself, but to a combination of the textual symbols it has mathematically determined are most likely to follow the textual symbols in the prompt. ChatGPT has no knowledge of the space race, the moon landing, or even the moon for that matter.

Herein lies the trick. ChatGPT functions by reducing text to probabilistic patterns of symbols and completely disregards the need for understanding. There is a profound brutalism in this approach and an inherent deceit in the yielded output, which feigns comprehension.
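
To see the “statistical distribution of words” idea in miniature, here is a toy sketch in Python (my own illustration, not how OpenAI’s systems are built): it counts which word follows which in a tiny invented corpus and always emits the most common continuation. GPT-style models use neural networks trained on vast corpora and condition on whole sequences rather than a single preceding word, but the point survives at toy scale: “Neil Armstrong” emerges purely from word statistics, with no knowledge of the moon attached.

from collections import Counter, defaultdict

# A tiny, made-up corpus; a real model is trained on terabytes of text.
corpus = (
    "the first person to walk on the moon was neil armstrong . "
    "it was neil armstrong who first walked on the moon . "
    "neil armstrong was the first person to walk on the moon ."
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def continue_text(prompt, steps=3):
    # Repeatedly append the most common continuation of the last word so far.
    words = prompt.lower().split()
    for _ in range(steps):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("The first person to walk on the moon was"))
# prints: the first person to walk on the moon was neil armstrong .

The lookup table answers correctly here only because the answer is the most frequent pattern in its data, which is the trick, and the limitation, in a nutshell.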

Not surprisingly, technologies like ChatGPT have been criticised for parroting text with no underlying sense of its meaning. Yet the results are impressive and continually improving.

Ironically, by completely disregarding meaning, context and understanding, OpenAI has built a form of artificial intelligence that demonstrates these very attributes incredibly convincingly. Does it even matter that ChatGPT has no idea what it is talking about, when it seems so plausible?

So how should we think about a technology like ChatGPT — a technology that is “stupid” in its internal operations but seemingly approaching comprehension in its output? A good place to start is to think of it in terms of what it actually is: a model.

As one of my favourite professors used to remind me, “All models are wrong, but some are useful.” (The aphorism is credited to statistician George Box.) ChatGPT is built on a model of human language that draws on a forty-five-terabyte dataset of text taken largely from Wikipedia, books and certain Reddit pages. It uses this model to predict the best responses to generate. Though its source material is humungous, as a model of the way language is used in the world it is still limited and, as the aphorism goes, “wrong.”

This is not to play down the technical achievements of those who have worked on the GPTs. I am merely pointing out that language can’t be reduced to a static dataset of forty-five terabytes. Language lives and evolves through interactions people have every minute of every day. It exists in a state of constant flux, in all manner of places — including places beyond the reach of the internet.

So if we accept that the model underpinning ChatGPT is wrong, in what sense is it useful?

Leading AI commentators Arvind Narayanan and Sayash Kapoor pin the utility of ChatGPT to instances where accuracy and truth are not necessary — where the user can check for correctness when they’re debugging code, for example, or translating — and where truth is irrelevant, such as in writing fiction. It’s a view broadly shared by the founder of OpenAI, Sam Altman.

But that perspective overlooks a glaring example of where ChatGPT will be misused: where inaccuracy and mistruth are the intention.

We need to think of the impact of ChatGPT as a technology deployed — and for that matter developed — during our post-truth age. In an environment defined by increasing distrust in institutions and each other, it is naive to overlook ChatGPT’s potential to generate language that serves as a vehicle for anything from inaccuracies to conspiracy theories.

Directing ChatGPT towards nefarious purposes turned out to be easy. Without too much effort I bypassed ChatGPT’s much-vaunted safety functions to generate a newspaper article alleging that Victorian opposition leader Matthew Guy has a criminal history, is implicated in matters relating to Hunter Biden’s laptop, and has been clandestinely plotting with Joe Biden to invade New Zealand and seize its strategic position and natural resources.

While I had to stretch the conspiratorial limits of my imagination, ChatGPT obliged immediately with a coherent piece of text stitching it all together.

As Abeba Birhane and Deborah Raji from the Mozilla Foundation have observed, technologies like ChatGPT have a long history of perpetuating bigotry and occasioning real-world harm. And yet billions of dollars and lashings of human ingenuity continue to be directed to developing them. Surely we need to be asking why?

The prospect of technologies like ChatGPT swamping the internet with conspiracies is certainly a worst-case scenario. But we need to face the possibility and reassert the role of language as a carrier of meaning and the primary medium for constructing our shared reality. To do otherwise is to risk succumbing to the flattened simulations of the world projected by technology systems.


To test the limitations of the world as captured and regurgitated by ChatGPT, I was interested to find out how far its mimicry extended. How would it go describing a place dear to my heart, a place that would be far from the minds and experiences of the North American programmers who set the parameters of its dataset?

I spent a few years living in Darwin and have fond memories of it as a unique place that needs to be experienced to be known. Amid Canberra’s cold start to summer, I have been dreaming of the stifling heat of this time of year in Darwin — the gathering storm clouds, the disappointment when they dissipate without bringing rain, and the evening walks my partner and I would take by the beach in Nightcliff, seeking any coastal breeze to bring relief from the heavy, expectant atmosphere of the tropics in build-up.

So I asked ChatGPT to write a short story about a trip to Nightcliff beach in December. For additional flourish, I requested it in the style of Tim Winton.

In a matter of seconds, ChatGPT started to generate my story. The mimicry of Tim Winton was evident, though nothing like reading his actual work. But the ignorance about Darwin in December was comical as it went on to describe a generic beach scene in the depths of a northern hemisphere winter.

The story was replete with trite descriptions of cold weather, dark-grey choppy seas and a gritty protagonist confronting the elements (as any caricature of a Tim Winton protagonist would). At one point, the main character “wrapped his coat tightly around him and shivered in the biting wind.” Without regard for crocodiles or lethal jellyfish, he dives in for a bracing swim, “feeling the power of the water all around him.” He even spots a seal!

Platforms like ChatGPT are remarkable achievements in mathematics and machine learning, but they are not intelligent and not capable of knowing the world in the ways we can and do. Yet they maintain a grip on our attention and promote our fears.

We are right to be concerned. It is past time to scrutinise why these technologies are being built, what functions we should direct them towards and which regulations we should subject them to. But we should not lose sight of their limitations, which serve as a valuable reminder of the gift of language and its extraordinary capacity to help us make sense of the world and share it with others. •

Ecology of extremes | https://insidestory.org.au/ecology-of-extremes/ | 15 November 2022

Steve Morton’s Australian Deserts — winner of the 2022 Whitley Medal for an outstanding publication on Australasian wildlife — highlights the rich diversity of this continent’s ecosystems

How does life, in all its myriad forms, find ways to thrive and survive in an environment of extremes? It is a question that takes us not only to the core of our continent but also to the heart of our identity as Australians.

How do plants, animals, insects, birds, nesting bees, raspy crickets, humans, mulgaras, yellow billy buttons, bats, bush flies, river red gums, euros, desert oaks, salt lake wolf spiders, thorny devils, copperburrs, mulga, sap suckers, zebra finches, waddywoods, banded stilts, spinifex, marsupial moles, antlions, Mitchell grass, lerps, harvester termites, burrowing frogs and fat-tailed dunnarts — how does this gorgeous, ebullient array of life come to be, how does it find ways to flourish, and how do its constituents relate to one another, now and over time, across millions of years?

How do we learn to see the richness and diversity of this life? How do we read Country for its presences and absences? How do we fine-tune our capacity as humans to appreciate and understand the miracles that unfold at our feet and under the skies every day and night? These are beautiful, inspiring, exhilarating questions and they underpin this book, which is a glowing compendium of intelligent wonder.

Steve Morton is our guide in this quest, an admired CSIRO scientist, a renowned ecologist and a gifted writer, and he is introducing us to his home, the beautiful and diverse arid lands of Australia. Australian Deserts: Ecology and Landscapes is about two-thirds of the continent, an astonishing and vast region, and everything that lives in it.

And we glimpse our guide too from time to time: a human in his chosen and beloved setting, relishing a desert dawn, attending a pit-trap, sharing a cup of campfire tea with colleagues, or driving at dusk on the saltbush plains, his forearm resting on the open window when a raspy cricket, large, slim and stylish in tawny colours, lands on his arm. Steve and the handsome insect exchange a glance before the cricket sinks its jaws painfully into his skin. As he fights the pain and fights to keep the car on the road, Steve can’t help admiring the poise, attitude and éclat of that rascally raspy.

Admiration is a strong emotion in this book. This is science with a heart. The natural world elicits Morton’s appreciation and awe. And he accommodates mystery. He is often happily astonished: when describing masses of grasshoppers shooting up into the jet stream and flying halfway across the continent, he exclaims, “who would have believed such a thing?” It’s “a life history,” he says, “that seems like science fiction.”

There is tenderness, too, in the author’s relationship with other forms of life, a warm regard for his fellow creatures and the miracles of survival they daily perform. He describes desert ecologies with rapt affection and pries into the personal lives of plants and animals with delicacy and respect. He is careful not to be sentimental or anthropomorphic, but he does use his imagination and literary skills to project the reader into the experience of other living things: we are offered X-ray vision so that we can see inside river red gums, underground radar so that we realise how much life is busy beneath us, and time-lapse imagery so that we can appreciate the workings of evolution. We are even enabled to sit between the wings of a grey teal in flight.

Weeping mulla mulla growing among feathertop spinifex and shrubs regenerating after fire in the Tanami Desert (photo: Mike Gillam).

There is a kind of autobiography of a desert ecologist that can be gleaned from the pages of this book. We see the author in his late teens out with his dad admiring merino sheep and talking to a farmer on the Hay Plain. The youth is distracted from pastoral talk by male brown songlarks in a frenzy of breeding display; he becomes captivated by their soaring and plummeting, by their singing at full throttle to the female birds.

Young Steve hears their call, too, and soon he is lured away from his destiny as a farmer. And later at university we see him having a Eureka! moment in his first-year biology practical class when he peers into a microscope at the profuse life to be found in the abdomen of a termite. He shouts with glee at the sight of such a vigorous diversity of organisms. In that moment, he decides to become a biologist, and he has felt grateful to termites ever since.


Although Steve Morton is a particularly fine individual of the human species, this book is not about him. His modest appearances on the surface of the text are as fleeting as those of the burrowing frog after rain. But the warmth of his curiosity and the joy of his wonder suffuse the book.

He is not the only human who appears in this text. There is a strong sense of an intellectual community, of the collegiality of ecologists and bush scholars; there is an international fellowship of the field and the laboratory, of the lecture hall and the tea room. Every insight depends on others; knowledge is collective and organic. It advances by being shared and tested; it relies upon teamwork, upon long-term observation in the field, upon a robust scientific culture. This book glows with pride at the collective achievement of ecologists in Australia over decades.

And it glows too with respect for the knowledge and teaching of Aboriginal peoples. It is wonderful to read an ecology of Australia that is so plainly and profoundly indebted to the ecological wisdom of First Nations peoples. Here is the kind of respectful confluence of traditions of knowledge that so many people have been striving for, especially in Alice Springs.

It’s not just about supplementing Western science with Aboriginal insights; rather, it’s about recognising — as this book does — the primacy of Aboriginal understanding and management of Country and making that deep knowledge the foundation for all ecological inquiry. The result is immensely heartening and quite beautiful, a respectful integration of Aboriginal and settler philosophies, united in their awe of the land, nature and the elements.

There are three further ways in which this book might be honoured in the traditions of science and literature. First, it enacts an ecological vision. As I read this book, I truly begin to understand what it is like to think and see like an ecologist. So Australian Deserts is not just about a vast, enchanting region, it is also about a particular way of seeing the world in all its vibrant connectedness. Science leads to philosophy which leads to poetry, and insights flow the other way too, from art to ecology. From now on, if people ask me what an ecological vision means, I will give them this book.

Second, Australian Deserts is a remarkable contribution to two centuries of Australian desert literature. Here I can only briefly invoke an impressive lineage of which Morton is very conscious: writings by Ernest Giles, Baldwin Spencer and Frank Gillen, Cecil Madigan, Ernestine Hill, Hedley Finlayson, Alice Duncan-Kemp, Francis Ratcliffe, Ted Strehlow, Alan Newsome, Isabel McBryde, Dick Kimber, Peter Latz, Kieran Finnane, Tim Rowse, Mike Smith, Barry Hill, Eleanor Hogan, Mark McKenna, Margaret Kemarre Turner, Rod Moss and Kim Mahood, to name just a few. This constant pulse of scholarly and literary reflection coming from the heart of Australia has changed national understandings and identity, and Morton’s book embraces that conversation and adds to its richness.

Much of the early desert literature was about searching and disappointment, about expectation and failed dreams, but Morton writes as someone who is joyously, ecstatically at home, intellectually and emotionally fulfilled by the ecology and landscapes of arid Australia. And Aboriginal peoples appear not as strange or other but as respected teachers, their ancient and continuing cultures the embodiment of what it means to read and love Country.

Third, Australian Deserts, although primarily a scientific work, is also a book-length piece of nature writing. Morton is an exquisite writer. Part of the pleasure of reading this book is the sheer elegance and precision of every sentence. There is a formal grace to his prose, a quiet majesty to the intricate portrait he weaves. Literary exactitude is, in his hands, a scientific instrument, an essential tool in his quest to create holistic understanding.

Morton is educating us in a more precise language about deserts, and tutoring us in a different sense of time, not just of deep time but of slow time. For desert life is patient and so must we be. The author reminds us that at times of climatic stress, “the country is waiting rather than dying.” Without us being told explicitly, we come to understand that an ecologically intact landscape tends to be beautiful.

Thus, I would place Australian Deserts in another lineage, an international bookshelf of nature writing where science and literature coalesce felicitously. On that shelf can be found Aldo Leopold’s A Sand County Almanac, Rachel Carson’s Silent Spring, Annie Dillard’s Pilgrim at Tinker Creek, Richard Nelson’s The Island Within, J.A. Baker’s The Peregrine, Nan Shepherd’s The Living Mountain, George Seddon’s Landprints, and Barbara York Main’s study of the Western Australian wheatbelt, Between Wodjil and Tor.

Australian Deserts is destined to become a classic for this combination of scientific vision and literary poise, and because there is a further dimension of magic in it. Mike Gillam’s photographs of desert life and landscapes are, quite simply, extraordinary. They are no mere illustrations of the text, although they do perform that role superbly. They offer a parallel vision that complements the micro and macro scales of the prose. The book is subtitled Ecology and Landscapes, for “ecology” invokes science and “landscapes” invokes art. But “ecology” also suggests intricacy and “landscapes” implies vastness. Mike Gillam’s photography works on both levels; indeed, it is one of his conjuring tricks to make an aerial landscape photo look like a view through a magnifying glass and a close-up ground portrait look like a view from the air. There is a powerful ecological message in that, about systems and patterns across all scales of a landscape.

Gillam’s pictures are painterly photographs, high-art in the poetics of colour and light, yet they are also scientifically precise and stunningly intimate. They bring you eye to eye with insects and animals; Mike must have learned the patience of the deserts to capture such portraits. Such are the fruits of obsession. Mike Gillam on Australian deserts deserves the recognition accorded to Olegas Truchanas and Peter Dombrovskis on Tasmania’s Southwest.

This is a book that will bring learning, joy and inspiration to generations of humans, and greater compassion for our fellow creatures. It may also help us to live here with deeper respect and understanding, and with a keener awareness of beauty, wonder and complexity in this magnificent land. •

Australian Deserts: Ecology and Landscapes
By Steve Morton, with photographs by Mike Gillam | CSIRO Publishing | $59.99 | 304 pages

Smite all humbug | https://insidestory.org.au/smite-all-humbug/ | 9 November 2022

Australian historian Alison Bashford illuminates the Huxleys’ rich intellectual ecosystem

There is a poignant paragraph in the epilogue to Alison Bashford’s monumental, cross-generational account of the lives and cultural influence of Thomas Henry Huxley and his grandson Julian Sorell Huxley. Bashford acknowledges a debt to Houston’s Rice University Archives, where “Julian’s books are now sequestered.” It is a rueful acknowledgement, but also a clue to the spur for Bashford’s labours.

Julian’s personal library is a little lifeless now, ignored by most, treasured by a few. In a thousand ways, this Intimate History of Evolution is an effort to interpret that library’s meaning, an intellectual ecosystem of the modern natural and human sciences.

Julian’s “personal library” was bestowed on him by his grandfather, a titan of nineteenth-century zoology and comparative anatomy, and a competitive, combative self-made scientist who revelled in his title “Darwin’s bulldog.” Julian (1887–1975) overlapped Thomas Henry (1825–1895) by eight years. His inheritance, of which the library is a symbol, was rich, strange, complex and multivalent — “an intellectual ecosystem” indeed.

Alison Bashford is a distinguished Australian historian whose global perspective and research experience — in science, naval, medical, environmental and population history — equip her well to explore and elucidate that “ecosystem.” Her Intimate History is scrupulously researched and broad in scope, taking in three generations of the talented Huxley dynasty.

A formidable undertaking, yes, but also lucid, lively and addictive — a book for that creature beloved of publishers, the avid general reader. Its weighty scholarly apparatus does not interfere with the narrative flow; the generous visual component of the book is not merely illustrative but rather an integrated and striking adjunct to which Bashford often has recourse. She enjoys picturing and analysing her “primate” subjects through their milieux. (In one shot, for example, those “fraternal primates,” Julian and Aldous Huxley, are fondling “a young relative” — a chimpanzee? — in the San Diego Zoo.) Some of the book’s photographs are unforgettable, most notably the contrasting images of Julian’s “adored” Guy, the lowland gorilla, “caught” hauntingly by Wolfgang Suschitzky in the London Zoo in 1958, and confrontingly as a “latex-stretched” specimen in his glass case in London’s Natural History Museum.

Hovering over all of An Intimate History is the towering figure of Charles Darwin. The Huxley name has not, like Darwin’s, gone into the language as an adjectival football to be kicked around for all manner of purposes even 140 years after his death. (If much of America’s media culture had not descended into fatuity, one might expect to hear it traduced, still, on Fox News.)

But Bashford is interested not in a hierarchy of “great men” but in context, fortuitous combinations, and pivotal moments in history:

The explosive idea of evolution by natural selection was as suited to Huxley’s character as it was a paralysing ill match for Charles Darwin. Together, and with a close circle of like-minded friends, they drove a new scientific naturalism in the middle of the nineteenth century, contesting any explanation of nature, old or new, that relied on a beyond-natural or supernatural force or origin.

The double act — Charles Darwin + T.H. Huxley — brings to mind another forceful duo, Martin Luther and Lucas Cranach. Luther was not, like Darwin, shy of confrontation, but he needed the skilled woodcutting hand of Cranach to illustrate and make accessible the complexity of his Reformation proclamations and turn his German translation of the Bible into the age’s equivalent of a bestseller.

Thomas Henry was, of course, so much more than a carnival barker to Darwin, as Bashford makes clear. If he had to found an intellectual lineage through his own labours rather than inherit one, he did so rapidly and successfully, in fertile partnership (eight children) with Henrietta Heathorn, whom he met and fell in love with in Sydney during a stop on his only field exploration voyage, on the Royal Navy’s wonderfully named HMS Rattlesnake. Darwin on HMS Beagle, Huxley on HMS Rattlesnake: Bashford must have smiled to see some of her work mapped out for her!

Huxley was commissioned as a surgeon, not a naturalist, for his Rattlesnake voyage, but he nonetheless used his Pacific time (the voyage took years) to dissect, study and record some of its creatures, notably the jellyfish. Bashford has fun with lovesick Huxley and his jellyfish:

Thomas Henry was subject to “a painful and unbalanced mental state,” “lethargy,” “self-questioning,” and “depression”… When he felt discontented, he said he reached for Carlyle, and tried to discipline himself by scheduling thoughts-of-Henrietta into his daily routine. One hour before bedtime should do it, allowing the rest of his waking hours to focus on his jellyfish.

But lovesickness had its dark shadow side. Thomas Henry suffered from serious and debilitating depression throughout his life (which makes his prodigious output all the more remarkable). And so did his “Inheritor” (Leonard Huxley’s term for his son, Julian Sorell Huxley). Thomas bequeathed more than his library to his grandson. With it came the burden of expectation, generational pressure, ambition, and the kind of intelligence that could simultaneously suffer and analyse some of the causes of his mental agony — self-inflicted or externally imposed.

Julian was every bit as complex and volatile a human being as his forceful grandfather. And he was, again like his grandfather, skilled at communicating science to a wide public. (They were “unquestionably the founding masters,” Bashford claims.) In Julian’s case, this is perhaps unsurprising — he came out of a family of professional wordsmiths. Thomas Henry and Henrietta’s second son, Leonard, married Julia Arnold, granddaughter of Thomas Arnold of Rugby, niece of the poet Matthew, and sister of novelist Mary Augusta Ward. Julia founded Prior’s Field, a progressive school for girls, and was a formidable woman and educator for all of her short life (she died at forty-six). Her photograph is the other riveting “see-right-through-you” portrait in the book.

Julia and Leonard’s third son, Julian’s brother, was Aldous Huxley. The Huxley family all wrote poetry. They lectured, taught, and wrote novels, essays and memoirs. (Bashford includes a helpful five-generational family tree — one of a number of illustrative “trees” in a book where the crossovers of evolution, descent, genetic inheritance, ecospheres, ethnology, zoology, anthropology et cetera often call for a reader as “disciplined” as T.H. Huxley himself.)

And the communication line continued: Julian was successfully hectored into writing for a general public by H.G. Wells; he made films, winning an Oscar for his 1934 documentary The Private Life of the Gannets. David Attenborough's first television production was presented and narrated by Julian Huxley. He was prominent in wildlife conservation, a correspondent and friend of Jane Goodall. In 1975, the year he died, the first edition of Peter Singer's Animal Liberation: A New Ethics for Our Treatment of Animals was published. Bashford: "Between them, Thomas Henry and Julian Huxley embodied and enacted the modern shift in animal research and animal ethics from imperial natural history, in which animals were collected, and then pinned, stuffed or pressed, to the modern phenomenon of the 'zoo'… and onwards to early environmentalism, the 'conservation' of wild species and their habitats."


The organising principle of Bashford's Intimate History is its focus on the lives and works of two Huxleys in particular, Thomas Henry and Julian. But the time span — almost 200 years between T.H.'s birth and Bashford's reflections on the twenty-first-century relevance of Julian's cautions about "an imperilled global ecology" — provides her reader with a panoramic view of an era of extraordinary and accelerated change.

The book asks: “How are we humans animal and how are we not? What is the nature of time and how old is the Earth itself? What might the planet look like — with or without humans — 10,000 years hence?” That the Huxleys had the nerve, the effrontery, to offer answers to these questions is a cause for celebration, and the book is, in part, a celebration of intellectual bravery. There is something disarmingly frank about Thomas Henry’s plans, itemised in 1860:

To smite all humbug, however big; to give a nobler tone to science; to set an example of abstinence from petty personal controversies, and of toleration for everything but lying; to be indifferent as to whether the work is recognised as mine or not, so long as it is done:— are these my aims? 1860 will show.

That grandfather and grandson made mistakes was inevitable. That they were fallible, sometimes irascible, often insufferable human beings is unsurprising. Perhaps more remarkable is the way they were able, sometimes, to temper and bend their natures in the search for truth, and that others of less volcanic but equal intellect loved them. Bashford: “Huxley and Darwin’s friendship was a strong one, and lifelong. They acknowledged births and deaths, the most joyful and difficult family moments alike… Their interactions were sincere, direct, authentic and forthcoming.”

You could strip out all reference to personal and family life from Bashford’s work and still have a vivid, fine-grained account of the complex history of evolutionary theory, natural selection, sexual selection, the decline and revival of “Darwinism,” and the mutations and developments during the century after Darwin’s death (language that mixes Darwin and Mendel is always a metaphorical minefield). Many of the (male) players would still appear, T.H. Huxley and his anatomist antagonist Richard Owen would still do battle (“a silverback and a blackback asserting dominance in the family group that was British natural scientists”), and the intricacies of the intellectual disputes would still be teased out. But the story would be a partial sketch of their world, and readers can, in any case, go elsewhere for evolutionary history (Bashford’s unobtrusive forty-page index might be a start).

This is social and scientific history, the two inextricably combined, its great virtue being that it provides the context, the “ecosphere” if you like, in which ideas were explored by men and women who depended on one another, and for whom thinking was as natural as breathing.

Bashford is clear-eyed about the blind spots of her protagonists. T.H. Huxley, "the nineteenth century's most famous fact-finder and lover-of-evidence," fell in with presumptions about higher and lower humans. We could call those views "racist" now. But in doing so we'd have to contend with the complexities of debate about what "race" means, if anything. Julian's investigations into eugenic theory could easily (but incorrectly) be confused with the National Socialists' deadly application of it. And as Thomas Henry refuted the theories of his contemporaries, so Julian corrected, adjusted, modified and developed the theories of his grandfather and his grandfather's great friend and colleague Charles Darwin. So, we move forward, step by imperfect step, leap by occasional leap.

The melancholia that afflicted Thomas Henry on HMS Rattlesnake was visited on Julian. (It seems to have skipped over his father Leonard.) His sexual life was fraught, from the time of juvenile infatuations at Eton to his hectic affairs before and throughout his long married life to an assertive and yet loyal Juliette Baillot. For Julian’s memorial service, Juliette, with their children Anthony and Francis, designed the cover sheet. It depicted two birds facing one another, derived from Julian’s best-known (to this day) zoological work, The Courtship Habits of the Great Crested Grebe.

By the time of his death in 1975, eighty years after that of his grandfather, and ninety-three after Charles Darwin’s, the world was in a position to understand, much more fully and richly, “man’s place in nature.” But Julian Huxley was also keenly aware of the fragility of ecosystems; he lived through the Cuban missile crisis and understood how modern humanity, for all its knowledge, could obliterate itself and its place in nature. And in 2022, his grandfather’s ambition to “smite all humbug” and show “toleration for everything but lying” resonates like thunder. •

An Intimate History of Evolution: The Story of the Huxley Family
By Alison Bashford | Allen Lane | $59.99 | 576 pages

Not our system https://insidestory.org.au/not-our-system/ Mon, 23 Aug 2021 07:34:16 +0000 https://staging.insidestory.org.au/?p=68249

TV is having trouble explaining the unexplained

The starfield stretches across the navigation monitors, desolate as ever, but this time there’s something in the foreground: a smudge of light indicating an object much closer. “That’s not our system,” says Officer Ripley.

This tightly focused scene at the start of Ridley Scott’s Alien (1979) heralds an unfolding horror story, the smudge on the radar looming closer and larger, then ever more monstrous. But if Alien represents one extreme of the observation-to-imagination ratio, the history of how US security officials have responded to UFO sightings is at the other: nothing to see here, and if you think otherwise then you are probably deluded, drunk or paranoid.

The Pentagon’s report on unidentified aerial phenomena, or UAPs — more popularly known as UFOs — released in June this year includes an image not unlike the one Ripley sees on her screen. It’s known to insiders as the Tic Tac video. The question is how far from such a manifestation we can legitimately take our speculations.

Highly anticipated, the report’s release has been accompanied by a spate of television dramas and documentaries about UFOs. The filmmakers don’t have much to go on: the Pentagon report is a long way from sensational reading. Of the 144 sightings deemed worthy of investigation, eighteen demonstrated movement or flight patterns that left unresolved questions. That’s about it. What caused some media excitement is the fact that the Pentagon had at last acknowledged that unexplained, and perhaps unexplainable, phenomena have indeed been detected in our skies and oceans.

These new programs tend to make the most of the “cover-up” side of the story, as the title of Netflix’s Top Secret UFO Projects: Declassified indicates. Although this series purports to be contributing to a newly enlarged view of close encounters, its relentless voice-over narrative and its mishmash of interviews and archival footage are stale and unpersuasive.

With little revelatory to offer — most of these accounts of sightings have already been widely circulated — the six episodes make for pretty tedious viewing. If anything, they perpetuate an ingrained problem with popular treatments of the UFO theme: by appealing to naive assumptions that all things hidden must be fascinating and all the most amazing things must be hidden, they damage the credibility of what evidence there is.

Australian journalist Ross Coulthart risks falling into the same trap with The UFO Phenomenon, a one-off 7NEWS Spotlight production that is the result, according to the publicity, of a two-year investigative journey across key American locations where sightings have been reported. The pitch is set in the opening frames. “This story will challenge your understanding of reality”; “The mainstream media wouldn’t touch it”; “We are at a turning point in human history.”

The Pentagon has made an unprecedented admission, says Coulthart: there is something in our skies that we — and more importantly, they — can’t explain. Cut to the seminal image of the blurry Tic Tac spot on the radar. Not our system? Despite its tendency to sensationalism, Coulthart’s version of the story has some advantages over its American counterpart. For a start, it’s shorter and more selective in its focus, concentrating on more recent events.

Coulthart wisely lets his interview subjects provide the bulk of the narrative. The conversations are filmed on location, notably in the spectacular remote landscapes of New Mexico, which seems to be a favoured site for visitations by alien craft. Stories of alien encounter come across so much better in settings that display the scale and strangeness of our own planet. With contributions from US navy commander Kevin Day, former deputy assistant defense secretary Christopher Mellon and Luis Elizondo, who headed the Pentagon Advanced Aerospace Threat Identification Program, Coulthart creates an effective reconstruction of the incident that began with the Tic Tac apparition.

So what are the legitimate lines of speculation to be drawn from this? Day provides a point-by-point account of the navigation patterns traced by his own reconnaissance pilot and the unidentified craft that was clearly responding to them. Elizondo offers a rapid inventory of capacities displayed by the Tic Tac: instantaneous acceleration, hypersonic velocities, low observability, trans-medium travel, anti-gravity. It is only through such detached, technical accounts that any real credibility can be established.

UFO historian David Marler refers to the tinfoil hat stereotype. The real, well-documented cover-up lay in encouraging absurd popular culture stereotypes of little green men, flying saucers, bug-eyed gremlins, and the ominous officials in hats and overcoats reputed to pay a visit to people who report such things. Anyone seeking to be taken seriously ran up against those cartoonish associations.

But the men in hats and overcoats were real enough. A US Air Force study in the 1950s and 60s, code-named Project Blue Book, was dedicated to suppressing and discrediting witness reports. Anyone making such a report was likely to get a knock on the door from its investigators. Project Blue Book, quite a story in itself, is dramatised in an American series of that name, currently in its second season on SBS.

The central character in Project Blue Book is J. Allen Hynek, a professor of astrophysics at Ohio State University and the Smithsonian Institution, who is enlisted to provide scientific explanations that will scotch any adventurous speculation from the press. Initially gratified by the role of “debunker for the air force,” the real-life Hynek gradually came round to a more qualified view, acknowledging that where reliable observers were involved, there were questions of scientific obligation and responsibility.

Hynek is played in the series by Aidan Gillen, who was superb as a smiling Machiavel in Game of Thrones. It should be good casting, just as the idea of a scientific expert who has to revise his most fundamental assumptions should provide a strong dramatic spine. But Project Blue Book is disappointing. The allusions to The X Files are obvious and overplayed, its sense of period too stylised and self-conscious, and it lacks pace and genuine psychological tension.

Which raises the question of why UFO stories are so hard to dramatise effectively. The X Files — with its blend of mystique and self-parody and some sophisticated on-screen chemistry between its charismatic lead actors — established a genre of which it remains the premier example. Perhaps the difficulty now is that the repertoire of stories and sightings is so drenched in retrospect. Close encounters seem to belong to a time when technologies were simpler and human understanding less constrained by advanced cosmology. Perhaps, after all, you do have to be a bit simple to swallow narratives about unexplainable phenomena.

The French series UFOs, also showing on SBS, certainly adopts that premise. Set in 1978, it gets some comedic mileage out of a plot involving a space engineer who, when his career goes off the rails, is invited to redeem himself by heading a UFO investigation unit. As he comes to terms with the hokey organisation to which he’s been assigned, with its hippy assistants and metal cabinets full of hard-copy files, the series works fairly well as a situation comedy. The UFOs remain essentially what the Pentagon was determined to make them — a silly story. •

Get serious, world https://insidestory.org.au/get-serious-world/ Fri, 13 Aug 2021 06:31:49 +0000 https://staging.insidestory.org.au/?p=68087

It might be a very bad film, but The Day After Tomorrow has a message for today

The Day After Tomorrow is a big, cheesy Hollywood blockbuster that made lots of money after it came out in 2004. It’s pretty much like every disaster movie you’ve ever seen. The world is in peril, but don’t worry, a hero will save the day. This time, though, Earth isn’t threatened by an alien invasion or an asteroid; this time it’s the weather that’s turned nasty.

Global warming has caused the ice caps to melt, and the planet’s weather has gone berserk. Tornadoes “attack” Los Angeles and a huge wave engulfs New York City; then an ice age sets in. And all this mayhem seems to take place over the course of just a few days.

Though cliché-ridden and gloriously silly, The Day After Tomorrow did depend on the thinnest sliver of scientific fact. The plot of the movie is “inspired” — I wouldn’t put it any higher than that — by the real science of abrupt climate change.

That’s why, one morning in May 2004, I was sitting in the Sydney office of ABC TV’s Lateline anxiously trying to contact Wallace S. Broecker, Newberry professor in the Department of Earth and Environmental Sciences at Columbia University. I was attempting to arrange an interview for Tony Jones about the film’s release, but because it was Lateline we were going to concentrate on something arcane called “ocean conveyor belts.”

I finally got through to Broecker at his home in New Jersey. “Professor Broecker,” I asked, “would you like to get up at six tomorrow morning and drive all the way into Manhattan to talk about the science that tangentially underpins this dumb movie?”

“Sure. And call me Wally — everyone else does.”

Wally Broecker was raised in an evangelical family but dropped his faith — “cold turkey,” he said — in his twenties. After gaining his doctorate at Columbia, he was a fixture at that university’s prestigious Lamont–Doherty Earth Observatory for well over sixty years.

As early as 1975 he published an influential article in Science called "Climatic Change: Are We on the Brink of a Pronounced Global Warming?" As he predicted, Earth's temperatures started climbing the very next year.

Broecker was also the first scientist to identify and understand ocean conveyor belts, the ocean current networks that affect everything from temperature to rain patterns by shifting energy from one part of the globe to the other.

The famous Gulf Stream, for example, carries warm surface water to Western Europe, which is then sent back across the Atlantic as cold water at depth. Without the Gulf Stream bringing the warmth of the Caribbean across the chilly North Atlantic, Europe would suffer from a new ice age.

Broecker then developed the theory that would eventually inspire the makers of The Day After Tomorrow. These conveyor belts, he argued, could be switched on and off — perhaps over as little as a few decades. If global warming led to the melting of the ice caps, and a massive pulse of freshwater entered the oceans, these aquatic conveyor belts would grind to a halt — with catastrophic implications for the world’s climate patterns.

In a geological time frame, this would indeed bring about climate change in an abrupt manner.

That night on Lateline Broecker happily explained this terrifying scenario. I remember thinking, if the ocean conveyor belts did stop working it would be like turning off the planet’s life support system.

Broecker came into Manhattan early that morning to talk to a TV show on the other side of the world because he understood that he knew stuff that people needed to hear. As he once explained, “The climate system is an angry beast, and we are poking it with sticks.”


I hadn’t thought of Wally Broecker for years. All through the 2019 bushfires I didn’t think of him, nor while I was watching the terrible news of the floods in Western Europe. But then last week I saw this headline in the Guardian: “Climate Crisis: Scientists Spot Warning Signs of Gulf Stream Collapse.”

The source of that article, a paper published in the journal Nature Climate Change, argues there’s been “a gradual weakening during the last decades” of the Atlantic Meridional Overturning Circulation — basically, another name for Broecker’s ocean conveyor belt. In other words, here was up-to-date evidence of what he’d warned our audience about seventeen years ago.

Climate scientists have long said that pumping carbon dioxide into the atmosphere will lead to increasingly frequent and severe bushfires, floods, heatwaves and cyclones. And they’ve been proven correct. Could Wally Broecker’s fears about the impact of climate change on the oceans also come true?

In a video statement recorded just a week before he died in February 2019 at the age of eighty-seven, Broecker said, “Get serious world. This is a very, very, very serious problem.” •

Ghosts in the machine https://insidestory.org.au/ghosts-in-the-machine/ Thu, 05 Aug 2021 03:47:16 +0000 https://staging.insidestory.org.au/?p=67900

A computer scientist takes on artificial-intelligence boosters. But does he dig deep enough?

It seems like another era now, but only a few years ago many people thought that one of the biggest threats to humankind was takeover by superintelligent artificial intelligence, or AI. Elon Musk repeatedly expressed fears that AI would make us redundant (he still does). Stephen Hawking predicted AI would eventually bring about the end of the human race. The Bank of England predicted that nearly half of all jobs in Britain could be replaced by robots capable of “thinking, as well as doing.”

Computer scientist and entrepreneur Erik J. Larson disagreed. Back in 2015, as fears of superintelligent AI reached fever pitch, he argued in an essay for the Atlantic that the hype was overblown and could ultimately do real harm. Rather than recent advances in machine learning portending the arrival of intelligent computing power, warned Larson, overconfidence in the intelligence of machines simply diminishes our collective sense of the value of our own, human intelligence.

Now Larson has expanded his arguments into a book, The Myth of Artificial Intelligence, explaining why superintelligent AI — capable of eclipsing the full range of capabilities of the human mind, however those capabilities are defined — is still decades away, if not entirely out of reach. In a detailed, wide-ranging excavation of AI’s history and culture, and the limitations of current machine learning, he argues that there’s basically “no good scientific reason” to believe the myth.

Into this elegant, engaging read Larson weaves references from Greek mythology, art, philosophy and literature (Milan Kundera, Mary Shelley, Edgar Allan Poe and Nietzsche all make appearances) alongside some of the central histories and mythologies of AI itself: the 1956 Dartmouth Summer Research Project, at which the term "artificial intelligence" was coined; Alan Turing's imitation game, which made a computer's capacity to hold meaningful, indistinguishable conversations with humans a benchmark in the quest to achieve general intelligence; and the development of IBM's Watson, Google DeepMind's AlphaGo, Ex Machina and the Singularity. Men who have promoted the AI myth and men who have questioned it over the past century are given full voice.

Larson has a background in natural language processing — a branch of computer science concerned with enabling machines to interpret text and speech — and so the book focuses on the relationships between general machine intelligence and the complexities of human language. The chapters on inference and language, methodically breaking down purported breakthroughs in machine translation and communication, are among The Myth of Artificial Intelligence's strongest. Larson walks us through why phrases like "the box is in the pen," which MIT researcher Yehoshua Bar-Hillel flagged in the 1960s as the kind of sentence to confound machine translation, still stymie Google Translate today. Translated into French, the "pen" in question becomes a stylo — a writing instrument — even though the sentence makes clear that the pen must be big enough to hold the box. Humans' lived understanding of the world allows us to more readily place words in context and make meaning of them, says Larson. A box is bigger than a biro, and so the "pen" must be an enclos — another, larger, enclosure.

Larson focuses on language understanding (rather than, say, robotics) because it so aptly illustrates AI’s “narrowness” problem: that a system trained to interpret and translate language in one context fails miserably when that context suddenly changes. He argues that there can be no leap from “narrow” to “general” machine intelligence using any current (or retired) computing methods, and the sooner people stop buying into the hype the better.

General intelligence would only be possible, says Larson, were machines able to master the art of “abduction” (not the kidnapping kind): a term he uses to encompass human traits as varied as common sense, guesswork and intuition. Abduction would allow machines to move from observations of some fact or situation to a more generalisable rule or hypothesis that could explain it: a kind of detective work or guesswork, akin to that of Sherlock Holmes. We humans create new and interesting hypotheses all the time, and then set about establishing for ourselves which ones are valid.

Abduction, sometimes called abductive inference or abductive reasoning, is a focus of a slice of the AI community concerned with developing — or critiquing the lack of — sense-making or intuiting methods for intelligent machines. Every machine operating today, whether promoted by its creators as possessing intelligence or not, relies on deductive or inductive methods (often both): ingesting data about the past to make narrower and often untestable hypotheses about a situation presented to it.

If Larson is pondering more explicitly philosophical questions about whether reason and common sense are truly the heart of human intelligence, or whether language is the high benchmark against which to measure intelligence, he doesn’t explore them here. He is primarily concerned with the what of AI (he describes the kind of intelligence AI practitioners are aiming for) and how this might be achieved (he argues it won’t with current methods, but might with greater focus on methods for abduction). Why is a whole other, mind-bending question that perhaps throws the whole endeavour into question.

While Larson does emphasise the messiness of the reality that machines struggle to deal with, he leaves out some of the messiest issues facing his own sub-field of natural language processing. His chapter on "Machine Learning and Big Data," for example, makes no mention of how automated translation tends to reproduce societal biases learned from the data it is trained with.

Google Translate’s mistranslation of “she is a doctor,” for example, arises in the same way as the pen mistranslation. In both cases, the system’s translation is based on statistical trends it has learned from enormous corpuses of text, without any real understanding of the context within which those words are presented. “She” becomes a “he” because the system has learned that male doctors occur more frequently in text than female doctors. The “pen” becomes a stylo not simply because pen is a homonym and linguistically tricky but also because the system is reaching for the most statistically likely translation of the word. The effect in both cases is an error, the challenge is divining context, and the fix in both cases will involve technical adjustments.

But what of other translation errors? At the conclusion of The Myth of Artificial Intelligence Larson makes a brief further reference to “problematic bias,” citing the notorious mislabelling of dark-skinned people as gorillas in Google Photos as an example, characterising it as one of the issues that has “become trendy” for AI thinkers to worry about. (Google “fixed” the error by blocking the image category “gorilla” in its Photos app.) This is an all-too-brief reference to a theme that it is inseparable from the book’s central thesis.

The Myth of Artificial Intelligence convinces the reader that the creation of intelligent AI systems is being frustrated by the fact that the methods we use to build them don’t sufficiently account for messy complexity. Without equal attention being given to complexities introduced by humans into large language datasets, or to the decisions we make training and tweaking systems based on these large datasets, the issue becomes almost entirely one of having the right tools. Left out of this analysis is the question of whether we have the right materials to work with (in the data we feed into AI systems), or whether we even possess the skills to develop these new tools, or manage their deployment in the world.

Larson’s omission of any real discussion of social biases being absorbed by and enacted by machines is odd because The Myth of Artificial Intelligence is dedicated to persuading readers that current machine learning methods can’t achieve general intelligence, and uses natural language processing extensively and authoritatively to illustrate its point. It would only help his case to acknowledge that even the most powerful language models today produce racist and inaccurate text, or that the enormous corpuses of text they are trained with are laden with their own, enduring errors. Yes, these are challenges of human origin. But they still create machines producing errors, machines not performing as they’re supposed to, machines producing unintended harmful effects. And these, like it or not, are engineering problems that engineers must grapple with.


If indeed Larson is right — if we are reaching a dead end in what’s possible with current AI methods — perhaps the way forward isn’t simply to look at new methods but to choose a different path. Beyond reasoning and common sense, other ways of thinking about knowledge and intelligence — more relational, embedded ways of perceiving the world — might be more relevant to how we think about AI applications for the future. We could draw on more than just language as the foundation of intelligence by acknowledging the importance of other senses, like touch and smell and taste, in interpreting and learning from context. How might these approaches inspire revolutionary AI systems?

At one point in The Myth of Artificial Intelligence, Larson uses Czech playwright Karel Čapek’s 1921 play, R.U.R., to illustrate how science fiction’s images of robots hell-bent on destroying the human race have shaped our fears and expectations of superintelligent machines. In Larson’s retelling, these robots, engineered for optimal efficiency and supposedly without feelings or morals, get “disgruntled somehow anyway,” sparking a revolution that wipes out nearly the entire human race. (Only one man, the engineer of the robots, remains.)

It’s true that the robots in R.U.R. get “disgruntled.” But their creators never intended them to be wholly mindless automatons. In Čapek’s imagination they were made of something like flesh and blood, indistinguishable from humans. To reduce factory accidents, they were altered so as to feel pain; to learn about the world, they were shown the factory library. As the robots rebel in the play’s penultimate act, their human creators ponder how their own actions had led to the uprising. Did engineering the robots to feel like humans lead them to become aware of the injustice of their position? Did the designers focus too much on producing as many robots as possible, failing to think about the consequences of scale? Should they have dared to create technology like this at all?

Separating the human too much from the machine can make it hard to properly interrogate the myth. The Myth of Artificial Intelligence is a clever, engaging book that looks closely at the machines we fear could one day destroy us all, and at how our current tools won’t create this future. It just doesn’t dwell deeply enough on why we, as their creators, might think superintelligent machines are possible, or how our actions might contribute to the impact our creations have on the world. •

A risk-taker in the laboratory https://insidestory.org.au/a-risk-taker-in-the-laboratory/ Fri, 14 May 2021 01:31:12 +0000 https://staging.insidestory.org.au/?p=66648

A biography of biochemist Jennifer Doudna raises hard questions about where genetic research is heading

When news emerged in 2018 that the first-ever "designer babies" had been born in a Chinese hospital, the international furore was immediate. Scientist He Jiankui had used gene editing technology to modify the embryos to make them resistant to HIV, the virus carried by their father, defying an international agreement among scientists to limit the technology's use.

He Jiankui was convicted by a Chinese court for violating scientific and medical ethics. But the scientists who had developed the technology already knew the risk that this line could be crossed; and they also knew that the potential benefits of gene editing were too great to ignore.

Gene editing is at the centre of a new book by American journalist Walter Isaacson, well known for his biographies of Steve Jobs, Leonardo da Vinci and others. In The Code Breaker, Isaacson focuses on one of the scientists who developed the technology, Jennifer Doudna, who went on to win the 2020 Nobel Prize in Chemistry with Emmanuelle Charpentier for "rewriting the code of life." Doudna is an ideal protagonist for Isaacson, having played a leading role both in the revolution in biological science and in the public debate about its ethical implications.

But the story Isaacson tells in this compelling book is about scientific discovery itself and the scientists who made important contributions to breaking the code of life. Their breakthroughs, interactions, rivalries, and growing concern about the implications of their work are his larger subject.

Isaacson describes Doudna’s girlhood curiosity about why a type of swamp grass in her native Hawaii curled up when she touched it. “Nature is beautiful,” Doudna says, and its beauty is one of the themes of the book. But Isaacson also makes it clear that success in science requires more than curiosity and aptitude. Doudna was successful because of her willingness to take risks, her ability to collaborate and coordinate the work of a team, and — not least — her “competitive streak.”

Risk-taking was a feature of her early career. After Francis Crick and James Watson discovered the structure of DNA — the molecule that carries the genetic code of all living things — most research focused on how its sequences were put together in humans and other forms of life. Doudna had a hunch that RNA, the molecule responsible for expressing genes and manufacturing proteins in human cells, plays a more fundamental role in the origin of life than most others thought. She and her team at the University of California in Berkeley discovered that strands of RNA in a bacterium would cut up invading viruses and paste copies into its DNA, thus making it immune to future virus attacks.

Working together across national borders, Doudna and Charpentier discovered how this cut-and-paste operation works and how it can be manipulated. In their prize-winning publication they predicted that it could be used to edit genes in humans as well as in other complex forms of life.

By using a system similar to that employed by bacteria, teams of scientists soon learned how to take a gene out of a human cell and put another in its place. With a bit of help from postdoctoral researchers, Isaacson was even able to do it himself. The practical implications of gene editing are obvious. If genes can be replaced, then those that cause undesirable effects can be edited out.

Isaacson describes a successful experimental use of gene therapy to treat a woman suffering from sickle cell anaemia. Stem cells extracted from her blood were edited and reinserted into her body. The treatment affected the cells in her body only, but gene editing also makes it possible to replace genes in germline cells, which alters the genetic code of future generations. This was the problematic step taken by the Chinese scientist.


Isaacson’s book has two main themes. First, he provides an account of the development of genetic technology through the eyes of the scientists concerned. To achieve this, he immersed himself in their world. As well as conducting many interviews with those involved in the discovery process, he attended their conferences, spent time in their labs and joined them in their informal conversations over dinner. Second, he explores the implications of gene editing for humanity’s future and how scientists have grappled with the ethical issues it has raised.

Although Doudna’s career takes us into the world of twenty-first-century science, not all the pressures and ethical challenges are new. The tension between science as a collaborative enterprise and the rivalry of scientists seeking credit for a discovery has always existed. Which competitive behaviour is fair and which is unethical is adjudicated by the scientific community itself. It was all right, most of her colleagues agree, for Doudna to beat her competitors by putting pressure on a journal to fast-track the publication of a paper. “I would have done it myself,” admitted one of these rivals. It was not all right for James Watson to take Rosalind Franklin’s data from her lab without her permission in order to be the first to construct a model of DNA.

When scientists are encouraged to work with industry, form their own companies and take out patents on their discoveries, and when universities hire lawyers to ensure that they profit from the work of their scientists, collaboration can become the victim of market incentives, confidentiality agreements and legal proceedings. Isaacson describes how Doudna and her team became embroiled in a long, costly dispute with another group over a patent on gene editing. Agreeing to share patent rights, Isaacson concludes, would have been more sensible and better for the progress of science. Many of the scientists he interviewed agreed.

Though his account shows that the close relationship between industry and science can have detrimental effects, Isaacson is convinced that, overall, it works out for the best. Scientific innovation is costly and risky, he says, and without collaboration with industry and the incentives provided by intellectual property law, progress in genetic technology would have been much slower. Even if he is right, there is reason to doubt whether the public good, or the good of science, is best promoted by the incentives of the market. Could public trust in scientists be one of the casualties? Some think so. “Financial interests undermine the ‘white coat’ image of the scientist,” a bioethicist complained to Isaacson.

Doudna dreamed one night that Hitler visited her, wanting to learn about genetic engineering. Disturbed by this nightmare, she decided that bioscientists had to confront the ethical implications of gene editing technology. At a conference in Napa, California, in 2015 all agreed that using the technology to cure disease by editing non-inheritable genes was a good thing provided it could be made safe. But most of the attendees also agreed that using it to alter heritable genes is more ethically problematic and has to be controlled.

The consequences of altering heritable genes, intended or not, will be visited on our descendants. If mistakes are made, it is they who will suffer. This is one reason why bioscientists were alarmed by news of the Chinese babies. But the ability to alter the human genetic code has the potential to bring great benefits. The technology could some day be used to eliminate Huntington’s disease, sickle cell anaemia and other genetically carried diseases. It could be used to make humans immune to viruses like Covid-19. Those who attended the Napa meeting had good reason for not wanting a total or permanent ban on its development and use.

The problem with germline editing is not merely that a future Hitler might be able to use the technology to construct what he regards as a master race. The ability to edit our genes could propel us down what ethicists call a slippery slope. Genetic engineering to eliminate disease seems obviously desirable. Why not also eliminate congenital deafness and blindness? Why not prevent the birth of children with low intelligence or ugly features? Why not use the technology to increase intelligence, to make humans taller or more muscular, or to give future people new capabilities like night vision or resistance to harm from radiation?

Isaacson worries that unrestrained use of the technology will decrease human variety and the good that comes from diversity. He points out that it could also have bad consequences for social harmony and liberal institutions. Wealthy parents will be able to afford genetic enhancements for their children, poor parents will not. Economic inequality will turn into genetic inequality, widening with each generation. He also takes seriously the warning of the philosopher Michael Sandel, who thinks that the ability of people to sympathise with the plight of others will be lessened when our characteristics are no longer given to us by nature.


One of the most serious objections to allowing parents to choose their children’s characteristics is that they will no longer be so ready to love whatever child they get. Their regard will be conditional on whether their children meet their expectations. They will insist on value for money. The German philosopher Jürgen Habermas fears that a loss of autonomy — a denial of the right of individuals to choose their own goals and way of life — will be the inevitable consequence of making children into goods manufactured according to the specifications of parents or the state.

One suggested way to arrest the slide down the slippery slope is to draw a line between genetic therapy — using gene editing to cure disease and disability — and genetic enhancement — using it to make better babies. The first, according to this view, should be permitted; the second should not.

But the distinction is fuzzy, and many ethicists are not convinced of its importance. Parents enhance their children’s opportunities by giving them a good education, bioethicist John Harris points out. Why shouldn’t they be able to improve their opportunities by means of genetic technology? The Australian bioethicist Julian Savulescu argues that parents have a duty to bring into the world children who will have the best possible lives. If genetic technology makes a better outcome possible, then they ought to use it.

Doudna and most of her fellow scientists think that governments shouldn’t permit the market-driven development and use of genetic technology — at least until the implications are thoroughly discussed. “If we are wise,” concludes Isaacson, “we can pause and decide to proceed with more caution. Slopes are less slippery that way.” But one of the thoughts likely to trouble readers of his book is that ethical qualms, however wise, and restrictions, however judicious, are likely to prove a weak bulwark against the desire of many parents to use whatever means are available to give their children advantages. If some parents are prepared to use unethical and illegal means to get their children into top universities, they will probably also be prepared to use unethical and illegal means to give their children what they regard as the best genes.

One of Doudna’s colleagues told her that he had been consulted by an entrepreneur who proposed starting a business called Happy Healthy Baby that would enable parents to choose some of the genetic characteristics of their children. The scientist told the entrepreneur that the technology was not likely to be approved by the American government in the foreseeable future. Not a problem, the entrepreneur replied. She would set up her clinic in a country that was more permissive. Parents who could afford the treatments would be willing to travel. •

Roads to recovery https://insidestory.org.au/covid-19-roads-to-recovery/ Fri, 11 Sep 2020 07:05:19 +0000 http://staging.insidestory.org.au/?p=63107

A half-year of Covid-19-watching suggests the most effective way ahead

The six-month anniversary of Covid-19’s declaration as a pandemic (and of my first article on the outbreak for Inside Story) seems like a good time to reflect. What has changed? What is new? What have we learnt?

Clearly, not enough. In Victoria, where the interminable debates over modelling and lockdown continue, it sometimes seems like groundhog day. Remember when “bending the curve” was introduced into the popular lexicon? March feels like years ago.

That’s part of the reason why, on 6 September, premier Daniel Andrews attempted a reset, unveiling a “road to recovery” that featured a graduated relaxation of lockdown rules, each step triggered by reductions in the number of new cases over the previous fourteen days. The last stage would only be reached after 23 November, and only then if cases had been kept at zero.

The plan responded to criticisms of a lack of transparency by making each stage explicit and publishing the modelling on which it drew, but the result was a fearfully complex schema with dozens of points of guidance at each step. Reactions ran the gamut from grim resignation to vocal outrage, with the underlying fear that the criteria for escaping lockdown were too stringent ever to be reached.

Victoria’s attempted reset has hints of more inclusive and decentralised approaches, but it was too much in the thrall of an epidemiological logic. The long haul of this epidemic will require a deeper commitment to trust as a two-way street between government and people, and a much wider repertoire of local self-management in crafting durable changes in social organisation to minimise transmission.

Buried in Victoria’s road to recovery was the news that suburban response units would be established to “provide a tailored local response to everything from contact tracing to outbreak management.”

The call for local responses put me in mind of one of my most rewarding jobs, back in the late 1980s, as executive officer of the Victorian Federation of State School Parents Clubs, an organisation with an illustrious history extending back to the 1920s. Throughout the 1970s it was led by Joan Kirner, who would later recount her experiences on visits, as premier, to far-flung corners of the state. Once the formalities were over and the (male) dignitaries had dispersed she would find herself surrounded by women animatedly exchanging news and views on a first-name basis. Incredulous men would ask their wives how they knew the premier, and invariably the connection would be through the state’s parent-advocacy movement.

Victoria was once a leader in community participation, not only in education but also in health, community legal services, and the many other settings where active citizenry is constructed. Those participatory structures were mostly dismantled by premier Jeff Kennett's Thatcherite turn to privatisation during the 1990s, and they never regained their pride of place.

Not even now, perhaps. A revealing detail in Victoria’s proposed localised response to Covid-19 is the disclosure that the technology giant Salesforce will be contracted to provide a new information management system. Salesforce is a US$160 billion company that promises its users they will be able to “make decisions faster, make employees more productive, and make customers happier using AI.” Its data-visualisation product offers nothing less than “human advancement.”

Salesforce has quickly pivoted to the Covid-19 response with a set of tools devoted to tracking the epidemic and its impacts — in fact, an entire ecosystem to guide businesses in reopening. In this and other ways, the pandemic is revealing the contours of a new form of platform capitalism. In the United States in particular, where central government has abandoned any pretence at steering epidemic control, the vacuum has been filled by the private sector.

These information management platforms are themselves politicised. Salesforce is firmly on the Democratic side; among those lining up on the other side is Alexander Karp, chief executive of another data-management outfit, Palantir, who filed a trenchant statement with the company’s IPO on 25 August.

“Our software is used to target terrorists and to keep soldiers safe,” said Karp. “If we are going to ask someone to put themselves in harm’s way, we believe that we have a duty to give them what they need to do their job.” Many Silicon Valley technology firms use “slogans and marketing” to obscure the fact that “our thoughts and inclinations, behaviours and browsing habits, are the product for sale.” Better to choose Palantir, he concluded, because it wears its politics on its sleeve: “We have chosen sides, and we know that our partners value our commitment.”

These platforms offer to solve the problem of modern government by reducing it to a question of data organisation. The “old-fashioned” politics of community participation proposes a different answer. The pressing issues of pandemic control lie in how easily and quickly people can be tested, receive results, isolate if they need to, find income, food and social support, reduce their social mixing if they may have been exposed, stop working jobs in multiple locations, reduce the risk at worksites, and so on. The experience of the Victorian town of Colac, where an outbreak centred on the local meatworks, speaks of a community taking local control of the response.

My advice to Daniel Andrews? Amplify these signals, be prepared to trust communities to play a bigger role in the Covid-19 response: some mistakes will be made, but more decisions will be right than wrong. The trust needs to be genuine: devolve real power over how people mix and how they manage risk. It doesn’t play the game of adversarial politics, nor give a click-driven media the polarisation they crave — locked down or not? borders open or closed? — but it does give government more space to concentrate its efforts where they will make a real difference, by ensuring communities are supplied with the real-time information, infrastructure and supplies they need.


Meanwhile, more evidence from overseas that science and politics are poor bedfellows.

Last week’s news of a pause in the Oxford University/AstraZeneca vaccine trial was accompanied by quick assurances that occasional adverse reactions among participants are nothing unusual. Perhaps so. A participant in the trial was reportedly diagnosed with transverse myelitis, a serious spinal cord inflammation known to be triggered, albeit rarely, by vaccines. The trial resumed within days, indicating that its safety board didn’t see a substantial risk of adverse events, but the pause does dent the optimistic view that everything will go miraculously smoothly and a vaccine will be available in October.

Covid-19 has caused many of us to dust off the history of Spanish flu, which despite its name is widely thought to have originated at a US army base in Kansas. Its impact was front of mind in 1976 when a swine flu outbreak occurred at Fort Dix, a US army base in New Jersey. It was an H1N1 flu similar to the 1918 virus, and US authorities saw a significant risk of a global pandemic. President Gerald Ford, who was up for re-election, announced in March that "every man, woman and child in the United States" would be vaccinated, and he himself was photographed receiving the rushed vaccine less than a month before he narrowly lost the election to Jimmy Carter.

Ford’s strategy wasn’t only politically futile, it was also a healthcare disaster whose legacy is still being felt. As early as April 1976 the World Health Organization had doubts about whether the new flu was likely to develop into a serious pandemic, and advised against rushing out a vaccine. Worse still, the vaccinations caused more than 450 people to develop the paralysing Guillain-Barré syndrome. The suspicion remains that public health and safety judgements were shaped by the political imperatives at play.

The race for a Covid-19 vaccine has been the most overtly politicised of the scientific challenges, but it is worth noting that no effective therapeutic drugs have yet been developed to treat the illness. The only real success to date has been the repurposed steroid dexamethasone. The reality is that the pathway from invention to successful trial conclusion is long and time-consuming.

The last of the potential game-changers is diagnostics, where a reliable rapid, point-of-care antigen test would transform the capacity for real-time control of the epidemic. That much has been recognised by British prime minister Boris Johnson, whose Operation Moonshot is a £100 billion plan to enable ten million tests nationally per day by early 2021. Perhaps unfairly, the plan — relying on an upbeat PowerPoint by another of capitalism’s handmaidens, the Boston Consulting Group — has been received with widespread derision. This is perhaps where politics ought to make its contribution to science: setting, testing, resetting and retesting the balance between realism and ambition.

Six months in, it is tempting to imagine this pandemic is nearly over. That is far from the case. As the next year unfolds, there are sure to be many trying moments. The temptation will be to run them through the prism of heroism or outrage. A more sustainable strategy may be to hold back on both. •

Taking it to a new level https://insidestory.org.au/taking-it-to-a-new-level/ Thu, 16 Jul 2020 08:33:11 +0000 http://staging.insidestory.org.au/?p=62122

A sustainable Covid-19 strategy will mean paying much closer attention to people’s movements, and where they gather along the way

In the midst of an unfolding pandemic the crucial thing is to keep looking ahead. Taking lessons from steps we’ve already taken is good, but woulda, coulda, shoulda is a waste of time.

Victoria and New South Wales are experiencing significant surges of community transmission of Covid-19, the inevitability of which was signalled well in advance. And because detection is not perfect and restrictions on people's movement across borders are not absolute, there is no guarantee this won't spread to other states.

The techniques of widespread testing, contact tracing and isolation are now well practised, and may be enough to curb these outbreaks. But they may also prove insufficient, in which case further restrictions on people’s movement may be needed.

In this environment, criticisms of the COVIDSafe app as an expensive dud seem strikingly misplaced. The $2 million price tag is only a small morsel of chicken feed when stood against the accountancy error that recalculated the cost of the JobKeeper scheme from $130 billion to $70 billion. More to the point, COVIDSafe will only prove its worth if transmission grows so fast that human contact tracers are overwhelmed. Given that case notifications lag behind exposure events, that moment can arrive quickly; in other words, Australians who have not yet downloaded the app would be well advised to do so.

This week’s public debate about “elimination” versus “aggressive suppression” has largely been beside the point. We now have enough data to know that Covid-19 spreads easily, including among young people. Closing down workplaces, public gatherings and educational institutions will reduce the chances of transmission. Confining whole populations to home will reduce transmission even further. Any level of active cases is enough to seed further outbreaks.

There is an analogy to be made with “sterilising immunity,” the ultimate goal of any perfect vaccine. The goal is to create an immune response sufficiently strong to prevent a virus from taking hold in a body. The problem with applying sterilising immunity to the body politic is that its outcomes may indeed be sterile. As philosopher Roberto Esposito has pointed out, immunity is the opposite of community, so the task is how to balance the two in a way that ensures community is not snuffed out.

With the spread of Covid-19 to nearly every nation and territory in the world, we have plenty of models from which to choose. The June outbreak in China, centred on Beijing’s Xinfadi wholesale produce market, was brought under control after 335 local transmission cases, but it took a vigorous effort including localised shutdowns and eleven million tests within a month.

Hong Kong has entered what has been described as a third wave of infections, although the territory’s cumulative total of only 1400 cases means a daily spike into the teens is enough to register as a significant rise there. Hong Kong provides remarkably in-depth information about its Covid-19 cases: open up the map based on data from the territory government’s Centre for Health Protection and you’ll see identified cases right down to building level.

Vietnam’s remarkable success deserves more comment, with a cumulative total of fewer than 400 cases and still no deaths. Most of the familiar elements are there: early, decisive action with border closures, quarantine, and school and workplace closures; and, over the longer term, extensive testing, active contact tracing and quarantine. Its contact-tracing model is perhaps the most telling: supported by a network of 700 district-level centres for disease control and more than 11,000 community health centres, contact tracing and attendant testing and isolation are routinely conducted for three degrees of separation: contacts of cases, contacts of contacts, and contacts of contacts of contacts.
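For readers who like to see the machinery, that three-degree protocol is in essence a breadth-first walk over a contact graph, cut off at depth three. A minimal sketch in Python follows; the function, the toy graph and the names in it are illustrative inventions, not a description of Vietnam’s actual systems.

```python
from collections import deque

def trace_contacts(contact_graph, confirmed_case, max_degree=3):
    """Breadth-first traversal of a contact graph, stopping at max_degree.

    contact_graph maps each person to the set of people they have had
    close contact with (a toy structure, not a real case registry).
    Returns a dict of person -> degree of separation from the case.
    """
    queue = deque([(confirmed_case, 0)])
    degree = {confirmed_case: 0}
    while queue:
        person, d = queue.popleft()
        if d == max_degree:
            continue  # third-degree contacts are recorded but not expanded further
        for contact in contact_graph.get(person, set()):
            if contact not in degree:
                degree[contact] = d + 1
                queue.append((contact, d + 1))
    return degree

# Hypothetical example: A is a confirmed case.
graph = {"A": {"B", "C"}, "B": {"D"}, "D": {"E"}}
print(trace_contacts(graph, "A"))  # degrees of separation: A 0, B 1, C 1, D 2, E 3
```

Everyone at degree one, two or three is then tested and isolated; the point of a cut-off is that the effort grows steeply with each extra degree.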

Singapore remains perhaps the closest parallel to Australia. Its daily count of new infections in July has closely tracked Australia’s, and like Australia it had initial success only to find that workers on the lowest economic rung, especially those in close living quarters, were prime candidates for the virus. But, as the Anna Karenina principle would lead us to expect, there are important differences between the two countries’ experiences. An astounding 95 per cent of Singapore’s Covid-19 infections have been among migrant workers in dormitory blocks.

It’s these concentrations that have led Roland Bouffanais and Sun Sun Lim of the Singapore University of Technology and Design to suggest that much closer attention needs to be paid to “where the riskiest spots in the riskiest places — cities — might be.” This entails paying closer attention not only to places where people may gather for an extended time, but also to people’s behaviours when they’re gathered. It means attending to the differences between the mixing patterns of primary school children, who have a single set of classmates, and secondary students, who mix much more widely. It means using the datasets from phones, geolocated apps and public transport, and even from the people and vehicle flows collected by smart traffic lights, to build up a much more layered map of flows of people across cities.
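What would such a layered map look like in practice? At its simplest, each data stream is reduced to a common origin-destination form and aggregated by hour and place. The sketch below is a toy illustration only; the record fields and zone names are invented stand-ins for whatever the real phone, transit and traffic-light feeds provide.

```python
from collections import Counter

# Hypothetical records: (person_id, hour, from_zone, to_zone), standing in
# for geolocated app pings, transit-card taps or traffic-sensor counts.
trips = [
    ("u1", 8, "Footscray", "CBD"),
    ("u2", 8, "Footscray", "CBD"),
    ("u3", 9, "CBD", "Parkville"),
]

def flow_matrix(trips):
    """Count trips between each pair of zones for each hour of the day."""
    flows = Counter()
    for _person, hour, origin, destination in trips:
        flows[(hour, origin, destination)] += 1
    return flows

for (hour, origin, dest), count in sorted(flow_matrix(trips).items()):
    print(f"{hour:02d}:00  {origin} -> {dest}: {count} trips")
```

Layering venue types, dwell times and mixing patterns on top of a matrix like this is what turns a map of movement into a map of risk.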

Covid-19 control doesn’t come down to a binary choice between suppression and elimination. The settings need to be much more fine-grained than that. And if those settings are to work without alienating communities in the name of immunity, governments must make a much greater commitment to open information and participatory planning, alongside careful and detailed public health measures calibrated to risk.

Given the known risks of Covid-19 spread in abattoirs, for example, meat processing should be an industry that allows only full-time employment with sick-leave entitlements, and all plants should have an in-house medical team. That might mean the end of cheap meat, but it seems an inevitable trade-off.

Workplace by workplace and block by block, we should expect Covid-19 risk to be incorporated into the pattern and rhythm of daily life. Against this benchmark, we ain’t seen nothing yet. •

The post Taking it to a new level appeared first on Inside Story.

Universities, a shared crisis, and two centre-right governments https://insidestory.org.au/universities-a-shared-crisis-and-two-centre-right-governments/ Mon, 13 Jul 2020 00:11:07 +0000 http://staging.insidestory.org.au/?p=62009

Britain and Australia have reacted very differently to the pandemic’s impact on higher education

The post Universities, a shared crisis, and two centre-right governments appeared first on Inside Story.

We are deep enough into the pandemic to know the world won’t snap back. For months, academics have been pushing back their conference dates, hoping to convene when the virus passed. But that hope has faded.

Slowly but inevitably we have moved to the Zoom symposium. Hundreds, sometimes thousands, log on. Many participate enthusiastically, flooding the chat box with commentary on proceedings. Others half-listen amid email and social media. It is a new space for scholarly debate and discussion, yet to develop its own rituals and courtesies.

Among such conferences last week was the fifth Buckingham Festival of Higher Education, co-sponsored by the University of Buckingham, a rare private institution in Britain’s largely public system, and the Higher Education Policy Institute in Oxford.

The British know that policy ideas flow between our nations. Australia’s Higher Education Contribution Scheme, or HECS, became the basis of student funding in England, while Canberra imported the idea of research and teaching assessment reviews from Whitehall.

So it was an honour to be the Australian speaking in an opening session alongside the former Conservative minister for universities, science, research and innovation, Jo Johnson, and prominent policy adviser Rachel Wolf, who co-wrote the 2019 Conservative election manifesto for Jo’s brother Boris.

To Australian ears, a British policy discussion about higher education is slightly surreal. Johnson, like his distinguished predecessor David Willetts, is extraordinarily knowledgeable about universities, alive to the nuances of policy choices, and committed to incentives rather than regulation to influence institutional behaviour.

As minister, Johnson stressed the importance of research in transforming the British economy, and brought together funding agencies in a single entity, UK Research and Innovation. Since retiring at the last election — he opposed Brexit — Johnson has accepted roles at Harvard and King’s College London alongside a return to journalism.

In his Buckingham presentation, Johnson reiterated two themes from his ministerial career: concern for teaching quality (he established the Office for Students) and a focus on innovation. Through research, he suggested, Britain can renew its industrial base.

This central role for universities was picked up by Rachel Wolf. She noted the decline of productivity across the Western world and argued that innovation driven by research and development is the most plausible way of creating new prosperity. Wolf referred to speeches over the past decade in which Boris Johnson suggested that strong university research underpins national prosperity — a view that is now a tenet of the Conservative Party.

Linking productivity to the good health of universities is not an argument often articulated by the Australian government, so it is worth reflecting on how tertiary policy in two similar systems has responded to the disruptions of Covid-19.

Buckingham vice-chancellor Sir Anthony Seldon opened the conference by describing the pandemic as “the biggest challenge to the university sector in history.” It is certainly confronting. The Black Death permanently closed five of Europe’s thirty universities. We might imagine destruction of similar proportions as this infection, and those that follow, cut their way through the sector.

As with the Black Death, major dislocation also encourages innovation. We are living the future already — the end of the familiar lecture, the arrival of virtual instruction, universities operating for months at a stretch with no one on campus. Covid-19 raises questions about expensive investment in infrastructure and invites students to put together a degree by selecting courses from many different institutions.

It may also change how governments see universities. For if everyone can teach online, if courses look interchangeable, and if the nexus between teaching and research looks ever more tenuous, can we still assert that each university is unique, separate and necessarily autonomous?

Amid these challenges, the policy responses in Britain and Australia tell us something about contemporary party ideology.

Both countries are led by right-of-centre governments, each tested and returned in elections during 2019. Australians study at universities at a similar rate to their British counterparts, and both nations have benefited greatly from a flow of international students.

We might anticipate, therefore, similar responses to the crisis.

The enthusiasm for tertiary education evinced by Jo Johnson is not the only narrative around. On the contrary, both nations have heard sustained criticism of universities, some of it levelled by senior ministers — a chorus of complaints about arrogant universities resisting government priorities, valuing research over teaching, and failing to tackle community ambitions.

In Australia, politicians criticise universities for supporting their operations by recruiting students from China. In Britain, as John Morgan wrote in Times Higher Education, Conservatives trying to attract non-graduate voters “may find universities a tempting target for economic and cultural hits.”

There is ample evidence of voter resentment against the perceived privilege of university graduates. Antagonism is accentuated by the collapse of familiar vocational careers, the eclipse of apprenticeships, and the destruction of certainties about hard work, fairness and opportunity. The world no longer seems predictable or navigable. People hoping for careers in stable organisations find their moorings kicked away.

So, if a government wanted to act against universities, the Covid-19 crisis provides the ideal moment. It could be used to crystallise the public critique built over recent years and justify major policy changes.

How then to read the signals?

In Britain they seem decidedly mixed. In May, Boris Johnson’s government turned down requests to bail out institutions facing huge losses from falling international enrolments. More recently, universities minister Michelle Donelan criticised English universities for offering dumbed-down courses to keep up student numbers.

It seems Covid-19 will coincide with the end of a long period of growth in higher education in Britain. Universities are to be held to their current student enrolment, with caps on further domestic expansion. Yet the government has also worked to reopen access for foreign students and promoted Britain as the preferred destination when international education resumes.

But the most significant British response reflects the policy priority articulated by Jo Johnson and Rachel Wolf. Whitehall has announced two packages to support research by universities and institutes, which will see the government covering up to 80 per cent of lost income from international students, with a further £280 million to support key research projects, particularly responses to the pandemic.

The two packages acknowledge a truth in both nations: income from international education is essential to underwrite research, supplementing funding from governments and philanthropy. Without global education, British universities face a projected shortfall of at least £2.5 billion in the year ahead.

This response recognises the centrality of higher education to Britain’s research effort. As Nick Hillman, director of the Higher Education Policy Institute, told Nature in June, “If a vaccine were to emerge from the United Kingdom, it would emerge from a UK university.”

To an outsider, two logics seem at work in Britain — a scepticism about the value of universities among education authorities and a contrasting view among economic agencies that universities are vital to recovery.

To the detriment of universities, there is less ambiguity in Australia. The federal government has not offered to compensate the sector for the loss of international students, who until recently contributed Australia’s fourth-largest export earnings. Canberra did guarantee current domestic student numbers, though these enrolments were not under threat. On four separate occasions the federal government changed regulations to exclude public universities from support offered to other employers.

As one government senator enthusiastically posted on social media, there is “no need to bail out bloated universities” — they should feel the pain of relying on Chinese students to pay the bills. The senator didn’t criticise the similar dependence of other sectors — notably agriculture and tourism — on exports to China. Unlike universities, they were provided with access to JobKeeper subsidies.

In late June, federal education minister Dan Tehan announced funding changes allied to new regulations. The government will reduce funding per domestic student by an overall 15 per cent. This includes a dramatic reduction in public funding for the study of humanities, law, economics, business and social sciences.

The minister also used the opportunity to cut any tie between research and teaching; in future, university funding will be solely for student learning. A new translation fund, financed by cuts to teaching, will encourage “linkage” with industry, but early estimates suggest 7000 university research staff will lose their jobs as a result of the minister’s package and lost international income.

The minister subsequently announced a panel to consider research policy, but has so far given no commitment to countering universities’ revenue shortfall, estimated at between $3 billion and $5 billion annually.

In other words, like-minded governments can reach different conclusions about the future of higher education. They can seek to rebuild the sector as a national resource quickly, as in Britain, or they can use the opportunity to constrain public expenditure and reduce the span and reach of higher education, as in Australia.

The differences may reflect the personal view of leaders, but they can also be structural. Britain has many manufacturing and service industries that draw on university research, and a strong scientific tradition. The House of Lords includes former senior scholars, and links between key industrial, cultural, political and academic worlds may be stronger.

While governments diverge, the response of British and Australian universities to the pandemic has been consistent and impressive. Necessity favours invention, and changes that might otherwise take years were achieved in weeks. Entire courses were transferred online, and technology was deployed to handle student administration, exams, course guidance and counselling, and even graduation ceremonies.

International research collaboration has accelerated as public health authorities turn to universities for expert advice and vaccine development. The medical workforce has been bolstered by students volunteering in hospitals and mobile clinics. The sector demonstrated its public spirit and a determination to contribute amid adversity.

At a difficult moment we should take pride that academics, administrators and institutional leaders have demonstrated impressive ability to adapt and change. Though some perished, most universities survived the Black Death, and went on to shape much of the world we now inhabit.

We can do so again. •

The post Universities, a shared crisis, and two centre-right governments appeared first on Inside Story.

A better life on Mars https://insidestory.org.au/a-better-life-on-mars/ Fri, 19 Jun 2020 02:04:25 +0000 http://staging.insidestory.org.au/?p=61594

A colonial-era novel provides a window onto the ideas that produced our fractured federation

The post A better life on Mars appeared first on Inside Story.

Many of us will always look back on mid March 2020 as a time of worry and confusion. As images of the pandemic’s international toll flashed onto our screens, state and territory leaders and prime minister Scott Morrison volleyed conflicting advice, especially about school closures. The tensions inherent in the Australian federation — that brave effort to accommodate a bickering family of states and territories — were being worked through with an urgency that could only result from a fear of mass death.

Australia was not alone. When Democratic governors imposed lockdowns in the United States, Donald Trump cried mutiny. Member states of the European Union closed borders with other member states, and Italy received medical supplies from China well before fellow Europeans reached out.

Amid this grinding of gears, it’s easy to forget that federation was once a Romantic ideal, not least among those many advocates of the Australian national project who campaigned during the closing decades of the nineteenth century for the entity that would be born on 1 January 1901. Inspired by nationalist movements in Europe and North America — including the unification of Italy — many politicians, writers and influential colonial figures saw the federal system as a step on the pathway to what they called global “brotherhood.”

Nowhere do these earnest exhortations for federation resonate more fancifully than in the novel Melbourne and Mars: My Mysterious Life on Two Planets, which has been reissued this month by Grattan Street Press. Penned in the late 1880s by the English phrenologist and health reformer Joseph Fraser, this work of proto–science fiction stands out amid the countless flowery poems, hymns and essays that garnished the consensus-seeking mechanics of the federation process.

Character readings: A handbill for a lecture delivered by Joseph Fraser in regional Tasmania in 1884.

Enthralled by the Martian moment in popular astronomy, Fraser contributed to what the late historian John Hirst termed Australian Federation’s “festival of poetry” by contrasting the grimy bustle of the metropolis of Melbourne with an attractively harmonious society on Mars. In Fraser’s utopia, a “Grand Federal Government” fosters progressive goals of gender equality among “Martials,” scientific innovation provides bountiful crops to sustain happy, healthy human life, and the state provides all material needs. This kind of federation, at its best, guarantees world peace.

In tone, the whimsical Melbourne and Mars rubs against our own experiences of day-to-day politics. The Ballad of Gladys, Dan and Scomo could not play out on Mars. But as our federation recalibrates in the wake of Covid-19, not least through new bodies such as the national cabinet, the novel provides a window through which to understand the ideas and dreams that created the Commonwealth of Australia more than a century ago.

Federalism’s tidal forces

From the early colonial period in New South Wales, Europeans born in Australia expressed a sense of a unique identity forged by their place on the globe. These “currency lads and lasses,” as they were known, many of whom were children of convicts, came to embrace a hardy British–Australian identity as “natives,” a term that wiped away the claims to land of Indigenous Australians.

By the 1880s, the idea of creating a whole from the patchwork of dynamic political entities (most famously promoted by New South Wales colonial secretary and premier Henry Parkes) had gathered force, driven in part by economic issues such as intercolonial tariffs. With French and German powers staking claims in the Pacific, the Federal Council of Australasia (made up of Victoria, Queensland, Western Australia, Tasmania and Fiji) formed in 1885 to engage collectively with Pacific affairs.

The federation effort spanned the rest of the century. By turns, it included the Federal Council, constitutional conventions with popularly elected representatives, and a series of referendums at century’s end. It all culminated in the British Act of Parliament of 1900 that delivered the Australian Constitution. The imagined federation contained moving pieces: New South Wales was not always in, Fiji dropped out of the Federal Council, Western Australia flirted with separation, and the possibility of New Zealand’s entry remains in our Constitution to this day.

One of the chief visionaries of Australian Federation, journalist and politician Alfred Deakin, argued that democratic mechanisms should drive it, rather than diplomacy. Popular movements such as the many local federation leagues responded to what he termed “social forces”: “sentiment fired to enthusiasm; patriotism fused into passionate aspiration for nationhood; imagination quickening, and worshipping a high ideal.”

One person who drifted within what Deakin described as the “tidal forces of Federalism” during the 1880s was the popular scientific lecturer Joseph Fraser, who lived and wrote in the Melbourne suburb of Hawthorn. Born in industrial northern England around 1845, Fraser began lecturing in New Zealand in the late 1870s with his wife Annie Fraser, and by 1884 was living in southeastern Australia. He earned a living through phrenology — the science of reading character and intellect from head shape. Contested from its invention at the turn of the nineteenth century, this new science nevertheless captured the popular imagination, including as a form of self-improvement.

Fraser’s travelling life saw him tackling subjects ranging from child rearing to partner selection (complete with onstage “marriages”). In quieter moments, he scribbled about the relationship between science and religion, contributed pen portraits of political figures to Melbourne’s Herald, and issued charts to clients declaring the strengths and weaknesses of their grey matter.

The Frasers joined a milieu of progressive thinkers who experimented with the possibilities of both the material and the metaphysical worlds. Annie lectured on women’s health and conducted business as a hydropathist; Joseph issued helpful pamphlets, including Husbands: How to Select Them, How to Manage Them, How to Keep Them (with twenty-one illustrations). But his most lasting contribution is a work of fiction that distilled his eclectic philosophy — a final, optimistic love letter penned as he battled tuberculosis.

Miniature Martial histories

Published in 1889, Melbourne and Mars appeared at a time when the Red Planet was the canvas for bold imaginings. Partly, this was the result of Italian astronomer Giovanni Schiaparelli’s 1877 observation of “canals” on Mars, which were soon reimagined as systems of irrigation and therefore as signs of intelligent life.

The novel follows the story of colonial merchant Adam Jacobs, a happily married middle-aged British transplant who not only lives in the powerhouse city of the Australian colonies but, by a division of the soul, also resides on Mars in the body of a bright boy named Charles Frankston.

Through Jacobs’s diary, Fraser guides us into an ideal society of beautiful, four-foot-tall Martials as perceived by Frankston. Scientific innovation and wide-eyed self-improvement shape this idyll of flying ships and state provision, the Martials embodying altruistic socialism. Their political philosophy promotes moral education to cure social inequalities, and its non-revolutionary approach appeals to middle-class mores. Frankston earns global fame with a major breakthrough: a method for cultivating massive vegetables in the snow line.

“Perpetual Peace”: a lantern slide depicting a globe of Mars, c.1900–1940. State Library of Victoria

By contrast with Mars, the streets of Fraser’s Melbourne roar with “grinding wheels, clanging bells, discordant and angry voices.” The buildings tower in a dark cluster over pushing, hurrying people whose faces are “stern, hard, selfish, smileless, sickly, pale, wrinkled, careworn” and “ugly as with sinful passion.” This is a bleak world of competition and capital, emblematic of common moral anxieties about the metropolis.

Fraser explains that a Grand Federal Government rules Mars in a state of “Perpetual Peace.” But that was not always so, for this near-utopian state evolved from an earlier time of warfare between nations. Only when the “world grew sick of war,” and only when the towering city-state of Sidonia rose to such power that it could guarantee global stability, did the Federal Council of Ambassadors of nations settle on peaceful governance through a Central Executive. Thanks to this state of peace, the collective workers of the Martial federation could transform “waste” spaces (uncultivated land) into controlled and productive land. “Which nation will attempt to make Sahara into a sea, while its possession might have to be fought for by several European powers?” muses the narrator.

These miniature Martial histories of governance obviously sprang from the ideologies of the federation movement. Fraser arrived in Australia just as the Federal Council of Australasia took shape, and he embraced popular ideas of national formation as a pathway to human progress. The Melbourne of Fraser’s novel contrasts with advanced Mars, whose inhabitants have passed through the stages of “race life” rather as a child grows into adulthood. For Martials like Frankston, who live on both planets, Mars — which is at least a millennium more advanced than Earth — serves as a “promotion.”

This model of progress through federation reflects the philosophy of the influential Italian politician and philosopher Giuseppe Mazzini. Born in Genoa in 1805, Mazzini fuelled Italian unification through his publications and his Young Italy movement, and that event captured the popular imagination in colonial Australia, where the patriot general Giuseppe Garibaldi also became a lauded hero.

Mazzini argued that the formation of nations — each built from a shared mission — would ultimately lead to a global union. Drawing on philosophers such as Immanuel Kant (Fraser’s “Perpetual Peace” comes directly from Kant’s 1795 essay of that title), Mazzini’s works influenced nationalist movements around the world and thinkers including Deakin.

The optimism of world federation radiated most brightly from Cole’s Book Arcade in inner Melbourne. Here, the whiskery bookseller and businessman Edward William Cole — beloved publisher of light works including Cole’s Funny Picture Book — fostered this utopian ideal, issuing tokens with cosmopolitan messages. “Let the world be your country and to do good be your religion,” proclaimed one of these shiny “Federation of the World” medals (many of which are now in the collection of Museums Victoria).

Whether Cole and Fraser exchanged ideas in person is unknown, although we can gauge Cole’s enthusiasm for the novel from his publication of an edition of Melbourne and Mars. In early 1890, just months after Fraser’s death from tuberculosis, Cole also announced an essay competition, offering prizes for the best works that argued the case for and against global federation.

Cole’s was a cosmopolitan spirit, and his openness towards Asia conflicted with popularly held desires for a white Australia.

Deakin — the idealistic booster of federation — believed in a pure racial destiny for this nation-continent. He was part of the government that drafted the Immigration Restriction Bill (passed by the new parliament in 1901) that created the infamous dictation test by which Australian border officials could deny entry to migrants if they failed a test in any European language. The bill built on decades of anti-Chinese sentiment and formed part of the tapestry of the White Australia policy, which also enabled the deportation of Pacific Island labourers from Queensland.

This underbelly of exclusion also lurks in Fraser’s book. While Fraser did not present explicitly racist views in Melbourne and Mars, in his colonialist depiction of New South Wales, Indigenous characters figure only as two-dimensional plot devices. Meanwhile, his Martials answer solely to Anglo-Saxon names. And places such as Port Granby and Mount Weston expand to intergalactic proportions the widespread project of European mapping.

Between utopia and dystopia

Fraser’s book received a handful of reviews in late 1889, with the Melbourne Herald proclaiming that “no one who reads the first half-dozen pages is likely to put the book down until the last sentence is reached.” But other traces of its reception evade us. Its slipping below the surface of Melbourne life must partly derive from Fraser’s death in January 1890, just weeks after the book’s publication.

In our own time, as the tectonic plates of geopolitics and our own societies realign, the hopeful, surprising pages of Melbourne and Mars remind us that, even at its most compelling, any vision for the future is still just one of many possibilities. The world did not unite in a grand altruistic government, as Fraser hoped. And Australian Federation could not completely resolve the competing needs and whims of its parts, as evidenced in Western Australia’s unsuccessful attempt to decouple in 1933. The practicalities of our federal system weave through many issues — from revenue raising, to marriage, to waterway management in the midst of climate crisis.

For now at least, the tensions that glared from our televisions in March have subsided thanks to the stability afforded by Australia’s relative success in limiting Covid-19 transmission. But as the Black Lives Matter protests held across Australia remind us, we also face a greater challenge: overturning the very compact of whiteness on which this nation rests. We must confront dystopia. Remedying Australia’s founding flaw of dispossession and racial exclusion will require a radical work of reimagining, an act of dreaming and collective will worthy of this little patch of Planet Earth. •

Melbourne and Mars: My Mysterious Life on Two Planets by Joseph Fraser, edited and with an introduction by Alexandra Roginski and Zachary Kendal, is available from Grattan Street Press.

Funding for this article from the Copyright Agency’s Cultural Fund is gratefully acknowledged.

The post A better life on Mars appeared first on Inside Story.

Second-wave days https://insidestory.org.au/second-wave-days/ Tue, 16 Jun 2020 01:51:17 +0000 http://staging.insidestory.org.au/?p=61509

As the quest for a Covid-19 vaccine continues, effective mitigation strategies are proving their worth

The post Second-wave days appeared first on Inside Story.

Sunday’s daily briefing from China’s National Health Commission included some ominous news: thirty-six new locally transmitted cases of Covid-19 in Beijing, the fruits of a new cluster detected two days earlier. The epicentre of this outbreak — more than one hundred cases thus far — is the massive Xinfadi wholesale produce market, which supplies 70 per cent of Beijing’s fruit and vegetables and a good proportion of its meat and fish. Media reports pinpointed its source even more precisely: “the novel coronavirus was detected on a chopping board used by a seller of imported salmon at Xinfadi market. China imports about 80,000 tons of chilled and frozen salmon each year, mainly from Chile, Norway, Faroe Islands, Australia and Canada.”

Like the pump handle John Snow removed to stop London’s 1854 cholera epidemic, this discovery has an appealing specificity. Will that chopping board be the harbinger of a second wave of Covid-19 in China? Is geopolitics implicated in the reference to salmon from Australia and Canada, two members of the “five-eyes” intelligence network being urged by security hawks to morph into an anti-China trading platform? What will be the temperature of the looming war: chilled or frozen?

Official accounts from China have poured cold water on the salmon theory, although some reports suggest that genomic analysis has found the newly identified strain of the virus to be from Europe. Nor should this outbreak be called a second wave — at least not yet. China celebrated its first day with zero locally acquired cases back on 19 March, and for the past three months new local cases have bumped along pretty much at zero or in the low single digits, so this outbreak is certainly larger. But that doesn’t mean it will necessarily spiral out of control, especially with Beijing’s swift deployment of mass testing and localised lockdown.

As in China, Australia’s epidemic is well controlled, and this is the reality we can expect for the foreseeable future — very few cases, mostly among travellers, and the occasional community outbreak, especially as workplaces become busy again. Everywhere, meat processing plants have proven to be especially prone to outbreaks, for reasons that aren’t well understood but may include the difficulty of social distancing and disinfection compounded by the industry’s notoriously poor labour practices.

It’s all part of what Tomas Pueyo calls “the hammer and the dance” — the largely successful outbreak-and-response strategy of countries containing the epidemic. Pueyo’s ability to coin a good phrase has helped him become perhaps the most prominent “lay” commentator to have emerged thus far in the pandemic.

There is no doubting that second waves of Covid-19 are inevitable. The only issue will be their size and the degree of resistance to reimposed bans on public gatherings and closures of schools and workplaces. For Australia and other southern hemisphere countries, the onset of winter and the normal seasonal surge in flu means the coming three months will be the most critical phase of the epidemic thus far. Little wonder then that the promise of a vaccine is so tantalising an escape route.

The World Health Organization’s list of vaccines under development now includes ten in clinical development and a further 126 at the pre-clinical stage. The race is being conducted in markedly different ways. In the United States, Operation Warp Speed retains its nationalist flavour, refusing to contemplate Chinese vaccine candidates. US authorities have settled on a small handful of prospects, including the much-hyped messenger RNA candidate from Moderna, which announced on 11 June that it had finalised preparations to move to phase III testing on humans.

The University of Queensland’s vaccine candidate was apparently on Warp Speed’s shortlist of eighteen but appears not to have made the final cut; it is, however, receiving support from the global CEPI alliance of public, private and non-profit organisations. Meanwhile, promising safety and efficacy results for China’s candidate have propelled it into phase III trials, but new cases have become so scarce in that country that the trials have been moved to Brazil.

It is widely held that some sort of managed competition will be the quickest route to an effective vaccine, but already a proliferation of global alliances are offering to shepherd the process. Gavi, the global non-profit vaccines alliance, held its quinquennial replenishment meeting on 4 June, hosted by British prime minister Boris Johnson. US$8.8 billion was raised, including a billion dollars from the United States — there was a supportive message from Donald Trump — and Australia upped its contribution to $300 million.

Gavi has been a leading proponent of “advance commitments” to overcome market failure in vaccine development, locking in purchases ahead of development to reduce the risk to vaccine producers. It has launched such a scheme for a Covid-19 vaccine, reckoning that a US$2 billion fund would be enough to “enable twenty million healthcare workers to be vaccinated, create a stockpile necessary to deal with emergency outbreaks, and start establishing production capacity to vaccinate additional high-priority groups.”

Meanwhile, the pharmaceutical industry and public universities provide two contrasting models of how to get to a vaccine. Imperial College has launched VacEquity, a social enterprise to oversee the manufacture of its vaccine (if successful) as a globally available public good. “Right now we think the focus should be on how to solve the problem rather than how to make money out of it,” says Simon Hepworth, the college’s director of enterprise. Pharmaceutical giant Pfizer has partnered with BioNTech to combine its own experience in navigating the regulatory and production pathways with BioNTech’s messenger RNA candidate, even refusing government funding support on the grounds it would complicate and therefore slow its single-minded pursuit of an effective vaccine.

The danger is that the current Covid-19 vaccine landscape is sharing too few eggs around too many baskets. An interesting way of making sense of it all comes from the Washington-based think tank, the Center for Global Development, which suggests it is best to look at the research effort as something like an investment portfolio that deliberately tries to cover all bases — not only the type of vaccine developed but also how its manufacture can be scaled up and how it will eventually be used in different populations.

Vaccine anticipation is not without its drawbacks. On the model of flu vaccination, even a successful vaccine won’t necessarily provide complete protection for every person. Given the pattern of SARS-CoV-2 spread, estimates suggest that a vaccine would need to be 70 per cent effective to replace social distancing.
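The arithmetic behind a threshold like that is simple enough to sketch, even if the underlying modelling is not. Using the textbook herd-immunity threshold of 1 - 1/R0, the minimum efficacy required depends on how much of the population actually gets vaccinated. The reproduction numbers and coverage rates below are illustrative assumptions, not figures drawn from the estimates cited here.

```python
def required_efficacy(r0, coverage):
    """Minimum vaccine efficacy needed for vaccination alone to push
    population immunity past the herd-immunity threshold, 1 - 1/R0."""
    threshold = 1 - 1 / r0
    return threshold / coverage

# Illustrative assumptions: R0 of 2.5-3 and uptake of 80-100 per cent.
for r0 in (2.5, 3.0):
    for coverage in (1.0, 0.8):
        print(f"R0 = {r0}, coverage = {coverage:.0%}: "
              f"efficacy must exceed {required_efficacy(r0, coverage):.0%}")
```

On those assumptions the requirement lands between 60 and 83 per cent, which is roughly where a 70 per cent figure sits.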

Perhaps more importantly, though, waiting for a vaccine might be like waiting for Godot. We can distract ourselves along the way — planning the push and pull mechanisms to be used if the much-desired breakthrough occurs, for example — but our hopes of a vaccine will risk diverting us from other ways of dealing with the acute pandemic crisis. I can’t help but be reminded of the AIDS experience: for decades, the refrain was “only a vaccine will really bring the epidemic under control.” That vaccine still hasn’t arrived, but in the meantime some countries committed to minimising new HIV infections and AIDS deaths with the full range of the social and medical innovations to hand, and those that didn’t continue to pay the price. •

Funding for this article from the Copyright Agency’s Cultural Fund is gratefully acknowledged.

The post Second-wave days appeared first on Inside Story.

Everything is connected https://insidestory.org.au/everythings-connected/ Sun, 07 Jun 2020 08:34:03 +0000 http://staging.insidestory.org.au/?p=61309

Network effects, good and bad, have influenced responses to Covid-19

The post Everything is connected appeared first on Inside Story.

The counterpoint to the rebellion against structural racism and militarised policing in the United States is that country’s deeply unequal experience of the Covid-19 pandemic. While statistics on ethnicity are far from complete, they show that black populations are disproportionately represented in case numbers and, especially, in the death count compared with white populations. A recent study showed that the mortality rate from Covid-19 across the United States, adjusted for age, was 3.5 times higher in black populations than white.

Nor are these effects confined to the United States: in Britain, Covid-19 deaths among black men were found to be 4.2 times higher than among white men, and among women 4.3 times higher, with higher death rates also found in other non-white minorities after adjusting for age and other health characteristics.
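A note on what “adjusted for age” does: crude death rates conflate underlying risk with the age make-up of a population, so the standard move is to apply each group’s age-specific death rates to the same reference population and compare the results. The sketch below shows the mechanics only; the rates and population counts are invented and bear no relation to the published figures.

```python
def age_standardised_rate(group_rates, standard_population):
    """Direct standardisation: apply a group's age-specific death rates to a
    common standard population and return deaths per 100,000 people."""
    total = sum(standard_population.values())
    expected_deaths = sum(group_rates[age] * standard_population[age]
                          for age in standard_population)
    return expected_deaths / total * 100_000

# Invented age-specific death rates (deaths per person) and a shared
# standard population, purely to illustrate the calculation.
standard = {"0-39": 50_000, "40-64": 35_000, "65+": 15_000}
group_a = {"0-39": 0.0001, "40-64": 0.0010, "65+": 0.0100}
group_b = {"0-39": 0.00003, "40-64": 0.0003, "65+": 0.0030}

rate_a = age_standardised_rate(group_a, standard)
rate_b = age_standardised_rate(group_b, standard)
print(f"{rate_a:.0f} vs {rate_b:.0f} per 100,000 (ratio {rate_a / rate_b:.1f})")
```

Only once the age mix is held constant in this way does a ratio like 3.5 or 4.2 measure something other than demography.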

These are not unexpected results. We know that disadvantages concatenate, with devastating impacts on health. That’s why experts have been warning that low- and middle-income countries would suffer most during the pandemic. But the picture has turned out to be rather more complicated than that.

The latest hotspot warning from the World Health Organization focuses on Latin America. Brazil has attracted a lot of attention because its federal structure and mix of right-wing populism and confused messaging parallel the situation in the United States. But per capita case and death rates in Peru and Ecuador are high by global standards, too, and Chile, Colombia and Bolivia are also experiencing serious epidemics.

Venezuela has done much better. Its relatively early and comprehensive responses, including quarantine, border closures and widespread testing, have kept community transmission low, and its challenge now is to manage infection among Venezuelans returning from other countries. The hundreds of thousands who fled an economy crippled by US sanctions have been living precariously in neighbouring countries, often denied or at the end of the queue for social and health support, and are now returning to their homeland in droves.

Africa remains the continent with the lowest Covid-19 burden, accounting for just 2 per cent of global cases. The dominant “rich world” narrative holds that, at worst, an undetected epidemic is already raging and, at best, that it is only a matter of time before one takes hold. That view discounts the possibility that African countries may have introduced effective public health measures, despite resource constraints, as soon as the pandemic became evident.

The Africa Centres for Disease Control and Prevention reports that all fifty-five African Union member states have acted to limit Covid-19’s spread, with full border closures in forty-three countries, limitations on public gatherings in fifty-four, closures of education institutions in fifty-three, a requirement to use face masks in public in thirty-nine, mass testing in eighteen and, as of 22 May, national lockdowns in nineteen and easing lockdowns in twenty-one.

Led by distinguished Cameroon-born virologist John Nkengasong, the Africa CDC was only launched by the African Union in 2017. This pandemic is its first real test, and thus far it seems to be coping far better than its American namesake.

Potential explanations for Africa’s milder-than-expected epidemic include these effective government responses. Timing may also play a role, though it will be a while before we can be sure. The much younger age structure of the population is likely to be another factor: while 19 per cent of Australia’s population is under fifteen, the proportion is at least double that in thirty-seven African countries, and all but seven of the fifty most-youthful countries are African. A lot more will be learnt about the impact of age structure on the epidemiology of SARS-CoV-2, but the very low rates of symptomatic infection in children provide enough hints that it may be key.
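A rough sense of why a younger age pyramid matters can be had by applying the same hypothetical age-specific rates of symptomatic infection to two different population structures. The rates below, and the shares of older people in each pyramid, are invented for illustration; only the under-fifteen shares echo the figures mentioned above.

```python
def symptomatic_share(age_shares, symptomatic_rates):
    """Share of a population expected to develop symptoms if everyone were
    infected, given its age mix and age-specific symptomatic rates."""
    return sum(age_shares[band] * symptomatic_rates[band] for band in age_shares)

# Invented symptomatic rates: children far less likely to show symptoms.
rates = {"0-14": 0.2, "15-64": 0.5, "65+": 0.8}

# Stylised age pyramids: an older population and a much younger one.
older = {"0-14": 0.19, "15-64": 0.65, "65+": 0.16}
younger = {"0-14": 0.42, "15-64": 0.55, "65+": 0.03}

for name, shares in [("older population", older), ("younger population", younger)]:
    print(f"{name}: {symptomatic_share(shares, rates):.0%} of infections symptomatic")
```

On these made-up numbers, a younger population produces noticeably fewer symptomatic, and therefore visible, cases from the same number of infections, which is one reason raw case counts can mislead.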

Competing models of the future of the pandemic in Africa could not be more different. BMJ Global Health has published two: one predicts that 22 per cent of the continent’s population will be infected in the first year, with 4.6 million hospitalisations; the other, looking at Ghana, Kenya and Senegal, suggests peaks from June to August and only 1 per cent or so of the population infected. The journal has issued an invitation for on-the-ground accounts of epidemic responses that take these two wildly divergent scenarios as starting points.

Beyond the models is a complex interplay between the vulnerability caused by poverty and the resilience developed by people accustomed to having to make do with minimal external support. A CARE International gender assessment in West Africa produced some intriguing results. As expected, it found that the economic shock is falling more heavily on women and the informal economy, and making greater demands on women to provide care. But it also found evidence that women are taking a greater role in community decision-making, drawing on their expertise in running the self-help saving-and-loan associations that provide much of the welfare support in these communities. “There are hopeful signs of men doing more childcare work now that children are at home all the time,” CARE also found, “and some signs of men and women doing more joint decision-making during the Covid-19 crisis.”

The science of complex adaptive systems has long held that positive adaptation can emerge from unexpected quarters. Effective networks are not simply the product of power or money; they are more subtle than that.

Emerging accounts are revealing how Australia’s Covid-19 response drew on informal and formal public health networks, reaping the benefits of pre-planning, and then plugged this expertise directly into the heart of political decision-making. Perhaps the most important feature of this system was that it was able to adapt as new information became available — never an easy job, especially when the stakes are millions of lives and livelihoods. The capacity for adaptive decision-making will be just as necessary as the next few months unfold.

One tool that will guide that decision-making is the genomic information that has been collected by the Microbiological Diagnostic Unit Public Health Laboratory at the Peter Doherty Institute in Melbourne. The team there has been sequencing the virus from as many Victorian samples as it can find, with sequencing done on an estimated three-quarters of all cases in the state. Nature reckons this phenomenal effort is, by a long way, “the most comprehensive sequencing coverage in the world for an infectious-disease outbreak.” This data source will enable future outbreaks to be tracked not only by contact tracing but also by their genetic fingerprints.
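What “genetic fingerprints” means in practice is that two cases whose viral genomes differ at only a handful of positions are very likely to sit on the same transmission chain, while cases separated by many mutations are not. A toy sketch of that comparison step follows; the sequences are short, invented fragments rather than real 30,000-base SARS-CoV-2 genomes, and the one-mutation cut-off is an arbitrary assumption for illustration.

```python
from itertools import combinations

def snp_distance(seq_a, seq_b):
    """Count the positions at which two aligned sequences differ."""
    return sum(a != b for a, b in zip(seq_a, seq_b))

# Invented, already-aligned fragments standing in for full viral genomes.
samples = {
    "case_1": "ACGTACGTAC",
    "case_2": "ACGTACGTAT",  # one difference from case_1
    "case_3": "ACGAACCTAC",  # several differences
}

# Pairs within a small distance are candidates for the same cluster.
for (name_a, seq_a), (name_b, seq_b) in combinations(samples.items(), 2):
    d = snp_distance(seq_a, seq_b)
    verdict = "likely the same cluster" if d <= 1 else "probably separate chains"
    print(f"{name_a} vs {name_b}: {d} differences, {verdict}")
```

Real analyses build phylogenetic trees rather than simple pairwise counts, but the underlying idea of linking cases by how few mutations separate them is the same.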

If we needed another reminder of why blame and stigma are inimical to public health, this is it. We know SARS-CoV-2 is incredibly easy to spread. Even with the greatest infection control, it can still escape. Knowing where the virus has come from allows its future spread to be precisely tracked — and that is only possible in conditions of empathy, trust and mutuality. •

Funding for this article from the Copyright Agency’s Cultural Fund is gratefully acknowledged.

The post Everything is connected appeared first on Inside Story.

Covid-19’s awkward couple https://insidestory.org.au/covid-19s-awkward-couple/ Tue, 26 May 2020 05:47:40 +0000 http://staging.insidestory.org.au/?p=61169

Britain’s book of government blunders has a new chapter

The post Covid-19’s awkward couple appeared first on Inside Story.

Big break, revolutionary pause, transformative hiatus or cathartic suspension? Only when it’s over will the half-world that Britain entered on 24 March, with its strange blend of movement and stasis, gain definition. Two months in, a form of shutdown syndrome makes exit on the other side hard to imagine. There is no going back to a pre–Covid-19 age. That leaves a present easier to itemise than to grasp.

In hospitals and care homes, staff and patients are on the front lines of a daily struggle for life against the fiendishly complex virus that is SARS-CoV-2. Scores are still dying each day, an undue proportion black and Asian, making the country’s toll of deaths per million inhabitants the world’s third-highest.

The economy is stripped to the manufacture, import, delivery and sale only of essentials. Schools, universities, arts venues, libraries, places of worship are closed, as are most shops. Millions are confined to home, excepting forays to buy, exercise, get medical help or make limited social contact. Windows in residential streets are festooned with children’s rainbow drawings and tributes. “Thanks to key workers and our NHS! Better times ahead!” is one of dozens in my neighbourhood.

The workforce is in limbo: the more secure on furlough, the bulk of their wages guaranteed until October by the exchequer and then by employers; others are increasingly on site; many depend on welfare, emergency funds or savings. A deep recession is under way, officials projecting a 30 per cent fall in GDP between April and June this year as state borrowing balloons. Railway and coach travel is near frozen. Business models that assume close physical contact (sports, aviation, tourism, hospitality, high-street retail, fashion, catering, gyms, restaurants, cafes) are on ice.

Parliament, TV punditry, teaching, festivals and conferences are scrambling onto Zoom or Pexip. Broadcast news has segued from journalists’ stopwatch need-to-know explainers into endless heartbait. Newspapers’ print versions, already embattled, are further shrinking as advertising revenues plunge.

The government is assailed for shortcomings in basic duty of care, testing, equipment and communication, its early delays and missteps having made its subsequent efforts a colossal but flawed catch-up. Prime minister Boris Johnson, visibly sapped after a close call in intensive care and facing a competent new Labour leader in Keir Starmer, is needled by Scottish and Welsh leaders, has been rash in defending his chief adviser Dominic Cummings over an alleged breach of lockdown rules, and breezes his way through every encounter.

Never in the United Kingdom’s post-1945 history have so many ingredients fermented so quickly into so heady, and contradictory, a brew. They include the state’s own fissures, with the post-1997 devolution of powers to authorities in Scotland, Wales and Northern Ireland effectively reducing the United Kingdom to England in health and education policy.

The macro level forms but half the story, the other being the virus’s myriad psychic impacts on everyday lives. That can mean fears of losing health and job, and the pains of separation and the stresses of a constricted world, yet also the joys of car-free streets, freedom from commuting, cleaner air, thrilling birdsong, nature exultant. Everything, outside and inside, is in movement at the same time.

No wonder questions of meaning, purpose, choice and change, both individual and national, have acquired a new sharpness. Visions of the post-epidemic future — more green, equal, united and caring, or an authoritarian dystopia — abound. So do toolkits and slogans: build back better, great reset, new social contract. Henriette Roland Holst’s ageless poem on her native Holland lends itself to the moment: England, you give no space but to the mind.

Such dreams will be shaped, if at all, in the forge of corona-age politics. For now, their undoubted allure carries a hint of twin flaws: wishful thinking, and self-distancing from a moment whose nub is that the youthful SARS-CoV-2 continues to run the show.


In step with the struggle to define the future is a contest over the pre-epidemic past. Eventually, at least one official inquiry will assess the UK state’s performance in the context of its plans for an emergency of this type. This is already much discussed, even as the country’s share of the Covid-19 trauma has just got going. Close scrutiny of Boris Johnson’s own role is as certain as that of Tony Blair’s over the Iraq war, which was examined in Sir John Chilcot’s 2016 report. So too is attention to the performance of government departments, affiliated scientific committees, public health agencies and political advisers. Still, a lidar-like focus may be needed to penetrate layers of hindsight, covering of backs, shuffling of responsibilities: all familiar from the legion of “blunders of our governments.”

On previous form, the flagship inquiry — likely to pre-empt the one on Brexit, whose final curtain Covid-19 may well hasten — will take years (Chilcot lasted seven). By then, the court of public opinion will long have delivered its own verdict, which could land severely on some experts as well as the prime minister. Already published committee records and detailed media reports converge on a story of initial, fatal misdirection compounded by a series of analytical and logistical errors. Even in a fluid crisis where so much is provisional, that account will be hard to shift.

The story’s first part notes that Britain’s pandemic plan, drawn up in 2011 and still operative, foresaw the principal threat as a new strain of influenza. This judgement reflected the swine flu outbreak of 2009, sparked by the composite H1N1 virus. In that case, around £1.2 billion (A$2.2 billion) was spent to allay the expected thousands of deaths, though they ultimately numbered only 214 out of 540,000 cases. Dame Deirdre Hine’s official report on H1N1 advised an incremental, follow-the-data approach when the next epidemic hit, plus wariness about worst-case assumptions. A contiguous factor here is that there was painful institutional and local memory of the 2001 foot-and-mouth outbreak, when a frenzied response saw millions of animals slaughtered and £8 billion (A$15 billion) burned, but no memory at all of SARS-CoV-1.

Influenza, not corona; wait and see, not get ahead; and, to complete the set, darkness not light. In October 2016, a big simulation of an H2N2 flu pandemic was held under the aegis of Public Health England. It found that the National Health Service would be overwhelmed, identified likely shortages of personal protective equipment, or PPE, and highlighted care homes’ vital role in relieving hospitals. Alarming as the exercise was, its report stayed under wraps. That the simulation took place just as Britain was walking deeper into the all-consuming Brexit mire under Theresa May may have contributed to the lack of practical follow-up. In shadow, the seeds of yet another blunder germinated.


The story’s second part begins weeks after the novel coronavirus had been identified in the wake of the Wuhan outbreak, and its full genome sequence released by Chinese scientists on 10 January (which allowed Sarah Gilbert of Oxford’s Jenner Institute, among others, to immediately start working on a vaccine). Relevant British bodies were tuning in to its spread, even more so when the first local cases, of two Chinese family members in northern England, were recorded on 31 January — by coincidence, the very day Britain left the European Union and entered a transitional period for trade talks.

Those distractions partly explain why, until late February, coronavirus was seen by the political-media class almost exclusively as “over there.” A collective jolt arrived in the first week of March, with the first local death, the number of cases passing a hundred and Johnson announcing a £46 million (A$86 million) fund for testing and vaccine research. Still, a sketchy government plan (“The UK is well prepared to respond in a way that offers substantial protection to the public”) had zero sense of urgency.

In public, Westminster’s stage-set approach, with its reassuring mantras (“contain, delay, research, mitigate”) and studied politesse between government and scientists, projected harmony. But the twenty-one days until the lockdown began on 24 March would incubate problems so fatal as to make what came later a giant work of repair. Test-and-trace plans aborted, care homes exposed, medical stocks insufficient, frontline staff under-protected, border monitoring and quarantine procedures absent: each of these had a distinct source, but linking several of them was a lack of reliable data and of systems capable of delivering, processing and acting on that data. Notionally in charge, government — inevitably, perhaps — floundered.

Much of this was owed to the way the ambiguous inheritance of those what-to-do-in-an-emergency files played out against the baffling unfamiliarity of the latest epidemic. The government in early 2020 was “hypnotised by its own plan,” writes Ian Leslie in the New Statesman: “Faced with the novel problem of an untreatable, highly transmissible virus, the government’s current advisers seem to have found it hard to break with the plan they had — now unfit for purpose — and think anew.”

An initial failure to gauge the disease’s threat had shrunk the bandwidth that, if available, might have allowed different options to be explored. Practical lessons from East Asia were neither sought nor applied. There was a crucial deficit of imagination. That bland 5 March plan was wrong: Boris Johnson’s government would lack not just knowledge and tools in taking the initiative against Covid-19 but also the ability to remedy that lack. In contrast to supermarkets’ pinpoint circuits, it had no way to conjure instant operative resilience out of sparse warehouses and uncertain suppliers in a just-in-time economy.


Among the many compelling sub-themes of this two-part story is one with a singular local twist: a shift in science’s public profile from neutral expertise towards competitive partisanship. Its pivot was the government’s mid-March volte-face from an overhyped “herd immunity” approach to a more interventionist one. The notion of allowing mild infection of the healthy young while shielding the vulnerable, mentioned on the BBC by David Halpern of the Behavioural Insights Team and repeated by chief scientific adviser Sir Patrick Vallance at a press briefing, was first twisted into a form of eugenics, even genocide, then buried over a frantic weekend when a paper by the Imperial College mathematician Neil Ferguson and colleagues found that without rigorous physical distancing, Covid-19 deaths could reach 250,000 or more. The path to shutdown on 24 March, ranking among the most consequential days in British history, was set.

The switch was opposed by a vocal libertarian minority that cited evidence of Covid-19’s varying risks and impacts to argue against social closure. Some fire was directed at Ferguson himself, whose costly advice in the 2001 foot-and-mouth outbreak was disinterred. Behaviourists, economists and public health specialists widened the fray in lamenting epidemiology’s new stardom. Most telling was that the contretemps began to deflate harmful veneration of what ministers were jarringly calling “the science.”

This nebulous entity had been the government’s face-shield from Covid-19’s onset, invoked to parry every doubt over its decisions. It had the additional benefit of implicitly defusing a barb dear to opponents of Boris Johnson, his government and Brexit, its flagship cause: that these Brexiteers held “experts” in contempt. That barb dated from a late stage of the 2016 referendum campaign on EU membership, when justice secretary Michael Gove, still a key Johnson ally, told Sky News’s Faisal Islam during a gladiatorial interview that Britain would be “freer, fairer and better off” if the country voted to leave the European Union.

To Islam’s litany of twelve power centres whose leaders wanted the country to remain in the Union — including the US, the IFS, the IMF, the CBI, NATO and the NHS — Gove retorted, “The people of this country have had enough of experts from organisations with acronyms saying they know what is best and getting it consistently wrong.” Seizing on a fatal mid-sentence pause, Islam twice repeated Gove’s first ten words with sharp incredulity, launching them as the story. Gove’s unwise “had enough of experts” rode the media carousel — nonstop, into every crevice, for four years — as a symbol of crass, self-harming populism.

It was a gift that kept on giving. Never mind the sources, feel the clicks. Post-corona, dozens of “the experts are back” columns and “who needs experts now?” jibes wrote themselves. At the food chain’s pinnacle, “Boris Knows He’s Out of His Depth. Suddenly Experts Are Useful Again” was the Times headline on its 9 April interview with the geneticist Sir Paul Nurse, Nobel laureate and director of London’s state-of-the-art biomedical Crick Institute, whose Brexit views go without saying. “It’s galling when people who have denounced experts then come on the stage and start talking about experts. It doesn’t fill you with great confidence.”

The spectacle of experts’ disagreement over Covid-19 pulls the plug on the mirthless funfair. To the extent that this bombshell reflects dimly on Britain’s general level of scientific literacy, anti-Brexiteers’ pathological reductionism bears some responsibility: no chance to sneer about “experts” was ever passed up in favour of a defence of the scientific (far less the democratic) value of pluralism.

The breakthrough here is that scientists (the “experts” du jour) are now part of the same cacophonous public space as everyone else. Many — along with active politicians, TV anchors, sportspeople, clerics, lawyers, academics, novelists — are now daily commentators, even capable (another thunderbolt) of drawing unfounded conclusions from false assumptions and dubious data. That the government’s Scientific Advisory Group for Emergencies, or SAGE, has spawned a dissident body, Independent SAGE, led by the government’s former chief scientific adviser Sir David King, is the paragon’s last breath.

This is but one sub-theme of the British story that the virus has acted to dissolve and reconstitute: in effect, as an agent of influence. So when ministers are chided for leaning on “the science” they don’t disclose to avoid questions they don’t answer, when Neil Ferguson “steps back” after admitting to private assignments that violated lockdown codes, or when the public health guru John Ashton and Lancet editor Richard Horton are — as a duty to the audience, and at long last — gently reminded by broadcasting hosts of their political credentials and rhetoric, it is another little victory for SARS-CoV-2.


A forlorn government is differently exposed when its crutch, “the science,” is kicked away. No more can it so limply pass over responsibility for judgements it needs to own. That presents a new maximum test to ministers: can they convey the balance of risks with a literate awareness of scientific and political complexity? An immediate test too, as the government now plans for schools in England to reopen on 1 June as part of the lockdown’s staged easing, against strong resistance from teaching unions and parents.

Boris Johnson looked unready to reach this high bar even before the latest drama to consume his government. This was the furious reaction to Dominic Cummings’s family drive from London to his parents’ farm near Durham, northeast England, where the adviser and his journalist wife Mary Wakefield, both showing symptoms of illness, wanted to deposit their small son while themselves going into quarantine in an outhouse. Every detail of that choice is now being parsed for clinching evidence of what Cummings denies, that the trip broke the government’s then guidance to “stay home.” Genuine anger mixes with revengeful glee as the co-architect of Brexit and of Johnson’s general election victory in December, hated by many on both counts, flirts with Nemesis.

Cummings’s hour-long press conference in the Number 10 rose garden on 25 May, hours after the morning’s paroxysmic headlines, began with his chronicle of a family under pressure of illness and overwork. He described his movements as “reasonable” in the context of “weighing complex decisions to do with the safety of my child and my desire to go back to work.” Lobby journalists then all but accused him of arrogant, elitist hypocrisy in flouting orders he expected the plebs to observe.

Always careworn in appearance, Cummings is an independent-minded strategist whose intellectual seriousness, ambition and impatience radiate equally from his fertile blog. The media’s reflexive hostility towards this (in British terms) uncommon radical recently erupted in intense criticism of his attendance at SAGE meetings, on the grounds that he was a political pollutant in scientific waters. This fizzled out when a few SAGE members said he mostly listened or sought clarification for the prime minister’s benefit. SAGE member Jeremy Farrar subtly wishes for more such interaction “so that advice goes directly into policy,” while blaming “not right” decisions made early in the crisis for a UK epidemic “that at least to some degree could have been avoided.”

The latest Cummings episode may also fade, though the Independent’s John Rentoul holds that only his departure can allow the people’s trust in Johnson to be rebuilt: “The public has already decided that he and Boris Johnson think that the rules for the little people don’t apply to them.”

The government’s errant choices pre–Covid-19 await their moment. So do incipient tensions between that other awkward duo, politics and science. Beyond the half-world, SARS-CoV-2 remains in charge. Those better times ahead will be a long haul. •

The post Covid-19’s awkward couple appeared first on Inside Story.

]]>
The first genomic pandemic https://insidestory.org.au/the-first-genomic-pandemic/ Mon, 11 May 2020 05:05:30 +0000 http://staging.insidestory.org.au/?p=60902

The virus’s genome has been at the centre of the vast output of research findings

The post The first genomic pandemic appeared first on Inside Story.

]]>
If it seems like we’ve been hit by a deluge of information on Covid-19, that’s because we have. The journal Nature suggests that the number of research papers on Covid-19 is doubling every two weeks, and by yesterday the World Health Organization’s repository of global literature on Covid-19 included more than 15,000 items. That’s quite apart from amateur and professional journalism, not to mention social media, where 630 million tweets with the hashtag #Coronavirus or #Covid19 have appeared (though the daily rate has dropped from its peak of twenty million on 13 March to around five million).

This information overload provides fertile ground for misinformation and conspiracy theorists. An unholy alliance is emerging of anti-vaxxers, China hawks and gun-toting libertarians, ready to seduce the credulous and the disaffected. Marlon Brando’s Johnny in The Wild One said it best: asked “What are you rebelling against?” he replied, “Whadda you got?”

Some dedicated souls are refuting the lies, piece by piece. But observing the old internet adage, don’t feed the trolls, I prefer to slip down the more orthodox rabbit holes.

One driver of this truly revolutionary explosion of scientific literature has been the changing ecosystem of academic publication. When journals as printed and bound artefacts lost their salience, their publishers moved online, relying on legacy reputations buttressed by prestigious editorial boards. It didn’t take long for pay-to-publish outfits to emerge, recognising there was money to be made by exploiting the publishing imperative among the oversupply of university staff. Distinctions between legitimate and predatory publishers became increasingly hard to navigate.

But in the last few years the environment has changed again. Bibliometric and altmetric measures of citations and impact, combined with legitimate publishers’ databases, are increasingly used to determine access to research funding and academic promotion.

If peer-reviewed and quality-controlled journals are the high-end retail outlets for these research products, the warehouses are the preprint servers. In the natural sciences, the arXiv (pronounced archive) has been around for thirty years. In other fields, these repositories are much newer. MedRxiv, where much of the Covid-19 literature has appeared, was launched less than a year ago. With its sister repository, bioRxiv, it lists 3172 Covid-19 articles, a figure that’s growing rapidly: a quick count shows twenty-nine articles added on 10 May and forty-three the day before.

These preprint servers are not a complete free-for-all. They are hosted by reputable institutions and are moderated, at least to a degree, and sorted into relevant subject categories. But the articles are posted before peer review, and many will never make it through that process. The urgency of slowing the Covid-19 epidemic and staving off deaths makes it very tempting to scour these servers for the latest research, but that comes with the risk of spurious results and junk science.

Artificial intelligence, or AI, proposes a way of identifying the best of this research. Scite.ai is a new tool powered by machine learning that trawls through mountains of scientific literature and not only counts the number of citations in other papers but also tracks whether subsequent mentions support or contradict the original paper. Some papers are widely cited because they are the best example of what not to do — Scite.ai enables a rapid sifting of right from wrong.
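In outline the idea is simple enough to sketch in a few lines. The toy code below is only an illustration of the principle, not Scite.ai’s software or API: the record structure and function names are invented, and the stance labels are assumed to arrive ready-made from an upstream machine-learning classifier.

```python
# Toy illustration of "smart citations": tally how many citing statements
# support, contradict or merely mention each cited paper. Hypothetical
# structures only -- not Scite.ai's actual code or data model.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Citation:
    cited_paper: str   # DOI of the paper being cited
    stance: str        # "supporting", "contradicting" or "mentioning"

def tally_stances(citations):
    """Return a per-paper count of citation stances."""
    tallies = {}
    for c in citations:
        tallies.setdefault(c.cited_paper, Counter())[c.stance] += 1
    return tallies

records = [
    Citation("10.1000/example-preprint", "supporting"),
    Citation("10.1000/example-preprint", "contradicting"),
    Citation("10.1000/example-preprint", "contradicting"),
]
print(tally_stances(records))
# {'10.1000/example-preprint': Counter({'contradicting': 2, 'supporting': 1})}
```

The tallying, of course, is the trivial part; the hard work lies in training a classifier to read a citing sentence and decide which stance it takes.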

Galen and Avicenna would have recognised trial and error as basic to the scientific method. AI makes possible a major shift in this paradigm: for example, chemists can now use retrosynthesis methods to deconstruct and then reconstruct molecules, and potentially engineer drugs with very precise targets — blocking virus replication, for example.

The contrast couldn’t be greater with the fabled serendipitous breakthrough that dominates how we imagine drug discovery. (“On the morning of Friday 28 September 1928 Alexander Fleming finds that the mould growing on a petri dish accidentally left on a shelf kills bacteria, and so penicillin is born.”) AI wants to do away with this image altogether. The slightly ominous-sounding BenevolentAI is a case in point: setting its algorithms loose on a vast database of potential drugs, it pinpoints arthritis drug baricitinib as the most promising compound to combat SARS-CoV-2, and propels it into human trials.

The same shift is happening in vaccine development. Ever since Edward Jenner discovered that a dab of cowpox could be used to fight smallpox, vaccination has worked with two basic strategies: using either a killed version of the virus in question or live virus altered enough to cause only the immune reaction and not the full-blown disease.

For SARS-CoV-2, new techniques are in play to engineer vaccines from first principles: gene-based designs that deliver the genetic code for the virus’s spike protein so that the body’s own cells produce it and prime the immune system, with no need for whole killed or weakened virus at all.

In the early AIDS days, gene-based techniques were in their infancy. I vividly remember a conversation in 1993 with a friend who, with a mix of hope and desperation, was betting his last throw of the dice on gene-splicing techniques he had heard about from Canada. He never got to try them.

At the time, antibody tests using a pinprick of blood were the stalwarts of HIV diagnosis. PCR assays, which amplify genetic fragments from a sample, were useful for confirming borderline cases, but they were cumbersome and prohibitively expensive.

That world has been turned on its head. PCR and other gene-amplification methods are the readily available go-to options for a test. Once the virus’s genome had been sequenced, assays targeting its genetic fragments could be designed and reliable tests set up in a matter of days, as Victoria showed. Antibody testing has proved much more difficult, partly because we are still learning about the nature and timing of the antibody response to Covid-19.

To be useful, diagnostic tests need to meet two thresholds that point in different directions: sensitivity, when the test is fine-tuned enough to detect the virus if it is there; and specificity, when the test reacts to the virus in question and not to similar signals. Insufficient sensitivity will produce false negative results; insufficient specificity will produce false positives. Fast and dodgy operators thought they could bang together an antibody test for Covid-19 and rush it to market, but getting that balance right turned out to be much trickier than expected.
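The arithmetic behind that tension is easy to make concrete. The sketch below uses invented figures (10,000 people tested, 1 per cent truly infected, a test that is 95 per cent sensitive and 98 per cent specific) rather than data from any real Covid-19 test.

```python
# Expected false negatives and false positives for a diagnostic test.
# All numbers here are illustrative assumptions, not real test figures.
def expected_errors(tested, prevalence, sensitivity, specificity):
    infected = tested * prevalence
    uninfected = tested - infected
    false_negatives = infected * (1 - sensitivity)    # infections the test misses
    false_positives = uninfected * (1 - specificity)  # healthy people wrongly flagged
    return false_negatives, false_positives

fn, fp = expected_errors(tested=10_000, prevalence=0.01,
                         sensitivity=0.95, specificity=0.98)
print(f"false negatives: {fn:.0f}, false positives: {fp:.0f}")
# false negatives: 5, false positives: 198 -- against only 95 true positives,
# so at low prevalence even a seemingly good test swamps the signal with
# false alarms.
```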

Remarkably, CRISPR technology — the gene-splicing technique that enables a small slice of DNA to be cut out and replaced — is coming to the rescue. Thirty years ago this was the very definition of cutting-edge science. Today, a CRISPR-based test stands ready to transform Covid-19 diagnostics, with the promise of a test simple and cheap enough for home use.

When the human genome was first fully described after a thirteen-year, multibillion-dollar project, it was hailed as the dawn of a new era of precision medicine. But gene therapies didn’t start rolling out the door, and the hype faded. Maybe it was a slow burn.

Covid-19 is perhaps the first pandemic with a genomic response: from epidemiology to diagnostics, to therapeutics, to vaccines, the virus’s genome has been front and centre. This pandemic is the crucible in which these genetically based and rationally designed approaches fuelled by AI will prove their mettle — or not.


For an alternative to these ponderings on science, here are a couple of great reads from the last few days.

Rutger Bregman’s new book Humankind is out in English next week, and as a teaser he offers the uplifting tale of a real-world Lord of the Flies in which a group of shipwrecked boys descended not into chaos but rather into amiable cooperation.

One of the smartest of development economists, Dani Rodrik, has considered what a better globalisation could look like. When Australia has not been too busy being Washington’s poodle, it has been a leading advocate of a rules-based global order. Those rules will need to be redrawn in a post-Covid world, and Rodrik provides a good pointer. •

The post The first genomic pandemic appeared first on Inside Story.

]]>
Knowns and unknowns https://insidestory.org.au/knowns-and-unknowns/ Tue, 05 May 2020 00:13:30 +0000 http://staging.insidestory.org.au/?p=60777

Another week of pandemic responses highlights the uncertainties ahead

The post Knowns and unknowns appeared first on Inside Story.

]]>
Two months ago I wrote the first in what has become a series of Covid-19 articles for Inside Story. Two months before that, the first international alert on the emergence of a new and potentially deadly disease had been published. At history’s critical junctures, the narrative emerges gradually.

For me, the turning point came towards the end of February during a celebration for friends’ birthdays in a vineyard restaurant on a glorious late summer’s day. At the table was a young doctor on his way to specialising in intensive care. He confessed to being obsessed with Covid-19, unusual at the time when it was barely a talking point in Australia. A major metropolitan hospital was making contingency plans to turn over one wing to an isolation ward, he confided. He wondered how people would cope if dire health rationing became necessary.

Since then, we’ve seen an explosion of knowledge. Never before in human history has so much been learnt about a new disease in so short a time. But for all we now know, there is even more that, as yet, we don’t.

We know that since the first identified outbreak in Wuhan, China, around 3.5 million cases have been confirmed. We also know this is an underestimate, as many people are infected but never show symptoms. Until very recently the only people tested have been those with signs of respiratory illness, and in many places testing is hardly available.

We know that 250,000 deaths have been attributed to Covid-19 worldwide, but that this toll is also certainly an underestimate. Cause-of-death statistics are notoriously hard to collect consistently, especially with an emerging or stigmatised disease. Even where health systems have a good handle on hospital deaths, deaths in aged care facilities or at home may not be recorded accurately.

Despite the difficulties attached to attributing mortality and the lack of a precise denominator of the numbers infected with the virus, we do know that the death rate from Covid-19 is substantially worse than from seasonal flu. Australia’s case fatality rate seems to sit around 1.4 per cent, not dissimilar to the rate found in other places — like mainland China, South Korea and Taiwan — where testing has been extensive and the first wave relatively contained. The Diamond Princess has inadvertently provided a closed system to benchmark the fatality rate, with the ratio of deaths per infection coming in at 1.3 per cent, and double that if you count deaths per case of illness. Not surprisingly, a cruise ship population skews old, and Covid-19 has a steep age gradient, so corrections have to be made to estimate potential fatality rates across the whole population.
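Those two ways of counting are easy to set side by side. The figures below are rounded illustrations in the spirit of the Diamond Princess numbers quoted above, not an exact reconstruction of that outbreak’s data.

```python
# Deaths per infection versus deaths per case of illness, in miniature.
# Rounded, illustrative figures only -- not the actual Diamond Princess data.
deaths = 9
infections = 700          # everyone on board who tested positive
symptomatic_cases = 350   # roughly half of those infected fell ill

infection_fatality_rate = deaths / infections        # ~1.3 per cent
case_fatality_rate = deaths / symptomatic_cases      # ~2.6 per cent

print(f"deaths per infection: {infection_fatality_rate:.1%}")
print(f"deaths per case of illness: {case_fatality_rate:.1%}")
```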

We know that infections start circulating well before they come to the attention of public health authorities, but nevertheless that physical distancing can strongly curtail epidemic spread. The repertoire of measures includes limiting the numbers of people gathering in close proximity, using face masks to limit the spread of droplets from coughing or sneezing, handwashing with soap to kill the virus on the hands, and disinfection of surfaces. But there is no magic formula or proven combination.

We know that SARS-CoV-2 causes respiratory distress with a complex mechanism of action. “Cytokine storm” is such a resonant metaphor it has become a major window into how the body’s immune system can produce a deadly over-reaction. And, at the molecular level, we are learning how spike proteins on the virus’s surface break into cells and enable its particles to replicate.

We know that the risks of becoming seriously ill with Covid-19 are related to other health conditions, especially diabetes and obesity — though not, surprisingly, to asthma or smoking.

We know that wealth and status are no barriers to infection, but that, once you’re infected, your chances of becoming sick or dying correlate closely to existing patterns of inequality. We also know that new diseases, as they always do, give a boost to long-rehearsed prejudices: the West blaming the East, the South the North, Hindus blaming Muslims, Chinese in China blaming Africans, Africans in Africa blaming Chinese.

And we know that for every example of selfless solidarity in the face of a crisis there will be a counter-example of ruthless advantage-seeking, whether by predatory drug companies, disease profiteers, wealthy sporting codes or governments continuing their geopolitical manoeuvring.


But what of the known unknowns?

The list is as long as your arm. Will a second wave of the epidemic be worse than the first? Will there be perpetual waves until 70 per cent of the world’s population is infected? When will effective treatments and a vaccine be developed? How long will people put up with physical distancing? Will global supply chains be broken forever? Will we have to choose sides in a war between the United States and China?

Science, with a capital S, is often presented as a done deal. Just like the sign advertising “antiques made daily,” though, even newly minted scientific facts come with a patina of received wisdom, at least until the next paradigm shift comes along.

What is fascinating about the current frenzy of Covid-19 research is that the lid is being lifted on the messy and conflicted process of science in action. Take two examples: epidemiology vis-à-vis children, and pharmaceutical treatments.

Federal education minister Dan Tehan picked a bad morning to rip into Victorian premier Daniel Andrews’s “lack of leadership” in not reopening schools for face-to-face learning. Within hours, Victoria’s health minister was announcing a school closure and three-day disinfection following the discovery of an ill teacher, and New South Wales was to follow with another. The scientific ground is also shifting under the Australian public health advice that children in schools pose little risk.

Virologist Christian Drosten has become an unlikely star in Germany’s Covid response. Like Greece’s epidemic spokesperson, Australian-born Sotiris Tsiodras, it seems today’s heroes are made of calm and frank communication combined with prodigious expert knowledge. Drosten, one of the world’s leading coronavirus experts, last week concluded after a close examination of nearly 4000 samples in Germany that “viral loads in the very young do not differ significantly from those of adults. Based on these results, we have to caution against an unlimited re-opening of schools and kindergartens in the present situation. Children may be as infectious as adults.” Similarly, an analysis of data from Shenzhen, China showed that “children are as likely as adults to become infected with SARS-CoV-2 after close contact with an infected person.”

Australia may need to abandon its current rationale for keeping schools open: the idea that children are unlikely to become infected or to be infectious. That doesn’t mean there may not be other reasons — like the needs of those children whose homes pose a danger to them, or the inability to provide childcare options to health and other essential workers. But the idea that schooling is developmentally essential, at least as provided under the current nineteenth-century industrial model, is not a strong rationale.

Why not take this opportunity to update schooling for today’s information-unlimited environment? The only basic skills that are essential are literacy, numeracy and discernment, or how to tell fakes from the genuine article. Once these are mastered, experiential learning can provide the rest. The economy no longer needs young people to be disciplined in sitting at a desk and obeying institutional authority, and the other main task of schooling — to filter access to social goods under a veneer of meritocracy — could also do with a major rethink. With Australian universities reeling under the sudden disappearance of overseas students, why not fill the vacant places with high school students, at liberty to choose a place that most suits their interests?

The science of drug development is also being laid bare. As yet, no effective treatments exist, with much-touted possibilities such as hydroxychloroquine having disappointed. The latest buzz is around Remdesivir — but experienced players know to tread carefully.

The first placebo-controlled study of Remdesivir showed no shortening of the period of illness. With more adverse events in the Remdesivir group than for those receiving placebo, the trial was stopped. This was the trial whose results were made public early by the World Health Organization, much to the annoyance of the drug’s manufacturer, Gilead. When the peer-reviewed publication of the trial appeared on 29 April, Gilead was much better positioned to seize control of the narrative. It issued a press release about its study comparing five and ten days’ use of Remdesivir, both with similar times to recovery.

Also on 29 April, results of a US National Institutes of Health study showed a modest but statistically significant improvement in recovery time — eleven days compared with fifteen days — for patients receiving Remdesivir compared with placebo. This was enough for Anthony Fauci (whose role as the country’s most senior infectious disease expert makes him a beacon of sense in Donald Trump’s media conferences) to liken the announcement to the first results for AZT in combating AIDS thirty-four years ago — a hopeful proof of concept that a drug had antiviral impact, but a long way to go before finding truly effective therapies.

Gilead is a master at shaping the environment to its financial advantage. This epidemic is no exception: in the first quarter of 2020, it upped its spending on congressional lobbying in the United States to record levels. In this case, the game is to establish Remdesivir as the “standard of care” against which other treatments will be judged. Even other drugs found to be a better candidate will find it harder to muscle their way in to the fiercely competitive environment where discovery, trialling, regulatory approval and manufacturing all pose significant hurdles.

Although international trade rules include provisions for public health needs to trump intellectual property rights, those provisions have proved inadequate against the industrial-pharmaceutical juggernaut. That is why Médecins Sans Frontières has assembled a large group of partners in a campaign for access, under the trenchant slogan “No Patents or Profiteering on Drugs, Tests, and Vaccines in Pandemic.” At best, its prospects for success also count as a known unknown.


What then of the realm of unknown unknowns?

This is the dangerous territory where conspiracy theories lurk, providing a rod of certainty amid the fog. Will a secret dossier be produced “proving” Covid-19’s origin as a weapon of mass destruction?

If Scott Morrison is sincere in arguing that an independent inquiry into the origins of SARS-CoV-2 is plain common sense and not a dog-whistle attack on China, then he ought to take seriously the phylogenetic suggestion that the viral variant closest to its likely bat progenitor turned up not only in China but also in Australia and the United States. “What if it was an Australian bat that first passed on this disease?” he needs to say. “Just like the hendra and lyssa viruses that first made their appearances in Australia. We’re fair dinkum, we’ll cop that.”

The only thing we can be sure we know about unknown unknowns is that there will be some. Meanwhile, for the next few years, get used to endemic Covid-19 — not eradicated, barely contained, and at least ten times worse than the flu. •

The post Knowns and unknowns appeared first on Inside Story.

]]>
Cook eclipsed https://insidestory.org.au/cook-eclipsed/ Fri, 01 May 2020 02:51:50 +0000 http://staging.insidestory.org.au/?p=60702

Reappraisals and re-enactments have shaped public memory, but our understanding of James Cook’s life and impact continues to evolve

The post Cook eclipsed appeared first on Inside Story.

]]>
Fifty years ago, I witnessed Captain James Cook’s arrival on the shores of Botany Bay — or rather, a re-enactment of the event on one of the barren asphalt assembly grounds at Mona Vale Primary School. The bicentennial event involved a reconstruction of the Endeavour in the form of an awkward float, and some sort of confrontation with pupils cast as Aboriginal people. Such commemorations are supposed to foster feelings of national belonging, but my recollection is one of tedium and indeed alienation from the heroic perseverance our headmaster stressed in his address. Throughout my childhood, Cook seemed a statue rather than a story: a bearer of unachievable and even unattractive virtues rather than a life that was extraordinary or enigmatic.

Later, as a university student increasingly absorbed in the anthropology and history of the Pacific, I retained this sense of Cook as an indomitable but apparently one-dimensional explorer, a man who wanted only to put lines on a map. Beginning research on the Marquesas Islands, I was fascinated by a drama of shamanism, taboo, tattoo, unfamiliar gender relations and cross-cultural exchange. Even as I began reading mariners’ journals for what they revealed of early encounters, Cook, the most celebrated of them, seemed a fixture of national histories rather than a character who might himself be an actor or locus of interest. Yet I was surprised when I read his account of first contact at Kamay, or Botany Bay, where the Endeavour’s crew was famously resisted by the Gweagal. After the seamen had tried fruitlessly to initiate relationships with local people, Cook candidly acknowledged, “it was to no purpose, all they seem’d to want was for us to be gone.”

I was still more surprised, a few years later, to come across a monumental but obscure book by one of the participants in Cook’s second voyage. Joseph Banks had been expected to again accompany Cook, this time on an expedition that sought to establish once and for all whether any southern continent existed. But Banks wanted to take a larger scientific party than he had on the Endeavour, and angrily withdrew after rows about the suitability of the accommodation on the ships selected for the new voyage. The hastily nominated substitute as natural historian was Johann Reinhold Forster, a polymath even by Enlightenment standards, who turned out to be both a brilliant observer and a difficult and contentious character, during and after the voyage.

Though Forster avidly collected natural specimens, he was intensely interested in the Pacific peoples encountered over the three years of the voyage. Cook’s method was to use the summers to search for the hypothetical continent in far southern latitudes, and he interspersed those forays with extended cruises in the Pacific tropics. He sought refreshment at places he had previously visited, including Tahiti and New Zealand; he investigated the situations of islands identified by earlier mariners; and he came upon other islands previously unknown to Europeans. Some visits were brief, others extended, and some repeated: the Europeans would visit Society Islanders twice and at length, and the people of Queen Charlotte Sound three times, as well as meeting, for the first time, people in Vanuatu and New Caledonia among other islands and archipelagos.

From the Admiralty’s perspective, the second voyage’s findings were negative: there was no great south land other than what might lie beyond ice, and certainly no land that offered produce or trade. But for an empirical philosopher like Forster the wealth of discovery was extraordinary. He was prompted to write a 600-page book of “observations” made during the voyage, the bulk of which reflected on “manners and customs.” He painstakingly detailed the behaviour, the institutions and the social condition of the various peoples the mariners encountered. He was particularly interested in the condition of women, especially in places like Tahiti, where they were evidently of high status and influential in political affairs. Separately, he wrote the first extended essay about the moai, the great ancestral statues of Rapa Nui. That manuscript was lost in a Polish library until just a few years ago, reflecting the extent to which Cook voyage research is not done and dusted: even now, new material continues to be found.

Forster’s intense curiosity regarding practices that were for him exotic, and the sheer variety of Islanders’ lives, was fully reciprocated by Pacific peoples. Some, like Indigenous Australians, were indeed cautious, and sought to avoid intruders, or at least maintain distance. But across Polynesia, local people, and particularly local people of high status, were eagerly interested in understanding these visitors from beyond the known universe, who came in great ships, bearing extraordinary things. Islanders keenly traded for fabrics, not least because Indigenous forms of beaten and woven cloth were of exceptional importance in their own regimes of value. The Tahitian chief Pomare, among many others, not only wanted novel things, but relationships. He understood that Cook was King George’s emissary, and presented the mariner with gifts for his sovereign. He wanted to extend the alliances he already had with the chiefs of other islands by embracing Peretania, as Britain was rendered in Tahitian.

In the Pacific, the decade of Cook’s voyages was thus an extraordinary time. The mariners’ encounters with Islanders were sometimes tense, there were misunderstandings and moments of violence. Yet there were also sustained diplomatic interactions, and much generosity. People revealed new worlds to each other; Europeans discovered Islanders, and Islanders discovered Europeans; Islanders also rediscovered each other, in the sense that Society Islanders and Māori joined Cook’s ships and visited other parts of the Pacific, encountering peoples to whom they were ancestrally related. One, Mai (generally known as Omai), also travelled to Britain, a precursor of many Pacific Islanders who visited Europe early in the nineteenth century.

Over the last twenty or so years, new scholarship focused on Indigenous perspectives has revealed and explored local experiences of these encounters. Extraordinary work by Indigenous artists has made the diversity of these perspectives and experiences public and prominent for the first time.


The decade from 2018 onwards has been and will be marked by a series of 250th Cook anniversaries, from that of the Endeavour’s departure from England through first contacts in New Zealand and Australia and other events of Cook’s second and third voyages to the anniversary, on 14 February 2029, of his death at Kealakekua Bay. These events seem defined less by the rich, unpredictable and difficult world of the voyages themselves and more by the long history of commemoration and argument about Cook. The little life itself is typically eclipsed by successive afterlives, pageants and re-enactments.

In the early twentieth century, Sir Joseph Carruthers, premier of New South Wales in the years after Federation, was an ardent Cook champion, advocating Cook statues and memorials in London and Hawaii and playing a key role in the dedication of the Kurnell landing area as a national park. Carruthers’s conservative historical imagination was challenged by writers on the left, one of whom, D. Healy, felt it important to contribute an essay on Cook’s death to the Communist, a Sydney journal “for the theory and practice of Marxism.” There was “something fascinating,” he noted, in the story of “an empire-builder who was actually worshipped by a primitive people but made the fatal mistake of being found out.” “No more than the usual arrogant, harsh and stupid British naval commander,” briefly misrecognised as a deity by the Hawaiians, Cook then died in a banal confrontation. The explorer’s death might prefigure a wider modern demystification of false gods, Healy hoped.

Mark Adams’s Cook Memorial, taken at Meretoto-Ship Cove, Totaranui-Queen Charlotte Sound, New Zealand. Silver gelatin prints. Courtesy of the artist

Debate of this kind periodically resurfaced. At the time of the 1970 anniversary there was growing awareness of just how damaging European settlement had been for Aboriginal people. Yet the tenor of events was nevertheless essentially celebratory. That mood has since been shaken repeatedly by war, economic challenges, environmental crises and acrimonious debate about nationality and immigration.

A consequence of an increasingly polarised politics has been much controversy about the global order’s antecedents. Some historians have sought to rehabilitate empire, suggesting that colonial rule broke up old hierarchies and hegemonies and brought the benefits of modernisation. From another direction, the legacies of slavery and other forms of oppression have been denounced by a new generation of anticolonial activists, exemplified by the “Rhodes must fall” campaigns, which sought (successfully in Cape Town, unsuccessfully in Oxford) to have statues of Cecil Rhodes removed from university precincts. In both Australia and New Zealand, monuments to “Captain Crook” are occasionally vandalised.

Cook’s own writings from the voyages make it evident that the morality of cross-cultural contact was a problem at the time. Cook was conscious of the legacies of his expeditions and was deeply troubled by the deleterious impact of sexual traffic on the health of Indigenous populations. Referring to sexual contact between sailors and local women, he wrote “I allow it because I cannot prevent it.” He was still more disturbed by the fact that the mariners’ demands appeared to have motivated Māori men to make prostitutes of their women. He feared that the trade in goods introduced new wants, and hence disease, and served “only to disturb that happy tranquility they and their Fore fathers had injoy’d.”

Cook extrapolated these reservations globally, to the whole business of European colonisation: “If any one denies the truth of this assertion, let him tell me what the Natives of the whole extent of America have gained by the commerce they have had with Europeans.” Early in his naval career, Cook had met dispossessed Beothuk of Newfoundland: he knew what he was talking about.


Among the many ramifications of the Covid-19 pandemic has been the cancellation of commemorative events associated with Cook’s arrival at Kamay. No doubt the messages offered would have been more reflective and representative than those foisted on schoolchildren such as me in 1970. But celebration, anti-celebration and even commemoration that aspires to cultural balance will inevitably diminish the rich mess of encounter, exchange, novelty, violence and moral murk that Islanders and mariners contributed to and suffered through the 1770s.

Community members studying artefacts collected during Cook’s 1769 visit to Turanganui-a-kiwa, at the Tairawhiti Museum, Gisborne, New Zealand, in September 2019. Courtesy Tairawhiti Museum

There is another story altogether. Things that people invent can, over time, assume absolutely different values from those that motivated their creation. During the Endeavour voyage, Joseph Banks, James Cook and others made extensive collections not only of natural specimens but also of Indigenous works of art and artefacts. The material they gathered, mainly through gift exchange, constituted the very first such collection to be systematically made, documented and subsequently deposited in museums.

Whatever values the Endeavour collections have had in academic, historical and artistic terms, they are unambiguously now cultural resources of unique significance. They exemplify Indigenous life and Indigenous culture; they include implements that reflect day-to-day subsistence, and art forms associated with genealogy, sanctity and ritual. They amount to material archives of sustainable ways of life. They bear the hands and values of ancestors.

To be sure, we could just cancel Cook. But the commemorative programs in New Zealand and Australia have provided occasions for artefacts to be returned for extended exhibition, on a model very different from those of standard museum loans. In Gisborne — near the sites of first contact between Cook and Māori in October 1769 — taonga, ancestral treasures normally cared for in Cambridge, were taken from the airport to the local tribal meeting house, where they were blessed, handled and deployed in performance by community members. Prior to the opening of Tu te Whaihanga, an exhibition at the Tairawhiti Museum, a series of community study visits took place, enabling descendants of the people who traded artefacts with members of the Endeavour’s crew to engage more intimately with the historical pieces. Ahead of the event, elder and artist Steve Gibbs looked forward to being able “to honour our ancestors by bringing back something very special to us all here at Turanganui-a-kiwa.”

In Australia, at the National Museum in Canberra, the spears appropriated by Cook 250 years ago are exhibited alongside a set made recently by Dharawal man Rod Mason. “The most important thing,” he has said, “is our connection to the things like the spears, that were taken from Botany Bay, and how we’re still making spears today — no one can take that away from us, because we’ve been doing it all our lives.” Shayne Williams from the Aboriginal community at La Perouse adds, “It makes me feel proud to see those spears from 1770. They are extremely valuable, not just for us at Botany Bay but for Aboriginal people right across the nation.” •

The post Cook eclipsed appeared first on Inside Story.

]]>
You’ve got to give it to Cupid https://insidestory.org.au/youve-got-to-give-it-to-cupid/ Tue, 24 Sep 2019 23:01:49 +0000 http://staging.insidestory.org.au/?p=57005

Books | A psychologist looks at how brain damage and disease can influence sexuality

The post You’ve got to give it to Cupid appeared first on Inside Story.

]]>
The cranium-fondling phrenologists of the nineteenth century determined that the brain area responsible for sexual appetite — or “amativeness,” as they called it — was located just behind the ears. A telltale lump in the occipital region would reveal its possessor to be unusually lusty. Adjacent lumps divulged more wholesome forms of love: for spouse, family and life itself.

Just how radically wrong this skull-braille was is made radiantly clear in Amee Baird’s engaging book. Bumps on the head don’t allow us to read personalities, of course, but the more basic idea that something as rich and complex as human sexuality might sit in a single brain location is also bunk. Baird, an Australian clinical neuropsychologist, shows that sex is sustained by a complex network of brain regions and pathways, and that alterations to any part of this network can influence sexual response. Ironically, the occipital lobe of the brain, where amativeness was thought to reside, is the one lobe left out of the network.

The neurological case study has become a popular genre through the work of masters such as the late Oliver Sacks and Harold Klawans. Sacks became famous for his humanising portraits of ordinary people living with extraordinary symptoms. In his case studies these symptoms were rarely sexual, with the exception of Natasha K. from The Man Who Mistook His Wife for a Hat. In her late eighties Natasha became, as she remarked, “frisky,” entertaining carnal ideas about much younger men. She proceeded to amaze Sacks by correctly diagnosing herself with “Cupid’s disease,” a loss of inhibition caused by brain infection resulting from syphilis, which she had contracted seventy years earlier while working in a brothel. Sacks’s message here is that the effects of disease are not invariably negative and do not always cause dis-ease. As Natasha said, in appreciation of her late-in-life rejuvenation, “You’ve got to give it to Cupid.”

Sex in the Brain pays homage to Sacks and even discusses at length one of his later cases, but it has a few important differences from previous case study collections. For a start, the book has a single theme, whereas many such collections are jumbles, occasionally carrying a slight whiff of freak show. Although only a fraction of the cases she discusses are her own, Baird displays an intimate acquaintance with her patients that comes from what she calls “the luxury of time” — the fact that her clinical assessments take hours to conduct, so different from the rapid consultations medical clinicians normally deliver. Unlike many writers of case collections, Baird is a psychologist rather than a medically trained neurologist, and perhaps as a result her chapters mingle case details with discussions of contemporary neuroimaging research. Her psychology background also shines through in her gentle skewering of neurosurgeons’ personalities.

The book’s chapters present an array of sexual alterations and aberrations resulting from brain damage or disease. Baird discusses hypersexuality, apparent changes of sexual orientation, erotomanic delusions of being loved by someone famous, paedophilia induced by brain tumours, the effects of pornography consumption on the brain, and the neural basis of love and sexual pleasure. Brain-based sexual complications associated with Parkinson’s disease, multiple sclerosis, dementia and autism are explored, as are the ways that some complications are themselves complicated by gender. (One grumpy and taciturn husband becomes warm, appreciative and romantic to his spouse following a left hemisphere brain injury.) As the book progresses, the reader is introduced to onanistic monkeys, brain structures with interesting names (amygdala and hippocampus derive from words for almond and seahorse), grotesque “cures” for homosexuality, safety-pin fetishes, epileptic seizures that cause orgasms, and orgasms that cause seizures and strokes.

One rather surprising revelation is how often sexual disturbances in the context of neurological conditions result from the treatments rather than the conditions themselves. Some people with Parkinson’s disease have developed sexual excesses following dopamine replacement therapy. Others with epilepsy and other neurological disorders are afflicted by an assortment of sexual disturbances following brain surgeries intended to correct the original problem. Baird shows how the unending quest to improve medical treatment has sometimes violated the Hippocratic injunction to “do no harm,” and how when those harms have been sexual in nature they have sometimes been neglected. Too often, she suggests, doctors have failed to ask patients, let alone their partners, about bedroom side effects.

A temptation in neuroscience writing is to reduce the person with a neurological condition to their brainhood and to define the condition as a loss of the individual self. Baird, as a seasoned clinician, is sensitive to how social networks matter as much as neural networks, and shows how the sexual disruptions caused by brain disorders influence a patient’s relationships with partners and family members.

Baird is at her compassionate best when she embeds her patients’ sometimes embarrassing or shameful behaviour in the broader context of their frayed lives and social connections. She writes amiably, accessibly and without titillation. Sex in the Brain introduces a very promising new talent in popular neuroscience and deserves to be widely read. •

The post You’ve got to give it to Cupid appeared first on Inside Story.

]]>
The radical legacy of Apollo https://insidestory.org.au/the-radical-legacy-of-apollo/ Sat, 20 Jul 2019 23:02:29 +0000 http://staging.insidestory.org.au/?p=56172

They went to the moon but discovered the Earth

The post The radical legacy of Apollo appeared first on Inside Story.

]]>
Fifty years ago the Americans missed their chance when they planted an American flag in the lunar dust around Apollo 11. The bulky astronauts bounding around the airless moon never looked so small as when they saluted the stars and stripes, a bedraggled piece of cloth held aloft by a horizontal rod. It was a diminishing moment in an otherwise impressive mission, and it looks smaller with the years. It seems that Horace Walpole was right when he wrote in 1783: “Could we reach the moon, we should think of reducing it to a province of some European kingdom.”

Apollo 11 was a magnificent technological feat that also stirred a common humanity. There were times when the sheer wonder of leaving our planet to land on another world transcended the grubby, aggressive cold war politics that fuelled the space race. The plaque left on the moon depicted the two hemispheres of Earth and assured the solar system that Apollo 11 “came in peace for all mankind.” But nationalism was the mud on their boots, and they tramped it in.

A few hours after the first moonwalk, an uncrewed Soviet spacecraft, Luna 15, crash-landed on the moon, thus giving the Americans a satisfying first spike on the graph of their newly installed lunar seismometer. Luna 15 took the space race to the finish line, for it was designed to retrieve Russian samples of moon rock before the Americans could return with their own.

The rhetoric of space exploration was so future-oriented and cold war politics so competitive that NASA underestimated the power of looking back. In 1968, the historic Apollo 8 mission launched humans beyond Earth’s orbit for the first time, out and across the void and into the gravitational power of another heavenly body. For three lunar orbits, the three astronauts studied the strange, desolate, cratered surface below them and then, as they came out from the dark side of the moon for the fourth time, they looked up and gasped:

Frank Borman: Oh my God! Look at that picture over there! Here’s the Earth coming up. Wow, that is pretty!

Bill Anders: Hey, don’t take that, it’s not scheduled.

They did take the photo, excitedly, and it became famous, perhaps the most famous photograph of the twentieth century, the blue planet floating alone, finite and vulnerable in space above a dead lunar landscape. Frank Borman said: “It was the most beautiful, heart-catching sight of my life.” And Bill Anders declared: “We came all this way to explore the moon, and the most important thing is that we discovered the Earth.”

In his fascinating book Earthrise (2010), British historian Robert Poole explains that this was not supposed to happen. The cutting edge of the future was to be in space; Earth was the launch pad, not the target. Leaving the Earth’s atmosphere was seen as a stage in human evolution comparable to our amphibian ancestor crawling out of the primeval slime onto land. And now, “after thousands of years of life on this planet” (declared the Los Angeles Times), “Man has broken the chains that bind him to Earth.” Humans had left the realm of solids and gases for gravity-free, oxygen-free space, for a new frontier and a beckoning future. The weightless astronauts, even in their clumsy spacesuits, were liberated. Furthermore, their new dominion was seen to offer what Neil Armstrong called a “survival possibility” for a world shadowed by the nuclear arms race. In the words of Toy Story’s Buzz Lightyear (sometimes hilariously confused with Buzz Aldrin), the space age looked to infinity and beyond!


So the power of the view back towards Earth took NASA by surprise. A few years later, in 1972, a photo taken by the Apollo 17 mission and known as The Blue Marble became one of the most reproduced pictures in the world, showing the Earth as a luminous, breathing garden in the dark void. Earthrise and The Blue Marble had a profound impact on environmental politics and sensibilities.

Within a few years, the American scientist James Lovelock put forward “the Gaia hypothesis”: that the Earth is a single, self-regulating organism. In the year of the Apollo 8 mission, Paul Ehrlich published his book The Population Bomb, an urgent appraisal of a finite Earth. During the years of the moon missions, British economist Barbara Ward wrote Spaceship Earth and Only One Earth, revealing how economics failed to account for environmental damage and degradation, and arguing, like Ehrlich, that exponential growth could not continue forever. Earth Day was established in 1970, a day to honour the planet as a whole, a total environment needing protection.

Then, in 1972, the Club of Rome released its controversial and enormously influential report The Limits to Growth, which sold over thirteen million copies and went into over thirty translations. In their report, Donella Meadows and Dennis Meadows wrestled with the contradiction of trying to force infinite material growth on a finite planet. The cover of their book depicted a whole Earth, a shrinking Earth. Shrinking the Earth became the title of a 2016 book by the American environmental historian Donald Worster about modern humanity’s ill-fated quest for endless growth and how the view of Earth from space initiated a change in consciousness comparable to the Copernican and Darwinian revolutions.

When the Apollo 14 astronaut Alan Shepard looked up from the lunar surface and regarded the Earth high in the sky, he was struck by “that thin, thin atmosphere, the thinnest shell of air hugging the world.” He looked at his home in the blackness and felt its extreme fragility, and he wept.

In the fifty years since Apollo 11, we’ve learned a lot about that precious atmosphere. Earth systems science emerged in the second half of the twentieth century and fostered a keen understanding of planetary boundaries — thresholds in planetary ecology — and the extent to which the human enterprise is threatening or exceeding them. The same industrial capitalism that unleashed carbon enabled us to extract ice cores from the poles and construct a deep history of the air. At least we now understand our predicament even if we are perilously slow to act. The blue planet is suffering the sixth great extinction and a climate emergency. The fossil fuels that got humans to the moon now endanger our civilisation. But fifty years after Apollo 11, we might hold onto the idea that the space race also unexpectedly quickened our race to save the Earth.

Last week I watched Todd Douglas Miller’s fiftieth anniversary documentary, Apollo 11, and was mesmerised by the real footage and voiceovers, and felt all the anxiety and suspense even though I knew what would happen next. The surviving 65mm footage surpassed all the efforts of Hollywood (in the 2018 film First Man) to represent the awesome power of the Saturn V rocket launch. I was moved by how mechanical and brittle was the whole grand enterprise. They were shooting at a moving target in a machine with 5.6 million moving parts. As the astronauts boarded the spaceship and the countdown proceeded, technicians were fiddling with a leaking liquid hydrogen valve on the rocket below them. In the Houston control room, slide rules were used dexterously. When Armstrong stepped out for the moonwalk, he bumped and broke a switch that was crucial to the operation of the ascent engine, and it was fixed with a ballpoint pen. The lunar module, Eagle, was a tinfoil Tardis. These vulnerable humans were camping at the edge of the universe.

I was twelve at the time of the Apollo 11 voyage and found myself in a school debate about whether the money for the moon mission would be better spent on Earth. I argued that it would be, and my team lost. But what other result was allowable in July 1969? Conquering the moon, declared Dr Wernher von Braun, Nazi scientist turned US rocket maestro, assured Man of immortality. I followed the Apollo missions with a sense of wonder, staying up late to watch the Saturn V launch, joining my schoolmates in a large hall with tiny televisions to witness Armstrong take his Giant Leap, and saving full editions of the Age newspaper reporting those fabled days. I’m reading them now: “Target Moon — here they come,” “Here we go round the Moon,” “Down to the Moon — Spacemen set for walk into history,” “Apollo hurtles for home — Nixon will see that big splash.”

Other news was pushed to the margins: about senator Edward Kennedy leaving the scene of a fatal car crash at Chappaquiddick Island, about the latest “light casualties” in Vietnam, about the immigration minister (Mr Snedden) welcoming a significant rise in the number of refugees settling in Australia, about the arrival of seven Nauruans in Australia for eye operations, about efforts to save the Great Barrier Reef from oil drilling. One telecommunications company took out a full-page advertisement comparing Neil Armstrong to Captain Cook and thus the moon to Australia as “yet another unknown where intrepid man has now trod.”

But the legendary cartoonist Les Tanner depicted the two astronauts sitting on the moon gazing at Earth and saying: “Funny — it looks like one world!”

Apollo was an astonishing extrapolation of the military-industrial complex, of a cold war superpower blasting even into space. Thus with the eyes of the whole world upon them, with a moment of unity in their grasp, the Americans planted their national flag on the moon. It was the next phase of colonisation, the new frontier, an affirmation of the future. Everything was simulated and rehearsed in advance, every possible problem imagined and solved before it could happen. The day before the launch the crew were still in simulators, practising the future. But NASA did not foresee Apollo’s greatest legacy. It could not imagine the radical effect of seeing the Earth. •

The post The radical legacy of Apollo appeared first on Inside Story.

Eventually the truth catches up https://insidestory.org.au/eventually-the-truth-catches-up/ Tue, 25 Jun 2019 00:11:51 +0000 http://staging.insidestory.org.au/?p=55770

Television | Four decades on, Soviet scientist Valery Legasov is an unlikely figure for our times

On its northern hemisphere release in May, the HBO–Sky Atlantic miniseries Chernobyl toppled Game of Thrones from its prime position on the ratings charts. This strange popularity contest between a spectacular Gothic epic and a dramatised documentary is prompting some vexed speculations. If even the most cogent of fantasy worlds fails to resolve its catastrophes in a way we find satisfying, what is to be learned from sustained dramatic engagement with a real-world cataclysm?

The central figure in Chernobyl (screening in Australia on Foxtel) is Valery Legasov, a nuclear physicist sent to assess the reactor immediately following the initial explosion on 26 April 1986. Jared Harris portrays him as a committed professional who becomes the voice of conscience within a corrupt regime, dominating the final episode with his testimony at the criminal trial of Chernobyl personnel in July 1987.

Legasov’s speech, aimed at the cohort of observers from scientific institutions who constituted an unofficial jury, overstepped the bounds of what the Politburo was prepared to hear. The rest of his story is all too predictable. Made a “former person” and relegated to obscurity, he saw his interventions largely wiped from the record. Shortly after the second anniversary of the meltdown, he committed suicide.

One of the few remaining traces of his presence is a brief interview on NBC’s News Today at the time of the August 1986 conference of the International Atomic Energy Agency in Vienna, to which he was sent as chief Soviet delegate, still bearing the Kremlin seal of approval. The American interviewer is keen to ask the leading questions: “Are you saying as much as you know?” and “Should all the reactors be closed?”

To the first question he responds that the detailed report he has submitted “tried to produce precisely the kind of material that would enable the experts to consider the measures and draw conclusions for the future.” As for closing the other sixteen reactors of the same design, he shrugs. (Yes, he really does shrug, in a slow, inexpressive movement.) It’s the first thing that occurs to anyone unfamiliar with the history of the breakdown, he says. “Experts” — a word he uses repeatedly — understand things differently.

Legasov’s expression is impenetrable throughout, that of a technocrat reciting an authorised doctrine. The fuller story of his involvement suggests that there was a complex, principled human being behind the mask, and therein was a key challenge for scriptwriter Craig Mazin and actor Jared Harris. Mazin avoids the obvious choices: there’s no attempt to portray Legasov as a family man, although he had a wife and daughter who stood by him throughout the ordeal. Instead, he sits alone in a dismal little apartment, with a cat as his sole companion.

The real-life Legasov was also a man of some national standing, an esteemed party loyalist who held a senior position at the Kurchatov Institute of Atomic Energy. There was potential for high drama in the authority figure torn between symbolism and realism: a version of Thomas More behind the iron curtain. Instead, he is introduced as a conscripted subordinate, a thorn in the side of party official Boris Shcherbina, the man entrusted with the political management of the crisis.

Harris, who excels in the role of the ordinary man cast onto the frontline of history (as he did as the reluctant monarch George VI in The Crown), plays Legasov as someone driven by a stubborn fixation on technological accuracy rather than by any moral commitment to “the truth.” That comes later, as an evolution of his growing insight into the causes of the catastrophe. This psychological evolution, subtle and gradual, forms a central line of tension through the five episodes.

As Shcherbina, Stellan Skarsgård is a perfect dramatic counterpart to Harris. Harris is light-voiced, slightly built and unobtrusive; Skarsgård, a solid, conspicuous figure in the landscape of devastation, speaks as if he has swallowed a handful of gravel. Yet it is Shcherbina who gives way, the realist in him called out by the sheer scale of what he is witnessing.


According to Craig Mazin, this is a story “about the cost of lies and the dangers of narrative.” The culpability of a state apparatus built on a false narrative is a central theme, but herein lies the danger of another one-dimensional narrative — that of Chernobyl as the symbol of a failed state and its fallout. Those following the story in Western media, Mazin says, “had no sense of how multilayered the situation was.” So the series also sets out to show the forms of genuine heroism exhibited by the Soviet citizenry.

In the opening episode, viewers are subjected to an almost minute-by-minute re-enactment of the unfolding disaster as it is experienced by those in the control room, where a test experiment goes wrong. The quintessential irony is that they are running a safety test. But those pressing the buttons and pulling the levers are under pressure from a bullying supervisor who has himself been leant on by a superior determined to complete the required procedures in an arbitrarily imposed timeframe. And so the machinery of the state has an impact on the technologies of the reactor: it is almost as if the escalating rage of the supervisor is feeding directly into the system, driving the rapidly scrolling numbers on the electronic counter.

Then, in one of the most vividly realised scenes, miners from Tula are called on to dig a channel underneath the core and install a liquid nitrogen coolant. The coal industries minister emerges from his vehicle dressed in a pale blue suit and faces a group of forty-five men whose skin and clothing are permeated with coal dust. It’s a stand-off of the starkest kind. He issues an order; the leader of the miners stonewalls. Why should they do this? The minister signs to the two armed guards behind him, and threatens to shoot. The miner shrugs, “You haven’t got enough bullets for all of us.” The impasse is broken when the miners understand what is at stake and accept their role, each of them leaving a black hand print on the minister’s suit as they pass him to board the convoy to Chernobyl.

This is dramaturgy, not realism, but the actual courage of those miners is well attested, and the scene serves to convey another dimension of the “Soviet Union.” There was an extent to which it remained true to its name among the people, if not in its many levels of government.

Aware as they may have been of the dangers of narrative, the series creators also deal in it by infusing the dramatisation with conventional forms of stirring and sentimental encounter. Emily Watson’s role as Ulana Khomyuk, a nuclear physicist who enters the fray to offer a challenge to Legasov’s diagnosis, is a fictional composite. With her natural candour, Watson invests the character with rather too much moral colouring, especially when she incites Legasov to go out there and tell it like it is in the trial hearing. Was it really like that?

The final episode, in which scenes from the courtroom are intercut with flashbacks to the opening scene in the control room of the reactor, turns into a kind of show trial of the Soviet state. It is dominated by Legasov, whose lecture on the factors leading up to the meltdown turns at the last minute into a grand denunciation of the culture of lies in which they are all embroiled. “To be a scientist is to be naive… The truth doesn’t care about our governments, ideologies, religions. It will lie in wait for all time and this at last is the gift of Chernobyl. I once would fear the cost of truth. Now I only ask, ‘What is the cost of lies?’”

In the 1980s, Soviet Russia was, in the eyes of the Western world, the prototype for the failed state. Four decades on, Legasov’s words ring out as a statement for our times, an indictment of the fraudulent political cultures now well advanced in Western democracies. The global financial crisis might be seen as the capitalist equivalent of the Chernobyl meltdown, but what, ultimately, were the consequences? With the ascent of Donald Trump and Boris Johnson’s likely instatement as prime minister, we’re still waiting for the truth to catch up. It may be that the popularity of Chernobyl is a reflection of wishful thinking. If only the truth actually would come home to roost. •

The post Eventually the truth catches up appeared first on Inside Story.

The butterfly effect https://insidestory.org.au/the-butterfly-effect/ Thu, 31 Jan 2019 23:39:48 +0000 http://staging.insidestory.org.au/?p=53034

Stalking a giant in Papua New Guinea’s ranges

Sometime in 1906, butterfly hunter Albert Stewart Meek disembarks from an old pearler named Hekla on the northeast coast of New Guinea. He unloads his provisions and tools of trade: killing bottles with cyanide of potassium for small insects, syringes with acetic acid for larger ones, non-rusting pins for setting his trophies, cork-lined collecting cases. He waves off the boat with instructions to the skipper to return for him in three months.

He has high hopes of claiming discoveries in a wilderness still largely unexplored by Europeans. But things are not going so well.

By his own account — A Naturalist in Cannibal Land — Meek is the swashbuckling, superior Edwardian opportunist from central casting. The son of a naturalist, but with no formal scientific training, he’d travelled from London to Queensland at seventeen to work as a jackaroo, with a sideline in collecting and trading antipodean specimens. At eighteen he had his first commission from Lionel Walter Rothschild, 2nd Baron Rothschild, heir to the Rothschild fortune and a zoologist, to venture into Queensland’s central ranges and bag three pairs of every kind of insect, bird or animal he could find.

Fifteen years later, Meek is being bankrolled by Rothschild to explore the Pacific, capturing exotic butterflies and moths to add to the baron’s natural history collection. It’s the adventurous life he yearned for, and a handsomely profitable one, but it is not without its travails. The islanders he has recruited as labourers have bolted into the bush. He recovers them, punishing the ringleader to persuade them back into reluctant service. By the time they have hauled his kit inland, though, seven are down with disease. It’s fair to say their welfare is not his paramount concern.

But amid this chaos is a glimpse of something beguiling. Setting up camp in the ranges on the Mambare River — likely within sight of the village of Kokoda — he sees an enormous and unfamiliar butterfly. She’s flying so high he brings her down with a shotgun armed with special ammunition. She’s brown with pale yellow markings, her wingspan measuring almost twenty centimetres.

Over the next month, Meek pushes about 130 kilometres inland and high into the formidable Owen Stanley Range — country where, thirty-six years later, the Japanese push down to Port Moresby would be defeated and the Australian legend of Kokoda born. The butterfly eludes him. Meek is “unfortunate enough to lose a couple of boys” — his carriers attacked and murdered. “The collecting was good, but the natives made it practically impossible for me to stay there any longer.” He retreats to Queensland for a spell.

A year later he returns, pulling up in Oro Bay, about thirty kilometres from where he shot the female butterfly. More misfortune: this time he’s laid up himself with terrible sores and raging fever. Then, somewhere near Popondetta, the present-day capital of Oro Province, he stumbles into the butterfly’s garden. He captures males splashed with iridescent turquoise and blue, and more females, some measuring twenty-eight centimetres wingtip to wingtip. Aided by villagers he rewards with mirrors and knives, he finds velvety black caterpillars with ruby spines. He plucks pupae from vine leaves and witnesses — “to my great joy” — a butterfly emerge. The species feeds on an “entirely different vine to other butterflies,” a hitherto unknown species of Aristolochia.

The butterfly specimens are dispatched to Rothschild, who names the species Queen Alexandra’s birdwing, Ornithoptera alexandrae, in honour of the wife of Edward VII.


Meek’s faded trophies, complete with bullet holes, still reside under glass at Rothschild’s estate in Hertfordshire, now part of the British Natural History Museum. In Oro, their elusive kind, the world’s biggest butterfly, has largely vanished, along with much of its nourishing garden. O. alexandrae’s prospects for survival are now entwined with those of the 22,000 people who are the owners of its remnant habitat on the remote Managalas Plateau.

After thirty-three years of negotiations with and between 152 clans, the people of the plateau last year declared a conservation area over their 360,000 hectares of country, putting it out of reach of encroaching loggers, miners and oil palm plantations. It’s only the second, and by far the largest, conservation area in the country. They have resolved to find other ways in which the land might support their livelihoods.

Because such wild places are the last strongholds for so much fast-vanishing biodiversity, and critical to buffering the effects of climate change, we all have a stake in how this crazy-brave gamble turns out. And there are other intriguing dimensions to this story. In an era of seismic corruption and the fracturing of fragile services and infrastructure across Papua New Guinea, the preservation of the Managalas Plateau looks like something almost as elusive as O. alexandrae — good news. As a grassroots initiative, might it signal a turn of the tide in the narrative of external plunder — so much of it unapologetically rapacious; some of it, lately, dressed in finer ambitions but wearing the same old soiled colonial attitudes, blind and deaf to indigenous needs and desires?

After a decade travelling in and out of PNG, collecting too many bleak stories of violence, disease and dysfunction (yes, I too am a plunderer), I’m a little reluctant to chase this apparition for fear it will vanish. Up close, things are sometimes not as they appear from a distance. Yet here I am, stumbling around with my butterfly net on other people’s country.


At Oro Bay, where the Hekla had moored 111 years earlier, we pull off the coast road from Popondetta to pick up cold drinks and provisions at a cavernous tin-shed trade store before heading bush. To have any chance of seeing a Queen Alexandra birdwing butterfly (or QABB, as the locals shorthand it), I will have to travel deeper into the country than Meek likely did. Blessedly, not on foot, though at times it seems like that might be quicker.

The store sits on an inlet, a tide choked with garbage lapping at the shore. Stallholders do steady trade in buai, betel nut, for chewing and there’s a window offering liquor. Disconcertingly, one of the police escorts we collected back in town — pistol waving in his hand and thongs on his feet — makes a purchase. His young offsider, in a neat shirt and black lace-ups buffed to a mirror finish, waits in the back of our Toyota 4WD ute. My guide — veteran environmental champion, lawyer and local son Damien Ase — works his phone before we drive beyond network coverage.

Wilting in hot, wet shade, I slip the sandwich I have no appetite for to an appreciative hound. Eventually we are on our way.

We pass towering tanks full of crude palm oil awaiting shipment to become cakes and cosmetics around the world. They are fed by the harvest of red nuts from the squat, bristling oil palms that line the road. Palm oil production accounts for more than half of PNG’s agricultural export earnings. The cost had been plain to see through the window on my flight into Popondetta, over the ranges from Port Moresby: wild jungle canopy gobbled up by a meticulously machined green industrial landscape.

Before hitting the road to the plateau, I met Malchus Kajai, chair of the Managalas Conservation Foundation, in Popondetta — from where O. alexandrae has disappeared. “Because of the oil palm, the feeding grounds have been destroyed,” he explained. “We’re fortunate to have it up there.”

Kajai, who is fifty-seven, has spent more than half his life campaigning to preserve his birthright. He was studying for the Anglican priesthood when he began to worry there was more harm than good coming from development in Oro — the logging, the mining, the explosion of oil palm. His reading of the Bible was that “we have been entrusted to manage the forest. I started to take up the responsibility to speak and protect… The forest which is still virgin contains a lot of fauna, flora, a lot of species that have yet to be discovered.”

He had other concerns too. The fading of culture. Failing schools and health services. Diminishing income from crops like coffee and cocoa — not for lack of effort, but because without functional roads and communications, farmers could not market their produce. There’s been coffee on the plateau for sixty years, but lately it hasn’t been worth picking because of the obstacles getting it to market and the price, now less than $1 a kilo in a fifty-kilo bag. (Meanwhile, in Melbourne, I pay $50 for a kilo of PNG beans when I can find them.)

Such hardships underlie the willingness of many people — including some on the plateau — to sell rights to their country, but Kajai was one who led the resistance. Urbanised landowners were particularly keen for the pay-off, but then they wouldn’t have to live with the consequences. “We had a lot of conflicts, a lot of problems, especially with the elites — educated people who have been to town and were lucky enough to come down and get employment,” he said of the decades wrangling the conservation push.

To understand the obstacles the project had to navigate requires a few insights. First, land in PNG is still largely held under customary ownership. Second, PNG’s population is a diverse patchwork, with over 850 languages, so negotiations over one region may involve a multitude of tongues. (There are five across the Managalas.) Third, land is beyond price in a nation where the state is still so absent that country is all that can be relied on for survival.

These hurdles mean that any number of conservation efforts by marquee conservation outfits and other international non-government organisations — so-called BINGOs — have crashed and burned in the gulf between what distant donors expect and what local people need. Meanwhile, the country’s wilderness is being devoured by logging, much of it illegal. Exports of tropical timber have doubled over the past decade, making PNG the world’s largest exporter of round logs.

The profits from this trade have failed to improve life for most people on the ground, said Kajai. He’s the father of eight grown children. “The system has failed them… The system has failed us. But we have land. Land will not fail you. It is only when you are not creative that you’ll fail yourself.”


There’s bitumen carpeting the routes of the oil palm trucks, but it disappears after the turn-off to Afore village on the Managalas Plateau. Afore is only sixty kilometres away, but it will take around four hours to get there because the road is so bad. A dozen people are piled in the back of our ute. Most are locals who pay K50 (A$21) for the ride — around half the annual income earned by many households on the plateau.

I’m sitting up front with driver Colin Fred, who lives in Afore with his schoolteacher wife and three children. Grinding two sets of gears, playing the pedals like a 4WD virtuoso across the range from full throttle to light staccato, he somehow extricates the overburdened ute from dry ruts and muddy bogs. We plough through a wide river where the bridge was taken out by Cyclone Guba a decade ago. The road is ruined, but then so are pretty much all the routes relied on by the 80 per cent of PNG’s eight million–plus population who live in rural and remote areas. For them, this reality defines all else. Without a functional road you can’t bring teachers and medicines in or send crops and emergency cases out.

We lurch up onto the plateau, a shallow basin that sits between 650 and 850 metres above sea level, encircled by mountain ranges pushing up to over 2000 metres. A breeze flushes out the vehicle’s stifling cabin.

The road winds through stands of rainforest and wild banana trees, rows of coffee and cocoa, swathes of grassland, huts planted on stilts in scrupulously kept gardens of flowers and vegetables. The lushness is fed by rich volcanic soils. A scientist working here twenty years ago on an AusAID-funded research program theorised that the QABB probably gained its monumental size from the vigorous health of the single rare species of Aristolochia vine on which it lays its eggs, and the nectar of the hibiscus and ixora flowers it cruises. Dozens of eruptions had scattered layers of phosphate-rich ash across the plateau, providing “ample nutrients to sustain the caterpillars of such a large butterfly.” And it’s not the only extraordinary creature nurtured by these conditions: a billboard celebrating the conservation project lists half a dozen other flagship species, among them the Raggiana bird of paradise and Doria’s tree kangaroo.

The crucible for the project was a Tok Pisin literacy program back in 1984, enlisting academics and students from the University of PNG — among them aspiring lawyer Damien Ase. A central figure in the PNG conservation movement nationally and locally, Ase hails from a village on the other side of the plateau. “I saw all the destruction that was going on in those places where cash crops like palm oil and cocoa were taking over the forest,” he recalls. “I didn’t want my people to go through that… so I played my part.”

The literacy program evolved into a non-government organisation called Partners with Melanesians, which over the next decade shifted into conservation and development, securing funding from the Rainforest Foundation of Norway, which has supported the project from concept to realisation last year.

The Managalas declaration doesn’t entirely lock up the forests. Rather, it lays out a program of sustainable use of the landscape. Every part of the plateau has been mapped and zoned for one of five purposes: village life, subsistence gardening, larger-scale cropping, hunting grounds and no-go conservation areas. The hope is that this portfolio will generate a mix of activities and attract a variety of players — including researchers, tourists and produce buyers — to support local livelihoods. In Kajai’s vision of the future, farmers will find markets for their organic coffee and spices, village houses will have electric light, schools will plug into the internet, and students will become teachers, tour guides, scientists and health workers employed on the plateau, raising their own families, and sowing an ongoing connection to land and culture.

This model is what American anthropologist and PNG specialist Paige West describes as “conservation-as-development.” Such projects assume that environmental conservation can provide a flow of cash income, and that “development needs, wants and desires, on the part of rural peoples, could be met by the protection of biodiversity on their lands.” West has spent years closely observing the dynamics of such projects, which turn on contracts between villagers and outsiders — maybe a big non-government organisation, maybe research scientists. She has seen how much gets lost in translation: rural people don’t always understand the outsider notion of “conservation” and outsiders don’t always understand what villagers think of when they imagine “development.”

These days, West collaborates with John Aini, a PNG conservationist, to spotlight these failures among specialists, scholars and practitioners, and challenge them to find strategies for “decolonising conservation.” They describe how, time and again, they have seen outsiders come into communities with their own well-formed ambitions but little capacity to understand the links between local livelihoods and healthy biodiversity. Donor-driven projects almost inevitably fail, often leaving behind a volatile mess of failed expectations.

A big part of the problem, according to Vojtech Novotny, a Czech ecologist who has been working in PNG for decades — including running a modest sustainable livelihoods program in the villages around his field sites — is that many people who donate money and effort to saving faraway forests are afflicted by a crippling romanticism. He explored this in a provocatively titled 2010 paper, “Rain Forest Conservation in a Tribal World: Why Forest Dwellers Prefer Loggers to Conservationists,” arguing that “the global machinery of nature conservation remains, regrettably, remarkably inept at presenting indigenous owners of tropical forests with a decent offer in exchange for their continued management and conservation of a substantial amount of the world’s biodiversity.”

Forest people need income and services, and he’s seen little sign of improvement from the BINGOs in delivering on these. “They need a stream of new projects to excite donors, and that doesn’t really work here.”

But he’s “cautiously optimistic” about the prospects for the Managalas Plateau. He and others credit the realisation of the project last year to its organic roots, the engagement of local participants throughout, and the long endurance of both the homegrown Partners with Melanesians and its Norwegian benefactors. None of this guarantees it will deliver what is hoped, but “the advantage is that after thirty years, they have already been through the cycle of hope and disappointment,” says Novotny. They are perhaps well placed to ride it out a bit longer yet.

Malchus Kajai is banking on it. “It’s almost a year now, and people are asking us, ‘When is the service going to be delivered? When are we going to have a coffee mill? When are we going to secure a market for our coffee and our vanilla? When are we going to have better roads?’” People are anxious, he says. “I’m anxious.”


We arrive in Afore at nightfall, pulling up at a pair of spartan shacks that serve as headquarters of the conservation project. We settle in by torchlight, talking late by the cooking fire, eating rice and tinned fish beneath undiluted stars. We sleep under mosquito nets, serenaded by the chorus of forest creatures and a choir of mothers just returned from a church retreat.

I’m up at 5am, slip-sliding down a dark, muddy path to a pit toilet, praying our crew will soon be en route to a stand of forest where — they promise — I will find O. alexandrae. It’s my only shot. I’m booked on a flight out of Popondetta that afternoon, so I’m desperate to get this show on the road. But by the time the fire’s awakened, and pots of tea and rice brewed for breakfast, it’s gone 7am.

In the rear-vision mirror, Afore is otherworldly, an island in the sky encircled by rivers of ethereal morning mist. The track into the forest is barely discernible. I feel myself breathing in to help Colin Fred squeeze the ute between close trees and across too-narrow improvised bridges. Households sprinkled through the bush are surprised by the rare traffic, children chasing and laughing. We pass women hauling up pots of water from streams. The bone-weary gaze of one of them, as I wave, wipes the smile off my face.

Raynold Pasip leans in from the back of the ute to tap my shoulder. “This is my land!” he shouts over the revving engine as we pass through some invisible jungle boundary. Pasip is a wiry elder and a member of the Managalas Conservation Foundation board. We spoke at length the night before about the land, his history with the project, his hopes.

“When I walk to my own bush, I see the bird of paradise. I see a cassowary. I see a wallaby. I see all these things, I feel proud,” he said. “If I want to kill them for my meat for the dinner, it doesn’t cost me money. I can kill some of them, and then come and cook. The feathers for my dancing, [the] tails for making bilum [bag], and the bones I use [to] make a needle, different things. Our young children are taught, when they go they have a certain time for hunting, and a reason to kill birds and a reason to catch animals. They are not careless in killing birds or cutting trees.”

Pasip talked about the destruction he had witnessed elsewhere in PNG. He described how that persuaded him to join forces with the elders of 151 other clans, deciding “this plateau should be declared for conservation… so our young generation, the children, will have benefit from their own resource.” They should not be mere labourers on their own land. But without a good road, without airstrips, “people struggle. They carry their own food, cargo, from their shoulders… they walk down to town and they do their marketing.”

He also talked about the peacefulness of the plateau, in the bush and even in the villages, where people were not disturbed by the fighting that has become part of life in so many communities. He spoke of places where you could find birds of paradise or great waterfalls or tiny frogs. The secret sites where, in old times, his people would take their dead. “They didn’t bury them in the ground. They have to go and wrap them around with a mat, and they make a little house on the tree, and they leave them there.”

We pull up at the tiny village of Dareki. The phone network on the plateau hasn’t worked for over a month, so Damien Ase couldn’t send word ahead of our coming. We surprise the man we have come to see — Conwell Nukara, the butterfly whisperer — at home with his small children under a verandah of palm leaves, drying off after a bath in the Pongani River. Briefed on our hurried mission, he leads the way, on foot, deeper into the forest.

We climb over logs and under a tangle of trailing vines including, Nukara points out, the Aristolochia favoured by O. alexandrae. To us it’s poisonous, he warns. We wade through cloying air and a cool, shallow creek. Pasip and June Toneba, the women’s representative on the conservation board, walk with me. She points out cultivated plots mixed in with the wild growth of the forest: plantings of corn, peanuts, chillies. One of her objectives under the project is to bring in teaching programs for the women. They are such accomplished gardeners, but they struggle to turn this into profitable business, and their children are malnourished because they don’t have the knowledge or resources to feed them a sustaining diet.

Fixing the road is also, for her and other women, truly a matter of life and death. Because when their labours go wrong they can’t get to hospital in Popondetta, many mothers die delivering their babies. Toneba is still mourning the recent death of her own daughter, Imelda, after an asthma attack.

My gaze is on the ground, picking through the labyrinth of tree roots, when Toneba cries out. A flash of movement, a disturbance in the ether. A butterfly the size of a small bird, swooping and dancing around us. “It’s a Queen Alexandra… a QABB!” Toneba declares, and I’m swivelling about with excitement, or perhaps delirium. It circles close, but juggling camera, recorder and notebook I fail to get a fix, and then it’s vanished. When we catch him up, butterfly whisperer Nukara says that we were almost certainly mistaken, tricked by a similar but smaller birdwing.

Finally, we arrive in a clearing where a sign declares: WELCOME TO MISU — QUEEN ALEXANDRA BIRDWING BUTTERFLY FARM: A COMMUNITY INITIATIVE. Nearby is a green shade house, about half the size of a tennis court, which Nukara built last July. Inside are rows of saplings sprouting broad leaves.

He gently turns one over. Stapled underneath is a portion of another leaf he has plucked from the surrounding forest. Stuck to this is the brown and yellow pupa of O. alexandrae. Nukara turns another leaf, revealing another pupa. The one cocooning a female is as long as his index finger but plumper, the male specimen a little smaller.

Nukara says he roams the forest every morning looking for pupae. When he finds them he brings them into the shade house, where they stay up to eight weeks. This keeps them safe from birds and spiders. A day after they hatch, he opens the door and releases them, he says. So far, he’s waved out twenty-five: fifteen females, ten males.

“This is the largest butterfly we have in the world,” he says, “so that is our pride… It is also endangered, and in the future this will bring people from outside, tourists and other people who are interested, so we can make a small income from that.”

Nukara is wearing a t-shirt with the emblem of New Britain Palm Oil, one of the biggest producers in the country. He’s not being ironic, merely pragmatic. Decades of campaigning by environmentalists against rapacious habitat destruction by the industry, enlisting the orangutan as the poster child of the devastation through Sumatra and Borneo, has put producers under intense pressure to improve their sustainability credentials. In Oro, New Britain Palm Oil has recently announced it will bankroll a captive breeding program for the QABB to try to rescue its precarious population. Its experts have been visiting Nukara’s butterfly farm, talking to him about collaborating on what may be the last chance to save the vanishing species.

Nukara keeps a close watch on his growing flock of O. alexandrae. He checks in on the pupae and patrols the glade every morning. Butterfly poachers, modern-day Meeks, remain a real threat. Prized QABBs sell for thousands on the black market. There is an argument — including from some of the species’s passionate champions — that its best protection might be to permit landowners to trade a limited quota of specimens: killing butterflies to save them.

What happens, I ask Nukara, when you release these freshly hatched spirits into the wild? Do they flap away? “They hang around for a while,” Nukara says. “They even come back to me… then they fly up.” He saw half a dozen here just this morning. “You should have come early,” he admonishes. “Every morning, I am always filled with joy.”

To my great joy. That’s what A.S. Meek wrote on witnessing his prize emerge from its cocoon. At which point he pulled out his kit and killed it, securing the trophy inside one of his airtight japanned containers.

I scan the enveloping green one last time. Nothing. I shut my eyes. Birdsong, the chirp of lizards, the cacophony of unseen tiny creatures, the fall of fruit on the forest floor. The pulse of fecund energy. The silence of the ancestors perched in their trees. The more disturbing ghost of my own kind, Meek.

I’ve read somewhere that O. alexandrae flaps through the high canopy with the power and thrust of a bat. It’s not registering on my poorly tuned radar. Which is not to say it isn’t there. •

This essay appears in Griffith Review 63: Writing the Country, edited by Ashley Hay.

The post The butterfly effect appeared first on Inside Story.

Science under siege https://insidestory.org.au/science-under-siege/ Fri, 05 Oct 2018 02:01:17 +0000 http://staging.insidestory.org.au/?p=51207

Donald Trump has launched an all-fronts attack on science and environmental protection

While the news from Washington has been dominated by Brett Kavanaugh’s candidacy for the Supreme Court and how it will help cement Donald Trump’s legacy, the administration has been intensifying its attack on science and redoubling its efforts to dismantle regulations designed to protect health and the environment and tackle global warming. The legacy of that campaign could be much more toxic and longer-lasting than the outcome of the Kavanaugh hearings, and not just for the United States.

In the past two weeks alone, reports have revealed fresh attacks on independent sources of advice. The Office of the Science Advisor to the Environmental Protection Agency seems set to be dissolved. This senior post offers advice to the EPA and its administrator on science underpinning health and environmental policies, regulations and decisions. The head of the EPA Office of Children’s Health, a respected paediatric epidemiologist, has been placed on unexplained administrative leave, reportedly after clashing repeatedly with administration officials bent on loosening pollution regulations. The disputes are reported to centre on the planned weakening of mercury emission rules, announced on 30 September, the administration’s failure to act on a recommendation by EPA scientists that the organophosphate insecticide chlorpyrifos be banned, and the proposal to dismantle programs that protect children from lead poisoning.

There are no surprises here. During the presidential campaign, Trump’s tweets linked autism to vaccinations and light bulbs to cancer. He has described global warming as a Chinese hoax designed to make American manufacturing uncompetitive, and has also claimed that it is based on faulty science and manipulated data. He has cited freezing temperatures as evidence that global warming doesn’t exist. He pledged during his presidential campaign to revive the coal industry and bring back miners’ jobs, foreshadowed sweeping deregulation of natural gas, oil and coal production as part of an “America First” energy plan, and promised to withdraw the United States from the Paris climate accord.

Senior political appointees dispute that human activity is the leading cause of climate change; the administration interferes in science policy processes and restricts federal researchers and their work; executive orders and regulations are used to bypass congressional debate. Thousands of government web pages relating to climate change have been taken down, buried or scrubbed of references to climate change and carbon. No part of the federal bureaucracy is immune.

It starts at the top. The White House Science Advisor was not nominated until July 2018 (ending the longest vacancy in the forty-two-year history of the post) and has yet to be confirmed by the Senate. Those concerned about climate change are relieved that the nominee is a well-respected meteorologist, but history shows that the effectiveness of science advisers is determined not by their expertise but by how closely they are in step with the political priorities of the administration they serve.

Trump headed off to major negotiations on denuclearisation with North Korea without any expertise in this area on his team. There is no chief scientist at the State Department, despite the fact that science is central to such issues as cyber security, global warming and monitoring nuclear capabilities, nor at the Department of Agriculture, which is redefining its core mission from the scientific monitoring of food production and safety to the promotion of farm exports. The Department of the Interior and the National Oceanic and Atmospheric Administration have both disbanded their climate science advisory committees, and the Food and Drug Administration no longer has a Food Advisory Committee to provide guidance on food safety. As the New York Times headlined, “Science is unwelcome. So is advice.”

Among these actions, Trump’s June 2017 announcement of the US withdrawal from the Paris Climate accord is perhaps the least meaningful, a victory of symbolism and bombast over reality. The earliest the United States can legally withdraw is November 2020, which means it will be an issue in the next presidential campaign. Meanwhile, many states are continuing their efforts to tackle climate change and are on pace to meet their share of the Obama administration’s pledge under the Paris accord. But the fact that the United States has ceded its leadership in this area and now stands as a rogue outsider has global implications.


The real damage is being done elsewhere, largely following a sixteen-point agenda delivered to the administration by coal baron Robert Murray, a major supporter of Trump’s election campaign. The wish list of regulatory overhauls includes ending regulations on greenhouse gas emissions, ozone and mine safety, cutting the staff of the EPA, and overhauling the office of mine safety at the Department of Labor. Cabinet secretaries have eagerly carried out these directives, egged on by the president, who claims that “the never-ending growth of red tape in America has come to a sudden, screeching and beautiful halt.”

Scott Pruitt (a climate change denier who made a career of filing lawsuits to block EPA regulations) was an early cabinet appointment to head the EPA where he notoriously pandered to the interests of the very industries overseen by the agency. When he was forced to resign over ethical violations in July, his deputy Andrew Wheeler, a former coal lobbyist who had previously worked for Murray, the man with the to-do list, stepped in as acting administrator. Small wonder that Trump, even as he accepted Pruitt’s resignation, was moved to tweet, “I have no doubt that Andy will continue on with our great and lasting EPA agenda. We have made tremendous progress and the future of the EPA is very bright!”

The agenda Wheeler inherited includes the proposed repeal or weakening of more than thirty environmental protections. Key among these is the repeal of the Clean Power Plan, based on the Clean Air Act 1970 (as amended in 1990) — the centrepiece of the Obama Administration’s efforts to tackle climate change and meet the emission reduction goals pledged under the Paris agreement — and its replacement with a new rule. This would see power plants required to reduce their carbon dioxide emissions by around 1 per cent below 2005 levels by 2030 (the equivalent of taking 2.5–5.3 million cars off the road), a dramatic weakening of the Obama rule (which required a 32 per cent reduction below 2005 levels, equivalent to taking seventy-five million cars off the road). The EPA’s own analysis reveals that this change will result in up to 1400 additional premature deaths, 48,000 new cases of asthma and consequently 21,000 additional missed days of school every year.

Efforts to cap greenhouse gas emissions are also undermined by a proposed rule that would reverse by 2020 the requirement that manufacturers make cars more fuel efficient. This plan also includes language forbidding states like California from imposing stricter standards. Shockingly, in an environmental impact statement issued by the National Highway Traffic Safety Administration to justify this, government scientists admit that global temperatures will warm by 7°F (about 4°C) by the end of the century: this dire forecast is offered as justification for the claim that changing tail-pipe standards is irrelevant — the planet’s fate is already sealed.

The Trump Administration can’t scrap the Clean Air Act outright. A 2007 Supreme Court decision enabled the EPA to declare that carbon dioxide is a pollutant under the Clean Air Act because it causes global warming and this endangers human health. Overturning that decision would require the Trump administration to disprove the science of climate change, a legal battle it is not willing to undertake. Instead, it has devalued a metric known as the social cost of carbon, which calculates damage to property, human health, agriculture and economic growth from carbon dioxide pollution and is used to offset the costs of compliance. The administration argues that each ton of carbon dioxide emitted by a car or a coal plant in 2020 would only cause between $1 and $7 in economic damages, far lower than the Obama administration’s estimate of $50.
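
To see how much the choice of figure matters in practice, here is a minimal illustrative calculation in Python. The per-ton damage figures are the ones quoted above; the 100-million-ton reduction is an invented round number used purely to show the arithmetic, not a figure from any agency analysis.

# Illustrative only: how the assumed social cost of carbon changes the
# calculated benefit of a hypothetical emissions cut. The per-ton figures
# come from the estimates quoted above; the size of the cut is made up.
tons_avoided = 100_000_000  # hypothetical annual reduction, in tons of CO2

for label, dollars_per_ton in [("Obama-era estimate", 50),
                               ("Trump-era low", 1),
                               ("Trump-era high", 7)]:
    benefit = tons_avoided * dollars_per_ton
    print(f"{label}: ${benefit / 1e9:.1f} billion in avoided damages per year")

Shrink the assumed damages per ton and the calculated benefits of any regulation shrink with them — which is precisely why this obscure metric has become a battleground.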

The EPA is also proposing to rescind the provisions of the Clean Water Act that prohibit industries from dumping pollutants into streams and wetlands. Just this month it was revealed that the agency is pursuing rule changes that would overturn the current, decades-old guidance that says any exposure to harmful radiation is a cancer risk. This change, based on the claims of outlier scientists that a little radiation is good for you, could lead to higher levels of exposure for workers at nuclear installations and in some medical settings.

Under these proposals, even the dirtiest forms of pollution are getting a reprieve, despite acknowledged harms to human health. The heaviest burdens will fall on the poorest and most marginalised Americans, many of whom are black. Indeed, there is a kind of systemic racism at the heart of the environmental devastation that Trump’s policies promulgate.

As William Ruckelshaus, administrator of EPA under president Ronald Reagan, said when Pruitt was still in the job, “My principal concern is that Pruitt and the people he has hired to work with him don’t fundamentally agree with the mission of the agency. They are more concerned about costs associated with regulation.”

Pruitt acted early to restrict academic researchers from joining the agency’s scientific panels, instead appointing scientists who work for the industries the EPA regulates. He required the Clean Air Scientific Advisory Committee (which is mandated by law to prioritise the health effects of pollution) to also consider the potential economic and energy consequences of emission control measures — even though the Supreme Court unanimously declared in 2001 that the Clean Air Act “unambiguously bars cost considerations from the [standard-]setting process.” And he has proposed limiting the types of scientific research EPA officials can take into account when writing new policies.

Against this background, there was a heavy irony in Pruitt pushing back on an article published by two Harvard scientists in the Journal of the American Medical Association that estimated the administration’s proposed changes to environmental policies would conservatively lead to an extra 80,000 deaths every decade. According to Pruitt, these results are “not scientific.”


There is a saving grace to this nasty, widespread agenda: most of these changes are yet to take effect. They have been stymied by lawsuits, court challenges and even the concerns of the affected workers and industries. Automobile manufacturers now say that “climate change is real and we have a continuing role in reducing greenhouse gases and improving fuel efficiency” and are concerned that their investments in innovation will be lost. Miners worry they are not sufficiently protected from black lung disease.

Trump may have ordered Energy Secretary Rick Perry to halt the shutdown of coal and nuclear power plants, and is considering ways to force the purchase of coal-fired electricity, but ultimately market forces will drive the dirtiest, oldest power plants out of business. The numbers show that not much has changed for the faltering coal industry since Trump took office. Employment and production are up, but coal consumption is down and coal prices are lower now than they were when he took office. The industry is more affected by cheap natural gas prices than by burdensome regulations.

Moreover, the regulatory certainty these changes promise is ephemeral. They will ignite legal challenges that could last years. It is worth pointing out here that Obama’s Clean Power Plan never went into effect: it was stayed by a Supreme Court decision in February 2016. That hasn’t stopped Trump and his cohorts endlessly promoting the perception that the EPA has repealed Obama’s environmental legacy, encouraged new jobs, and made life easier (and more profitable) for big business.

Trump’s push to see Kavanaugh on the Supreme Court is partly about ensuring that these legal roadblocks are dissipated. In his twelve years as a federal judge, Kavanaugh has heard twenty-six cases involving the EPA. He is on record with an opinion, concurrence, or dissent in eighteen of those cases, and only twice has he sided against industry. In a 2012 opinion he adopted an “environmental originalism” approach, writing that the EPA “went well beyond what Congress authorised” in crafting a greenhouse gas permit program. In 2016 oral arguments, Kavanaugh said that the Clean Air Act is “a thin statute, it wasn’t designed with [greenhouse gases and climate change] specifically in mind.” He believes that it is up to the Congress to act on such issues. Environmental groups are concerned about what Kavanaugh’s appointment would mean for future Supreme Court rulings on environmental cases.


Will American voters recognise and act on this attack on science-based health and environmental protections? The science of issues like climate change is complex, and that can facilitate efforts to mislead and manipulate the public. And polling shows that voters’ views on the subject — like their views on nearly all issues these days — are increasingly politically polarised.

A March 2018 Gallup poll found that 87 per cent of Democrats believe global warming is caused by human activity, compared to only 40 per cent of Republicans. Scepticism among Republicans is increasing, with 69 per cent saying that the seriousness of global warming is exaggerated, compared to 66 per cent in 2017; only 4 per cent of Democrats see the threat as exaggerated. There are significant differences in Republicans’ opinions on environmental and energy issues based on age, with millennials much more likely to believe that global warming is mostly due to human activity and that climate change is affecting their communities. About half of all Americans don’t think climate change will affect them.

In the same polling, a majority of Americans say protection of the environment should be a priority, even at the risk of curbing economic growth. Proposals to reduce emissions, enforce environmental regulations, regulate fracking, spend government money on alternative energy sources and pass a carbon tax all had majority approval.

There is little recent polling to show how concerns about climate change play out for minority voters, but what is available suggests that people of colour care about environmental issues. A 2017 poll found 91 per cent of African Americans and 90 per cent of Latinos are concerned about climate change, compared to 68 per cent of whites.

How political leaders communicate about climate change influences public perceptions about this issue and public willingness to support needed actions. In this age of fake news, it is too easy for leaders like Trump to sway public opinion by being selective about the scientific facts and data relating to such a complicated issue, surrounded by so many uncertainties. But a recent summary of public opinion suggests that climate change will be a wedge issue in the 2018 midterm elections, especially for younger and minority voters. •

The post Science under siege appeared first on Inside Story.

Getting personal about cancer https://insidestory.org.au/getting-personal-about-cancer/ Thu, 12 Jul 2018 01:28:08 +0000 http://staging.insidestory.org.au/?p=49736

New research tools are revolutionising cancer therapies

As a young medical oncologist, I was sometimes asked which cancer diagnosis I most feared. Invariably, my answer was metastatic melanoma. Although a primary melanoma could be cured surgically if it was caught early — and could often be avoided altogether by heeding sun-protection messages — once the disease began spreading it progressed rapidly. The main chemotherapy, dacarbazine, only reduced the cancer in 15 per cent of patients and didn’t offer a survival advantage in most trials.

Things have changed. These days we have at least three lines of therapy for treating melanoma, and patients are surviving for years. What happened? Cancer therapies moved from conventional chemotherapy to targeted therapies, including immunotherapy, and ushered in a new era of “personalised” medicine.

Conventional chemotherapy kills cells that are dividing, and in the process disrupts genetic material — our DNA — and our bodies’ mechanisms of division. It doesn’t target cancer cells alone — it also kills normal cells that happen to be dividing at the time it is administered. This is why the side-effects of chemotherapy tend to appear in tissue that is constantly dividing, including the lining of the mouth and bowel, hair follicles and the bone marrow cells that replenish blood cells. Fortunately, a higher percentage of cells in the cancer are dividing at any given time, and normal tissues recover better between chemotherapy treatments than the cancer can.

We have long known that cancers are caused by mutations in the DNA that makes up the genes in the cell. Some mutations are inherited; some are acquired if we are exposed to certain substances. In each case, they interfere with the body’s normal controls over cell growth. If we can identify the mutations, their proteins and the signalling pathways involved, then therapies can be designed to stop the cancer growing.

We’re increasingly doing that, but this revolution is only possible because of the development of genomics. Where genetics concentrates on single genes, genomics looks at all our genes, how they relate to each other and how they influence the growth of tissues.

The sequencing of the human genome in 2001 revealed variations in individuals’ DNA sequences that can be used to identify susceptibility to cancer and different responses to drugs. It also allowed us to pinpoint the different mutations in individual cancers. We can now measure the abnormal protein produced by the mutated gene to screen for cancer, help diagnose cancer and track the effects of treatment.

Next-generation DNA sequencing has taken this development a step further. Rather than looking for single gene mutations or sequencing sections of DNA relatively slowly, we can very rapidly sequence the whole genome. New bioinformatic techniques and significant computational resources can handle the very large amount of data collected through this process. (A whole genome sequence corresponds to approximately one terabyte of raw data.)

Next-generation sequencing can identify mutations in cancers that are not present in normal cells, and we can then target the proteins they produce with specific treatments. New high-throughput screening procedures can test thousands of compounds, identifying those that have the greatest impact on the cancer and the least effect on anything else.
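
To make the tumour-versus-normal comparison concrete, here is a minimal sketch of the idea in Python. The variant positions and sample values are invented for illustration; real pipelines run dedicated somatic variant callers over full sequencing data rather than simple set arithmetic.

# Toy illustration of the principle described above: a candidate somatic
# mutation is one seen in the tumour sample but absent from the patient's
# matched normal tissue. Variants are written as (chromosome, position,
# reference base, alternate base); all values here are invented.
def candidate_somatic(tumour_calls: set, normal_calls: set) -> set:
    """Return variants present in the tumour but not in normal tissue."""
    return tumour_calls - normal_calls

normal_calls = {("chr1", 1_234_567, "G", "A")}    # inherited (germline) variant
tumour_calls = {("chr1", 1_234_567, "G", "A"),    # same germline variant, also in the tumour
                ("chr7", 140_453_136, "A", "T")}  # tumour-only call: the somatic candidate

print(candidate_somatic(tumour_calls, normal_calls))
# prints: {('chr7', 140453136, 'A', 'T')}

The hard part, of course, is generating and filtering the millions of calls that feed those two sets — which is where the sequencing machines and the bioinformatics do their heavy lifting.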

Targeted therapies aren’t new. The first was developed more than fifty years ago, in 1967, after the discovery of the oestrogen receptor on breast cancers. Seventy per cent of women with breast cancer have these receptors, which act as a conduit for the oestrogen that fuels the cancer. Blocking the receptor with tamoxifen destroys the cancer cell.

More recent cancers identified by genetic alterations include chronic myeloid leukaemia, in which DNA swaps between two chromosomes (9 and 22) create an abnormal short chromosome 22 (the “Philadelphia chromosome”) that forms a new gene, BCR-ABL. The protein produced by that gene, a tyrosine kinase, is part of the pathway that signals the cells to divide. One of the early treatment successes was a small molecule, imatinib, which inhibits the action of the tyrosine kinase.

As one of the first of the therapies to target the tumour in this way, and as evidence of the potential of targeted therapy, imatinib even made it onto the cover of Time magazine. What’s more, it acts on another tyrosine kinase, this one associated with a rare cancer called a gastrointestinal stromal tumour — all the more remarkable a discovery because that cancer is unresponsive to conventional chemotherapy and radiotherapy.


These advances bring us back to the enormous progress in the treatment of melanomas, which no longer top my list of most-feared cancers. The first breakthrough came in the early 2000s, when it was found that about half of all melanomas have a mutation in a cancer-promoting gene, BRAF; the mutated gene produces an overactive form of its protein, BRAF kinase, which is part of the signalling pathway regulating cell growth. A new drug, vemurafenib, was found to inhibit this protein and was approved for use in 2011.

Following successful small studies of vemurafenib in patients with melanoma with a BRAF mutation, Paul Chapman and his colleagues in the BRIM-3 Study Group published the findings of a randomised study comparing vemurafenib to the older chemotherapy treatment, dacarbazine. Vemurafenib improved survival, once again highlighting the potential of targeting the proteins involved in the signalling pathways.

Our increased understanding of the interaction of the body’s immune system with cancers has also led to a second group of targeted therapies known as immunotherapies, which harness our immune systems to kill cancer. One type of immunotherapy uses antibodies, the proteins the immune system normally deploys against foreign invaders like viruses and bacteria. Monoclonal antibodies — antibodies all of one type — can be manufactured to target a protein on a cancer cell, which can in turn trigger the immune system to attack the cell.

This is how we have come to deal with the HER2 gene, which is associated with an aggressive type of breast cancer. HER2 is present in extra copies in around a fifth of breast cancers, making it a vital biomarker for the condition. When a new antibody, trastuzumab, is added to chemotherapy for women whose cancers carry these extra copies, the chances of surviving breast cancer for ten years increase from 75 per cent to 84 per cent. I vividly recall this result receiving a standing ovation when it was presented at the American Society of Clinical Oncology meeting in 2005. Another antibody of this kind is rituximab, which targets a protein on some types of lymphoma and improves the outcomes of low- and intermediate-grade lymphomas by about 13 per cent.

Monoclonal antibodies can also be used to carry other toxins — a cytotoxic agent (which kills cells) or a radioactive particle — to the cancer. The recently developed drug T-DM1, for instance, combines trastuzumab, which targets the cancer, with a cytotoxic called emtansine. Ibritumomab tiuxetan, an antibody against a common lymphoma protein, carries the isotope yttrium-90 to the lymphoma cell.

Not all monoclonal antibodies are designed to attack the cancer directly. We now recognise the significance of the tumour’s ability to stimulate the growth of new blood vessels, which nourish the cancer and provide an avenue for cancerous cells to spread beyond their site of origin. The vessel growth is stimulated by the vascular endothelial growth factor, which can be blocked by a monoclonal antibody called bevacizumab, thereby limiting the growth of the cancer. Added to therapies for bowel and some lung cancers, it has been found to improve survival rates.

The most recent successes with monoclonal antibodies derive from research into the question of why our immune systems don’t seem to be successful in attacking cancer. As well as antibodies, the immune system contains T cells that attack invading organisms. Research has revealed that T cells, and even some cancer cells, carry surface proteins that act as brakes, preventing the T cells from attacking the cancer. New antibodies have been developed to block these proteins and allow the T cells to attack the cancer.

Among these antibodies is ipilimumab, which targets the immune-blocking protein CTLA-4 (cytotoxic T-lymphocyte-associated protein 4) found on T lymphocytes. When ipilimumab blocks this protein, the T lymphocyte can kill the cancer in a similar manner to the way it deals with foreign invaders. Ipilimumab was one of the drugs that made a significant impact on melanoma: a pooled analysis of studies shows that just over a fifth of the patients are still alive three years after commencing treatment.

Another immunity-blocking protein that can be targeted by a drug is the programmed cell death protein 1 (PD-1) found on the immune T cells. This protein prevents T cells from attacking normal tissues, but in doing so it also prevents them from attacking cancer cells. Two new antibodies, nivolumab and pembrolizumab, are designed to block this protein and allow the T cell to attack the cancer. The treatment has proved effective, but it can have serious side effects, so research is continuing.

Not surprisingly, given its normal function, blocking PD-1 can also cause autoimmune inflammation in some normal organs. Fortunately, this is usually only temporary and can be controlled for the time that these antibodies are being given.

These agents, known as checkpoint inhibitors, are the third of the therapies that have revolutionised the treatment of melanoma. They are being tested both individually and in combination, and have also shown some success in treating lung cancer, kidney cancer and Hodgkin lymphoma. The proteins that stop T cells attacking the cancer can also be found on the cancer. Atezolizumab, a new antibody designed to block one of these proteins, PD-L1, is being used in the treatment of lung and bladder cancers.

At the cutting edge of immunotherapies are cancer vaccines. Here, proteins derived from the patient’s cancer are introduced into the body to stimulate an immune response that will also kill the cancer cells. These techniques are in their infancy and must be distinguished from the vaccine used to prevent cancer of the cervix, which is directed at the human papillomavirus that must be present for that cancer to develop.

The potential immunotherapy causing perhaps the most excitement currently is a type of adoptive immunotherapy called CAR T-cell therapy. The patient’s T cells are removed from the body and genetically modified to express a protein called a chimeric antigen receptor, or CAR. The T cells are then grown in large numbers and reinfused into the patient, whose remaining immune cells have been depleted. CARs on the surface of T cells can attach to proteins on the surface of the cancer cell, improving the ability of these T cells to attack the cancer. Dramatic results were obtained in a small series of patients, starting with children with acute lymphoblastic leukaemia and adults with lymphoma, who had exhausted all other forms of therapy. The side effects have been severe, however, and more research is needed.

The research tools associated with the development of genomics, together with the increased computing power and bioinformatics needed to analyse genomic data, have produced a paradigm shift in cancer treatment, from cytotoxics to targeted drugs. Technology that allows candidate drugs to be tested simultaneously and very rapidly has accelerated the development of these therapies, as have the advances in immunotherapy that have enabled the production of monoclonal antibodies and their tagging with cytotoxics.

Add to them the discovery of drugs that block the proteins that in turn are blocking the immune response, and the manipulation of T cells outside the body to make them more effective in attacking cancer, and the result is an increase in survival rates for a number of cancers, and the promise of progress in many more, as we increasingly personalise cancer medicine. •

Sleeping on it https://insidestory.org.au/sleeping-on-it/ Fri, 27 Apr 2018 02:29:33 +0000 http://staging.insidestory.org.au/?p=48340

Books | You are how you sleep, according to a persuasive new account of the science of not being awake

The post Sleeping on it appeared first on Inside Story.

Sleep has been a bestselling topic for a few years now, so another book on the subject — even one as good as this — comes as no surprise. Interest probably peaked with Arianna Huffington’s call to action, The Sleep Revolution, but the TED talks, special issues of New Scientist and media stories about sleep’s role in health, wellbeing and even beauty have kept rolling out. It’s easy to see why, of course — worries about sleep are universal.

Matthew Walker is well-qualified to add his voice to this debate. He has been researching sleep and sleep-related phenomena for several decades now and has worked with some of the other leading sleep researchers of our time.

In Why We Sleep he presents the science of sleep in a detailed yet very accessible manner. The scholarship may not interest all readers, but the book is structured so that it needn’t be read right through, or even in order. In fact, Walker encourages readers to focus on the parts that are of most interest or use to them. And he won’t take offence in the slightest if we nod off at some points — in his view, the more shut-eye the better.

He begins by explaining the biological mechanisms that regulate sleep and how, for better and for worse, we and our environment manipulate them. Unusually, he includes an entertaining discussion of sleep types across animal species, with both parallels and contrasts to human sleep, and some hypotheses about how human sleep has evolved and how it may continue to evolve. This chapter is the foundation on which Walker builds the rest of the book.

He goes on to argue why sleep should be a priority for individuals, and also a priority for employers and communities. This is by no means a new argument, but he develops the point very persuasively. Underpinned by laboratory and field-based evidence, his discussion of the impact of inadequate or disrupted sleep on our daytime performance, mood, health and wellbeing is clear and cogent. He highlights the fact that while one night’s bad sleep has acute, short-term effects, the implications of cumulative or chronic sleep disruption can be severe.

Why We Sleep then heads off into an entertaining and challenging discussion of a topic — how and why we dream — about which we knew very little until quite recently. Walker presents a brief history of our understanding and interpretation of dreams, from beliefs among ancient Egyptians and Greeks (that dreams were handed down from the gods), through Aristotle (that dreams originated in waking events) and Freud (that dreams originate in the unconscious), to our modern understanding. He discusses the value of dreaming for mental, emotional and psychological wellbeing, and his intriguing account of the role of dreams in creativity is certainly worth thinking about (or sleeping on).

The book’s closing chapter is a call to arms for government, industry, communities and individuals. So many elements of Western life either limit opportunities for sleep or encourage us to prioritise wakefulness. Technology; work hours and work arrangements; lighting; pills to help us sleep and pills to keep us awake — the list goes on. Walker challenges us to prioritise sleep in order to be able to get the most from the rest of our lives.

Sleep is often viewed as the activity (or non-activity) that we do when all the day’s important tasks are finished. Matthew Walker attempts to reframe our idea of sleep so that we will come to treat it as critical for getting all those other tasks done well, safely and without sacrificing our health.

My only disappointment with this book is its patchy acknowledgement of the published literature. Walker names some scientists and cites some studies, but doesn’t provide full references for all of them. Although he expresses due humility, he makes it sound as though the majority of the published work was done in the United States, if not in his own laboratory.

As a sleep researcher in Australia, I have a certain perspective, of course, but no one would deny that significant advances have been made by scientists not only here but on other continents outside North America. (Among them is Till Roenneberg, whose fascinating book I reviewed for Inside Story in 2012.) Walker’s main audience is American, of course, but readers everywhere should bear in mind the international nature of the research effort in this field. ●

A kind of groove https://insidestory.org.au/a-kind-of-groove/ Mon, 16 Oct 2017 21:33:56 +0000 http://staging.insidestory.org.au/?p=45430

Extract | Gilda Civitico’s story illuminates the art and the science of tinkering

The post A kind of groove appeared first on Inside Story.

Some mornings, when Gilda Civitico arrived at the Victorian Infectious Diseases Reference Laboratory where she worked as director of experimental design, she would open a cage of fluffy ducklings and ease them into a sink. “I gave them a little pat and let them have a swim,” she recalls, “with their mates.” Then she cupped one in her hand, anaesthetised it, and while its tiny heart still quivered, she sliced open its belly, snapped through its ribcage, disentangled the pulsing organs, and harvested its slippery liver, snipping off portal veins and connective vessels. “You have to do it while it’s alive,” she explained, “or the blood clots.”

It’s fiddly work, in which speed and precision are essential. You can’t culture live cells if cell function starts shutting down. In this kind of experimental work, a confluence of material forces and knowledges interplay. You have to meld your theoretical knowledges (objective, measurable, explicatory) with your bodily knowledges (subjective, tacit, intuitive, skilful), and these are in continuous correspondence with the material forces around you (contingent, intractable) as well as the forces of time, climate and culture.

After harvesting a liver, Civitico quickly flushed out its red blood cells and then doused it with collagenase, an enzyme that breaks down the binding proteins and transforms the organ into a mound of substance the consistency of soft tofu. She pressed this slippery, still-warm mound into a sieve so fine that only single cells could find a passage through. She spun the resulting slurry slowly in a warm centrifuge, which separated the cells into three distinct bands: fat cells at the top; remnant red blood cells in the middle; and, at the bottom, the denser cells she coveted — hepatocytes (the main liver tissue cells). A labyrinthine procedure followed, in which she suspended those cells in pre-warmed (37°C) culture media. With a process of gentle layering and centrifuging, she got the cell density just right. “Think liquid red jelly being gently layered onto not-quite-set green jelly,” she told me. “You don’t want them to be mixed.”

This done, she syringed the substance into a pipette, washed the cells, counted them, and checked their viability with a staining pro­cess. “And then you dilute them out with media to the right number of cells per ml of media and this is what you use to seed your cell culture plates.”

Left overnight, exiled in their Petri dishes and sustained with exacting temperatures and gas mixtures, the duckling liver cells bonded together and also to their new homes. And each day for the next few weeks Civitico tended to their needs, changing growth medium and nutrients. This microbial life-support regime, she explained, was but the first of many procedures before the finicky business of DNA profiling and dose-response analysis. Had she infected the ducklings with another strain of the hepatitis B virus she was using, or had she been testing a different viral inhibitor, she might have designed this experiment altogether differently.

Experimental science isn’t merely a series of procedures; it is an art form. It requires a fine attunement to aesthetic detail and sensory attention. It’s ars and techne. It involves educated guesswork and mindful interaction of procedural, tacit, bodily and propositional knowledges, interplayed with the knowledges embedded in technologies and materials. Just as a cellist might subtly adjust her technique to the proclivities and demands of a particular instrument, experimental science’s raw materials and tools aren’t simply passive recipients of human will; they are active agents. It’s the same deal from surgery to brick-making. Anthropologist Tim Ingold describes a confluence of human forces and materials in his description of clay brick-making, which “results not from the imposition of form onto matter but from the contraposition of equal and opposed forces immanent in both the clay and the mould.”

In Civitico’s lab, too, intimate attunement to these agents and their nuanced qualities — all their variabilities, tensions, fluxes, flows and resistances — was essential. Sometimes there was no telling what infinitesimally subtle variable of matter might violently sabotage a trial and undo months of evolving research. “Sometimes,” she told me, “a supplier would subtly change something. So, for example, if you ordered foetal calf serum, you made sure you stocked up on supplies of the same batch. You want to minimise variables.”

The right density of cell suspension, or the ideal nutrient profile in growth medium, might be knowable by keeping abreast of specialist literature. But other contingents — like the subtle colour- or shape-shifts that suggest pH change or cell fatigue, or the ways some cells seem to prefer certain plastic plates — were knowable through intense observation, tweaking, inkling and kinaesthetic knowledge. Some knowledge simply couldn’t be codified in a procedures manual or even adequately explained to an assistant, because it relied on intuition. “You get a feel for what the cells like,” Civitico explained. People would be surprised, she added, at how much tacit knowledge is involved in lab science. “You can’t explain it, but you just develop a feel for what works.”


In his celebrated essay on intellectual craftsmanship, the sociologist C. Wright Mills argues that theory doesn’t simply, through application, become praxis: instead, theoretic knowledge is itself “part of the practice of craft.” Craft — like tinkering — is a way of knowing by doing. To many craft theorists, the applied knowledge gained by making can’t be disentangled from the theoretical knowledge behind that making. Sociologist Richard Sennett observes that “all skills, even the most abstract, begin as bodily practices” and then “technical understanding develops through the powers of imagination.” There’s a recursive relationship between making and thinking. In Tacit Knowing (1967), the chemist and philosopher Michael Polanyi asserts that humans carry a certain knowledge-awareness without being able to identify it in words.

Craft has long been understood as ineffable: something learned with the whole body and its senses rather than simply the mind. As early as 1677, craftsman Joseph Moxon wrote that craft “cannot be taught by Words, but is only gained in Practice and Exercise.” In The Art of the Maker (1994), Peter Dormer asserts that the knowledge to make something work, or to understand how it works, “is not the same as understanding the principle behind it”; tacit knowledge “differs from propositional knowledge in that it cannot easily be articulated or described in words.” Polanyi distinguishes between knowing how and knowing that; the former (the how) is being rapidly lost in a Western educational system that privileges more abstract vocational training geared towards information economies over manual, bodily or skills-based knowledge.

Although her former career as a research scientist gave her an understanding of the tacit knowledge of tinkering in the lab, Civitico’s solitary tinkering in her home offered her additional freedoms: freedom to play and learn in a way that was unfettered by economic, political or bureaucratic concerns. In her home, tinkering became “a playful pursuit,” she told me. It’s about “applying knowledge you already have to a new problem or creative challenge. Tinkering is experimental so the results of your tinkering might be anything. I enjoy this element of uncertainty because I only ever have to please myself, and the process is always instructive.”


A few times when I visited Civitico, she and I sipped tea in the open-plan rear extension of the postwar brick home she shares with her partner, electrical engineer Andrew Peel (a tinkerer who was at the time researching a PhD in electrical engineering) and their two young daughters. Each time, her sewing machine and various sewing paraphernalia occupied half of the dining table. This was consistent across my research: tinkerers’ living arrangements tended to be physically — as well as temporally — organised around their tinkering. In the light-filled space were shelves of games, shells, fossils and magazines: New Scientist, IEEE Spectrum, New Economist, Make and Craft, amid a collection of specialist books she calls her “craft porn,” including The Art of Manipulating Fabric and Metal Clay: The Complete Guide.

This library represents a more pleasurable stock of references than the procedures manuals she once followed in the lab; yet the deliberations and processes of a research scientist, Civitico told me, are not so far removed from those she now undergoes here, in her industrious domestic life. “The things that made me a good scientist are what make me a good craftsperson,” she told me. “I have a very high tolerance for repetitive stuff people find mind-numbingly tedious.” This repetition, she said, doesn’t negate creativity, but enables it. Both lab work and her current occupation — as prolific maker of jewellery, clothing and preserves (the “jam lady,” as she was known at her daughters’ school) — require strategic imaginings, a high frustration threshold and a willingness, dedication even, to learn from mistakes. She’s happy to unpick a garment just as she might, in a lab trial, analyse a procedural error and start over again. “Any kind of technical learning,” she said, “requires you to research, imagine, plan, execute a technique, fail, troubleshoot, try again and keep tweaking.”

Civitico’s story is one of psychological sanctuary — of tinkering-as-refuge. Her upbringing in an Italian-Australian family involved an array of cooking and crafting, but as an adult she rekindled crafting as a way to anchor her mind when postnatal depression hit hard. It was a way, she told me, to channel mental and bodily energy into material problem-solving rather than the dark wrestle of abstract anxieties that besieged her. She could have returned to her job, but that meant dealing with other stresses and demands — bureaucratic, collegial, temporal, political — and it meant neglecting what she considered the more important work of parenting. Nor did she seek the status or monetary rewards of work. She didn’t especially need the objects she started making, though their material value (and social agency) became a happy side-effect. The refuge she needed can be understood as engagement, a way of transforming her mind noise into a liminal rhythm that at times she’d experienced in her lab work.

Versions of engagement have been described in sociology, psychiatry, labour studies and neurology. C. Wright Mills defines the craftsman as being:

engaged in the work in and of itself; the satisfactions of working are their own reward; the details of daily labour are connected in the worker’s mind to the end product; the worker can control his or her own actions at work; skill develops within the work process; work is connected to the freedom to experiment.

Building on Mills, Richard Sennett defines engagement as the “experimental rhythm of problem-solving and problem-finding” that craftspeople experience. For him, the “carpenter, the lab technician and the conductor are all craftsmen, because they are dedicated to do good work for its own sake. Theirs is practical activity, but their labour is not simply a means to another end.” He essentialises the craftsman’s rhythm. “The craftsman,” he writes, “represents the special human condition of being engaged.”

Engagement is recognised in contemporary psychology as “flow,” a term coined by US psychology academic Mihaly Csikszentmihalyi, who wrote, among other books, Beyond Boredom and Anxiety: Experiencing Flow in Work and Play, and Creativity: Flow and the Psychology of Discovery and Invention. In an attempt to codify the “optimal experience” that gives life purpose, Csikszentmihalyi interviewed artists, chess players, and others whose work involved the rhythm of concurrent problem-finding and problem-solving. He found they could achieve a state of transcendental grace.

The way Civitico described it, engagement or flow (she explained the state as “a kind of groove, like a form of meditation”) concurrently occupies the mind and quiets it. During this state it doesn’t occur to the tinkerer to check her watch or to eat; you are “completely caught up in what you’re doing.” Many craftspeople attest to the allure of this rhythm. In an exhibition on rare trades presented by the National Museum of Australia, bookbinder Daphne Lera described:

this feeling [that] happened almost within the first week of starting to learn bookbinding, and it hasn’t really left me. It stayed with me all these years. It is to do with the fine physical task. It sounds repetitious… but it’s also got this rhythm to it… [T]his rhythm I’m talking about, all I can say is that I recognise it, and I know that it does, it does exist. I lose myself in it when I’m concentrating.

Lera says this happens when she is “forever trying to perfect the technique.” John D’Alton, another tinkerer I talked to, described this liminal, meditative focus as “a manifestation of God.” Polanyi, too, describes a kind of tacit human knowledge “from which a harmonious view of thought and existence, rooted in the universe, seems to emerge.”

Civitico told me that entering this rhythm sometimes involves a certain discipline, pushing herself through a portal of frustration. She showed me a tube the size of a small child’s finger. Inside were open-ended silver rings: impossibly small. Civitico told me her supplier, whom she sourced online, “twists fine silver wire on to a mandrel, then tumbles them to get the burrs off. When you choose them you have to be precise about the size of the internal hole compared to the gauge of the wire.” Once Civitico settled at her work desk with a pile of rings of the right proportions, she switched on her task-light, and took up two fine pliers. So began the painstaking process of opening, twisting and closing the tiny rings, and fashioning them into impossibly intricate patterns. “There’s a certain amount of getting the rhythm back — you’re all thumbs,” she said. “Sometimes you keep dropping them and swearing for half an hour before you get to that state, and then you could just keep going forever.”

Seducing herself into a tinkering state helped restore Civitico’s health in ways she never calculated. She started tinkering from home by chance. Late one morning, as she placed a necklace — a birthday gift from her brother — on her dresser, she paused. “I looked at it and thought, ‘I can make this.’” She hadn’t really considered how jewellery “works,” but as she took the time to consider it closely, with the trained eye of a microbiologist, the necklace revealed its workings to her. She jumped online, and a universe of adventure and possibility revealed itself. It was a coup de foudre. Once she had the tools and materials, she made the necklace successfully and, recognising her handiwork as the neophyte impulse it was, she unpicked it, and set out to make something more challenging.

Civitico’s craft room, which doubles as the family’s music room, houses a floor-to-ceiling wall closet that itself holds a hierarchic organisation of boxes within boxes, like a monumental matryoshka doll. Some of these contain buttons, folded vintage fabrics, patterns, and tiny tubes of infinitesimally small jewellery components. There are varying grades of wire (with names like “half-hard” and “tigertail”) and silver coils. There are regular- and irregular-shaped beads (these have names like “bugle,” “hex,” “Charlotte,” “seed,” “faceted” and “Japanese delica”). And handmade lampwork beads she ordered from the US, inside of which jellyfish-like forms are suspended.

“People slave over a hot torch flame to produce these,” she told me, “from coloured glass rods.” (To be sure, if you google “lampworking,” you’ll discover a vast lexicon of technique and tradition.) In each tiny globe was a universe of otherworldly forms. We squinted for a few moments, holding each of them to the light and marvelling at their innards: nacreous, opalescent and ethereal sea-floor forms. “Part of the joy of craft,” Civitico said, “is the joy of discovery. It awakens in you the possibility of things, and whole other worlds.” ●

This is an edited extract from Tinkering: Australians Reinvent DIY Culture, published this month by Monash University Publishing.

Digging deeper into a 65,000 year story https://insidestory.org.au/digging-deeper-into-a-65000-year-story/ Thu, 27 Jul 2017 19:01:37 +0000 http://staging.insidestory.org.au/?p=41811

Don’t be dazzled by the numbers. What counts is how this latest archaeological find contributes to our understanding of Australia’s deep and dynamic history

The post Digging deeper into a 65,000 year story appeared first on Inside Story.

In the 1950s it was widely believed that the first Australians had arrived on this continent only a few thousand years earlier. They were regarded as “primitive” — a fossilised stage in human evolution — but not necessarily ancient. Over the past sixty years, the field of Australian archaeology, led by the likes of John Mulvaney, Isabel McBryde and Rhys Jones, has dramatically enlarged our understanding of Australian history. It is a story that has emerged from rock shelters and shell middens, art sites and urban spaces, archives and laboratories, lore and local knowledge. It is a complex, contoured and ongoing history of human endurance and achievement in the face of great social, environmental and climatic change. And last week, in a landmark Nature paper, it was pushed back to 65,000 years.

Dating the earliest human occupation at 65,000 years ago is a dramatic step, and it throws up questions that are not easily resolved. But it is not, as one journalist declared last week, “so old it [will] rewrite everything about the continent’s human history”; nor is it, as dozens of reports have asserted in their headlines, “evidence of Aboriginal habitation up to 80,000 years ago.” Such overstatements come from an unhelpful view that older is better; they diminish the site by emphasising numbers and dates rather than the deep and dynamic history they reveal.

The site at the heart of the latest finds, Madjedbebe (formerly Malakunanja II), has been part of archaeological conversations for over forty years. It is no more than a slight overhang: a decorated rock wall leaning out from the Arnhem Land escarpment, the last reach of the plateau before the landscape gives way to wet, scrubby plains. It lies on the lands of the Mirarr people, who were supportive of the archaeological work on their country. Through the Gundjeihmi Aboriginal Corporation, the Mirarr held the right of veto over all aspects of the work and were involved in the excavation through a process of constant and respectful communication. As David Vadiveloo, a lawyer, film-maker and consultant to the corporation, put it: “the Mirarr did not want this to be a project about them, they wanted this to be a project that was in partnership with them.”

Madjedbebe was first excavated in 1973 during research for the Alligator Rivers Environmental Fact-Finding Study, a wide-ranging report that provided the case for the declaration of Kakadu National Park. Bill William Miyarki guided the young archaeologist Johan Kamminga to the rock shelter and, equipped with a trowel, ash shovel and brush, Kamminga dug a small test pit, to a depth of 268 centimetres, against the rock face. The deposit was rich with stone, shell and bone, and a carbon sample collected above the level of the lowest artefact led him to conclude that “the earliest occupation at [Madjedbebe] is therefore likely to be in excess of 18,000 years.” It was one of a series of Pleistocene sites identified along the Arnhem Land escarpment in the 1960s and 1970s, and there were many, including Rhys Jones, who believed the region might hold the key to dating the arrival of the first Australians.

By the time Madjedbebe was next excavated, in 1989, the time horizon for Australian archaeology had been pushed back to 37,000–40,000 years, but that was where it plateaued. Jones thought it was no coincidence that the oldest dates all clustered around the time when the material required for dating — the radioactive isotope carbon-14 — had almost completely decayed. He suspected that the oldest dates “were so close to the theoretical limits of the radiocarbon methods that maybe the ‘plateau’ was really an illusion.”

In an attempt to break through the radiocarbon barrier, Jones teamed up with geochronologist Bert Roberts and archaeologist Mike Smith to date a few known sites with the relatively new techniques of thermoluminescence and optically stimulated luminescence, which could date quartz grains from a few hundred years old to several hundred thousand years old. They returned to Madjedbebe and, under the watchful eye of Gagudju elder Big Bill Neidjie, dug a narrow “telephone booth” shaft, four-and-a-half metres straight into the earth.

The findings from the 1989 dig, published in Nature, suggested that people had been living in Australia for 55,000 years, plus or minus 5000. Jones often spoke of a human antiquity in Australia of 60,000 years. He revelled in the fact that this was 20,000 years earlier than any modern human site in Europe: it “really caused people to raise their eyebrows.” The news made headlines around the world and a commendation was passed in the Australian Senate noting, “with interest, the discovery of Dr Rhys Jones of the Australian National University, and his team, of art and artefacts in the Malakunanja II [Madjedbebe] rock shelter in Kakadu that have been dated as at least 60,000 years old.”

Archaeologist Mike Smith draws stratigraphy during the 1989 excavations at Madjedbebe (Malakunanja II), Arnhem Land. Courtesy of Mike Smith

But the archaeological community remained sceptical. Some criticised the work for the use of the relatively new method of luminescence dating; some for the fact that the 1989 dig was never written up with a full site report. Others questioned whether the artefacts at the lowest levels had been pushed down over time by termite activity or “human treadage.” Although another site excavated in the region, Nauwalabila, was published in 1994 with dates 53,000–60,000 years old, both sites were contested and their ages dismissed as outliers.

The rejection of the significantly older dates from Madjedbebe and Nauwalabila reflects archaeologists’ caution about accepting dates that are outside the general pattern of the oldest sites. In 1998, archaeologists Jim Allen and Jim O’Connell reviewed the evidence for early dates in Australia and concluded that “initial occupation dates to about 40,000 radiocarbon years ago.” In 2004, they pushed this date back a few thousand years, writing that “while the continent was probably occupied by 42–45,000 BP [Before Present], earlier arrival dates are not well-supported.” In 2014, they reviewed new claims and again updated their estimate, concluding that “the first humans arrived in Sahul shortly after 50 ka [thousand years ago] — on current evidence not earlier than 47–48 ka.” The incremental shifts from 40,000 to 42,000 to 45,000 to 48,000 led anthropologist William F. Keegan to remark wryly, “Archaeologists seem to face far more complications in making the crossing to Sahul than the people who accomplished this feat about fifty [thousand years ago].”

In 2012 and 2015, a team of archaeologists embarked on new excavations at Madjedbebe with the hope of resolving the lingering questions about the antiquity of the site. I joined the team as camp manager and cook. As well as subjecting the site to rigorous dating tests, my companions conducted a series of tests to verify its structural integrity and better understand the natural processes by which it had formed. By opening a larger area of the site, not just a small shaft, they were also able to gain a more nuanced picture of the archaeology at its deepest levels. The leader of the excavation, Chris Clarkson, was eager to see if he could discern a pattern in the lowest layers that might illuminate the technology of the first colonisers of Australia.

In the new Nature paper, the large interdisciplinary team carefully and systematically address the earlier criticisms of Madjedbebe, present an account of their remarkable technological discoveries and, through an exhaustive dating process, push back the baseline date for human occupation to 65,000 years. It is significant that many who were critics of the 1989 excavation, and who were not involved in the recent study, have spoken publicly of the robustness of the find. “This is the earliest reliable date for human occupation in Australia,” Peter Hiscock, the Tom Austen Brown Professor of Australian Archaeology at the University of Sydney wrote to one reporter: “This is indeed a marvellous step forward in our exploration of the human past in Australia.”


So what do we do with an outlier that refuses to be dismissed? Madjedbebe’s 65,000-year story allows us to calibrate the epic journey of our ancestors, Homo sapiens, out of Africa. The colonisation of Australia was no small feat. It required the traverse of a passage of water around one hundred kilometres wide to a land where no hominid had roamed before. Based on the array of technical, symbolic and linguistic capabilities it required, psychologist–archaeologist duo William Noble and Iain Davidson have argued that “archaeologically, this is the earliest evidence of modern human behaviour.” Thus, the endless debates surrounding when modern humans emerged from Africa and spread around the world ultimately come to rest on when people arrived at the southernmost extremity of their migration. Madjedbebe becomes the key date.

The antiquity of the site also thrusts it into the longstanding debate over the fate of the giant marsupials and reptiles known as the “megafauna,” which, according to a large-scale dating effort in 2001, died out in a continental extinction around 46,000 years ago. Talking to the media this week, Clarkson was emphatic: “It puts to bed the whole idea that humans wiped them out. We’re talking 20,000 to 25,000 years of coexistence.”

But it is important to stress that the history uncovered at Madjedbebe enriches, rather than eclipses, archaeological understandings of other regions. The new, older date doesn’t necessarily destabilise the strong pattern of settlement that has been established by archaeologists across Australia from 50,000 years ago; rather, it reveals how little we know about the 15,000 years that preceded it. Around 65,000 years ago, the first Australians were probably swept to the Sahul Rise: a low-lying, fan-like formation of skeletal limestone, riddled with tidal channels. The landing site, along with many signs of early occupation, now lies submerged on the Arafura Sea shelf. It is only through the lowest layers at Madjedbebe, at the margins of this vast inundated region, that we can glimpse the earliest chapter of Australia’s human history. “It’s not exactly Atlantis,” Mike Smith and Chris Clarkson remarked to me, “but it’s pretty bloody close.”

The full implications of the latest findings from Madjedbebe will take time to absorb. But let us not be dazzled by old dates — nor become numb to their power. Madjedbebe’s 65,000-year story adds to a remarkable Indigenous history of transformation, resilience and diversity. •

Going under https://insidestory.org.au/going-under/ Mon, 03 Jul 2017 00:05:00 +0000 http://staging.insidestory.org.au/going-under/

Books | When does consciousness end and unconsciousness begin?

The post Going under appeared first on Inside Story.

Imagine you are lying in a hospital bed, facing an operation that you must endure without anaesthetic for medical reasons. The surgery is so painful that a drug is administered to erase its memory. Waking from sleep, you ask a nurse whether you have had the operation or whether it is still to occur. The nurse is unsure whether you are the patient who had a ten-hour operation yesterday or the one who will have a one-hour operation later today. Which patient would you rather be?

This thought experiment was posed by the great moral philosopher Derek Parfit, who died last New Year’s Day. Parfit, whose ideas on personal identity have a Buddhist flavour – passages of his work are intoned by monks in a Nepalese monastery – queried the rationality of preferring a longer period of concluded pain to a shorter period that is still to come. The two experiences can both be vividly imagined but not recalled, and they differ only in their tense and quantity. It seems odd to favour the option that involves the larger quantum of suffering, as most of us do intuitively. More basically, is it rational to care at all about future pain that will not be recalled, suffering that does not become part of a continuous experience of self?

Kate Cole-Adams’s fascinating Anaesthesia: The Gift of Oblivion and the Mystery of Consciousness reveals that Parfit’s tale is not just a philosopher’s thought-bauble but also an insight into common experiences of surgery. Surgical patients are intensely concerned about the pain they might feel, often more than they care about the possibility of surgical error, infection or the gruesome fact that their bodies will be sliced and perforated. They submit to a chemically induced discontinuity in their consciousness to avoid that suffering in the present and wipe it from future memory. It is even possible that anaesthesia is effective precisely because it induces amnesia: patients under the influence of anaesthetics may experience excruciating pain during surgery but have their memory of it blocked when they awaken.

Cole-Adams’s main preoccupation is not, however, forgetting pain after anaesthesia but becoming aware during it. Some proportion of patients, its magnitude much in dispute, reports emerging into wakefulness during surgery, an experience that is often profoundly traumatic. Cole-Adams describes the experience as one of terror but it is better understood as horror. It is not so much that an awful but uncertain possibility is dreaded; instead, an awful reality has come to pass. In the words of the American scholar Barton Levi St Armand:

Horror overtakes the soul from the inside; consciousness shrinks or withers from within, and the self is not flung into the exterior ocean of awe but sinks in its own bloodstream, choked by the alien salts of its inescapable prevertebrate heritage.

Why becoming aware during anaesthesia should generate this existential horror is easy to comprehend. Searing pain is experienced in a body that is paralysed and unable to communicate, a predicament of deep, invertebrate helplessness. It is no surprise that people who endure it often suffer lasting psychological effects. As Cole-Adams speculates, some reports of alien abduction – being probed by spectral figures while lying immobile under intense light – may represent distorted memories of imperfectly anaesthetised surgery. This horror also explains why preventing awareness during surgery – and denying the inconvenient truth that it sometimes occurs – is a preoccupation among anaesthetists.

Anaesthesia is a topic that most readers will not have explored in any detail. But it is an important and spacious one, despite being almost invisible, just as anaesthetists themselves hover in the background while surgeons hog the limelight. (Anaesthetists’ joke: “The adjustment of an operating light is an immediate signal for the surgeon to place his head at the focal point.”) We should be very grateful to Cole-Adams for bringing the field into the wider consciousness it deserves. Without effective anaesthetics, the surgical revolution would not have been possible, medieval patients having to make do with prayer, herbs and violent restraint. Beyond its enormous practical importance, anaesthesia is also intellectually rich, helping us to understand the complexities of consciousness and unconsciousness.

Take consciousness, for a start. We are accustomed to having our subjectivity, our sentience and our capacity for voluntary movement all marching in lockstep. While awake we are aware, feel pain and emotion, and act on the world, and when we are asleep we are unaware, numb and inert. Anaesthesia can disrupt this alignment. There can be awareness without feeling, thanks to local analgesia; feeling without awareness, in the sense of pain suffered while unconscious; and awareness and feeling without movement, due to muscle-blocking drugs that cause paralysis. By producing these unusual phenomena, anaesthesia opens a revealing window on how the mind reacts to its temporary unravelling.

The nature of unconsciousness is equally complex. The term “unconscious” can refer to many levels of unawareness, which anaesthetists call the “planes of anaesthesia.” Cole-Adams demonstrates how enormously difficult it is, despite great advances in medical technology, to assess in the operating theatre when consciousness ends and unconsciousness begins. “Unconscious” can also refer to knowledge that exists outside awareness but may nevertheless exert an influence on behaviour. Cole-Adams explores at length how events occurring while patients are apparently out cold, and overheard speech in particular, may affect patients after they have come to. “Unconscious” can also have a Freudian meaning, referring not to what merely happens to be outside awareness but to what is forced out of awareness by repression or other psychological mechanisms of defence. The link to anaesthesia may seem tenuous, but Cole-Adams makes a case for some intriguing connections that also illuminate the neural and psychological processes responsible for the unity of normal consciousness.

This is a book of ideas, and Cole-Adams has spent its long gestation talking with a remarkable assortment of practitioners and researchers, and critically observing and mulling over their work. But it is also a deeply personal story. Interspersed with her never-dry explorations of concepts, theories and clinical practice, she relates her own experience of spinal surgery, her neuroses, her troubled relationships with partners, the illnesses and frailties of family members, and her deep resonance with people who have experienced “accidental intraoperative awareness.” Cole-Adams’s sensitive and slightly bruised persona is always present, only occasionally becoming intrusive or distracting. Throughout she writes vividly. During a period of recovering from a nameless fatigue, her voice is “slow and flat, like a mop being dragged across a floor.” As an anaesthetised patient has his false teeth removed, “his lips wilted inwards.”

It has been said that anaesthesia is the half-asleep watching the half-awake being half-murdered by the half-witted. Like surgery it is a troubling, anxious subject that most of us would rather avoid or deflect with dark humour. Cole-Adams has illuminated it in a memorable way. The book is a gift not of oblivion but of awareness. •

Lost in translation – or should that be transcription? https://insidestory.org.au/lost-in-translation-or-should-that-be-transcription/ Mon, 20 Feb 2017 22:31:00 +0000 http://staging.insidestory.org.au/lost-in-translation-or-should-that-be-transcription/

Books | This account of the latest research on genes and society poses some of the right questions

The post Lost in translation – or should that be transcription? appeared first on Inside Story.

The meaner a book review, the more fun it is to read. In the introduction to The Genome Factor, Dalton Conley explains that he and Jason Fletcher came together after he dismantled a paper Fletcher had presented at a conference. Dismantling the book that has resulted – indeed, dismantling anything – is only possible if the thing hangs together in the first place. This book doesn’t. It is more like a trash and treasure market. There are gems here, but also – to the mind of a confirmed reductionist molecular biologist like me – vast stalls offering items that hold little value.

But I must stress that one person’s trash can be another’s treasure. If you enjoy lines like “We show how genotype acts as a prism, refracting the white light of average effects into a rainbow of clearly observable differential effects and outcomes,” then you are going to find a lot to enjoy in this work. For the sake of sport, let’s start with what I considered to be the trash and then come to the treasure.

For a molecular biologist, attention to detail is important. One single error in a gene, like an error in a computer code, can kill everything. Everyone makes mistakes and pedants (like me) take an embarrassing degree of pleasure in pointing them out. To be fair, there aren’t too many mistakes in The Genome Factor, but the ones on display are beauties. On page 36, Huntington’s disease is given as an example of a recessive genetic disorder. In reality, Huntington’s is the classic example of a dominant disorder, as is taught in most undergraduate classes. This might just be a slip of the pen but it is akin to messing up the central dogma of modern molecular genetics by getting transcription and translation wrong. That occurs too, on page 197, where nonsense mutations are said to block transcription when they actually stop translation.

The book’s embedded confusion about natural selection, evolution and genetics is more worrying. On page 6, the old argument about dilution of genetic advantage in each generation is brought up, this time by analogy to a card game called Pass the Trash. Even if you receive a royal flush in this game, you lose some of those cards in the next round – just as you can only pass on half your genes to your children, diluting their effect and giving your kids “no particular genetic advantage.” This criticism of the idea of evolution was levelled at Darwin himself, but it was resolved after Gregor Mendel showed that specific genes could encode features, essentially autonomously, across generations, which means that blending and dilution don’t undercut evolutionary progress. Even if you only have half a royal flush, you still hold important cards. Of course, the authors are aware of this, and elsewhere they explain that “if every gene depended on ten others, evolution would be pretty constrained.” Including these conflicting explanations makes the text very confusing.

But there’s more. The most puzzling chapter comes with the provocative title “Is Race Genetic? A New Take on the Most Fraught, Distracting, and Nonsensical Question in the World.” This certainly is a fraught question, and innumerable ill deeds have been perpetrated in response. But I was surprised to see it described as a “nonsensical” question. Perhaps we should avoid it, but I didn’t expect it to be eliminated before the chapter had begun. By page 94, the verdict is in: “Race does not stand up scientifically, period.” (Back on page 7, it was described, with emphasis, as “just plain wrong in genetic terms.”) Part of me was relieved, but then a few pages on I am invited to “compare Pygmies and Bantus… Inuit and Swedes.” My mind is swimming as I struggle to compare things that don’t exist scientifically and to cleanse myself of my own recalled observations of the rich diversity of this imperfect world.

But terminology is there to help me. Although it appears that race is scientifically invalid, we discover on page 101 that “self-identified” and “self-reported” race, “continental ancestry,” “race groups” and “continental clines” are useful classifications. There is a valid point being made here, and the term “continental ancestry” is certainly better than “race.” Then, on page 111, everything becomes clear. The authors explain that it is not race itself but the “common government definitions of race” that are “flimsy” and “the notion of continental ancestry having distinct genetic signatures is a biological reality.” From the perspective of researchers working in the United States – where residents and visitors are frequently asked to tick boxes declaring “Caucasian,” “Hispanic,” “African American” and so on – I guess this chapter may make some sense, but I do worry that many readers will be as confused as I was by the linguistic somersaults performed to avoid the fraught topics raised in this chapter.

We’re nearly at the end of the “trash” section. Another aspect that I didn’t like but others might enjoy was the authors’ festival of “straw man” burning. I am not sure if it was regarded as a straw man at the time, but the controversial 1994 book by Richard J. Herrnstein and Charles Murray, The Bell Curve, certainly preoccupies the authors. As far as I can discern, this is a nonfiction equivalent of Aldous Huxley’s Brave New World. I believe it made a major impact on publication, but I am not sure if it is still driving agendas in the social sciences. Nevertheless, refuting it seems to be a major motivation behind The Genome Factor.

But that’s not the only straw man. I jotted the words in the margin each time I came across a case. On page 35, we hear that “it was silly of scientists to think they would find the gene… for sexuality or for IQ.” I am aware of no scientist, alive or dead, who has been that silly. The idea has always been to see if one could identify genes that had an impact in those characteristics. Given the extensive achievements in the genetics of mental retardation, for instance, you would have to conclude that many such genes have been identified.

Next Conley and Fletcher refer to the fact that a “small group of social scientists” are proposing a set of “explosive ideas” about how genetics and biology may partly explain the economic success of different countries. We are told that the work is “yet to coalesce around version 1.0.” Personally, I don’t think scholars dealing with economics and politics are likely to be put out of a job by population geneticists any time soon. But there’s more, even including a discussion of whether different genetic groups go to war more frequently. On this point, the authors find that, in fact, related groups fight more often – perhaps because they are neighbours.

The final criticism I would make is that Conley and Fletcher’s enthusiastic language sometimes sounds good but doesn’t make much sense. When thinking about behavioural genetics, we are told, it is not “genes or environment” but “genes times environment.” Later, in a key section on genome modification, we receive the cheerful challenge: “Unhappy with your APOE4 [Alzheimer’s susceptibility gene] variant… Change it?” A molecular biologist like me wonders how, even with the remarkable new tools such as CRISPR, we would go about changing a gene in every cell of a living human body. This is crazy talk.


But we can forgive much of this because in every market there are treasures. I really liked the chapter on genotocracy. You have to think about Andrew Niccol’s film Gattaca here – about the benefits and the dangers of genetic pre-judgements (and prejudice). The belief that many people – perhaps everyone, at birth – may one day have their genomes sequenced and that hackers may release this information is probably realistic. Like the authors, I have concerns in this area. They don’t say this specifically, but I don’t think there will be much good news in genomes. Current genomics tends to focus on bad news – identifying genetic disease genes or risk of disease – which means that leaked genetic information is unlikely to reflect well on the individuals concerned. I do think that assigning genetic scores to things like “educational promise” and even “athletic promise” could be self-defeating, and the authors are right to highlight the issue. The section on genome modification with CRISPR is a bit odd in some places – changing a single disease-causing letter in the genetic code hardly makes “genetic history irrelevant in a tangible way” – but the new technology will certainly have profound effects.

To me, the epilogue – Genotocracy Rising, 2117 – is even better. Here, the prospect of parents having embryos sequenced before implantation is raised. I think this is possible and it could have major effects on society. The authors fixate a little on parents choosing the embryo with the highest IQ. The “cult of IQ worship” is certainly a problem: IQ is not more important than other qualities like empathy, sociability or even honesty, but it does already seem to be influencing people’s career prospects disproportionately. I don’t think it will be possible for parents to optimise their children’s IQs or beauty or personality any time soon – these things are just way too complicated and parents will only ever be able to choose between a handful of embryos – but I do think pre-implantation diagnosis will be used to reduce serious single-gene genetic disease.

Finally, the impact on society depends on scale and penetration. Already, the ultra-rich have all the advantages of the super-powerful – if you want to be president or a Hollywood star, it helps if your parents or partner was one of those things. But since most people are not in this super-elite category, the stratification – while real – doesn’t penetrate at every level. I think it will be the same for genotocracies – there will be some stratification but it won’t permeate all of society. Consequently, I am more relaxed about the future than either the authors of this book or those of The Bell Curve appear to be.

In other words, if you enjoy reading about the future, then you’ll find this book easily digestible and fun to read. But I wouldn’t lose much sleep over it. •

The post Lost in translation – or should that be transcription? appeared first on Inside Story.

]]>
The truth about torture https://insidestory.org.au/the-truth-about-torture/ Wed, 25 Jan 2017 18:00:00 +0000 http://staging.insidestory.org.au/the-truth-about-torture/

From the archive | Outside TV drama, “enhanced interrogation” fails the evidence test, writes Tom Hyland in this review first published in June 2016

The post The truth about torture appeared first on Inside Story.

]]>
Imagine the storyline for a TV political drama – something like The West Wing or House of Cards.

It goes like this. The United States is attacked by terrorists, who kill thousands of civilians. With the nation under extreme stress, the president adopts extreme measures. He approves the torturing of suspects to gain information needed to prevent further attacks. He’s advised and supported by legal scholars. The Constitution, the Bill of Rights, and domestic and international law are brushed aside. But there’s a twist – a scriptwriter’s in-joke: some of those advocating the effectiveness of torture do so not because of any scientific evidence that it works, but because they’ve seen it work in another piece of TV fiction, 24.

It’s a scenario that might once have appealed to fantasists and conspiracy theorists. Trouble is, the storyline doesn’t come from a fictitious TV series. It’s a broad summary of how the world’s most powerful democracy came to adopt torture as a matter of state policy. In the process, it trashed its reputation, corrupted its institutions, gave a new grievance to its enemies, and caused massive suffering to the tortured – and untold moral and psychological harm to the torturers. And it didn’t work.

It didn’t work, or not in the way that advocates said it would, because proponents of torture didn’t know, and didn’t try to find out, what torture does to the brains and mental processes of those subjected to it. It breaks its victims, and can even kill them. Some will confess to anything, just to end the pain.

But if the aim of torture is to get a suspect to promptly and accurately recall and coherently relate details of, say, a terrorist’s ticking time bomb, it’s a failure. This is the conclusion reached by Shane O’Mara, professor of experimental brain research at Trinity College Dublin, in Why Torture Doesn’t Work.

O’Mara’s focus is not on the ethical or moral dimensions of torture, though on those grounds alone he opposes it. His focus is on the science. He judges it solely on the justification advanced by pro-torturers – that it achieves results and will save lives. Tortured terrorists talk, and talk fast, and what they say is coherent and credible, or so we’re told. Yet the scientific evidence, backed by accounts from those subjected to it and those who’ve inflicted it, shows that it fails completely on its own terms.

President George W. Bush approved torture as the United States responded to the 9/11 attacks. He was supported by a series of legal opinions – the so-called Torture Memos – from the government’s senior legal advisers. All of them advised that the government could torture without breaking the law. It helped that they said the methods being applied weren’t torture, that they were simply “enhanced interrogation techniques.”

The proponents of torture assumed the techniques would work. Among the legal advisers was deputy assistant attorney-general John Yoo, these days a law professor. He had no doubt about the effectiveness of torture: “It works, we know it does. The CIA says it does and the vice-president says it does.” (Torture also had its supporters in Australian academia.)

On such flimsy evidence, little more than folklore and wishful thinking, the torturers went to work. And some of them did indeed draw inspiration from 24, the TV series about a counterterrorism agent with an ends-justify-the-means approach. It was hugely popular among the Americans at Guantanamo Bay and it gave interrogators lots of ideas.

The US was not the first Western democracy to use torture. The French did it on a large scale in their Algerian colony up to the early 1960s. The British used it in Northern Ireland, subjecting nationalist prisoners to the so-called Five Techniques – starvation, sleep deprivation, exposure to continuous “white noise,” hooding, and stress positions. There was an unspoken sixth technique – beatings – for prisoners who wouldn’t comply.

The Americans used similar techniques, and more, eventually going beyond even what the Torture Memos permitted. In one case a prisoner was interrogated for approximately twenty hours a day for seven weeks, strip searched in the presence of women, forced to wear women’s underwear, forcibly injected with large quantities of intravenous fluid and forced to urinate on himself, led around on a leash, made to bark like a dog, and subjected to cold temperatures. Waterboarding – a process that creates the perception of drowning – was common.

Some prisoners were subjected to rectal infusions – in one case a pureed mixture of hummus, pasta with sauce, nuts and raisins. It’s hard to disagree with O’Mara when he says this process was solely applied for “retributive or even sadistic purposes.”

O’Mara shows that these techniques have disastrous effects on the mental processes on which memory depends. This is crucial. The interrogator wants the person being tortured to remember things. Where is the bomb? When is it timed to explode? Who are your co-conspirators? Yet what we know about how memory works shows that, instead of facilitating memory function and enhancing recall, extreme stress impairs them.

O’Mara cites a US study involving special forces soldiers who were subjected to prisoner-of-war conditions. Being deprived of food and sleep and subjected to extreme temperatures caused “grave memory deficits” – the soldiers could not recall information they had learned as part of the exercise – even though they had no reason not to give the information to their interrogators. And these were elite soldiers, extremely fit and trained to withstand stress. In O’Mara’s words, they showed “a profound decrement in memory after being exposed to substantial and sustained stressor states.”

Other research reinforces the finding that prisoners interrogated under extreme stress, threat and anxiety are not capable of recalling specific memories. What these techniques do is activate that part of the brain directed solely at survival; other brain functions, including memory, are suppressed. A prisoner being tortured will be left, writes O’Mara, “incapable of saying much that is useful.”

The pro-torture argument contains a deep contradiction – the belief that sleep deprivation, for instance, will reduce prisoners’ ability to think on their feet but also motivate them to talk. The interrogator wants the prisoner to think, yet the torturer’s actions reduce the victim’s capacity to do so. Instead of aiding memory, the treatment induces amnesia. This “surreal undercurrent” in the torturer’s argument, says O’Mara, “requires an internal logic that is not based on reason and evidence.”

Sleep deprivation is usually applied in combination with other techniques, including waterboarding. In this ancient torment, the prisoner is tied to an inclined bench, a cloth is applied to the face and water is poured over the cloth. It causes the perception of suffocation and panic – victims feel they are drowning and experience the dread of imminent death.

While being subjected to this stress, prisoners are asked to search their memories for details of specific events, places and people. American records cited by O’Mara revealed prisoners who were rendered completely unresponsive by waterboarding, vomiting and suffering involuntary spasms. O’Mara, a neuroscientist, is conducting preliminary research that simulates the effects of waterboarding; not surprisingly, his findings suggest that waterboarding has a negative impact on recall.


Yet what of the scenario favoured by the torture advocates – the ticking time bomb, set to go off in an hour with catastrophic consequences? Does torture work then? The answer is: especially not then. O’Mara concludes that there’s probably no technique able to apply sufficiently severe pain to a well-prepared individual that would cause him or her to reveal information in the minutes and seconds required. Apply too much pain, and the prisoner goes into shock and can’t tell you anything.

In any case, we know that the US, in reality, hasn’t been faced with any ticking bombs. Instead, the evidence about the interrogations reveals prisoners subjected to weeks and even months of torture that provided no useful intelligence.

The fact that torture has the opposite effect to what’s intended suggests to O’Mara that those who devised the American program and gave it legal cover either didn’t consult medical professionals or ignored their advice. Had they been consulted, they would have advised against this regime of “surprising cruelty.” Nor do the interrogators and their authorisers appear to have consulted their own imaginations and considered how they think and act when they’re simultaneously tired, hungry and cold. Like O’Mara, we can only look on, appalled, at their profound lack of empathy.

O’Mara concludes that the predictions of the Torture Memos failed utterly and predictably because they were based on theories that lacked any knowledge of psychology or neuroscience. “Rather,” he writes, “they are founded on an evidence base that involves consulting the contents of one’s own consciousness.” In other words, proponents of torture believe it works because they believe it works.

Importantly, O’Mara also looks at what torture does to the torturer. Applying torture is not easy; it is “stressful for all but the most psychopathic.” The evidence is that it mentally wounds the torturer. Many clearly suffer symptoms of post-traumatic stress disorder, ending up shell-shocked, dehumanised, and covered in shame and guilt.

No such psychiatric price seems to have been paid by people further up the chain of responsibility. We don’t know if any of the law professors and op-ed writers who sought to make torture respectable feel any pangs of conscience for advocating methods that were largely banned when Barack Obama came to office.

O’Mara concludes that the policy process and the rationale that led the US on such a disastrous course of action, based on little more than intuition and myth, would be hilarious if they were not so appalling.

If that was appalling, what are we to make of Donald Trump’s views on the matter? “We should go for waterboarding and we should go tougher than waterboarding,” the Republican candidate has declared. When asked about former CIA director Michael Hayden’s comment that the military might defy what it regards as unlawful orders to torture or kill civilians, Trump said, “They won’t refuse. They’re not going to refuse, believe me.”

Had such lines been uttered in a fictitious TV series just a year ago, you wouldn’t have believed it. •

The post The truth about torture appeared first on Inside Story.

]]>
In Melbourne, progress on chronic fatigue https://insidestory.org.au/in-melbourne-progress-on-chronic-fatigue/ Thu, 24 Nov 2016 02:50:00 +0000 http://staging.insidestory.org.au/in-melbourne-progress-on-chronic-fatigue/

Peter Clarke talks to Bio21 researcher Chris Armstrong about new research that challenges popular views of this enigmatic illness

The post In Melbourne, progress on chronic fatigue appeared first on Inside Story.

]]>
With its debilitating symptoms – fatigue, “brain fog,” pain, gastrointestinal disorders – and its elusive causes, chronic fatigue syndrome has been one of the great unsolved medical mysteries. Now, a growing number of research teams around the world are tackling the challenge of diagnosing and treating the illness using new medical research techniques.

By looking at patients’ genetics and the changing pattern of their metabolites – the molecules produced by their individual metabolisms – these researchers have made enormous progress in uncovering patterns exclusive to the condition and countering once-popular psychological explanations.

Among the research centres working on CFS (also known as myalgic encephalomyelitis) is the Bio21 Institute at the University of Melbourne. Earlier this month, amid the centrifuges, mass spectrometers and NMR cylinders used to identify shifts in biological material, Peter Clarke spoke to Bio21 researcher Chris Armstrong.

The Institute’s work is supported by the Mason Foundation, with assistance for NMR equipment from the Australian Research Council.

Bio21 Institute

Symptom checklist

Phoenix Rising website and forums

Open Medicine Institute

Duration: 24 mins 11 secs

The post In Melbourne, progress on chronic fatigue appeared first on Inside Story.

]]>
Toads on the evolutionary road https://insidestory.org.au/toads-on-the-evolutionary-road/ Sat, 22 Oct 2016 00:41:00 +0000 http://staging.insidestory.org.au/toads-on-the-evolutionary-road/

Can evolution be used to control the spread of cane toads? In this 2005 interview, biologist Rick Shine reports from the field

The post Toads on the evolutionary road appeared first on Inside Story.

]]>
Rick Shine, Professor of Evolutionary Biology at the University of Sydney, was this week awarded the 2016 Prime Minister’s Prize for Science for his work on cane toads and evolution. The previous week, he picked up this year’s NSW Premier’s Science Prize. In this interview, first broadcast in 2005 on Radio National’s The National Interest, he discusses his research with broadcaster Terry Lane.

Duration: 15 mins 10 secs

Producer: Peter Mares

 

The post Toads on the evolutionary road appeared first on Inside Story.

]]>
The price of secrecy https://insidestory.org.au/the-price-of-secrecy/ Tue, 04 Oct 2016 00:48:00 +0000 http://staging.insidestory.org.au/the-price-of-secrecy/

A new account of Britain’s nuclear tests in Australia reveals a long history of damaging suppression

The post The price of secrecy appeared first on Inside Story.

]]>
Elizabeth Tynan’s new book, Atomic Thunder, is the best account yet of the profoundly anti-democratic policy process that led to Britain’s testing of nuclear weapons at the Monte Bello islands, off Western Australia, and at Emu and Maralinga in South Australia, between 1952 and 1963. Maralinga was an Indigenous word for thunder, but Tynan gives the word a broader contemporary meaning in her final paragraph: “If there is a word that speaks not only of thunder but also of government secrecy, nuclear colonialism, reckless national pride, bigotry towards Indigenous peoples, nuclear-era scientific arrogance, human folly and the resilience of victims, surely that word is Maralinga.”

Several good books have been written about the tests, but Britain’s refusal to release hundreds of thousands of pages of relevant material hobbled their authors. The addiction to secrecy also frustrated Jim McClelland’s royal commission into the tests, which proved to be a model of how a good commission should perform.

Although much remains unjustifiably secret, Tynan, a senior lecturer in James Cook University’s graduate research school, has taken advantage of recently declassified files to reinforce her prodigious research into one of the most shameful episodes in Australian history. Atomic Thunder gives readers the most comprehensive insight yet into the enormity of what happened during the tests and associated trials.

Tynan’s starting point is Britain’s vainglorious decision to develop nuclear weapons, which was quickly followed by prime minister Bob Menzies’s staggering irresponsibility in allowing these dangerous tests to go ahead without answers to basic questions about what they involved and the risks they posed. 

Menzies refused to seek independent advice from highly qualified Australian scientists such as Mark Oliphant, who had worked on the American bombs. Oliphant’s sin was that he didn’t like the idea of killing millions of people with nuclear weapons. Instead, Menzies relied on advice from a nuclear weapons zealot, Ernest Titterton, a British citizen living in Canberra, who unswervingly insisted there were no dangers at all. McClelland showed that Titterton’s first and only loyalties were to Britain and nuclear weapons. (It would not have been surprising if he had eaten two teaspoons of plutonium on TV, claiming it was a health tonic he took every day.)

The policy-making structure ensured the effects on the local Aboriginal population were ignored, as were the dangers to the British and Australian service personnel exposed to radiation at the test sites, and to other Australians living much further away.

Atomic Thunder unambiguously demonstrates that the most dangerous tests were the ones falsely described as “minor trials” at Maralinga. The worst of these, “Vixen B,” used high explosive to blow up plutonium, much of which was widely distributed as radioactive contamination. Frequent dust storms in the area repeatedly stirred up plutonium particles – and still do – which could be readily inhaled or ingested by unsuspecting Maralinga workers, Indigenous kids, tourists and many others.

The British maintained ridiculous levels of secrecy, partly to prove to the Americans that they were trustworthy and partly to hide the nature of these trials, the worst of which focused on developing the UK’s fission and fusion bombs.

These were not lab experiments. Big explosions created partial fission reactions and horrific plutonium contamination. Tynan nails this in her book – unlike a docile media that failed to report anything about the trials at the time.

The chapter on the Roller Coaster investigation shows that the British knew at the time that the extent of the plutonium contamination at Maralinga was far greater than they had reported to the Australian government. The government then relied on the false estimates, wrong by a factor of ten, to indemnify the British in 1968 against any future liability for cleaning the site up properly.

Deceit also applied to the standard weapons tests, including a bomb detonated in a ship’s hull. The British high commission told a silly lie to an Australian scientist worried about the grossly inadequate clean-up of the contaminated Monte Bello site: “Everyone knows when you explode a nuclear weapon on a ship, the whole ship is vaporised.” No one knows that for the simple reason it’s not true. As Tynan points out, the islands were contaminated with radioactive debris, including pieces from the ship’s large driveshaft.


It’s possible to see a direct line between the complex of forces described in Atomic Thunder and what is happening today. Immediately after the census computer system crashed recently, for instance, the ABC and other outlets turned to people like Peter Jennings, who heads the Australian Strategic Policy Institute, an organisation partly funded by arms manufacturers. Although Jennings could not possibly have known what had happened – no one did at that stage – he claimed that the Chinese government was the likely culprit. Given he couldn’t know, why ask him, let alone broadcast his incorrect answer?

As Media Watch later explained, the computer system basically crashed because it was overloaded, not because it was under attack by foreigners. Yet nonsense about a “denial of service attack” still pervades media accounts of what happened, including a recent Radio National Breakfast report. Before journalists add to the momentum building for a war with China, they should read Atomic Thunder from cover to cover to better appreciate the dangers of relying on “experts” embedded in a rambunctious national security establishment. 

Tynan’s focus on the damage done by secrecy has a compelling resonance at a time when a powerful new security establishment dominates much of Australian politics. Unlike the tepid D Notices of the 1950s and 60s, new laws now provide seven years’ jail for a journalist or anyone else who publishes or releases anything that might hold the security chiefs to account.

There is bipartisan political support for jailing journalists if they report anything about “a special intelligence operation,” even if it is badly bungled or inherently foolhardy. Journalists are not allowed to know what’s a special operation and what’s not, but that lack of information is no defence against a conviction.

It would be almost impossible to report a modern-day equivalent to past scandals, such as the reckless Australian Secret Intelligence Service raid on the Sheraton Hotel in Melbourne. The trainees wearing masks who ran around the premises with silenced machine guns were lucky not to be shot by the Victoria Police, who had not been included on the need-to-know list.

Following the destruction of Hiroshima and Nagasaki, Robert Oppenheimer – the head of the hyper-secret Manhattan Project, which developed the first nuclear weapons – said in a courageous speech at Los Alamos in 1945, “secrecy strikes at the very root of what science is, and what it is for… It is not good to be a scientist, and it is not possible, unless you think that it is of the highest value to share your knowledge” and that it is “a thing of intrinsic value to humanity.”

I would like to think something similar applies to journalists. It certainly applies to authors like Tynan. •

Brian Toohey spoke at the launch of Atomic Thunder in Sydney on 27 September.

The post The price of secrecy appeared first on Inside Story.

]]>
Malcolm Roberts versus a century and a half of science https://insidestory.org.au/malcolm-roberts-versus-a-century-and-a-half-of-science/ Wed, 31 Aug 2016 06:04:00 +0000 http://staging.insidestory.org.au/malcolm-roberts-versus-a-century-and-a-half-of-science/

Diary of a Climate Scientist | If the new One Nation senator wants empirical evidence, he can take his pick from 150 years of research, says Sarah Perkins-Kirkpatrick

The post Malcolm Roberts versus a century and a half of science appeared first on Inside Story.

]]>
The dust of the most recent Australian election has finally settled, and thanks to the double dissolution there are many new faces in the Senate. Among them, despite receiving only seventy-seven first-preference votes, is Malcolm Roberts, who holds the second One Nation seat for Queensland.

Roberts, a former coalminer and industry consultant, has quickly become Australia’s best-known, most outspoken and, if I may say, most absurd climate sceptic. Anyone who has seen Roberts during his recent TV appearances, especially on ABC TV’s Q&A, will be familiar with his belief that there is “not one piece of empirical evidence” that climate change is caused by humans.

I wonder, does Roberts know what empirical means? Synonyms of empirical include “observed” and “experimental” – both of which are used extensively in climate science – but here, for the sake of brevity, I’ll stick to observed climate evidence.

At the frontline of climate observation is a national and international network of point-based weather observation stations. In Australia, some of these date back as far as the 1860s, with many more set up in 1910. (The United States and Europe have longer-established stations, and a network of British stations provides observations from as far back as 1659.) Since 1979, gridded temperature readings, meticulously assembled from the highest-quality stations, have been supplemented by satellite observations. The story all these readings tell is the same: global temperature is well and truly on the rise.

Other observed indicators of current and past climate have also become available. Ice cores, for example, can tell us what the climate was like tens of thousands, even hundreds of thousands, of years ago. As new ice accumulates, air bubbles are trapped, and with them the greenhouse gas concentrations at that time, including carbon dioxide. Since carbon dioxide traps heat in the lower ten to twelve kilometres of the atmosphere, it is a good indicator of past temperature. These ice cores indicate that global temperature is not only higher, but also increasing at a rate faster than any seen during the last 800,000 years (though likely longer).

We also have extensive glacier records, sea-ice observations, sea-level altimeters, and ocean observations. All of these empirical observations are clearly consistent with a warming climate. Glaciers are steadily retreating or disappearing altogether, the Arctic sea-ice level was at its lowest on record last winter, sea levels elsewhere are rising at increasingly faster rates, and the global ocean is steadily warming.

If all this isn’t empirical evidence of a warming climate, then nothing is.

Malcolm Roberts also thinks that scientists have it the wrong way round: that rising temperatures cause carbon dioxide to increase, rather than vice versa. He might need to have a look at the work of nineteenth-century scientists John Tyndall and Svante Arrhenius, who (respectively) demonstrated how greenhouse gases work, and revealed the specific heat-trapping properties of carbon dioxide.

Tyndall’s and Arrhenius’s lab experiments demonstrated that under specific, very cold conditions – conditions that occur ten to twelve kilometres from the Earth’s surface – carbon dioxide absorbs and re-emits particular bands of long-wave radiation that amplify heat trapped in the atmosphere. Their work was undertaken well before anthropogenic climate change was even a “thing,” though Arrhenius did hypothesise that burning fossil fuels could increase global temperatures. In other words, the science of climate change is underpinned by physical laws demonstrated in lab-based experiments.

Another chestnut Roberts relies on is the view that “there has been no global warming for about twenty years.” Well, fourteen of the fifteen hottest years on record have occurred since 2000, each of the past three decades has been hotter than the previous one, and 2016 is forecast to be the hottest year globally, surpassing both 2014 and 2015. Meanwhile, all other indicators of anthropogenic climate change discussed above have continued to get stronger over this period. There is variability in the climate system, of course, and so even with an underlying warming signal, non-record-breaking years are still likely. Many studies have shown that the lack of a clear trend in global temperature between 1998 and 2014 was due to this sort of variability. Roberts would be wise to realise that this misconception has been debunked many times.

During Q&A, Roberts also said that the 1940s were warmer than now. It’s here that it is clearest how little time and energy Roberts has invested in his research. The 1940s comparison goes back to an old argument over which was the hottest year – in the United States. About a decade ago, there were, ahem, heated discussions as to whether 2006 (and prior to this, 1998) was hotter than 1934 (yes, Roberts even got the decade wrong). In fact, 1934 ranks as the fifth hottest year for the US, behind 2015, 2012, 2006, and 1998. It is important to remember that regional climates change at different rates to the global average, thanks to the mix of influences they experience.

Now comes my favourite – the argument that nature alone determines the level of carbon dioxide in the atmosphere. If, by nature, Roberts means the oceans and the biosphere acting as carbon sinks, then I guess there is some distant truth in the statement, for nature does indeed remove this gas from the air. But all sinks have their limit, and as the oceans continue to warm, their ability to suck up atmospheric carbon diminishes. We are also clearing forests at alarming rates, and re-releasing the carbon they have sequestered for hundreds, if not thousands, of years. On top of this, we’re knowingly burning ancient carbon sinks in the form of fossil fuels – something that nature did not intend.

Not to be outdone by his milder fellow-travellers, Roberts also accuses NASA, the United Nations, the CSIRO and the Bureau of Meteorology of being corrupt participants in a global scam. Honestly, I don’t even know where to start with this one. Roberts believes that NASA put men on the moon yet can’t maintain an accurate global temperature dataset. He thinks that all climate scientists are in on the scam, though I can’t imagine why, particularly when we have to deal with foolish and reckless sceptics like Roberts himself.

In a recent interview, esteemed climate communicator John Cook stated that it is unlikely the plethora of actual evidence will ever change Roberts’s view. But it is disturbing that the fallacies our new senator perpetuates may sway some people. We can only hope that their research into anthropogenic climate change is conducted more comprehensively than Roberts’s. •

The post Malcolm Roberts versus a century and a half of science appeared first on Inside Story.

]]>
Red spot specials: the fall and rise of Australian measles https://insidestory.org.au/red-spot-specials-the-fall-and-rise-of-australian-measles/ Fri, 11 Mar 2016 01:22:00 +0000 http://staging.insidestory.org.au/red-spot-specials-the-fall-and-rise-of-australian-measles/

Vaccination is not only justified by self-interest. It is also an act of altruism

The post Red spot specials: the fall and rise of Australian measles appeared first on Inside Story.

]]>
By the time I was born into the virological soup of suburban Melbourne in 1960, the last Australian polio epidemic was behind us and the remaining childhood diseases were vying for most-feared status. Scarlet fever and diphtheria had gone and rheumatic fever was fast receding. For my parents, measles had taken on the mantle of the “bad” (if routine) childhood illness. Measles made you feel terrible and its complications could be severe.

Most kids contracted the illness by the age of six, so they had little personal memory of it, but their parents certainly witnessed the distress it could cause. My measly memory is of lying in a darkened bedroom listening to the hushed conversation of my mother and our GP outside the door. I was six or seven at the time and I later learnt that I was a bit delirious and my mother was worried that I had developed one of the uncommon but feared complications, encephalitis, or inflammation of the brain.

The measles virus is present in the respiratory secretions of an infected person and is transmitted to others through inhalation or contact with the conjunctivae. The virus multiplies in the nose, throat and airways of its new host who, with each cough and splutter, launches viral particles into the air in aerosols. The virus can persist in a room for two hours after an infected person leaves. It usually takes ten to fourteen days after exposure (the range is six to twenty-one days) for the first signs of the disease to appear. These are known as the prodromal symptoms and include fever, tiredness, irritability, cough, runny nose and conjunctivitis. Two to four days after the prodrome commences, a striking rash appears, usually beginning behind the ears and spreading across the face and neck and then progressing to cover the entire body. The rash starts as small, discrete, slightly raised, red blotches but eventually these may all join up into one confluent rash.

Very few doctors who graduated after 1980 will have seen a real patient with this clinical constellation, and it is not uncommon for a case of measles to be misdiagnosed as a drug reaction (which is the most likely explanation for a measles-like rash in countries where vaccination rates are high). About 30 per cent of measles cases lead to complications, the most common being diarrhoea, otitis media (middle ear infection) and pneumonia. Encephalitis is much rarer (about one in 1000 cases) and the rare but devastating long-term sequela known as SSPE (subacute sclerosing panencephalitis) usually appears seven years after the initial illness. SSPE produces a progressive neurological deterioration that affects behaviour, intellect and movement and is universally fatal. I saw one case in 1982 when I was a student but it has now all but disappeared in the developed world.

If you are an Australian reader older than forty-eight, then you can be 97 per cent confident that you had measles as a child. If you are younger than that, then it is still highly probable that you are immune, but the more youthful you are the more likely your protection will be due to vaccination rather than to a “natural” infection. Vaccination is a victim of its own success – contemporary measles vaccine coverage in the developed world is such that a disease that was once a near-ubiquitous rite of childhood passage is no longer part of the received wisdom of twenty-first-century mothers and fathers. The fading folk memory of infectious diseases has allowed the specious arguments of the anti-vaccination lobby to gain considerable traction among anxious young parents. If you listen to the propaganda of the “anti-vaxxers,” you would wonder what the fuss about the disease was in the first place. One zealot has even produced a children’s picture book that extols the virtues of “natural” measles infections.

So why should we worry about measles? Obviously a disease that affected nearly 100 per cent of the population must have had a relatively low fatality rate or there wouldn’t be so many of us baby boomers. In the mid-twentieth-century United States, for example, the measles case-fatality rate was less than 1 per cent (in other words, fewer than one in a hundred children who contracted the disease died). By 1990 the rate had fallen to around 0.05 per cent, or five in 10,000.

But the raw numbers can be striking. In Victoria, for instance, between 600 and 800 children were hospitalised during each of the state’s four major measles epidemics between 1960 and 1970, and 146 deaths were attributable to measles over roughly that period.

The fatality rates of measles fell across the West during the twentieth century, probably as a result of better nutrition and improvements in health services. But measles remains a dreaded and deadly disease in the developing world. There, the case-fatality rate ranges from 0.5 per cent to 6 per cent depending on the region, and during war and famine it can approach 30 per cent. Despite remarkable gains made over the past thirty years because of vaccination, the World Health Organization, or WHO, estimates that 114,900 measles sufferers died worldwide in 2014.


In 1954 an eleven-year-old boy named David Edmonston contracted measles in his boarding school in Boston, Massachusetts. At the time, virologist John Enders (who won a Nobel prize for work that led to the polio vaccine) was scouring the local residential institutions in search of measles virus samples. After dozens of failed attempts with other patients, Enders was able to grow measles from a swab that he took from David’s throat. Over several years he cultivated the virus in his laboratory, allowing it to become weaker with each passage through the cell culture. This “attenuated” strain was successfully tested as a vaccine in monkeys and then in humans. The first to receive the vaccine were severely disabled children with conditions such as Down syndrome, microcephaly and cerebral palsy, who were living in an institution in Massachusetts. In a move unusual for the 1950s, the researchers obtained written consent from the parents of each child before administering the vaccine. The results were remarkable: all of the vaccinated children were protected from measles during the next outbreak.

In 1963, after further studies confirmed its safety and effectiveness, the Edmonston strain was used in the first US measles vaccine. Within a decade the annual number of cases fell from several million to several thousand, an effect that was seen in every country where the vaccine was rolled out. Even in the developing world the number of cases fell dramatically. During 2000–14, the number of annually reported measles cases worldwide decreased by 69 per cent and it has been estimated that 17.1 million deaths were prevented by measles vaccination during that same period.

Even as the collective memory of the disease’s severity faded, the safety of the measles vaccine was never brought into question – not, at least, until a paper was published in the prestigious medical journal The Lancet in 1998, suggesting that autism was linked to the measles, mumps and rubella, or MMR, vaccine. This work was subsequently shown to be scientifically invalid and the lead author, Andrew Wakefield, was deregistered by Britain’s General Medical Council, which ruled that he had acted unethically and shown a “callous disregard” for the children involved in the study. Since then, carefully conducted, large-scale studies across the world have failed to show any association between MMR and autism, yet by 2004 measles vaccination rates in Britain had fallen to 80 per cent and measles cases had skyrocketed. Modern confidence in vaccines is very fragile and it took ten years for coverage to return to the levels seen before the autism scare.

Despite this setback, in March 2014 the WHO announced that measles had been eliminated from Australia, though the news didn’t make much of a splash in the media. Indeed, in light of stories about measles outbreaks in returned travellers and groups with anti-immunisation views, many readers would have been puzzled by the announcement in the first place. If outbreaks still occur, how can the disease be “eliminated”? It’s all a matter of definition. Elimination means local transmission of a disease has ceased, but it doesn’t mean that the disease can’t be reintroduced from countries that haven’t eliminated it. Eradication, on the other hand, would mean there were no more cases of measles anywhere in the world. Only one human disease – smallpox, in 1980 – has ever been eradicated (although the global eradication of polio is tantalisingly close). Dozens of countries have achieved measles elimination but eradication is not currently on the WHO agenda.

To understand why, we need to understand herd immunity: the indirect protection that a non-immune individual receives from the immune members of the same population. The proportion of the “herd” that needs to be immune varies between diseases. For some, a surprisingly modest proportion needs to be immune to stop an epidemic from occurring. Only around 80 per cent of the population needed to be vaccinated against smallpox to achieve global eradication and only around 60 per cent vaccination coverage would be required to protect against a pandemic swine flu epidemic. How high does it have to be for measles?

You can determine the threshold for population immunity once you have established how many secondary infections one infected person produces. This is known as the reproductive number, or R0. (R0 is one of the most useful numbers in infectious diseases, so even if you are maths-phobic stay with me for another three paragraphs). If R0 is less than 1, then the disease will disappear; if it is greater than 1, there will be an epidemic. The more contagious the infection is, the more likely that R0 will be higher. (Contagiousness is just one of many factors that need to be considered in calculating R0, but here I will restrict my discussion to this key factor.) The degree of contagiousness varies considerably from one infection to another: most forms of leprosy are hardly contagious at all, for instance, so R0 in almost every population is less than 1; Ebola requires direct contact with infected blood or bodily secretions so it was relatively easy to reduce R0 below 1 once appropriate infection control was instituted during the 2014–15 epidemic. Even influenza is less contagious than most people think; the pandemic swine flu of 2009 had an R0 between 2 and 3; smallpox was about 5. But measles is probably the easiest infection on the planet to pass from one person to another – its R0 is between 12 and 20.

Once you have calculated R0, a simple piece of arithmetic produces the proportion of people who need to be immune to achieve herd immunity. This is known as the herd immunity threshold, or HIT. The HIT is 1 minus 1/R0.

For smallpox, where R0 = 5, the HIT = 1 minus 1/5 = 4/5 = 0.8 or 80 per cent.

For measles, where R0 = 15, the HIT = 1 minus 1/15 = 14/15 = 0.933 or 93.3 per cent.

So, to achieve local measles herd immunity, more than 90 per cent of the population has to be immune. For eradication, this would have to be repeated everywhere across the globe. It seems achievable: in 2015 the average level of vaccine coverage in Australia for five-year-olds was an impressive 93.7 per cent. Unfortunately, this does not mean that all those children are, or will remain, immune. While natural infection with measles confers lifelong immunity to nearly 100 per cent of those affected, measles vaccine is between 90 and 95 per cent effective at inducing immunity – and this may wane over time. Therefore the vaccine coverage required to reach the HIT has to be greater than the 93.3 per cent we calculated above. If we generously assume 95 per cent vaccine efficacy, the actual proportion of the population that must be vaccinated is 93.3 per cent ÷ 0.95 = 98.2 per cent, a level that has never been achieved in Australia. With the current vaccine it may be that eradication is simply not a reachable goal.
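
For readers who want to check the arithmetic, here is a minimal sketch in Python (mine, not the author’s) that reproduces the figures above; the R0 values and the 95 per cent efficacy assumption are the ones quoted in the text.

def herd_immunity_threshold(r0):
    # Proportion of the population that must be immune: HIT = 1 - 1/R0
    return 1 - 1 / r0

def required_coverage(r0, vaccine_efficacy):
    # Coverage needed when the vaccine itself is imperfect: HIT divided by efficacy
    return herd_immunity_threshold(r0) / vaccine_efficacy

print(f"Smallpox, R0 = 5: HIT = {herd_immunity_threshold(5):.1%}")             # about 80 per cent
print(f"Measles, R0 = 15: HIT = {herd_immunity_threshold(15):.1%}")            # about 93.3 per cent
print(f"Measles coverage at 95% efficacy: {required_coverage(15, 0.95):.1%}")  # about 98.2 per cent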


Herd immunity reminds us that vaccination is not purely a self-interested act but also one of altruism and comity. I believe that a modern society has a duty to achieve herd immunity so that it can provide protection for the small group who are unable to be vaccinated for medical reasons (those with severe impairment of their immune systems, for instance, due to congenital illnesses, cancers, HIV or organ transplantation), for those who did not respond to the vaccine and for older people whose vaccine immunity has waned over time. But the success of vaccination has resulted in a population that has been spared the powerful direct experience of measles (and other infections) and that may be more susceptible to the superficially appealing reasoning of the anti-vaccination movement. While the anti-vaxxers’ effect on vaccine coverage is probably marginal, we have seen that there isn’t much buffer in vaccine coverage rates; small falls in coverage may be all that is required for herd immunity to be lost and for a vaccine-preventable disease to re-establish itself.

I also fear for the children of the anti-vaxxers in the coming years. Denied the vaccine themselves, but temporarily protected by the herd immunity generated by their responsible neighbours, they will grow into susceptible adults who travel to countries where herd immunity for most infectious diseases is low or non-existent. There, these immunological innocents will encounter vaccine-preventable illnesses such as rubella, mumps, pertussis, chickenpox, meningococcal disease, Hib, hepatitis A and B, diphtheria and tetanus, to name just a few. And they will put the ideology and pseudoscience of their parents to the test in a natural experiment that no ethics committee would ever dare to approve. •

The post Red spot specials: the fall and rise of Australian measles appeared first on Inside Story.

]]>
CSIRO and climate: the devil in the detail https://insidestory.org.au/csiro-and-climate-the-devil-in-the-detail/ Thu, 25 Feb 2016 01:14:00 +0000 http://staging.insidestory.org.au/csiro-and-climate-the-devil-in-the-detail/

Diary of a Climate Scientist | Cutting funding at this stage of climate change research comes with enormous risks, writes Sarah Perkins-Kirkpatrick

The post CSIRO and climate: the devil in the detail appeared first on Inside Story.

]]>
Around mid morning on 4 February the news broke that Larry Marshall, chief executive of CSIRO, was swinging the axe on hundreds of jobs. Unlike previous rounds of redundancies, these are tightly targeted – mostly focusing on climate research capabilities in the Oceans and Atmospheres Division and the Land and Water Division, each of which will lose around one hundred people.

Back before the Coalition came to government, when I was working at CSIRO as a climate projections scientist, the prospect that Tony Abbott might become prime minister was causing widespread unease among scientists and other staff. The fears proved to be justified – the organisation’s headcount was reduced by more than 20 per cent after the change of government. The broader scientific community thought that those cuts took CSIRO’s staffing to rock bottom, but we were wrong. And now it seems likely that the organisation’s climate research capability will be altogether starved of fuel.

The current “process” (to use Marshall’s term) took everyone by surprise, including CSIRO’s partners in universities, government departments and industry. One of them, the Bureau of Meteorology, received only twenty-four hours’ notice. It also showed an appalling lack of respect for the affected CSIRO staff, and suggested that the organisation’s executive team lacked an understanding of exactly what these key divisions do and how the cuts would play out nationally and globally.

CSIRO employees learned of Marshall’s plans via social media and news outlets. Only two hours later did an email from the CEO go out to staff. Amid four “rambling” pages about the new business goals of the organisation – which, perhaps unsurprisingly, were eerily similar to those of Silicon Valley-based companies, whose goals are quite different from the CSIRO’s – was a plainly wrong observation that the question of climate change “has been answered.”

Marshall backed up this statement with a reference to the Paris Agreement of late 2015. In fact, the international scientific community agreed on the warming influence of human activity in the first Intergovernmental Panel on Climate Change report, published in 1990, with many underpinning studies dating from even earlier. So the “question” was actually answered over two decades ago. But there are many devils in the detail. Which regions have warmed most or will warm faster? How will rainfall change? How will climate variability, such as the El Niño/Southern Oscillation phenomenon, change? How will both rainfall and temperature extremes change, and at what pace?

The degree to which these questions, and many others, have been researched (or, in Marshall’s terminology, answered) varies. Indeed, for effective adaptation to be implemented, we need to understand these changes on fine temporal scales, which is still a very challenging task. And like any challenge, it will be met only through continual work and research, as part of a global effort in which CSIRO participates.

It has taken many years and countless billions of hours’ work to build our current generation of climate models. They are based on what we know about the climate system, as understood by observations. Over a four-dimensional grid, the models simulate changes and interactions of the atmosphere and ocean, including the effects of topography, the land surface, and how the system responds to changes in greenhouse gases.

The next step is a finer-grained analysis. The current models run, at best, on a 1° x 1° grid (approximately 100 km x 100 km). We simply do not have the capacity to run global climate models at higher resolutions than this, nor the computational resources to process and store the resulting data. During the federal Senate estimates hearing on 11 February, Marshall stated that the Australian climate model, ACCESS, in which CSIRO plays a large supportive and development role, would still exist. This is only technically true. The model’s code and algorithms will still exist, but the staff who develop and support the model – the people who develop its capability and usability – will not. Australia will no longer be a global player in the crucial development of climate models.

It’s also important to remember that appropriate adaptation and mitigation practices go hand-in-hand with state-of-the-art climate modelling. CSIRO has a solid history of constructing and assessing climate projections for Australia based on the most up-to-date global climate model ensembles. The latest, and arguably the most detailed and user-friendly, was released in 2015, the one before that in 2007. It would be irresponsible to construct adaptation and mitigation policies on the latter, superseded assessment. The cuts that Marshall proposes will strip CSIRO of its capacity to construct and assess the most up-to-date climate projections.

Almost 3000 climate scientists worldwide have signed a petition imploring the Australian government to stop these cuts, and many more citizen-led petitions exist. World-leading scientists and organisations, including the World Climate Research Programme, a division of the World Meteorological Organization, are shocked, baffled, and extremely concerned by what these cuts mean for climate monitoring and modelling globally. CSIRO has always been a key link in the chain; its removal will damage not only our monitoring and understanding of changes in the Australian climate, but also the global climate effort.

The tangible benefit of science research, and especially of climate science, is not always blindingly obvious. The development of the ACCESS model, for instance, is not going to generate direct income for CSIRO, but its improvement has the potential to save many billions of dollars in mitigation efforts in years to come, both in Australia and globally. CSIRO’s chief executive says that the notion of “customers” for science research is new, yet he seems unaware of current and future global clients for the climate science research that CSIRO undertakes.

There is one thing Larry Marshall and I agree on: we need to increase our adaptation and mitigation outputs. But this must not be done at the expense of the underpinning climate science research. While Marshall has projected that over the course of the next two years no net job losses will occur within CSIRO, well over two hundred positions will be redeployed or completely cut from CSIRO’s climate science capability and knowledge pool – knowledge that has taken decades to build. •

The post CSIRO and climate: the devil in the detail appeared first on Inside Story.

]]>
Innovation: the test is yet to come https://insidestory.org.au/innovation-the-test-is-yet-to-come/ Thu, 10 Dec 2015 13:29:00 +0000 http://staging.insidestory.org.au/innovation-the-test-is-yet-to-come/

Education is the sector that most urgently needs to be freed from the Abbott legacy, writes John Quiggin

The post Innovation: the test is yet to come appeared first on Inside Story.

]]>
Ever since his ascent to the prime ministership, Malcolm Turnbull has represented a puzzle. Is the leadership change, as Tony Abbott and Labor argue, a mere change in symbolism masking a continuation of the substantive policies of his predecessor? Or is he offering something genuinely new?

The release of the PM’s National Innovation and Science Agenda raises this question in a particularly acute form. Turnbull has made this his signature initiative, and has promised a change in direction for the nation. But the statement is, at best, a first step in the direction of a knowledge-based, innovation-oriented society.

In symbolic terms, the contrast with his predecessor could scarcely be sharper. Tony Abbott was, by a wide margin, the most anti-science prime minister in Australian history. While the political requirements of his office imposed some restrictions on what he himself could say, his chosen cheer squad showed no such restraint.

Maurice Newman, his business adviser, claimed that the scientific community was engaged in a UN-backed conspiracy to achieve world domination. Nick Minchin, who helped organise Abbott’s coup against Turnbull, has expressed similar views.

Media supporters like Andrew Bolt and Miranda Devine have taken these conspiracy theories to their logical conclusion, attacking scientists and science in general. For Bolt, the “common sense of the layman” (that is, Bolt and his middle-aged male readers) is the touchstone against which the feeble efforts of entire scientific disciplines must be judged.

Miranda Devine is even more explicit in her hostility to scientists. “Environmentalism is the powerful new secular religion and politically correct scientists are its high priests…” she writes. “It used to be men in purple robes who controlled us. Soon it will be men in white lab coats. The geeks shall inherit the earth.”

Abbott’s policies, beginning with his decision to scrap the position of science minister, reflected these views. His government cut funding for CSIRO and, of course, for everything connected with climate change, while providing lavish funding to address superstitious fears about the supposed health risks of wind farms, appointing a part-time commissioner on a $600,000 contract to investigate.

Turnbull has dropped Abbott’s anti-science rhetoric and dispensed with the services of Maurice Newman. He has also reversed many, though not all, of Abbott’s most egregious culture war policies.

The innovation announcement takes this process a step further. The cuts to CSIRO have been reversed, and the threat of a cut in university research funding withdrawn. These steps have been accompanied by a set of smaller initiatives, ranging from tax concessions for innovative small business to the promotion of science and maths in schools. The total cost of the package is estimated to be around $250 million a year (or, in the form governments invariably announce such measures, $1 billion over four years).

To be sure, this is more than symbolism. But the real test is still to come. Abbott’s explicit attacks on science might have attracted plenty of pushback, but they served to give cover to measures with much larger implications for our prospects of becoming a genuinely innovative and ideas-oriented country.

Innovation starts with education and knowledge, and education starts in school. The Gonski funding measures, introduced late in Labor’s last term, finally provided a policy framework based on the actual needs of school students rather than the historical outcomes of decades-old debates about state aid to private schools.

Yet Wayne Swan’s deficit obsession led to the absurd decision to fund the Gonski measures by cutting university funding. This silliness has rightly been repudiated by Labor in opposition. Meanwhile, Abbott and Joe Hockey sought to pocket the university cuts while welshing on the Gonski promises.

In opposition, Abbott promised to match Labor’s commitment, claiming that there was not a “cigarette paper’s” distance between the two parties on education. Labor’s policy included six years of guaranteed funding, with the implication of a continued requirement for higher school spending into the future. Once in government, Abbott announced that his commitment was only good for four years, a period that is about to end.

Turnbull has to decide whether to deliver on the promise broken by his predecessor. That will cost the budget $3.8 billion in the first instance, with more to come.

Such a step would cause huge difficulties for treasurer Scott Morrison, who continues to insist that Australia has an expenditure problem, not a revenue problem. Morrison has acquiesced in the innovation agenda on the basis that the cost will be offset by cuts to be announced in the future.

The reality, though, is that if we are to be a successful and innovative country, we need to spend more on education at all levels, as well as on research. Sooner or later, that means a need for more revenue, not less. •

The post Innovation: the test is yet to come appeared first on Inside Story.

Asking the right questions about extreme weather https://insidestory.org.au/asking-the-right-questions-about-extreme-weather/ Tue, 24 Nov 2015 03:32:00 +0000 http://staging.insidestory.org.au/asking-the-right-questions-about-extreme-weather/

Diary of a Climate Scientist | It’s not a simple case of cause and effect, writes Sarah Perkins-Kirkpatrick

The post Asking the right questions about extreme weather appeared first on Inside Story.

Friday was the second-hottest November day ever recorded at Sydney’s Observatory Hill, and brought the hottest temperature ever recorded this early in the season in the harbour city. In fact, only a handful of Sydney November days have ever cracked the 40°C mark. Friday was the last in a series of days that made up a short, sharp spring heatwave.

When extreme events like this occur, I’m often asked whether climate change is to blame. It’s not a question we climate scientists can answer with a simple “yes” or “no,” and people often seem a little upset when I say that it’s the wrong question to ask or, if I’m feeling a little facetious, when I say, “Maybe.”

In fact, an entire research field is dedicated to working out whether we can quantify the role of human activity in extreme events. This field, known as detection and attribution, has been growing rapidly over the past decade. But the questions it seeks to answer are a bit different from the one I get asked when it’s very hot, and they don’t often have binary answers.

The questions that scientists ask are about how human activity has altered – rather than created – particular types of extreme events. They may be questions about how climate change has contributed to observed trends, which are answered using “optimal fingerprinting” (I’ll return to this in a future column). But what I want to focus on here is the process that helps us answer the question that arises when a heatwave, drought or intense rainfall event has occurred. To do this we undertake a “fraction of attributable risk” analysis.

This is not as complex as it sounds. It simply involves comparing the frequency of a particular event using two distinct and unrelated samples. A classic and well-used analogy is the link between smoking and lung cancer. In order to work out whether smoking increases the risk of lung cancer, the incidences of the disease among smokers and non-smokers are compared. The corresponding answer, or fraction, tells us whether the risk is altered between the two groups, and therefore whether an increased risk can be attributed to a particular cause.

In climate science, our two distinct groups are produced by physical models. We have one bunch of experiments that represent the climate as it would be had the industrial revolution never occurred, with atmospheric carbon dioxide concentrations held at 280 parts per million. We have another bunch that represent the climate we actually live in, where atmospheric carbon dioxide levels rise through time due to human activity, following the observed record. All we need to do is compare how often our event of interest occurs in each group of experiments; the resulting fraction measures how much of the present-day risk is attributable to human influence on the climate, namely rising carbon dioxide levels. Any fraction above 0 means an increased risk of that event. A fraction of 0.5, for example, means the event now occurs twice as often as it did before we started driving climate change.
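
To make the numbers concrete, here is a minimal sketch in Python of how such a comparison might be set up. It is illustrative only: the two “ensembles” are synthetic random numbers standing in for climate model output, and the 41°C event threshold is an invented example rather than a value from any real attribution study.

import numpy as np

rng = np.random.default_rng(42)

# Illustrative stand-ins for model output: summer maximum temperatures (deg C)
# from experiments without the industrial era (CO2 held near 280 ppm) and
# from experiments that include the observed rise in greenhouse gases.
natural = rng.normal(loc=38.0, scale=1.5, size=10_000)
all_forcings = rng.normal(loc=39.0, scale=1.5, size=10_000)

threshold = 41.0  # hypothetical definition of the "event": a day above 41 deg C

# How often the event occurs in each world
p_natural = (natural > threshold).mean()
p_forced = (all_forcings > threshold).mean()

# Fraction of attributable risk: FAR = 1 - p_natural / p_forced.
# FAR = 0 means no change in risk; FAR = 0.5 means the event now occurs
# twice as often as in the pre-industrial experiments.
far = 1.0 - p_natural / p_forced
print(f"p(natural) = {p_natural:.4f}  p(forced) = {p_forced:.4f}  FAR = {far:.2f}")

Read the other way around, the ratio p_forced / p_natural is simply how many times more likely the event has become – the form in which results such as “six times more often” are usually reported.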

It is important to make the distinction between changes in the risk of an event and whether climate change is its entire cause. Again, this is where the smoking analogy is useful. While lung cancer is not solely caused by smoking, we can now confidently say that your risk of suffering the disease considerably increases if you smoke. While extreme temperatures still occurred before the industrial revolution, the probability of their occurrence continues to increase alongside our greenhouse gas emissions. A heatwave like the one over much of Australia last week could have occurred without any human influence, but events like it now occur more often because of human influence.

Among the many studies that have used this methodology, those that have investigated hot temperature events have almost always found an increased risk due to anthropogenic influence. Take, for example, Australia’s hottest summer on record in 2012–13: we now know that summers of this intensity occur six times more often than they did before the industrial revolution. The likelihood of the kind of heatwave we experienced in May last year has increased twenty-fold due to human influence, and Australia is now six times more likely to experience a month as warm as the hottest month on record (most recently in October).

A pertinent example of this is the 2003 European heatwave. In one of the first attribution studies ever conducted, it was found that the risk of such a heatwave had at least doubled due to anthropogenic climate change. Yet a more recent study has shown that, with the rise in atmospheric carbon dioxide levels, such an event is now twenty times more likely. The risk will only continue to grow in the coming decades.

Results about the altered risk in rainfall events are a little less consistent. No altered risk due to human influence was found, for example, in the extreme rainfall over southeast Australia during the 2011–12 summer. But human influence did increase the likelihood of floods in Britain in 2000. The evidence also suggests that human influence on the risk of extreme rainfall events will increase over the coming decades.

While “fraction of attributable risk” is a simple and effective calculation, it needs to be repeated for every new event that occurs. Each analysis depends on the magnitude of the event, how long it lasted and the size of the area it covered. And this is a reminder that a change in the risk of one event can’t be transferred directly to another.

Even though they seem similar on the surface, two events may have a different combination of magnitude, duration and extent, which affects the derived change in frequency due to anthropogenic climate change. That’s why the question is not whether climate change is the cause of an extreme event, but whether we are seeing more of these events because of climate change. •

The post Asking the right questions about extreme weather appeared first on Inside Story.

How should we feel about climate change? https://insidestory.org.au/how-should-we-feel-about-climate-change/ Wed, 23 Sep 2015 13:35:00 +0000 http://staging.insidestory.org.au/how-should-we-feel-about-climate-change/

Diary of a climate scientist | Where do emotions fit into the work of scientists who study climate, asks Sarah Perkins

The post How should we feel about climate change? appeared first on Inside Story.

Among the many activities at this year’s National Science Week was the Is This How You Feel? project, an exhibition of letters from climate scientists describing how the implications of climate change make them feel. This novel and successful project was the brainchild of Joe Duggan, a masters student in science communication, and I was delighted to contribute.

After the original project was launched in Australia, it went global. Letters have now been received from leading scientists all over the world. James Byrne lists why he feels afraid, angry, frustrated, sad and bewildered. Stefan Rahmstorf powerfully likens anthropogenic climate change to a recurring nightmare. Lesley Hughes makes clear how much we have to lose, and Katrin Meissner expresses her frustration and fear at the lack of acceptance of the facts. But it is worth pointing out that almost every letter conveys feelings of excitement or hope that it is still possible to make a difference.

Of course, as is the way in modern climate science, not everyone agreed with what Duggan was trying to do. Among the points raised to discredit the project was the argument that, as scientists, we should be completely objective – that no feelings, emotions, reactions or sentiments are permitted on the issue of climate change. It might not come as a surprise to hear that I am at odds with climate sceptics once again.

As scientists, we are trained to be objective. We are trained to be critical, even “sceptical,” and never to take anything at face value. Everything is scrutinised, and our analyses are repeated many, many times. This technique is so important that it is taught in secondary school and is consistent across all the sciences. In plain English: never believe your first result. Only after you have replicated it again and again might it even have a shot at being accurate.

We are also slaves to the scientific method. This approach has been around for hundreds of years and governs how scientists test their hypotheses. The experimental replication explained above is part of it, as is the gathering of suitable data, based on your initial observations, on which your experiment is then performed. But the method also involves attempting to prove your hypothesis wrong by changing your approach. If you get the same answer regardless of which (credible) experimental design you use, that is further evidence the theory is supported.

Scientists are also extremely critical of each other’s work. The best example of this is the peer review system. Once researchers have painstakingly applied the scientific method to test whether their hypothesis holds, they submit their methods, results and inferred conclusions to an appropriate journal of their field. Peer review dictates that before the article is even considered for publication, expert reviewers examine the quality and appropriateness of the experimental design and the conclusions it underpins. In some cases, the study may need to be completely overhauled, or even rejected for publication altogether. Only once a paper has passed peer review can a study possibly be considered as acceptable scientific evidence.

All of the above most certainly applies to climate science – there are no exceptions. We are excruciatingly careful to make sure our results are as robust as possible, and to outline the quantifiable confidence around them as precisely as we can. (Our jargon term for this, which is commonly misunderstood, is uncertainty.) No feelings, emotions or sentiments are involved in reaching our conclusions, ever.

Indeed, even the raw conclusions themselves shouldn’t be subject to feelings. We have employed specific methods to objectively assess a physical state, with our conclusions indicating how and why that state may or may not be changing. The end.

But we are also humans, and as human beings we are permitted to react to the implications of these conclusions. Many thousands of scientific studies on human-caused climate change have been conducted over the past few decades. The overwhelming majority (hovering between 97 per cent and 99 per cent) indicate that human-caused climate change is happening and that its unmitigated impacts can and will be severe. These impacts are many and varied, including the triggering of physical processes that will further accelerate the changes, permanent losses in our ecosystems, and severe financial burdens because of damage to swathes of industries.

I’m not arguing that we should become emotional wrecks when we are faced with the implications of climate change. But a future in which not enough has been done in response is a very worrying prospect. And it is depressing that – despite the plethora of scientific evidence demonstrating climate change is already occurring – we are currently not doing enough to safeguard the future.

So, even as a climate scientist, you can have all these thoughts and feelings about climate change. I don’t believe that interferes with the quality of the work we do. We will always be bounded by the scientific method in obtaining our results and conclusions. This is, at least to us, very clearly separate from the broad implications of our findings. Indeed, in this letters project climate scientist Ruth Mottram highlighted the difference between what we think and feel, and how the latter can be difficult to describe. We do our work because of our passion and natural curiosity to understand how the earth system works, not because we are compelled by negative emotions. These can be, and are, kept separate.

Duggan’s campaign demonstrated that climate scientists are humans, and that fact doesn’t need to undermine our objectivity and credibility. •

The post How should we feel about climate change? appeared first on Inside Story.

Wherever you are, heatwaves are getting relatively worse https://insidestory.org.au/wherever-you-are-heatwaves-are-getting-relatively-worse/ Tue, 25 Aug 2015 01:15:00 +0000 http://staging.insidestory.org.au/wherever-you-are-heatwaves-are-getting-relatively-worse/

Diary of a climate scientist | Even the “top of Europe” suffers in a heatwave, writes Sarah Perkins. And worldwide they’re becoming more frequent and more intense

The post Wherever you are, heatwaves are getting relatively worse appeared first on Inside Story.

Not long ago my partner and I were holidaying in Switzerland, revelling in the glorious warm summer weather. It seemed like an ideal time to venture up to Jungfraujoch and enjoy “the top of Europe” in all its snow-covered glory. So there we were, only 200 metres from our destination, when the train suddenly stopped and then retreated, backwards, down the mountain.

We found out from a local that we couldn’t go any further because the tracks had swelled too much in the heat. The same thing had happened just a couple of weeks earlier. The villagers were understandably confused – after all, the mountain is covered in snow and glaciers, and it’s cold up there all year round.

When we talk about heatwaves, it’s important to remember that everything is relative. The temperature in Grindelwald that day was a glorious 28˚C – perfect for two Australian tourists. But a lot of the locals were complaining about the heat, and about the fact that it had gone on for far too long.

It was a little chillier at the top of the mountain, of course. About 10˚C, in fact. But the average July maximum temperature up there is just over 3˚C. The infrastructure used to haul thousands of tourists up the mountain every day is built for the cold, and it couldn’t cope in the relative heat. In this case the problem was compounded by the fact that the hot weather had lasted for a couple of weeks because of the heatwave engulfing Europe. Too much (relative) heat for too long and systems fail.

For us, the impact of the heatwave was trivial – we didn’t get to see the gorgeous view, but we got our money back and spent the day exploring nearby Swiss villages. But the impact on Swiss tourism might have been a little more serious – thousands of people would have wanted their money back over both days the buckling occurred (and possibly on subsequent days). Some might never be able to return for their chance to reach the top.

Europe is no stranger to much more serious effects of heatwaves. In 2003, more than 70,000 people died across the continent during a period of excessive and prolonged heat; production from crops and forests was severely reduced. In 2010 the wildfires that burned across Russia during the thirty-three-day heatwave caused US$15 billion in damages.

Before this year’s heatwave in Europe, two separate and extremely intense events hit Pakistan and India. Naturally, the climates of those countries are much warmer than Europe’s, but multiple consecutive days above 49˚C are catastrophic. Between them, the two events killed thousands of people, hospitals and morgues buckled under the pressure, and roads melted away. Again, too much (relative) heat for too long and systems fail.

But these were just a couple of events in a rather hot summer, weren’t they? And surely heatwaves are rare, and don’t last for very long?

Unfortunately, the intensity, frequency and duration of heatwaves have increased in many regions since at least the middle of the twentieth century, largely because of human influence on the global climate.

Back in Australia, at the University of New South Wales, I’ve been using climate model simulations to explore these changes. And if we compare long-term heatwave trends in a world with humans and a world without, it becomes clear that we are largely responsible for their increasing intensity. In other words, the speed at which they are increasing could not have occurred naturally. Heatwaves would still fluctuate from year to year, but they would not show sustained increases on the scale of decades.
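
As a rough illustration of what that comparison involves, the sketch below (again in Python, with entirely made-up numbers in place of real model runs) contrasts the linear trend in annual heatwave days across a “natural-only” ensemble with the trend in an “all-forcings” ensemble. The ensemble size, the imposed upward drift and the noise level are all assumptions chosen for the example.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2015)
n_runs = 20

# Synthetic annual heatwave-day counts: "natural" runs fluctuate around a
# flat baseline, while "all-forcings" runs drift upward over the decades.
natural = 10 + rng.normal(0, 3, size=(n_runs, years.size))
forced = 10 + 0.15 * (years - years[0]) + rng.normal(0, 3, size=(n_runs, years.size))

def trend_per_decade(series):
    # Least-squares linear trend, expressed as heatwave days per decade
    return 10 * np.polyfit(years, series, 1)[0]

natural_trends = np.array([trend_per_decade(run) for run in natural])
forced_trends = np.array([trend_per_decade(run) for run in forced])

print(f"natural-only: {natural_trends.mean():.2f} +/- {natural_trends.std():.2f} days/decade")
print(f"all-forcings: {forced_trends.mean():.2f} +/- {forced_trends.std():.2f} days/decade")

If an observed trend sits well outside the spread of trends produced by the natural-only runs, natural variability alone becomes an implausible explanation – which is the logic behind saying the observed speed of increase could not have occurred naturally.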

We’re also seeing more of them. And projections from climate models indicate that even more will occur in the future as our influence on the climate increases.

If the industrial revolution had never occurred, the frequency and intensity of heatwaves wouldn’t have changed – or, at most, would have changed insignificantly – since 1950 almost anywhere in the world. Yes, heatwaves would still have occurred and systems would have failed, but the climate model projections show that heatwaves would have been consistently rare, allowing sufficient time for recovery.

What actually happened were significant increases in heatwaves across almost every region for which there is sufficient data. This means that wherever you are, relative extreme heat is increasing.

In Australia, with our summer just around the corner, we too should brace ourselves. With an El Niño already formed and only getting stronger, we have higher chances of longer and more intense heatwaves this summer. Indeed, this phase of natural variability will further exacerbate the influence of climate change. Our health systems and public infrastructure will be strained, the risk of bushfires will be heightened, and the impact on our unique flora and fauna could potentially be large.

Whether it’s tourism, human health, infrastructure or ecosystems, that train ride in Switzerland is one more sign of things to come. •

The post Wherever you are, heatwaves are getting relatively worse appeared first on Inside Story.

Pope 1, Lomborg 0 https://insidestory.org.au/pope-1-lomborg-0/ Thu, 23 Jul 2015 13:18:00 +0000 http://staging.insidestory.org.au/pope-1-lomborg-0/

A new website allows scientists around the world to assess the quality of media coverage of climate change, writes Daniel Nethery

The post Pope 1, Lomborg 0 appeared first on Inside Story.

Those who associate the Vatican with the trial of Galileo may baulk at the thought of the Catholic Church as a scientific authority. But the Pope’s decision to confront the issue of climate change raised a pressing question: how much credibility does his latest encyclical have? Climate scientists are in the best position to answer this question, but until now they have had few ways of making their voices heard.

The Climate Feedback project opens a new channel of communication. The project uses the Hypothesis online annotation platform to allow scientists to subject media reports about climate change to a process akin to peer review. Scientists can trawl through online documents sentence by sentence, chart by chart, highlighting inaccuracies and misrepresentations or adding context. Hypothesis places these annotations in the public domain, allowing readers to see exactly how the article aligns with the latest science. The scientists also provide a score for scientific credibility; the average of these scores, converted into a simple scale, is presented along with a pithy summary of the article’s credibility.

So far the project, which is hosted at the Center for Climate Communication at the University of California, Merced, has focused on articles in the American media. The community already comprises scientists from across the globe, however, and the papal encyclical, which addressed itself “to every living person on the planet,” provided a particularly apt case for a response. Climate Feedback collated the assessments of nine scientists who rated the overall scientific credibility as “high.” The scientists picked up on “a few minor scientific inaccuracies,” and warned that the text “could be interpreted as understating the degree of certainty scientists have in understanding climate change impacts,” though MIT professor Kerry Emanuel, who served as guest commentator for the evaluation, found the encyclical to be “strongly aligned with the scientific consensus about the reality and risks posed by global warming.”

Providing positive endorsement of high-profile media coverage opens up new opportunities for climate scientists to engage on their own terms with readers. The project aims to identify sources readers can trust, alleviating the doubts that can prevent people from advocating for practical measures to address climate change. As Sarah Perkins, a researcher at the University of New South Wales who took part in the evaluation of the encyclical, put it, the “encyclical discusses why the climate is changing… and highlights that climate change is no longer a background issue – it is here and now, and we need to do something about it.”

Climate Feedback has also tackled the work of Bjørn Lomborg, whose attempt to found a “Climate Consensus Centre,” backed by $4 million in Australian government funding, has stalled – for now. On 1 February Lomborg published a piece in the Wall Street Journal titled “The Alarming Thing about Climate Alarmism,” in which he argued that “much of the data are actually encouraging.” But he only reached this conclusion by cherry-picking the observations, the Climate Feedback scientists revealed. They gave the piece a credibility score between “low” and “very low.”

The Lomborg evaluation shows how Climate Feedback can provide an efficient and effective way for climate scientists to refute the unsubtle claims of climate contrarians. Until now, it’s mainly been climate scientists with high media profiles who have championed the cause. This is time-consuming, however, and does nothing to refute one of the most persistent myths spread by climate contrarians, namely the lack of consensus on the science. In response to his WSJ piece, Lomborg came under the scrutiny of seven climate scientists, who criticised specific issues in the text without having to write up a larger response. The evaluation was then covered by Forbes magazine, which presented the Climate Feedback project and gave Lomborg the opportunity to respond. Lomborg accused the scientists of being motivated by ideological considerations. This sort of argument has become commonplace and may score points in a debate between two individuals, but becomes implausible when scientists from all over the globe provide independent feedback.

Founder Emmanuel Vincent is quick to point out, however, that tackling the work of climate contrarians represents but one goal of the Climate Feedback project, and not the main one. Rather, the project focuses on providing readers with a simple guide to finding credible sources. It represents a first step away from a climate change media “debate” that too often has been dominated and even driven by contrarians. Climate Feedback provides the opportunity for established professors to contribute alongside more junior colleagues and early-career scientists, whose research may place them at the cutting edge on a particular topic, but who are unlikely to get coverage in the media. The result is a dynamic platform for climate scientists to point journalists and the public to the signal in the noise. •

The post Pope 1, Lomborg 0 appeared first on Inside Story.

Lives in motion https://insidestory.org.au/lives-in-motion/ Wed, 28 Jan 2015 03:57:00 +0000 http://staging.insidestory.org.au/lives-in-motion/

Cinema | Sylvia Lawson reviews Wild, Birdman and The Imitation Game

The post Lives in motion appeared first on Inside Story.

For most of us, who will not have walked the Pacific Crest Trail from southern California to the border of Washington State, Wild offers a pretty fair scenic tour. In Jean-Marc Vallée’s film, Reese Witherspoon’s Cheryl Strayed leads us through desert bush as unforgiving as Central Australia’s, into snow country and towering forests, through numerous encounters in which the most threatening appearances are neither wolves nor rattlesnakes, but men; a couple of them come straight out of Deliverance. This is classically an odyssey, a journey driven by personal necessity, and a determinedly feminist project; the word is used along the way, with clear defiance. It is openly about taking control, conquering demons, self- and world-discovery. Witherspoon chose the story, and took a role on the production team as well as playing the lead; her choice of Nick Hornby as screenwriter was a mark of her ambition. With the film’s appearance, she has gained authority in the business – authority, that is, to make commercial movies stamped with guaranteed liberal worthiness.

To judge from magazine versions of Strayed’s well-published account, Wild delivers the story much as she told it. She was, at the outset, walking to deal with both grief and shame: grief for her mother Bobbi’s early death from cancer at forty-five, grief deepened and complicated by their fraught and messy relationship; Cheryl was exasperated by Bobbi’s failure to leave, decisively, the violent husband and father. The shame was for herself, her disgust with herself for messing up her own marriage and for her sexual fecklessness afterwards. The account of the gruelling hike is broken by flashbacks, rather too many of them; we’re threatened by glib psychologising, in which the past is constantly being called up to explain the present. So she’s walking away from protracted adolescent turmoil; what can she walk toward? The huge backpack is bigger than the actor; the visible process of the journey has to do with managing it, learning the tent, the right fuel for the portable stove, dealing with damaged feet and shoes. There’s something to think about in these practicalities, and in the relation of the traveller to the land as it changes around her; as she reaches the travellers’ posts with the guest books, and enters the quotes from Emily Dickinson, Adrienne Rich, Robert Frost (But I have promises to keep, And miles to go before I sleep).

Much more could have been done with those elements, and what they signal for people on long walks. That more, however, would be outside the literal/liberal genre which Wild convincingly inhabits. The whole outcome inevitably invites comparison with Tracks, John Curran’s film last year from Robyn Davidson’s memoir. These films approach adequate illustration, but illustration they remain. Each of them subtends the book; it does not take off from the book, it doesn’t get airborne. For each therefore, go back, read the book, and imagine. Or take a turn along the Larapinta Trail.


You could think of him as the wild man of world cinema, the prolix, extravagant Mexican director Alejandro González Iñárritu, who can somehow pull together big-budget films from seemingly anti-commercial elements. Birdman (or The Unexpected Virtue of Ignorance) is dividing audiences and commentators round the globe. In comparison with Babel, where he won and lost us by turns, this one offers something like an integrated story. Once upon a time the actor Riggan Thomson (Michael Keaton) was the star in the Birdman franchise (read Batman), ritually blowing up cities, flying above the skyscrapers and coming to the rescue. It emerges here that he had purposefully pulled out of the fourth in the series, wanting to build another kind of career, terrified of irrelevance; and now he’s got a serious, small-scale play on Broadway, one drawn from a Raymond Carver collection of stories (published 1981), What We Talk About When We Talk About Love. Low-budget, fine cast, accredited literary antecedence – what more could the over-indulged, demanding New York audience ask?

Irrelevance still threatens; and his alienated daughter Sam (Emma Stone, delivering fury with the biggest eyes in movies), just out of rehab, gathers up his fears and throws them back at him. It’s vengeance; as a father, he was never there. Her mockery is multiplied in the nervy gibes of Riggan’s star Mike (Edward Norton) and the exasperation of his friend and producer Jake (Zach Galifianakis). It doesn’t help when the current girlfriend has a pregnancy scare; and then, when Riggan’s ex-wife visits, we seem to be looking at the only grown-up in the lot of them. Here, in the midst of verbal and physical violence, profanity and mayhem, Amy Ryan’s performance has a fine, intelligent stillness. She says she’s trying to remember why they broke up; an attentive audience could devise its own answers.

There are other, crossing perspectives, from Naomi Watts’s Lesley in the on-stage cast, pathetically unable to believe she’s really making it on Broadway, and going into meltdown backstage (“I’m still just a little kid”); and from Lindsay Duncan’s waspish theatre critic, hooked on her own bit of power. It’s all an old story: the perilous allure and then the fading of stardom, the awful grip of the success/failure psychosis. There’s a fragmentary Raymond Carver poem, speaking the desire to “feel myself/ beloved on the earth”; and so, legibly, the film moves from Raymond Carver’s own story – much like Dylan Thomas, he flew high, and then drowned in the drink. Riggan identifies with Carver’s desire, but escapes his fate because he’s not a poet, but a performer who can whistle up magic-realist devices. Aspiring to seriousness, he grasps it precariously, and is then allowed to command the air again as Batman/Birdman – or if you like Peter Pan, who has his storyteller’s permission never to grow up. Michael Keaton does a wonderful job, balancing Riggan’s infantile self-absorption with a sad, open-eyed self-knowledge.

You might find the elements clichéd; some commentators have judged the work sour and shallow. Not if you go with its flow; Iñárritu and his extraordinary cinematographer, Emmanuel Lubezki, have transformed the whole assemblage with sheer cinematic energy. In a continuous surge, we pursue Keaton and the rest along the narrow backstage corridors, watch from the flies above it, see night changing to dawn behind grimy facades, and then, as high farce takes over, pursue a near-naked Riggan pushing through the Broadway crowds: an extraordinary image of humiliation. The film’s narration isn’t so much in speech or in individual performance, not even Keaton’s; it’s in the drive, the fierce tidal rush, the purposeful circling of chaos.


Praised here, rebuked and reviled there, The Imitation Game is more than worth a second viewing. It is important that the Norwegian director Morten Tyldum’s film, written by Graham Moore from Andrew Hodges’s book about Alan Turing, disavows strict accuracy; the claim in the opening credits is that what follows is “based on a true story,” not that it actually is one. The story of the wartime cryptographers of Bletchley Park, and their role in cracking the codes used by the Germans in plotting their assaults, is now widely known from TV and theatrical versions; so is the name of Turing, the mathematical genius who devised the first massive mechanical computer. Images of its cumbersome, clanking operations, the banks of wheels watched in fear and hope by Turing and his team, are at the centre of the film. The yield from their calculations, from Turing’s claim that yes, machines could think, though differently, was crucial to the progress of the war.

Because we all vaguely know the antecedence of our endlessly pattering laptops, some commentators have written the film off as too comfortable to matter, too conventional in its telling. Others quibble at historical points, but the film isn’t trying for the long prehistory of cryptography; its centre is Turing. With the attention given to his lonely schooldays, with a beautiful brief performance by Alex Lawther as Alan, aged twelve, it forms a partial life story (which is not the same as a biopic). At that rate, some of the critique is off the mark; on questions of history, for instance, it surely doesn’t matter that in the milieu at Bletchley Park, Turing may never have met that colleague who turned out to be spying for the Soviets.

The Imitation Game offers a history behind the man’s suffering, his personal isolation. Benedict Cumberbatch gives a great, complete cinema performance; at close quarters, we look at gaucherie and near-autistic literalism in the workplace, at stumbling gestures towards fellowship, shy responses with rare smiles, and an awful retreat into stone-cold rejection when the girl comrade and colleague, Joan Clarke (Keira Knightley), proposes a shielding companionate marriage. Whatever intimacy Turing knew is reported, not seen; what’s seen is a near-incurable loneliness, mitigated – and to some degree overcome – in the major shared task: cracking the code.

Condemned for homosexuality when it was a criminal offence, and then punished by the barbaric method of chemical castration, Turing was pardoned by the Queen some sixty years after his death. The whole idea of the pardon is itself grotesque, but at least it carries important recognition; we now hear that Benedict Cumberbatch and Stephen Fry have together initiated moves towards extending such recognition to all the other, anonymous victims of that period’s punitive homophobia.

Thus filmic storytelling can make a difference, perhaps the more easily when its mode is classically conventional. As though sitting at a play, we can enjoy the turns of a great cast: Charles Dance doing inimitable top-brass nastiness as Commander Denniston; Rory Kinnear, visibly shifting sympathies as Turing’s police interrogator; Matthew Goode and others changing viewpoints, coming over to Turing’s side on the team; and Knightley in a role she endowed with the liveliest intelligence – you end up believing that she could indeed finish that testing crossword in five and a half minutes; and even knowing Turing’s end, you could stand and cheer for all of them. Forget the quibbles, and wave towards the wider histories beyond and behind our computers; by virtue especially of performance, this is a marvellous film. •

The post Lives in motion appeared first on Inside Story.

Are we going to die on Wednesday? https://insidestory.org.au/are-we-going-to-die-on-wednesday/ Wed, 28 Jan 2015 01:15:00 +0000 http://staging.insidestory.org.au/are-we-going-to-die-on-wednesday/

Television | Science broadcaster Brian Cox navigates the line between two kinds of uncertainty, writes Jane Goodall

The post Are we going to die on Wednesday? appeared first on Inside Story.

Brian Cox specialises in communicating the mysteries of the universe to television audiences, but one of the great mysteries is the communicator himself. It would make more sense if he were several people. He is a professor of particle physics, part of an international team working with the Large Hadron Collider in Switzerland, which was the subject of the recently released film Particle Fever. As one of the inner circle of wits on Stephen Fry’s QI, he keeps pace with professional comedians on flights of improvised lunacy. He’s also a former rock star. Yes, really. During his undergraduate years he was keyboard player for D:Ream, which scored a number one single and several other hit records in the 1990s. And he’s recently been cast as God in the stage version of Monty Python’s Life of Brian.

Cox’s most recent television series, Human Universe, is screening on ABC1. This is the fourth series Cox has made for BBC Two, and you do have to wonder how someone whose subject is, unashamedly, life-the-universe-and-everything can differentiate one series from the next. The titles aren’t much help – Wonders of the Solar System, Wonders of the Universe and Wonders of Life were the first three – though the wonders seem to have gone missing this time around.

Cox is widely regarded as David Attenborough’s successor and, as broadcasters, they encounter the same paradox: if the whole planet is your subject matter, and you have the resources to take your camera crew just about anywhere, how do you avoid repeating yourself? Ah yes, that’s Attenborough again, intimately chatting up a tree frog in some jungle, whispering to camera as he observes a colony of mountain apes, or lying on the sand watching giant turtles beach from the pre-dawn tide. Any of these could be a scene from pretty well any of his programs, and we’ve seen versions of most of them many times in the course of his career.

Cox has the problem in, if anything, a more acute form. How many times can you visit the universe at large – with its stars, planets and galaxies, its light particles, magnetic fields and soundwaves – and still have something distinctive to say? The problem is that we have become accustomed to being whizzed around the planet – and off it – on our TV screens. Whether it’s the shores of the Galapagos, the peaks of the Andes, the polar ice floes or a view of the Pleiades from a desert in central Australia, it’s all business as usual for viewers.

The challenge for the presenter is to break through this feeling of familiarity with a presence and a narrative that make you watch because you want to listen. And there seem to be two schools of thought in the documentary business about how to do this.

According to the first, the viewer’s attention is seen through the lens of the entropy principle: we out there in couch-potato land must be constantly jerked into reanimation by an onscreen personality who exudes energy at every turn. With exuberant body language and constant proclamations of what is fantastic, amazing and exciting, the type A presenter arrests us mid-yawn and lassoes our straying attention back into the world of wonders.

The second school is derived from astute training programs for teachers and actors, and its core technique is counterpoint. If your audience is noisy, lower your voice. If they are shuffling in their seats as if they are about to leave, slow down. If their attention is caught by the slightest distraction, narrow your focus to a detail. Both Attenborough and Cox are type B presenters. They are quiet, calm and relaxed because they have a genuine understanding of how human concentration works.

Calm people are good to watch, though when it comes to watchability, Brian Cox has a lot else going for him. He looks like Ben Whishaw as the romantic poet John Keats in Jane Campion’s film Bright Star – and, in fact, the Keats connection has other interesting resonances. In a sonnet from 1816, the poet describes feeling “like some watcher of the skies, when a new planet swims into his ken.” (This is thought to be a reference to William Herschel’s discovery of Uranus.) The poem concludes with the famous image of the Spanish explorer Cortés looking out over the Pacific, “silent upon a peak in Darien.”

Cox has a way of finding great vantage points around the globe and just standing there, so you’re not so much watching him as watching with him. In Human Universe, he climbs the tower of a Hindu temple in Rajasthan overlooking a massive crater, circles over Easter Island in a helicopter, and visits ancient sites in the Peruvian high Andes. At the end of the second episode, he takes a glass elevator high above the Tokyo skyline at night, and speaks to camera: “We’ve known for some time that we’re infinitesimal specks in a vast universe, but now the suggestion is that we’re infinitesimal specks in a vast infinity of universes.”

Keats would have his mind blown all over again by this quantum leap in the scale of things. But new worlds, new planets, new galaxies… are we simple mammals really cognitively equipped to have any meaningful sense of such shifts in scale? Keats’s description of the point at which the brain reaches its outer limits was “negative capability,” which he glossed as a capacity for “being in uncertainties… mysteries, doubts, without any irritable reaching after fact and reason.” Fact and reason are core business for scientists, but anyone who saw Particle Fever will be aware that researchers in this area of science are well advanced in negative capability. They can spend their entire careers on a hypothesis that may be proved wrong.

Good television science requires a dynamic balance of knowledge and uncertainty. If it’s all about the transmission of knowledge, the inevitable insistence on how fascinating it all is gets very wearing very quickly.

There’s a risk in presenting “science” as the only domain of true and proper knowledge in the human universe, and that risk looms very large right now, with the growing political tensions over climate change.


In 2012 Brian Cox was awarded the president’s medal at the Institute of Physics in London for his achievements in “promoting science to the general public and inspiring the next generation of physicists.” In his acceptance speech, he began on an optimistic note: “I believe we are entering a new golden age of physics.” Applications for physics courses at universities were going up, perhaps as a reflection of high-profile breakthroughs such as those being made with the Large Hadron Collider and its eighty-country collaboration. It was a tremendous cultural triumph, he said, prompting Peter Higgs, after whom the Higgs particle was named, to make the typically modest acknowledgement that “it’s very nice to be right sometimes.”

But before we start ringing in the golden age, we have to do something about all those alarm bells deafening us from the tabloid press, the demons of Fox “News” and their like. Cox reminded his audience that in the lead-up to the widely publicised trial run of the Collider, the Daily Mail ran the headline, “Are we going to die on Wednesday?” There were predictions of massive earthquakes, mega tsunamis, disasters in random patterns. Seas would start to boil, mountains would crumble… Well, it was 2012, after all, a year well celebrated in the Roland Emmerich movie.

Cox is certainly someone who can enjoy the melodramas of popular imagining. But even so, he wanted to sound a warning note in the speech. He quoted Carl Sagan: “We live in a society entirely based on science.” For a modern scientific democracy to function, Cox added, a fair percentage of the population needs to be educated in scientific principles. This matters because there is a point at which collective failures of understanding could put the entire human species at risk. In the United States, says Cox, they spend more on pet grooming than on nuclear fusion research. “You can comb your own cat and put the money into clean energy. It’s about education. There’s either education or dictatorship.”

Alarm bells, it seems, ring at both ends of the spectrum. What we like to call knowledge and what we presume to call ignorance can converge in a common human impulse to envisage disaster for those who don’t think like we do. That doesn’t mean that they’re as bad – or good – as each other. It’s the science communicator’s job to steer a path between them.

Cox does this not by pontificating about how science knows best, but by acknowledging the limits of scientific knowledge and focusing on the particulars of what and how we know. In Human Universe, he does this with exemplary skill.

“Science is a humble pursuit,” he says. It’s about observing regularities that make our world predictable. A cup that is knocked off a table will always fall. It doesn’t sail upwards. The meanders of a river seem to make random patterns, but there are consistencies in their ratios.

On the big questions, like how or why atoms came together to form us, we’re faced with an intersection of randomness (what we don’t or can’t know) and pattern (what we do know). Cox invites us to watch a cricket match being played on a pitch near the banks of the Ganges. It’s a combination of rules and accidents. Then, in a transition that has all the appearance of randomness, he goes to a camel race. He takes some cells from a camel’s mouth, and from his own, and compares them under a microscope, where they look like planets floating in space. There are similarities. In an ancient world of single-cell organisms, chance collisions produced symbiotic relationships that evolved into more complex organisms through natural selection, which has its rules and laws.


One of the problems for those who commit themselves to the public understanding of science is the precarious nature of facts and explanations in the scientific understanding. The grounds of evidence and interpretation are always shifting, and if you took account of all the qualifications and provisos, you’d have even the most dedicated viewers yawning. Not surprisingly, Cox has come under some severe criticism from other scientists. A television lecture he gave on quantum mechanics in 2011 provoked heated attacks from colleagues who accused him of misrepresenting the relationship between energy fields and creating a simplistic picture of how “everything is connected to everything.” Colleagues in physics and astronomy from the University of Nottingham came to his defence, offering a forensic view of his wording. Meanwhile, the statement was being taken up by “new age mystics” as support for the idea that all consciousness is interconnected. On that point, Cox offered a refutation in the Wall Street Journal.

Scientific explanation can be a fraught business because there is a lot at stake. A recent article in the Guardian accuses Cox of offering “a fatally flawed view of evolution” in his current series by talking about “our uniqueness as a species” and implying that evolution has singled us out for some kind of peak status as intelligent life forms. The author was Henry Gee, a senior editor of Nature and author of a recent book entitled The Accidental Species: Misunderstandings of Human Evolution.

Human evolution is an area of science where negative capability is in sadly short supply, and Gee’s book is a plea for more of it. What’s needed, he says, is “an intuitive understanding that the more we find out, the more our ignorance grows.” I like this statement, with its picture of ignorance expanding like the galaxy itself. Imagine what we may not know in a hundred or a thousand years’ time. If we’re still here, and the more wilful kinds of ignorance have not killed us off.

The sophisticated communicator of science to the public has to steer a course between these two kinds of ignorance: a state of expansive unknowing, versus a prejudice against research-based forms of understanding. Without some respect for the first, science becomes the subject of a kind of preaching that only serves to entrench resistance to it. Cox may at times present his subject matter in ways that reduce its complexity, but his overall steerage is quite trustworthy. •


The post Are we going to die on Wednesday? appeared first on Inside Story.

The compulsion in the quest https://insidestory.org.au/the-compulsion-in-the-quest/ Thu, 18 Dec 2014 06:20:00 +0000 http://staging.insidestory.org.au/the-compulsion-in-the-quest/

Cinema | Sylvia Lawson reviews Particle Fever, The Dark Horse and Finding Vivian Maier, and farewells Margaret and David

The post The compulsion in the quest appeared first on Inside Story.

The circulation of a documentary like Particle Fever is possible only because our curiosity about the wonders of scientific discovery runs well beyond the ordinary viewer’s ability to understand what she’s looking at. We know that there’s a huge circular tunnel, the Large Hadron Collider, at CERN, the European Centre for Nuclear Research, under the ground on the border between France and Switzerland, and that it was built to pursue answers to major questions posed by physicists. We’re told that the goal here is a special particle that may explain matter itself, the Higgs boson, which was initially theorised by a senior physicist, Peter Higgs – he is an endearing elderly presence in the audience when the finding is announced. This is almost unimaginable: a reality that is infinitesimally small.

We can follow the interactions of a group of scientists, Italian, American, Turkish, Iranian; we take in their talk around the water-cooler and coffee machine. Some have refugee histories, some did many other things before choosing this career; the brightly elegant Fabiola Gianotti, who will take over as head of CERN from 2016, once trained as a ballet dancer. The other woman in the central group, the younger Monica Dunford, is the most absurdly healthy screen presence you could imagine; when not in the main workplace, she is bicycling furiously or pounding the treadmill in the gym. Gaining virtual knowledge of them, we can still know little of the processes they are setting off, nor can we imagine what it means to talk of mass in sub-atomic entities. The images of the great ATLAS detector show us something of the actual machinery of discovery; the visible complexity is a marvel, stirring memories of old future-fantasies, the dreams of H.G. Wells and Jules Verne. But we can get very little idea of how it works, and the film would have gained from some attention to engineers and technicians alongside the scientists.

All we can do is go with the flow. The film was co-produced by David Kaplan and Mark Levinson, both physicists, and directed by Levinson; they, with cinematographers Wolfgang Held and Claudia Raschke, and editor Walter Murch, turn the pursuit into intelligent entertainment. The film’s own daring resounds with that of the scientists’ project; Kaplan said, “You have to ignore how irrational it is to think that you make a documentary film about science when you have no idea what the ending is going to be, and you just plunge ahead and believe that at some point you’re going to get a compelling story out of it.” The compulsion is in the quest, and in the palpable shared obsession. At the point of arrival, the film-makers are saying that with the crucial particle identified, these scientists have gained the top of the mountain; and here they call in Beethoven, with the climactic passage from the Ninth. Some viewers found this quite offensive; I’d call it a bit of overly triumphalist excess. While the Collider itself is international and European, this film is American – profoundly so: a romantic affirmation of exploration as a major human right.

The part that matters comes a bit further on, when lines of speculation are explained. There’s some play with opposing views of the universe: “supersymmetry” (yes, there is something out there that loves us) versus the sprawling multiverse (no, there isn’t; there’s only indefinite, formless chaos). Along lines of connection that only these master-physicists understand, the determination on the mass of the Higgs boson will point us one way or the other. The needle settles in the middle. The implications, so far as they are explicable, will give comfort neither to believers, nor to such fiercely devout atheists as Richard Dawkins; but the confirmed agnostics, those who can believe only in deep uncertainty, may find it an intellectually satisfying outcome.


There are numerous explanations for the general superiority of New Zealand’s film-making over Australia’s. My own preferred theory is that their film-makers have greater confidence in their home audience than ours do; they’re not looking sideways towards Hollywood, or away toward the sensitive European art film. Another part of it is the strength of the Maori presence, something the film industry inherits: you can go back, if you want, to the Treaty of Waitangi, and ask what kind of difference such a treaty (long called-for, delayed and therefore denied) might have made to Australian filmic storytelling. (Consider: if some Aboriginal directors – Ivan Sen, Warwick Thornton, Beck Cole – make stronger films than most, this could be because they’ve got better stories to tell, and a greater need to tell them.)

The questions are provoked again by The Dark Horse, a close-to-true story written and directed by James Napier Robertson about the troubled life and work of Genesis (Gen) Potini, played superbly by Cliff Curtis. Gen, diffident and incurably bipolar, is fatally at odds with the masculine Maori world of the small town where, stumbling around in search of a track for his life, he spends time in residential care. Emerging from the institution, he finds someone he wants to look after, his unhappy adolescent nephew Mana (James Rolleston, who had the central role in Taika Waititi’s splendid Boy in 2010). Mana, like others in his age group – both boys and girls – around the small town of Gisborne, is adrift; he is also threatened. The big tattooed bullies of his hapless father’s gang want to induct him into their version of manhood, a culture of violent bikie rebellion.

In thrashing rain, Gen sleeps under a sheet of plastic on the ledge of a monument. He wanders at night, compulsively muttering to himself; he’s an overweight shambles, his front teeth knocked out. There are theories about mental instability and skill at chess, but if they come into play here, it’s not to offer solutions. Despite his own homelessness, Gen manages to pull the kids together into a determined chess group. From the fast montage of the ensuing games, you could learn little more about chess than you might about advanced physics from Particle Fever; no matter, what we do know is that the tournament becomes a stiff contest of Gen’s young Eastern Knights against privileged young pakeha teams from well-resourced private schools. There’s grist here for your class antipathies; but think longer about the centre of the story, and the stark pain enacted by Wayne Hapi as Mana’s father Ariki. His ways of looking are enough to communicate the sadness of an adult male caught between the claims of his fatherhood and the need for approval from his own cohort. The gender issues are half-submerged; alongside Ariki, the women are marginal but vivid presences as teachers and mothers, and the girls are bouncing around, defiantly holding their own among the chess players. The issues are powerfully unstated; what matters is the conflict over what it can mean, in that world, to become a man.


The Dark Horse has won several well-deserved awards in New Zealand – best film, best direction and screenplay, best actor and supporting actor, best score for Dana Lund. Find it if you can; the dawn of Christmas means the silly season on the cinema circuits – “it’s a positive desert!” cried one cinephile friend. “There’s absolutely nothing to look at.” True, but not for long. Finding Vivian Maier should also be sought, a beautiful, complex, meandering work which, more darkly than Bill Cunningham New York, brings photography and cinema together. Co-directed by John Maloof with Charlie Siskel, the film is the story of Maloof’s own obsession; he made the recovery of Vivian Maier’s vast archive of great city photographs, and that of her own story so far as it can be known, his project. She lived from 1926 to 2009, working as a servant, a carer for children and the aged, and taking photographs in her time off with a Rolleiflex, always held waist-high. As she looked at her subjects from above the camera, she could meet their gaze, and the results are utterly remarkable. She left images of children, strays and layabouts, of anger, resentment, rock-bottom poverty, and their vitality is in the way they return the gaze and look back at us through her.

Maloof says that some of the art establishment still ignores her, but for others she is named along with Cartier-Bresson and Diane Arbus. If she really wanted to remain unknown, she has failed in the end, not only because the photographs communicate her great skill in perception, but also because there are so many of the worker herself – self-portraits, using mirrors and other reflective surfaces – in the 1950s and 60s; she is still young, clear-eyed, determinedly isolated, and manifestly lacking in the kind of feminine vanity taken in those years to be proper and normal. Then there’s the darker thread; the witnesses – mostly her long-past employers and charges – come out with their memories of her savage and punitive moments; and there are the French connections, offering other insights altogether. It’s clear that it’s the work, the enormous load of it, that has its claims, not the fragmentary and undramatic biography. There Maier, perversely as you like, sets her own heavy question-mark against our period’s major cultural obsession: the tireless burrowing into authorship, the relentless probing of love lives and pathologies. Curiosity granted, we could do with a great deal less of it.


Finally for Christmas, a salute to those valued confrères who have left the televisual scene. No matter how often I and others might have disagreed with Margaret or with David, or with both, their responses have been at all times worth having. I have regretted that they didn’t make more room for documentary, but then the program has always threatened to burst at the seams with filmic variety; the lineup of classics has been marvellous. The two have done everything to build and sustain a general consciousness of cinema as a vast array of works worth taking seriously, worth maintaining as a local industry and local culture, and always worth arguing over. That said, they’ve been great TV as well (though I have at times wanted to say: please David, that tie should be taken out and shot). I hope Margaret gets to keep all those fabulous little numbers, and the shoes. The dynamics were terrific, she with her passionate enthusiasms and great humour, he with his unfailing, gentlemanly liberalism and encyclopaedic knowledge. They’re irreplaceable. •

Pregnancy: guidelines and timelines https://insidestory.org.au/pregnancy-guidelines-and-timelines/ Thu, 06 Nov 2014 00:51:00 +0000 http://staging.insidestory.org.au/pregnancy-guidelines-and-timelines/

Two accounts of getting, and being, pregnant tell only part of the story about conception and childbirth

On a glorious spring Saturday recently, hundreds of midwives, GPs and obstetricians sat in a vast conference room, curtains drawn against the sun, listening to expert speakers from one of Melbourne’s large public obstetric hospitals expounding the latest treatment protocols for pregnancy and intrapartum care. Professor Michael Permezel, the president of the Royal Australian and New Zealand College of Obstetricians and Gynaecologists, or RANZCOG, outlined new guidelines for screening for gestational diabetes, under which all pregnant women will now be offered an oral glucose tolerance test, rather than the simpler oral glucose challenge test, in the second trimester.

While the simpler test is less expensive and more patient-friendly – there’s no need to fast, for instance – it is less than ideally sensitive. Up to a quarter of cases of gestational diabetes, a significant condition with health implications for mother and baby, are missed by the current two-step process, in which hospitals only administer the more sophisticated test if the first test picks up abnormally high blood sugar. Diagnosis and treatment of gestational diabetes is delayed, sometimes beyond thirty weeks, because of delays in arranging and performing a follow-up test, and some women don’t ever get the second test. This new guideline, backed by research from several large multicentre clinical trials, was reached by consensus among representatives of nine health bodies made up of specialist doctors, public health researchers, nursing staff, allied health professionals and consumers.

In her pregnancy self-help manual, Expecting Better, Emily Oster, mother of one, promises to take a new broom to the “tired old myths” of pregnancy care. She’s also prepared to tackle a couple of consensus guidelines, though diabetes screening is left untouched. The subtitle of her book, “Why conventional pregnancy wisdom is wrong – and what you really need to know,” invites the question, “Why should Oster, albeit an economics professor with a PhD, know more about antenatal care than, say, the American College of Obstetricians and Gynecologists or RANZCOG?” Call me a curmudgeonly medico, but the premise of Oster’s book implies a conspiracy of misinformation, or at the very least silence, on the part of obstetricians and midwives everywhere. Indeed, she portrays her own obstetrician as a fairly wooden and unforthcoming doctor who, when pressed, provides incorrect or woolly answers. Oster seems to suggest that definitive information is lacking simply because no one has bothered to evaluate the evidence properly. This isn’t true.

Take, for instance, her stance on alcohol. After looking at some large prospective studies (two of which were Australian) on alcohol consumption in pregnancy and subsequent behavioural problems and IQ in offspring, Oster concludes that “there is no good evidence that light drinking during pregnancy impacts your baby. This means up to one drink a day in the second and third trimesters and one to two drinks a week in the first trimester.” (These conclusions are stated under the heading “The Bottom Line,” a summary of Oster’s recommendations that appears at the end of every chapter.) Now, compare Oster’s advice with that of RANZCOG:

Alcohol is a teratogen [a substance that can cause birth defects]. The sensitivity of the fetus to the adverse effects of alcohol varies between women and between the different stages of gestation. Internationally there is no consensus on the safe level of alcohol during pregnancy and breast-feeding.

There is good quality evidence that drinking excessive amounts of alcohol during pregnancy can damage fetal development. However, the minimum or threshold level at which alcohol begins to pose a significant threat to pregnancy is not known. The likelihood of an adverse fetal effect increases with increased volume and frequency of alcohol consumption. Until better evidence is available, RANZCOG currently recommends that women avoid intake of alcohol during pregnancy.

The RANZCOG working party involved in establishing alcohol guidelines will have examined the same research as Oster (and probably more besides) and come to a different conclusion about alcohol consumption in pregnancy: that is, to avoid it altogether until better evidence comes to hand. Conservative? Yes, but understandably so, given the responsibility of the task. Doctors everywhere see medical practice guidelines, obstetric or otherwise, as up-to-date, best-practice resources for guiding patient care. (I should add here that guidelines, while valuable, are not meant to be strictly prescriptive: patient preferences and values, clinician values and experience, and the availability of resources must also be included in the mix.) Guidelines are, by their very nature, firmly evidence-based: if there isn’t good evidence for a safe level of drinking in pregnancy, the guidelines must reflect that lack of assurance.

“No amount of alcohol has been proven safe” is the phrase to which Oster most objects. Her counter-argument is that “too much of anything can be bad.” She cites two examples: Tylenol (the US trade name for acetaminophen, which we call paracetamol) and carrot juice (potentially causing Vitamin A toxicity if massive quantities were consumed). I find this choice of examples confusing: while Vitamin A, a retinoid, is known to be teratogenic at high levels, there’s no evidence that acetaminophen is, although it will cause liver failure in overdose.

“The statement that occasional drinking has not been proven safe could be applied to virtually anything in pregnancy,” Oster adds. I don’t agree. Alcohol is a known teratogen, at least at higher doses. Surprising as it may seem, not many readily ingested substances are.

Oster devotes 272 pages of her book to the common, almost universal aspects of conception, pregnancy and birth, at least in the Western world. Timing of conception; folate intake; the vices (her word) of caffeine, alcohol and tobacco; foods to avoid; nausea in pregnancy; prenatal genetic screening; options in labour – these issues and more are discussed in depth, the relevant studies analysed, Oster’s conclusions drawn. Eight pages out of 280 are given to the rarer and more concerning events of pregnancy: premature labour and high-risk pregnancy. In this chapter Oster includes eight conditions, each explained with a few bullet points, but interestingly doesn’t include her own Bottom Line recommendations.

Ultimately, after all the hype of the front cover is stripped away, Oster’s recommendations don’t differ much at all from those of most health professionals in this country. And she equivocates too, as well she might, when she tries to interpret studies that include few participants or are not well-designed, showing small yet significant correlations, say between different maternal sleeping positions and risk of stillbirth. So much information, and much of it difficult to interpret – which is why clinical practice guidelines are written in the first place.

Would I recommend this book to my pregnant patients? Perhaps, but only to those who are very keen to be informed at every level (and by that I mean to the point of deliberating over decisions such as whether to have a glass of champagne at their best friend’s wedding). Oster’s book wouldn’t be enough of an antenatal resource by itself, as no mention is made of foetal growth and development, routine antenatal testing (though she does include an informative and sensible chapter on genetic screening) or stages of labour. Oster has a likeable, chirpy style and has clearly done loads of research, and she focuses her attention on the common antenatal concerns that, at times, it must be said, doctors don’t spend as much time discussing with their healthy pregnant patients as might be desired. But an obstetrician’s day is still only twenty-four hours long, and adequate time must be given to the pointy end of pregnancy: women in labour, and those with significant and serious obstetric conditions, such as those mentioned in the eight pages of Oster’s 280-page book.


Like Expecting Better, Tanya Selvaratnam’s The Big Lie is both personal journey and crash course in medical facts and figures, but this time the cited research is on the sobering topic of infertility. Unlike Oster’s book, The Big Lie takes a broader sociopolitical approach to its subject matter. The big lie of the title relates to the feminist goal of “having it all”: that is, motherhood and career. For women of Selvaratnam’s milieu – middle-class, educated women born in the seventies and raised in the invigorating wash of feminism’s second wave – this goal has been deeply internalised.

Selvaratnam’s treatment of the motherhood/career dichotomy is largely temporal. In the author’s words, “The Big Lie is that women can do what they want on their own timetables.” Her point is not that women lose out by juggling family and work, but that women who want both need to be mindful, very mindful, of the ticking biological clock. More mindful, indeed, than the author, for this, dear readers, is a cautionary tale. Selvaratnam relates the joy of finding the right man at age thirty-seven, the shock of her first miscarriage and the grief of two more. At age forty she and her husband are about to commence IVF when she is diagnosed with an uncommon gastrointestinal cancer and requires extensive surgery. And soon after that her husband leaves her. At the end of the book Selvaratnam is divorced and childless, but optimistic that she’ll have a child, biologically or otherwise. There’s a sad, generous but somewhat amorphous wisdom she imparts at the end of this rather harrowing journey, a wisdom summed up in six maxims (or what she calls Action Items):

Share your stories.
Know your fertility.
Free yourself from convention.
Strategize for your goals.
Don’t be afraid of feminism.
Advocate for a better future.

The stress that both infertility and its treatment place on couples is well-recognised, I think, at least here in Australia. Sadly, Selvaratnam and her husband were not offered psychological support during the time of her miscarriages and IVF treatment, and this omission no doubt had significant bearing on the breakdown of their relationship. But she doesn’t make this point emphatically enough. If her book is supposed to be instructive, I would like to have seen more practical advice of this nature.

Selvaratnam provides a lot of factual information about infertility, and interviews many women who’ve experienced it, but the predominant narrative, her own, seems out of kilter with her broader message. This is partly because she doesn’t actually undergo IVF treatment – the cancer diagnosis and surgery intervene, and then her relationship breaks down – so the highs and lows of treatment are never explored at a personal level. And the academic information she imparts is not new. Her central point – delaying motherhood is a risk because fertility declines with age – is hardly a well-kept secret in 2014. Selvaratnam regrets her ignorance of this fact, but is unsure how she could have remedied it. “Like most women,” she writes, “I grew up without ever really learning the most basic facts about the impact of delaying motherhood. I could have educated myself, but how was I supposed to look for information?”

I think the information was there for Selvaratnam to find, at least by the time she was in her mid thirties, but her lack of knowledge probably points not to lack of access to advice, but to the likelihood that pregnancy was not top of her mind. And therein lies the rub: information on age-related fertility is useful and empowering, but it only goes so far. Many other factors – the right partner (or any partner at all), financial security, completion of studies, personal health, one’s friends having children – all commonly feed into the decision about whether and when to have a baby. Personal chronology is only one of many factors.

Selvaratnam is at pains not to blame feminism for the perpetration of the “lie” that one can have it all at one’s own pace. Hers is a collaborative and information-seeking approach, and for that I commend her. •

Natural born killers https://insidestory.org.au/natural-born-killers/ Wed, 27 Aug 2014 05:05:00 +0000 http://staging.insidestory.org.au/natural-born-killers/

With one in two people dying within days of becoming ill, it’s little wonder that Ebola causes panic. But the real threat can only be assessed if we understand the history of the virus and how it is transmitted, writes Frank Bowden


In 1980, the year I started my clinical studies, the World Health Organization announced that smallpox, one of the most dread infections in human history, had been eradicated. The new antibiotics and vaccines developed in the decades following the second world war had gone some way towards neutralising community fear of infectious diseases, and the success of the smallpox campaign reinforced the growing belief that the Age of Infections was coming to an end. From the 1960s on, Australian parents no longer had to face the terror of a polio epidemic; scarlet fever was rare, diphtheria was gone and tetanus going; even the diseases that were thought to be inevitable in childhood — measles, mumps and rubella — were disappearing. Cancer and heart disease were emerging as the big killers: forget about germs, it was our lifestyles that needed to change. This was, of course, a premature claim of Mission Accomplished.

New infectious diseases attenuated the medical triumphalism in the late 1960s and 1970s. An epidemic of arthritis and neurological symptoms associated with a distinctive rash occurred in a town called Lyme on the east coast of the United States; a number of young women, first in the United States but then in other countries, contracted a sometimes lethal condition known as toxic shock syndrome, found to be the result of a toxin produced by Staph aureus that contaminated their tampons; an epidemic of serious viral hepatitis was detected in recipients of blood transfusions and in countries where injecting drug use was becoming fashionable. But the diseases that concerned health authorities most — one in Europe and the others in Africa — were the newly recognised causes of viral haemorrhagic fever, or VHF.

VHFs can result from any of a number of different viruses with distinct structures, epidemiologies and means of transmission. They are linked together by the fact that they are all RNA viruses, they are usually confined to specific geographic areas, humans are not the natural reservoir and they are among the most deadly of all known infections. VHFs are characterised by severe, but usually non-specific, flu-like illness, followed in a proportion of cases by low blood pressure, which leads to organ failure (“septic shock”) and, in the most severe cases, bleeding into the conjunctivae, skin, mucous membranes or bowel. Although bleeding is one of the feared parts of VHFs, it is a sign that the patient is critically ill rather than a cause of death itself.

VHFs are not new. One of them, yellow fever, has been known as a human disease for hundreds of years. It came to prominence in the early twentieth century when the American physician Walter Reed proved that the great killer of the workers on the Panama Canal was transmitted by mosquitoes, the first time an insect had been unequivocally found to be a “vector” in the transmission of an infectious disease. The viruses responsible for other VHFs were successively identified from the 1960s on. Crimean-Congo haemorrhagic fever, for instance, is caused by a tick-borne virus. Dengue fever, an increasingly common mosquito-borne infection in many parts of the world, including northern Australia, can act like a VHF if a previously infected patient is exposed to a different dengue subtype. Other VHFs include Lassa fever, Hantavirus and Rift Valley fever.

If the VHF is insect-borne, then there is no risk of transmission in places that are free of that insect. But what if the infection, once it moves from its animal reservoir, can spread from person to person?

In 1967, hospitals in the German cities of Marburg and Frankfurt admitted a number of very sick patients with fever, severe flu-like illness, conjunctivitis, nausea, vomiting and diarrhoea. A handful of patients with similar symptoms were also seen in Belgrade. The patients were originally thought to have a Salmonella or Shigella infection of the bowel, but the usual stool and blood cultures were all negative. About a quarter developed a severe derangement of their blood coagulation and bled from the mouth, lungs, bowel and needle-puncture sites. In all, thirty-two patients were seen and seven died.

Each of the Marburg patients was associated in some way with the pharmaceutical company Behringwerke; those in Frankfurt had some relationship with the Paul Ehrlich Institute; and one of the Belgrade patients was a veterinarian researcher. Within just three months German virologists identified the first of the human filoviruses to be discovered and named it Marburg virus.

The likely source of infection was infected cells, blood and other tissues derived from a consignment of Cercopithecus aethiops monkeys that had been imported from Uganda in June 1967, right in the middle of the Israeli Six-Day War. Because of the conflict, the monkeys were delayed in transit in Britain and then by a strike at Heathrow Airport. How or when the monkeys became infected with Marburg virus has never been established but it is known that they came into close contact with a shipment of South American finches and langur monkeys from Sri Lanka in an animal house near the airport. Two of the monkeys escaped into London but apparently didn’t transmit any disease there.

No further cases of illness associated with Marburg virus were identified in Europe. The next case came in 1975 when an Australian backpacker died in South Africa after falling ill while travelling in Zimbabwe. His female companion and a nurse at the South African hospital where he was treated developed a milder illness. Both recovered and were found to have antibodies to Marburg virus. In 1990 a Russian laboratory worker was fatally infected. Only two other substantial epidemics of Marburg virus have been recorded: an outbreak in the Democratic Republic of the Congo in 1998, which killed 118 of the 154 people affected, and another in Angola in 2004, in which 227 of the 252 identified cases died.

The sudden appearance and disappearance of Marburg virus in Europe was puzzling. Only those who had been in direct contact with infected tissues and other samples in the laboratories seemed to be at risk, and only a very small proportion of those who cared directly for those with symptoms became ill themselves. Subsequent blood testing showed very few, if any, people without symptoms developed antibodies to Marburg (known as “asymptomatic” infection). Viruses such as influenza, chicken pox and measles can be spread through the air in droplets and aerosols, but the pattern of spread seen in the Marburg outbreak made it unlikely that airborne transmission occurred. In fact, it seemed to be quite hard to become infected with the virus — but if you did, you had an extremely high chance of dying.

New VHFs continued to appear. In 1969, two nurses died of a VHF-like illness contracted in a town named Lassa in Nigeria. A third nurse looking after them in a mission hospital became critically ill and was repatriated to New York. Her blood was sent to a Yale University laboratory where the head, Jordi Casals, and a research technician contracted the illness. The technician died but Casals recovered and went on to isolate a new arenavirus, which was named Lassa. The disease was subsequently found to be endemic in West Africa and the World Health Organization estimates that there are between 300,000 and 500,000 Lassa fever cases each year.

In 1976 a Belgian doctor working in the Democratic Republic of the Congo (then Zaire) sent the blood of a dying nun to the Institute of Tropical Medicine in Antwerp, hoping to identify the cause of her illness. Standards for the transport of biological materials were different then, and by the time the thermos flask arrived in Belgium the blood had leaked out of its tube into the defrosted ice water that was keeping the sample cool. Despite the poor state of the sample, electron microscopy immediately identified the presence of a filovirus that looked very similar to Marburg virus but was subsequently shown to be a distinct viral species. Although an epidemic appeared simultaneously in the Sudan, the virus was named Ebola after the river that ran through one of the main areas affected in the Democratic Republic of the Congo. It soon became apparent that the Ebola virus was one of the most deadly human viral infections — in Africa the case-fatality rate appeared to be as high as 90 per cent.

It is worth remembering that emerging diseases often record very high mortality rates. Only the most severe cases are seen in the early stages and, not surprisingly, they have the worst outcomes. Indeed, many diseases that appear to have very severe clinical manifestations are often mild or even asymptomatic in a majority of cases. The Spanish influenza pandemic of 1918, considered to be the worst influenza pandemic in history, had a case-fatality rate of around 2 to 3 per cent. Its notoriety results from the interaction of that rate with an “attack rate” — the proportion of the entire population that is infected — of at least 30 per cent. Three per cent of nearly a third of the entire population translated into millions of deaths globally. The 90 per cent mortality among several hundred early cases of Ebola obviously had a different absolute effect on the population.
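To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The figures are illustrative assumptions only (a 1918 world population of roughly 1.8 billion, and the approximate rates quoted above), but they show why a modest case-fatality rate applied to a vast number of infections dwarfs a very high one applied to a few hundred cases.

```python
# Back-of-envelope illustration only; population and rates are approximate assumptions.

def expected_deaths(population, attack_rate, case_fatality_rate):
    """Deaths = (population x attack rate) x case-fatality rate."""
    return population * attack_rate * case_fatality_rate

# Spanish influenza, 1918: low case-fatality rate, enormous number of infections
spanish_flu_deaths = expected_deaths(1.8e9, 0.30, 0.025)

# Early Ebola outbreaks: very high case-fatality rate, only a few hundred cases
ebola_1976_deaths = 600 * 0.90

print(f"Spanish flu (illustrative): about {spanish_flu_deaths / 1e6:.0f} million deaths")
print(f"Ebola, 1976 (illustrative): about {ebola_1976_deaths:.0f} deaths")
```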


The rapid growth in international travel in the 1970s meant that the jumbo jet became as important a vector for the spread of disease as the mosquito. The medical profession began working out how to minimise the risk of an outbreak of VHF in the developed world, and one of the results was that millions of dollars were invested in hospitals and laboratories across the world to deal with these viruses.

By 1982 the Fairfield Infectious Diseases Hospital in Melbourne was the last of the Australian “fever hospitals” still operating. No other state had kept its equivalent quarantine hospitals open, and the care of infectious diseases had been devolved into the mainstream health system. So Fairfield was the logical site for a high-security ward to house patients with suspected VHF, and for a Level 4 biocontainment laboratory where blood samples could be tested and the viruses grown.

I toured the newly commissioned facility when I was doing my two weeks at Fairfield as a medical student. The austere brick structure with tiny windows and exhaust vents protruding through the roof was about one hundred metres away from the rest of the hospital. It looked safe. When in operation, the inside of the building was kept at a slightly lower atmospheric pressure so that air from the outside moved in, rather than the other way round, to reduce the risk of an escape of any infectious material.

One of us was allowed to get dressed up in the positive-pressure biosecurity suit. Now this was just like being in a movie (for us it was The Andromeda Strain, based on the Michael Crichton book from 1969; Outbreak, with Dustin Hoffman, was still more than a decade in the future). The suit was hot, stuffy and claustrophobic and the student who volunteered for the demonstration looked slightly dehydrated and a little worse for wear when she emerged after half an hour.

I remember being very impressed with the whole set-up. But when I returned a few years later as a junior doctor I learnt that the unit had been the baby of a senior doctor who was now in a different sort of quarantine — professional in this instance. Although the reasons for his exile were never spoken out loud, there was clearly great antipathy between him and the rest of the staff, who usually referred to the facility as the White Elephant. One of the doctors was pleased to tell me that the first time the negative pressure system was switched on, part of the ceiling collapsed. Not that there was much risk of hurting any patients; the unit had only ever housed one, an African man with a fever who was transferred from another state.

The plan was for a patient with a suspected VHF to be transferred from his or her place of diagnosis to Fairfield in one of ten portable high-security quarantine “cots” that had been built by the federal government. The African patient was brought back to Melbourne by the retrieval team and this attracted a considerable degree of media attention. He was admitted to the negative pressure ward, the ceiling of which remained in place this time, treated by the doctors and nurses in their sci-fi pressure suits, and very soon diagnosed with Streptococcal pharyngitis. He was reported to have been very impressed with the Australian medical system and the way it cared for people with a sore throat.

In 1989 I took part in what was to be the last training flight of the retrieval team. We flew to Cairns on an Ansett 707, off-loaded the cot and drove to the Base Hospital to pick up our pretend patient. One of our nurses volunteered to spend the return trip in the plastic-encased, negative-pressure cot. We practised caring for the patient in mid-air but no one seemed to be taking things too seriously — by then the medical world had learnt that the viruses that caused Lassa fever and Ebola were not transmitted through the air and could be easily contained with standard precautions to avoid direct contact with blood and other bodily fluids.

One other thing had changed. For the past eight years staff at Fairfield had treated hundreds of patients with a then universally fatal virus in their blood, HIV. We had been right to be worried about the movement of viral diseases in and out of Africa — we just became preoccupied with the wrong virus. While the medical profession had been working out how to minimise the risk of an outbreak of VHF in the developed world, an African virus just as deadly but much more insidious in its action was quietly establishing itself across the globe. HIV would soon become the worst pandemic of the twentieth century and it would push fears of Ebola, Lassa and Marburg into the background.


Under an electron microscope, Ebola’s slender, filamentous form looks like an Egyptian hieroglyph. It contains a remarkably simple, and scary, piece of ribonucleic acid that encodes fewer than ten proteins, which are manufactured by virus-infected human cells. These proteins pack a very powerful punch: they stimulate the release of chemicals, known as cytokines, into the local tissues and into the bloodstream, which in turn produce a profound inflammatory reaction. One of the proteins directly damages the cells that line the blood vessels, which results in the bleeding typical of the more serious cases. It is the intensity of the immune response that makes the patient look and feel so sick. As is the case in many fatal infections, the immune response is maladaptive and, rather than helping patients recover, it actually contributes to their demise.

Unlike Marburg, which has only one known strain, Ebola virus has five known subspecies. One of these, the Reston strain, caused considerable concern in 1989 when it was identified in monkeys held in a laboratory in Reston, Virginia, near Washington DC. Three laboratory workers, including one who sustained a needlestick injury from one of the infected animals, developed antibodies to the virus but did not become ill, suggesting that the risk of human disease is low.

Ebola is probably thousands of years old and has been quietly reproducing in a non-human reservoir for most of that time. The best evidence at the moment is that bats are the natural reservoir. It is only when the virus is transmitted to humans and other primates that it causes symptoms of disease. Transmission is thought to occur when humans kill, prepare or eat infected bats and bush meats (including monkeys). The occasional explosive outbreaks of Ebola in humans are really just an accidental sideshow to the main cycle of transmission.

That didn’t stop the feeling of panic in March this year when the World Health Organization revealed that an Ebola epidemic had broken out in the West African country of Guinea. There had already been at least twenty significant outbreaks of Ebola in Africa, with a total of 3000 reported infections. The case-fatality rate has ranged from around 50 per cent to 90 per cent, depending on the size and place of the outbreak. The current epidemic has spread to Liberia, Sierra Leone and Nigeria and is the largest ever recorded: as of late August there had been 2473 cases and 1350 deaths, giving a mortality rate of 55 per cent. The actual number of infected people is likely to be higher than this, owing to the limited local resources for case counting.

On 7 August this year the World Health Organization declared the Ebola epidemic a Public Health Emergency of International Concern. The key word here is “international,” because it is hard to argue that it is a crisis in African terms, given that the continent is subject to so many other infectious diseases. This epidemic will need to be at least eighty times worse if it is to kill as many people as yellow fever did in 2013, or 500 times worse to reach the annual death toll for malaria in Africa. Lassa fever is estimated to kill at least 5000 people each year.

But diseases like Ebola draw attention to themselves in a way that other viruses don’t. The fact that at least one in two people with Ebola die within days of becoming ill gives the disease a Hollywood cachet, but the fact that it is so obviously an infectious disease is what seals its ability to cause fear and loathing.

Throughout history there have been diseases that are very clearly infectious — cholera, the Black Death and influenza, for example. With these diseases the time between exposure to a patient with the illness and onset of symptoms in the exposed is so short that it is easy to see the association, even if the exact nature of the contagious material is not as obvious. It takes a much more sophisticated analysis to work out the cause of diseases with longer incubation periods or where asymptomatic infection is common.

Tuberculosis, today a paradigmatic example of an infectious disease, was thought to be an inherited condition until the nineteenth century. Hepatitis B has been present in human populations for tens of thousands of years but because it rarely causes symptoms in newborns and children, the ages when most transmission happens, its existence was not even suspected until 1885 when an outbreak of jaundice occurred in Germany in the recipients of contaminated smallpox vaccinations. (The first epidemic of Ebola was also probably fuelled by contaminated injections, given by the very nun whose blood provided the evidence of the virus’s existence.)

The developed world is fearful of the effect that Ebola will have outside Africa and, ironically, it will be the West’s response to this threat that will control the disease at its source. Ebola epidemics occur because of fundamental weaknesses in health infrastructure in Africa, not because of a lack of expensive (or even cheap) drugs and intensive care units. Ebola persists because of the health system’s inability to provide adequate protective equipment such as gowns, goggles, masks and gloves. The safe containment and interment of the bodies of those who have died is also a major issue in any setting of mass death — but never more so than when the dead pose a true threat to the living.

There is no drug on the market that has been shown to be effective for the treatment of any of the VHFs. An experimental formulation containing a mixture of antibodies has been used on a handful of patients with Ebola but there have been no formal trials of its efficacy.

The current epidemic has been a spur to vaccine research and there are a number of candidates ready to enter Phase 1 trials. Despite some encouraging results in animals, there is no guarantee that the vaccines will work in humans, and it is possible they may be not just ineffective but also harmful, as has been seen with some other vaccines. Testing the real-world value of such a vaccine is also difficult because of the sporadic nature of the epidemics.

Regardless of its efficacy, the funding for the roll-out of an Ebola vaccine is not guaranteed: a safe, cheap and effective yellow fever vaccine has been available for nearly eighty years but vaccination coverage in Africa was virtually zero until a decade ago. The GAVI Alliance, backed by Bill and Melinda Gates, is working to increase the rate of vaccination but the coverage in Africa is not yet sufficient to interrupt transmission.


Earth has never had so many humans living on it. We travel obsessively, we progressively encroach on jungles and other remote areas, we interact with domestic birds and animals in new ways, and we eat meats that were never part of human diets in the past. It is inevitable that viruses that have been quietly reproducing in animals, causing little harm, will move into human populations and cause much harm. Inevitable but unpredictable.

It is impossible to know how the current Ebola epidemic will unfold, but it is likely to go the way of all previous outbreaks once strict compliance with infection control has been achieved. All viruses evolve in response to changes in their environment, though, so it is possible that the strain responsible for the 2014 outbreak has mutated. Since it is already one of the deadliest viruses known to science, the most worrying change would be one that makes the virus easier to transmit. It is unlikely to go through the significant, and unprecedented, genetic alteration necessary to spread via aerosols or droplets. But if a change of that nature ever did happen then that would constitute a Public Health Emergency of International Concern. •

Spaceship of the imagination https://insidestory.org.au/spaceship-of-the-imagination/ Sun, 08 Jun 2014 00:46:00 +0000 http://staging.insidestory.org.au/spaceship-of-the-imagination/

Cosmos: A Spacetime Odyssey is an important chapter in the evolution of how we learn about science, says Martin Bush. But it’s far from being the last word

The American astronomer Carl Sagan’s television series Cosmos: A Personal Voyage is one of the most celebrated works of popular science. Originally screened in 1980 on PBS in the United States and the ABC in Australia, it ranks alongside David Attenborough’s Life on Earth and the more recent Walking with Dinosaurs as an icon of factual broadcasting. Sagan presented a wide-ranging view of astronomy, covering science, history and a cosmic perspective, communicated in a poetic style and with an optimistic humanism that was appealing and memorable for a large audience. It has even been suggested that Cosmos inspired, almost on its own, a decade-long boom in science communication in the 1980s. No wonder, then, that so much attention has been given to the remake of this series as Cosmos: A Spacetime Odyssey, which has been screening on the National Geographic Channel in Australia and is out on DVD in July.

Sagan himself is no longer at the helm of this “spaceship of the imagination,” having died in 1996, but the new series is being steered by his co-writers Ann Druyan and Steven Soter. The presenter is astrophysicist Neil deGrasse Tyson, a protégé of Sagan’s, who is director of the Hayden Planetarium in New York City and one of America’s most prominent science communicators. Druyan, Soter and Tyson worked together for seven years developing the pitch for this reboot, and in 2011, with the assistance of Family Guy creator Seth MacFarlane, they received the go-ahead from the Fox network.

The new Cosmos closely follows the format and philosophy of the original show, which is a testament to the strength of Sagan’s presentation. Both are thirteen-part series with episodes focusing on different aspects of astronomy, from the main features of our own solar system to the characteristics of distant galaxies. Some episodes are almost identical in theme, such as on the life cycles of stars, and both series present science beyond astronomy: in episodes on the development of life on Earth through evolution, for example. Most importantly, both series use the most effective tools of science popularisation. Stories of human endeavour and other emotional triggers, vivid analogies with political developments, and references to the philosophical implications of scientific knowledge are as much a part of the presentation as the finer points of astronomy. In particular, Sagan’s sense of connectedness with the universe – “we are all starstuff” – carries through both series.

This style of science popularisation has a long history. Exactly a hundred years before Sagan appeared on screen the astronomer Richard Proctor – the Sagan of his era – was lecturing to Australian audiences. Like Sagan and Tyson, Proctor was using some of the most advanced and realistic visualisations of his time. And, also like Sagan and Tyson, he was not afraid to engage in philosophical speculations or to couch his material in an emotionally appealing way. Indeed, many of the themes that Proctor developed – such as the possibility of life on other worlds, the potential effects of a comet hitting Earth, or the spiritual implications of the vastness of space – would still be part of the repertoire of astronomy popularisers a century later.

This is not to say that science communication hasn’t changed. In Sagan’s day, exploration by spacecraft had just begun; now, space telescopes routinely study distant galaxies. In Proctor’s time we didn’t even know other galaxies existed. A decade can be a long time in science, and a century an eternity. Yet old discoveries remain fresh for each new generation, as anyone who has looked at the ringed planet Saturn through a backyard telescope can attest. Science popularisations weave together stories of the old and the new: the details build and shift but the conversation is ongoing.

The media landscape has also changed dramatically since 1980. These changes have affected the production of Cosmos, and particularly its techniques of visualisation. While special effects were important in Sagan’s series, it is stating the obvious to say that technical advances since then have been extraordinary. Computer animation now allows sophisticated simulations of astronomical images presented in a photorealist manner, and effects like these are a baseline expectation for younger audiences brought up in a sophisticated visual media environment. It’s hard to imagine a series like Cosmos succeeding today without them.

Although the visualisations in the new Cosmos meet these high expectations – and some, like the cloud-tops of Jupiter, are alone worth watching the series for – the visual sequences are not entirely without flaws. Productions like this work best when form and content come together effortlessly, but unfortunately the directors of this series sometimes use recognisable but inaccurate visual devices to tell a story. This means that the first in line to nitpick about Cosmos were visualisation professionals, which is no surprise given not only that the American style of astronomical visualisation emphasises hyper-realistic data-driven simulations, but also that Tyson himself has been a somewhat pedantic critic of scientific errors in mainstream films. For some viewers, showing the asteroid belt as a crowded mass of rocks (whereas individual asteroids are separated by enormous gaps in reality) is like running fingernails down a blackboard.

These spectacular visualisations take time and money to produce. Even a show as well-resourced as this (the series budget has not been released, but no doubt it is somewhat higher than the original Cosmos’s $6 million) can’t deliver an hour a week of such content. So Cosmos largely consists of Tyson’s pieces to camera – sitting around a campfire or cycling through the Italian countryside – and the animated cartoons through which the histories of science are told. The former scenes work well – Tyson is a good presenter with enough (but not too much!) enthusiasm to draw audiences into his subject – but the latter aren’t as successful.

Visually, the cartoon style forms an odd juxtaposition with the sumptuous astronomical visualisation, but that isn’t the main issue. Although it’s unfair to the animation industry that “cartoon” is often used as a pejorative term, in Cosmos the unfortunate connotation is justified. The history of science presented in Cosmos is in large part shallow. Critics might suggest that this is one thing that hasn’t changed in thirty years; the original Cosmos contained its own dubious histories.

The show’s writers have defended their handling of some of this material, claiming that the script is accurate. Unfortunately, even when that’s the case (and sometimes they have just got it wrong) the show is often misleading. This is a common problem when science communicators reduce complex subjects to relatively few words. The words might be carefully crafted, but in the context of the program itself – which is how most of the audience will actually hear them – they can convey the wrong idea.

The discipline of science is highly attuned to the role of precedent and priority, and scientists are often keenly interested in the history of their subject. But the versions of scientific histories that they present for public consumption can sometimes be oversimplified, recast as instructive parables with success emphasised and failure always framed as a cautionary tale. Science educators are not the only ones who experience this impulse: the demands and conventions of broadcast production work against nuanced histories in any subject area. In order to make things easier for the viewer, stories are streamlined and themes strengthened; minor characters are discarded, and ambiguities stripped out. While these decisions are understandable, it’s reasonable to challenge them. Telling nuanced histories in science documentaries is difficult but worth the effort.

Historians of science may have the most reason to complain about Cosmos, but the loudest debate has involved viewers who are either for or against the series’ treatment of religion. Tyson – like Sagan before him – maintains a principled agnosticism, claiming that we cannot know that there is no supreme being. Both men describe how they have appreciated the wonders of astronomy as a religious experience that spiritual people need not fear. But this series – like the one before it – makes a clear argument that excessive religiosity stifles free enquiry.

Comparing how this element of the two series was received gives a very clear sense of how times have changed. Sagan was known to be a figure of the liberal left, and no doubt he attracted scorn from the American religious right at the time. But this didn’t significantly spill over into the public reception of the series. Thirty years on, the US debates over Intelligent Design, New Atheism and the increased partisan lines marking out evangelical Christianity have considerably increased sensitivity around the discussion of religion in society, and Tyson’s offerings have gone neither unnoticed nor unchallenged by the many defenders of religious values within the American commentariat. Tyson’s frequent use of Biblical phrasing or allusions – evolution is “the greatest story science ever told” and the Law of Relativity is a new “commandment” – and his playing on the scientist-as-god trope (at one point Tyson appears to pull apart and levitate the sedimentary layers of the Grand Canyon) must only add to their motivation. The Cosmos team may have intended such devices to be conciliatory, reflecting that shared sense of spirituality, and that may have been the case thirty years ago. Today they come across as something of a poke in the eye.

It is particularly interesting in this light to consider the decision to pitch the show to Fox, and the support given to the show by the network. While Fox’s overall primetime audience is light years away from the stridently partisan Fox News demographic, nonetheless the show is, as Fox executive Kevin Reilly observed, “on the periphery of our brand.” Its ratings, while good for a science documentary, have slipped disappointingly from the initial level of interest, indicating that the show failed to gain traction with these audiences. (For those of the Fox News audience who did stick around for Cosmos, I’m sure the episode dealing with scientific support for environmental regulation went down a treat.)


In many ways, Cosmos is best-practice science communication, and where it is not, it’s relatively easy to imagine how it could be made better. But to improve the series in this latter regard – to shift the relationship between the show and its audience – we need to reconsider the motivations behind science communication. As soon as a show looks like it is trying to win over opponents, to persuade doubters, to teach the masses, it is in trouble. This may seem paradoxical – surely the whole point of factual programming is to educate? The problem is that, much like the monkey trapped while reaching for a banana, the direct approach is not always the best.

If there is one thing that unites the science communication community it is that they know what they don’t like. And what they don’t like is the “deficit” model, the idea that the lay public lacks knowledge about science and it is the task of science communicators to provide this knowledge. Science communication is generally successful to the extent that it doesn’t remind people of a classroom, and this means that approaches based primarily on isolated facts are unlikely to be successful. Virtually every article about science communication over the past decade has contained the obligatory criticism of this model. So if everyone agrees this model is wrong, why do we need to keep talking about it?

It’s because this model is still the default position for many people – witness how discussions about the public understanding of science tend to be dominated by tests of the public’s recollection of various scientific facts. Communicators can fall into this trap even when they are aware of the risk. Tyson has stated in interviews that he doesn’t want to teach textbook science, because that’s not what people remember. Yet Cosmos has too many “textbook” moments trying to bring its audience up to speed on various technical details. And the more time spent telling us these details, the less time there is for stories about how we know, or why we care – hence the shortcuts made in the segments dealing with the histories of these ideas.

Because this problem is so widespread it may not seem to make much of a difference to the end result. Cosmos is still a great program. But if we want to improve the quality of the entire genre we need to consider the alternative models for understanding science communication. So what are they? Unsurprisingly, the research here is much less clearcut. One alternative model, recognising the continuities of practice between scientists trying to convince each other in journals and presenters trying to convince audiences on television, sees science communication as a dialogue between scientists and communities, a negotiation between interested parties. But while these continuities exist, this picture doesn’t adequately capture experiences like Cosmos. Contemporary practitioners often speak about their role as one of inspiring: they are less concerned with how many answers are recalled in post-evaluation testing than with how many new questions are prompted. Engaging new communities is crucial, and outreach to non-traditional, less engaged audiences is prioritised.

Again, while this is a powerful part of the picture, it is only part. Many of science communication’s strongest interactions are with highly engaged audiences, and it seems perverse to place the least value on the interactions that are among the strongest. I would suggest that science communication is best viewed as a form of cultural support: building new relationships is important but so is strengthening those that already exist. Facts are important, but it is the themes and dreams that are built from them that will be significant for audiences.

There is one final reason why an excessive focus on detailed learning outcomes is detrimental to productions like Cosmos. The media environment has entirely changed as a result of the world wide web. Science communication podcasts and video offerings on YouTube are substantial – for example The Naked Scientists podcasts, Vi Hart’s mathematics videos, the Facebook (and beyond) phenomenon that is I Fucking Love Science and, of course, the ubiquitous TED-Ed talks. The short, focused, on-demand and repeatable nature of these offerings makes them much more suitable for explaining detailed concepts. Large, setpiece television productions like Cosmos need to differentiate themselves, and they can do this most effectively by focusing attention upwards and outwards rather than inwards. Thirty years on, people still remember Sagan describing the thrill of a universe with billions and billions of stars: the techniques needed to measure them didn’t prove quite so memorable.

At this point, television productions still lay down markers in the history of popular culture in a way that the on-demand internet doesn’t, and can thus shape the ongoing cultural dialogue around astronomy, science and society. Cosmos: A Spacetime Odyssey is an important chapter in this dialogue, visually satisfying and ultimately worthwhile but not so well done that it will be the final chapter. It leaves us to anticipate meeting some protégé of Tyson’s as she walks along the Californian coastline in a few decades’ time. •

Unpredictable to whom, and in what way? https://insidestory.org.au/unpredictable-to-whom-and-in-what-way/ Fri, 28 Mar 2014 00:00:00 +0000 http://staging.insidestory.org.au/unpredictable-to-whom-and-in-what-way/

Not only is he an anti-Chomskyan, Philip Lieberman is also an enemy of evolutionary biology and pop neuroscience, writes Ben Eltham

In 1977, an American missionary named Daniel Everett was given a challenge that would upend modern linguistics. He was asked to translate the Bible for a remote Amazonian tribe called the Pirahã. As you might imagine, this wasn’t an easy task. After taking introductory courses in Portuguese, Daniel and his wife Keren moved to Brazil, where they found themselves deep in the jungle with a tribe that had largely resisted outside contact for four centuries. They slowly began to get to know the villagers, eventually becoming the first outsiders to learn their language.

We are often told by linguists and anthropologists that the rapid death of indigenous languages around the world is destroying treasure troves of linguistic and cultural diversity. Nothing proves the point better than the language of the Pirahã.

As Everett explains in a celebrated 2005 paper in Current Anthropology, Pirahã (pronounced pee-da-HAN) is a rather unusual language. There are no tenses: Pirahã speakers live in a constant present. There are no colours; nor are there any numbers. In fact, there seem to be no quantities at all: no counting, no numerals, and no words like “many,” “most,” “every” or “each.” Pirahã has the “simplest pronoun inventory known,” no creation myths, and little collective memory beyond a single generation. Most importantly of all, Everett observes, “it is the only language known without embedding (putting one phrase inside another of the same type or lower level, e.g., noun phrases in noun phrases, sentences in sentences, etc.).”

Everett’s findings were explosive. They appeared to undermine one of the most cherished theories in social science: Noam Chomsky’s theory of a “universal grammar” of language.

Chomsky’s pre-eminence in linguistics hardly needs to be restated. Although he is best known for his spiky political views, his claim to academic fame rests on his theory that certain aspects of language are hard-wired in the human brain, and that, therefore, all languages share some basic structural similarities. Beginning in the 1960s with his famous monograph, Aspects of the Theory of Syntax, Chomsky has expanded and zealously defended his theory of a “deep structure” of language encoded in the brain’s neurophysiology. Cognitive scientists and evolutionary psychologists like Steven Pinker have become adherents of his idea that the brain has a kind of “language organ” that is evolutionarily adapted for the learning of languages.

Everett’s study – indeed, the very existence of the Pirahã language – appears to cast doubt on Chomsky’s universal grammar. Most controversially, the Pirahã seem to lack the linguistic expression for the one core structure that Chomsky has argued every language must possess: recursion.

Recursion is simply the ability to nest one thought inside another thought. “The cat sat on the mat” has a noun (“the cat”), a verb (“sat”) and a prepositional phrase (“on the mat”), and they can be nested inside larger language elements, potentially infinitely. “Pirahã doesn’t have expressions like ‘John’s brother’s house,’” Everett told a 2007 roundtable for the website Edge.org. “You can say ‘John’s house,’ you can say ‘John’s brother,’ but if you want to say ‘John’s brother’s house,’ you have to say ‘John has a brother. This brother has a house.’ They have to say it in separate sentences.”
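A rough way to picture what is at stake, offered here as my own sketch rather than anything from Everett or Chomsky, is to treat a noun phrase as a structure that may contain another noun phrase of its own type. Embedding is exactly that self-containment; the Pirahã strategy Everett describes corresponds to flattening the structure into separate sentences.

```python
# Illustrative sketch of embedding (recursion) in noun phrases; not a linguistic formalism.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NounPhrase:
    noun: str
    possessor: Optional["NounPhrase"] = None  # another NounPhrase nested inside this one

    def render(self) -> str:
        if self.possessor is None:
            return self.noun
        return f"{self.possessor.render()}'s {self.noun}"

# Embedded form: a phrase inside a phrase inside a phrase, with no limit in principle
house = NounPhrase("house", NounPhrase("brother", NounPhrase("John")))
print(house.render())  # John's brother's house

# The non-embedding alternative Everett describes: separate, flat sentences
print("John has a brother. This brother has a house.")
```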

The unsettling implication is not just that Chomsky might be wrong. It also suggests, as early linguists like Edward Sapir and Benjamin Whorf first proposed, that someone’s cognitive reality might be determined, not by some common human genetic inheritance, but by the very words they use: by the specific and contingent cultural reality they inhabit.

“What if a language didn’t show recursion?” Everett continues:

What would be the significance of that? First of all, it would mean that the language is not infinite – it would be a finite language, there could only be [a] limited number of sentences in that language. It would also mean that you could specify the upper size of a particular sentence in that language. That sounds bizarre, until we think of something like chess, which has also got a finite number of moves, but chess is an enormously productive game, it can be played and has been played for centuries, and many of these moves are novel, and the fact that it’s finite really doesn’t tell us much about its richness, or its importance.

IN MANY ways, Philip Lieberman mounts a similar argument to Everett’s. Indeed, he ends The Unpredictable Species with a careful discussion of the Pirahã controversy in which he firmly sides with Everett and his supporters.

Lieberman, too, is a savage anti-Chomskyan. He is also an enemy of the evolutionary biology and pop neuroscience that litters the remainder tables at the front of airport bookstores. Instead, Lieberman is that strangest of modern creatures, the pro-science cultural determinist. “Virtually everything that characterises the way we live is the product of human culture,” he argues, “transmitted from one generation to the next through the medium of language.” Far from a “deep structure” inside the brain determining our language instinct, Lieberman thinks that “the form of one’s language is culturally transmitted… And, in turn, culture shapes the human ecosystem and can lead to genetic evolution.”

The key neural structures that make us human, Lieberman writes, are neither our linguistic abilities nor the genes we’ve inherited from primate ancestors. Rather, he argues, the true seat of humanity may be in internal anatomical structures of the brain, such as the basal ganglia and the prefrontal cortex, which “constitute the brain’s sequencing and switching engine.” Exactly what genes might encode for such structures is still unclear, “but it is clearly the case that the genes that make us human are ones that contribute to cognitive flexibility and creativity, not genes that rigidly channel our thoughts and behaviour.”

This, then, is Lieberman’s thesis. But it would be a mistake to think that expounding it constitutes the main thrust of his book. A better description of the bulk of the content of The Unpredictable Species is score-settling.

Like a lot of working professors, Lieberman has intellectual enemies. Unlike most, however, he seems only too happy to let them distract him from his scientific endeavours. Lieberman blazes away at these apostates, employing whatever evidence comes to hand. Examples from all sorts of disciplines and intellectual traditions are brought to bear, sometimes bewilderingly, in an attempt to ambush his opponents. The result sometimes reads like an extended polemic rather than a carefully structured argument. Where a lesser author might be content to weave a narrative or buttress a thesis, Lieberman jumps from critique to critique, only occasionally alighting on a through-line.

Strangely missing from the book is any careful attempt at a definition of unpredictability. Indeed, as other reviewers have noted, the term is often used interchangeably with the word “creativity,” usually in an artistic or cultural sense, but with little precision.

That’s a worry, because unpredictability is hardly an easy concept to grasp. Is it risk? Is it uncertainty? Is it the problem of induction? Lieberman prefers questions stemming from neuropsychology. Thus, in the book’s index, we see a large number of entries for the term “prefrontal cortex” but none for “probability,” and when Lieberman discusses “unpredictability” directly, what he really seems to mean is “cultural diversity.” That’s certainly a valid way of discussing humanity’s manifold cultures, behaviours and languages, but it is also a rather fuzzy and indistinct characterisation.

Lieberman is at his best when demolishing the many imprecisions of pop neuroscience. In one of the most refreshing passages, he carefully explains just what the famous magnetic resonance imaging machine can and can’t see, convincingly demonstrating why the popular understanding of MRI scans is little better than a modern version of phrenology. Similarly, by carefully following inconsistencies in the experiments of disgraced Harvard psychologist Marc Hauser, he shows just how far-fetched and reductionist the farther reaches of evolutionary psychology have become.

On the other hand, Lieberman’s trenchant critiques often elide the controversies they address, rather than tackling them in a spirit of intellectual generosity. The Everett–Chomsky debate, for instance, is by no means settled – indeed, how could it be? Finding out that an isolated tribe appears to lack a certain key aspect of universal grammar doesn’t necessarily invalidate Chomsky’s thesis. (Everett himself admits that the Pirahã understand the concept of recursion, while lacking a grammar to express it.) Similarly, while Lieberman is certainly correct to draw our attention to the over-ambitious claims of evolutionary psychologists, it is also true that many of his own theories are susceptible to similar critiques. MRIs do have their flaws; the brain is not just a Rube Goldberg machine of different neural contraptions… but nor does that make the basal ganglia and prefrontal cortex the seat of all human “unpredictability.”

Lieberman’s thesis has a bigger problem: the problem of consciousness. Linguists and psychologists don’t necessarily need a theory of mind any more than the Pirahã do. But much of what Lieberman claims as unique for humanity seems to reduce to similes for consciousness itself. “Cognitive flexibility – the engine of human creativity that allows us to shape our destiny”: this doesn’t sound like the sort of thing that could be found in a Chinese room. But Lieberman never unpacks this idea far enough to allow us to interrogate it. Unpredictable to whom, and in what way? We never quite find out. •

Injecting a dose of science https://insidestory.org.au/injecting-a-dose-of-science/ Thu, 05 Sep 2013 23:53:00 +0000

Opposition to vaccination has something in common with well-funded anti-science lobbies, writes John Quiggin, but its roots in genuine confusion mean it should be handled differently


ONE sidelight of the election campaign that deserves more attention is Labor’s announcement that it would deny Family Tax Benefit A end-of-year supplements to parents who fail to immunise their children, a policy that received qualified support from Tony Abbott. The announcement allows for exemptions on religious and medical grounds; it is not clear whether the latter category allows for sincere, but mistaken, beliefs about the supposed dangers of vaccines.

Before considering the details of the policy, it is worth looking at the sources of resistance to vaccination and the more general problem of combating misinformation about scientific issues of this kind. Anti-vaccination beliefs are particularly interesting because, while they have a lot in common with other anti-science beliefs (creationism, climate science denial, denial of the health effects of active and passive smoking), there are equally important differences.

To start with the similarities, the anti-vaccination campaign begins with genuine concerns about the safety of vaccines and possible links to conditions such as autism. These concerns have been investigated and found to be baseless. Those unwilling to accept the findings of science use a mixture of pseudo-scientific claims, selective quotation of genuine scientific research and conspiracy-theoretic reasoning to render themselves immune to contrary evidence. The favoured target for the conspiracy theorists in this case is “Big Pharma,” even though, in reality, vaccines are among the least profitable medicines and, by their success, wipe out markets for drugs to treat the problems they prevent. In all of this, anti-vaxerism is exactly like climate science denialism or tobacco industry propaganda.

On the other hand, whereas most forms of anti-science are associated with the political right (the big exception being spurious claims about the health risks of genetically modified foods), anti-vaccination beliefs do not have a clear-cut ideological or cultural-tribal association. In the United States, political figures as diverse as Michele Bachmann (the far-right candidate for the Republican presidential nomination in 2012) and Robert F. Kennedy Jr (a Democratic activist more notable for sharing his father’s name than for any concrete achievements) have pushed the anti-vaccination line. Mostly, however, anti-vaccine advocates have no particular political association. Anti-vaccination sentiment is found among libertarians, Christian fundamentalists, New Age believers and supporters of alternative medicine.

In Australia, the main promoter of anti-scientific misinformation is the Australian Vaccination Network. It shares with other anti-science groups a penchant for misleading names (despite its apparently neutral title, it presents only anti-vax material) but is otherwise dissimilar. AVN relies on support from fewer than 2000 financial members, and has been led, for most of the past twenty years, by a single, apparently unpaid individual, Meryl Dorey (who has been replaced this year by Greg Beattie).

By contrast, Australia’s leading anti-science organisation, the Institute for Public Affairs, has a large paid staff, and is supported by deep-pocketed donors such as Gina Rinehart and British American Tobacco. The IPA is part of a right-wing anti-science industry whose typical members have moved from denial of the health risks of smoking to dismissal of ozone depletion to conspiracy theories about climate science. The industry operates across the English-speaking world, and encompasses think tanks, “astroturf” organisations and bogus experts, ranging from complete charlatans to paid hacks. It has the active support of media organisations such as the Murdoch empire, and of political organisations such as the Republican party and (despite attempts to pretend otherwise) the Coalition parties in Australia.

There’s nothing comparable in the anti-vax campaign. A few charlatans, most notoriously a former doctor, Andrew Wakefield, have profited handsomely from exploiting anti-vaccination fears. Wakefield was struck off for falsifying research and for conflicts of interest associated with a new medical test he was trying to promote. But he is an isolated individual, not to be compared with the legion of hacks working for the tobacco and climate denial industries.

Despite its limited resources, the anti-vaccination movement has scored significant successes, if contributing to disease and death can be described as a success. Following Wakefield’s spurious claims, vaccination rates fell sharply in parts of Britain, and epidemics of measles and other preventable illnesses duly followed. In Australia, whooping cough has made a comeback, with tragic consequences.

The impact of the anti-vaccination campaign reflects the intuitive appeal of much anti-science reasoning. Injections and needles raise natural fears, as well as social taboos about bodily purity. The idea that injecting children with viruses is good for them seems profoundly counterintuitive (though, oddly enough, the crank system of homeopathic medicine is based on a superficially similar idea).

Given the inchoate nature of anti-vaccine sentiment, and the urgency of reversing recent declines in vaccination rates, what is to be done? First, it’s important to remember that the great majority of parents who may fail to vaccinate their children are genuinely trying to do the best they can to give those children a healthy life. They are unlikely to respond well to a head-on attack on their beliefs or to a dismissal of their concerns as nonsensical.

The easiest groups to deal with are those who are ambivalent about vaccination and take the path of least resistance by doing nothing. For this group, financial incentives like the proposed withholding of the Family Tax Benefit will be enough to change the default choice.

For parents who are genuinely convinced that vaccination represents a serious health risk, the medical exemption should be available. For this group, simply dismissing concerns about vaccination is probably not the most effective approach. More effective would be to use the evidence to dispel the idea that the “diseases of childhood” (measles, mumps, rubella, chicken pox, whooping cough) are no big deal. All of these diseases can have devastating effects, with a frequency far greater than any adverse reactions to vaccination. This fact needs to be publicised as widely as possible, particularly when there are breakouts of disease.

Finally, there are the hardcore advocates of anti-vax conspiracy theories. Members of this group can’t be convinced, but they can and should be discredited. It’s important to avoid giving them any platform that implies they hold a legitimate viewpoint. In particular, the spurious notion of “balance” in media coverage must be rejected in favour of a commitment to truth. The evidence in favour of vaccination is overwhelming, and should inform any media reporting about anti-vaccination organisations and campaigns.

Some but not all of this reasoning applies in relation to other anti-science activities. As far as possible it is necessary to present genuine scientific information in a non-threatening way to those who are genuinely confused, while giving no quarter to those paid to lie on behalf of the tobacco industry or in pursuit of tribalist vendettas against the environment movement. The difficulty is that the proportion of tobacco and climate “sceptics” who are honestly seeking the truth is far smaller, while the proportion of paid hacks and culture warriors is far higher. Members of the latter group will seize on any concession made to encourage genuine understanding, and will use it as evidence of weakness in the scientific position.

Despite all this, there is hope. The vast resources of the tobacco industry and the wishful thinking prompted by addiction to nicotine have, in the end, been insufficient to suppress the truth about smoking. The case for vaccination seems, at last, to be prevailing. The same will be true, sooner or later, for climate science. Hopefully it will not be too late. •

Eye on the sky https://insidestory.org.au/eye-on-the-sky/ Tue, 30 Apr 2013 03:26:00 +0000

Amateur astronomers are making a unique contribution to science’s understanding of the universe, reports Marilyn Moore


CHRIS MORLEY’s Sandy Ridge Observatory sits astride a prominent ridge in Victoria’s Strzelecki Ranges. Morley belongs to the Latrobe Valley Astronomical Society, a small group of enthusiasts based some 150 kilometres east of Melbourne, and he enjoys hosting occasional viewing nights. Only a short walk from his house, the observatory is home to a Meade 14-inch Schmidt Cassegrain telescope as well as smaller refractor and finder scopes. These are not huge telescopes – deep space telescopes are measured in metres rather than inches or millimetres – but they are far bigger than those that most people would typically look through.

It’s 9.30 pm and moonrise is still several hours away. We cross the lawn in complete darkness, assisting the slow process of allowing our eyes to adjust. An engineer by day and stargazer by night, Morley built this observatory himself. The building, set on a heavily reinforced concrete foundation, is cylindrical, 3.6 metres in diameter and two storeys high to the base of the dome, which rotates on a pair of custom-made steel rings fitted with heavy-duty rollers. A sliding roof panel opens the dome to the sky.

Morley unlocks the observatory and we step inside. An almost impenetrable darkness smothers a brief impression of computers, bookcase, desk and whiteboard. Barely illuminated, an elegant timber staircase spirals upward. We tread cautiously to the upper level, where the dome has already opened to reveal a narrow slice of star-studded sky. I begin to make out the main telescope, locked into position on its heavy computer-controlled mount, with piggybacked refractor and finder scopes. The setup fills centre-stage, leaving only a narrow ring of standing room for onlookers. Morley picks up a remote control device and enters a code that identifies our first target object. “Watch out!” he calls. From out of the gloom, like the massive arm of a giant deep-sea creature, the forty-six-kilogram counterweight swings past my ear as the telescope rotates smoothly into position.

Now I, too, am about to witness the magic that has transfixed stargazers throughout history as Chris Morley takes me on a beginner’s tour of the night sky. I put my eye to the eyepiece and am almost immediately mesmerised. Our first object is Pleiades, better known as the Seven Sisters. It’s an “open star” cluster, one of the closest to Earth, consisting of an exquisite “little dipper” among an array of other stars. Most people see at least six of Pleiades’s stars with the naked eye, but up to one hundred are visible through the refractor scope tonight. It’s as stunning as a giant explosion of fireworks during that brief eternity when they seem to hang in the sky. This cluster of predominantly hot blue stars is relatively young and lies about 400 light years from Earth, so we are looking at it as it was 400 years ago – while Shakespeare was writing his plays and Galileo convincing himself of the merit of Copernicus’s heliocentric theories.

Reluctantly we leave Pleiades to travel to the great star Aldebaran. With a luminosity 425 times that of the Sun and a diameter forty-four times as great, it’s classified as a Red Giant and is the fourteenth-brightest star in the sky. It sits in the constellation Taurus, easily visible between Pleiades and the nearby constellation Orion (“The Saucepan”). The name is Arabic: Al Debaran, the follower. Pleiades’s little friend. It’s humbling to realise that the ancients knew these stars so much better than I do.

The big counterweight swings again as the telescope repositions its focus on Jupiter low in the northern sky. We can easily pick out all four Galilean moons and the latitudinal banding on the giant gas planet. I am impressed: for the first time in my life, I am beginning to see the night sky in three dimensions. It’s no exaggeration to say that this is a life-changing experience.

Chris Morley’s observatory in Victoria’s Strzelecki Ranges.
Photo: Chris Morley

It changed Galileo’s life too. His discovery in 1610 that Jupiter had orbiting moons helped confirm his belief that the planets in our solar system orbit the Sun. By 1671, a Danish astronomer by the name of Ole Rømer was timing Io’s eclipses using a primitive telescope. Io disappeared into Jupiter’s shadow every 42.5 hours, give or take a few minutes, and Rømer was trying to pin down this behaviour. You might be excused for thinking that every detail about the orbits of these four moons would be well-known by now. Not so. Their orbits are still being fine-tuned, and over the past two decades, two amateur astronomers from Gippsland have been instrumental in this work.


PETER NELSON’s Ellinbank Observatory has a particularly impressive dome: it’s fully electronic and rotates automatically as the telescope tracks across the night sky. Nelson himself is impressive, too: an engaging man with a quiet twinkle, he’s a physicist-turned-radiographer by day. He’s also an excellent communicator, the sort of person people listen to. It’s no surprise to find that he was the early driving force behind the contributions made to world astronomy by members of the Latrobe Valley Astronomical Society. A member himself since 1987, Nelson is also director of the Variable Star Section of the Astronomical Society of Victoria. It was at the 1987 Victorian Astronomy Convention that Nelson first met Chris Stockdale, a lifelong enthusiast who’d built his own observatory and joined the Latrobe Valley group in 1972. Stockdale, whose background was in IT and electronics, proved an eager colleague for Nelson, adept at building and adapting software and gadgetry to make observing simpler, faster and more accurate.

At first glance, Stockdale’s Latrobe Valley observatory appears more primitive – you have to stoop quite low to get inside and the computers are far from new – but it’s a serious workplace. Noticeably, both Stockdale’s 11-inch Schmidt Cassegrain and Nelson’s 12.5-inch Corrected Dall Kirkham telescopes have no eyepieces fitted. They are set up for scientific work and have cylindrical CCD detectors (effectively digital cameras) mounted behind the lenses and connected to various computers. Nelson and Stockdale are both internationally recognised experts in the art of high-precision photometry.

Because the tracking capability of the telescope allows long exposures to be made, photometry is especially useful when the target object is only faintly detectable. The fainter the object, the larger the numerical value of its brightness on the magnitude scale. The naked eye detects objects down to about magnitude 6, or magnitude 9 with binoculars. Nelson’s and Stockdale’s CCD/telescope combinations usually detect to magnitude 17 or 18, especially when images are combined to improve signal-to-noise ratios. “A lot of useful science can be done with small telescopes,” says Nelson. “Off the shelf stuff can reach magnitudes that Palomar” – Caltech’s much-lauded 200-inch telescope, still used by researchers today – “did in the fifties.”
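
For anyone wondering how those numbers relate, the magnitude scale is logarithmic and runs “backwards”: a difference of five magnitudes corresponds to a factor of one hundred in brightness, so a magnitude 17 target is roughly 25,000 times fainter than the magnitude 6 naked-eye limit. The short sketch below (Python; the figures are invented for illustration and are not drawn from Nelson’s or Stockdale’s equipment) also shows the rule of thumb behind image stacking: combining N comparable exposures improves the signal-to-noise ratio by roughly the square root of N.

    import math

    def flux_ratio(mag_faint, mag_bright):
        # Pogson's relation: 5 magnitudes = a factor of 100 in brightness.
        return 10 ** (0.4 * (mag_faint - mag_bright))

    def stacked_snr(single_frame_snr, n_frames):
        # Stacking N exposures with random, uncorrelated noise improves the
        # signal-to-noise ratio by about sqrt(N).
        return single_frame_snr * math.sqrt(n_frames)

    print(round(flux_ratio(17, 6)))   # ~25,000 times fainter than the naked-eye limit
    print(stacked_snr(5.0, 16))       # 16 stacked frames lift an SNR of 5 to about 20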

To investigate changes in the brightness of an object – a star, planet or moon – telescopes set up for photometry take repeated snapshots over a period of time. The data is corrected for background noise, colour bias and uneven illumination, and then graphed, resulting in a curve that describes the variation in brightness over time for the event being observed. This could be an eclipse or occultation (when an apparently larger body passes in front of, and blocks light from, an apparently smaller body) or any other event that causes the amount of light reaching Earth from a particular source to change. Eclipses of moons by their planets provide important data for deducing precise orbits, essential for navigating interplanetary spacecraft, among other things.
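
In software terms, that workflow is a loop over calibrated snapshots. The sketch below is only a minimal illustration (Python with NumPy; the frame values and the star_mask aperture are invented, not taken from Nelson’s or Stockdale’s pipelines): each raw frame has a dark frame subtracted to remove background signal, is divided by a normalised flat field to remove uneven illumination, and the counts inside the target star’s aperture are then summed to give one point on the light curve.

    import numpy as np

    def calibrate(raw, dark, flat):
        # Subtract the dark frame (background/thermal signal), then divide by a
        # normalised flat field (uneven illumination, pixel-to-pixel sensitivity).
        return (raw - dark) / (flat / flat.mean())

    def light_curve(times, raw_frames, dark, flat, star_mask):
        # One brightness measurement per snapshot: total counts inside the
        # aperture (star_mask is a boolean array marking the star's pixels).
        flux = [float(calibrate(frame, dark, flat)[star_mask].sum()) for frame in raw_frames]
        return list(zip(times, flux))

    # Toy demo with made-up 4x4 frames; the third frame catches a brightening.
    dark = np.full((4, 4), 100.0)
    flat = np.ones((4, 4))
    frames = [dark + 50.0, dark + 55.0, dark + 300.0]
    star_mask = np.zeros((4, 4), dtype=bool)
    star_mask[1:3, 1:3] = True
    print(light_curve([0.0, 1.0, 2.0], frames, dark, flat, star_mask))
    # [(0.0, 200.0), (1.0, 220.0), (2.0, 1200.0)]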

Peter Nelson started working on the Jupiter project with US mathematician and software engineer Tony Mallama and a couple of others. Between 1990 and 2000 the group, which Stockdale joined around 1996, made over 200 measurements of the occultations of Jupiter’s moons. The results were impressive. “The exact moment that the moon exits Jupiter’s shadow is taken to be the halfway point between it being in full shadow and full light,” says Stockdale, “and every pixel of each CCD image is analysed to determine those points as accurately as possible. By taking many measurements over a number of years, a precise story gradually emerges.” Countless observations of these eclipses had, of course, been previously made, but the use of CCDs and Nelson’s and Stockdale’s meticulous observations served to significantly refine existing models. “We found that Io was in a different place in its orbit by sixty-two kilometres,” says Stockdale. “Europa was out by 267 kilometres, Ganymede by 142 kilometres and Callisto by 146 kilometres. That’s quite a lot, really.”
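
Stockdale’s halfway-point rule is easy to state numerically: find the brightness level midway between the “in shadow” and “in full light” plateaus of the light curve, then interpolate between the two samples that bracket it. A minimal sketch follows (Python; the timings and counts are made up for illustration, not real Io data).

    def half_flux_time(times, fluxes):
        # Event time = where the light curve crosses halfway between its
        # minimum (moon in shadow) and maximum (moon in full light) levels.
        half = (min(fluxes) + max(fluxes)) / 2.0
        samples = list(zip(times, fluxes))
        for (t0, f0), (t1, f1) in zip(samples, samples[1:]):
            if f0 <= half <= f1:
                # Linear interpolation between the bracketing samples gives a
                # timing finer than the gap between exposures.
                return t0 + (t1 - t0) * (half - f0) / (f1 - f0)
        return None

    # Made-up light curve of a moon emerging from eclipse (seconds, counts).
    times = [0, 10, 20, 30, 40, 50]
    fluxes = [100, 105, 400, 820, 980, 1000]
    print(round(half_flux_time(times, fluxes), 1))   # 23.6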

According to Morley, it’s a discovery that “would have been of particular interest to an organisation like NASA.” The more precisely the locations of Jupiter’s moons can be predicted, the less likely it is that interplanetary spacecraft will interact with them in an unexpected manner. Close-range work like remote sensing and gravity-assisted spacecraft navigation is critically dependent on knowing exactly where the moons will be at any given moment. “The Galileo space probe used our data,” recalls Nelson with satisfaction. The probe, launched on 18 October 1989, monitored the Shoemaker-Levy 9 comet’s collision with Jupiter in 1994, observed Io’s volcanism, and discovered not only liquid oceans under the icy surfaces of Europa, Ganymede and Callisto but also Ganymede’s magnetic field. These discoveries all owe something to Nelson’s painstaking early work.

Stockdale quickly got used to setting his alarm for the wee hours and was soon monitoring every eclipse by a Jovian moon that was visible from his observatory. By 2010, Tony Mallama at the University of Baltimore had analysed 548 data points spanning two decades and produced a precise data set with an almost continuous distribution. “The new data set allowed measurement of regular oscillations in the orbits of the Galilean moons,” explains Stockdale, “something which could never have been achieved before.” As with the earlier work, a number of anomalies showed up between the predicted and observed locations of the moons – anomalies so fine-scale they’d previously been unrecognised.

Such far-reaching results seem a far cry from the discoveries arising from Rømer’s first occultation measurements in 1671, but the basis for well-supported research has never changed – it has always required dedicated, accurate and reliable observation. When Stockdale and Nelson independently record the same event, the tight coincidence of their data presents an unassailable story. “We’re usually the same to within 0.3 seconds or better,” says Stockdale. “Most curves are virtually on top of one another.” Even using only a “primitive” time standard – the two observatories are about 45 kilometres apart as the crow flies – agreement to within a fraction of a second is pretty amazing given the relatively imprecise definition of some of the events they monitor.

An international focal point for variable star research is Arne Henden, director of the American Association of Variable Star Observers, or AAVSO. Henden, a professional astronomer and software/instrument specialist, has done much to upgrade the calibre of (and respect for) amateur observations. “Previously things were a bit patchy,” says Nelson, who has been on one of Henden’s courses in Boston and has now “officially” notched up over 100,000 observations. “You had to establish credibility over time.” Despite their modesty, observations conducted by Nelson and Stockdale are clearly up there with the best. Recognition brings rewards in the shape of exciting collaborations – Nelson babysits an AAVSO research telescope in a second observatory on his property – and a significant proportion of their work focuses on requests from professional organisations. “There’s a project happening with Hubble and the Cosmic Origins Spectrograph,” Nelson later reveals. “They’re studying a white dwarf at the hub of a binary pair, and they need to do some far-UV spectrometry. They can only look at this variable star when it’s dull, or the satellite’s delicate spectrograph sensors would be irretrievably damaged.” The project relies on a small group of amateurs, including Nelson, to provide this vital information. There’s no shortage of motivation to keep up the good work.

Are there other amateur observers doing this sort of work? Not too many, it seems. Nelson mentions Tom Richards at the Woodridge Observatory near Eltham in Victoria, whom both he and Stockdale have collaborated with from time to time. “He does his own thing (with variable stars) and he does it very well.” Richards, a former professor of computer science, is the current director of the Variable Star Section of the Royal Astronomical Society of New Zealand, or RASNZ (the section is now called Variable Stars South), and, like Stockdale and Nelson, is also a regular contributor to New York’s Centre for Backyard Astrophysics, a global network of small telescopes dedicated to photometry of cataclysmic variables. There are, of course, quite a few private observatories run by amateurs not known to the ASV, as well as a large number of telescopes not necessarily being used in an observatory. According to the Australian Astronomical Society, there are ten registered private observatories in Victoria, forty-two in Australia. The list doesn’t include Nelson’s, Richards’s or Morley’s, although all would meet the strict registration conditions.

Registered or not, individual backyard astronomers who contribute significantly to the professional research effort generally become well-known within the fraternity. And it’s not necessary to own a lot of expensive gear. Many important discoveries have been made (or contributed to) by amateurs using relatively basic equipment. “Amateur astronomers are usually best placed to make discoveries that require long-term detailed observations,” confirms Chris Morley. “They have relatively unlimited viewing opportunities. Professional astronomers usually have limited access to large telescopes – there is a lot of competition.” Nelson agrees. “As an amateur, you can contribute significantly,” he says. As a professional, he adds, “you might book three nights on Mauna Kea two years ahead. When you get there your target might not perform, or it might be cloudy, or you might have equipment problems … As equipment improves, astronomers look further and further into space for fainter and fainter objects. However there is still much to be learned from studying brighter objects closer to home. We shouldn’t forget that.”


DEEP in Gippsland’s Strzelecki Ranges, another dedicated enthusiast shows just how much can be achieved with a minimum of equipment. Rod Stubbings, a plumber by trade and self-confessed “variable star” junkie, has made more than 214,000 observations of the changing brightness of variable stars over the past nineteen years. This feat alone elevates him to the ranks of a small handful of observers worldwide. Along the way his humble observations have led to a string of significant discoveries.

Stubbings, who looks pretty good for a bloke who didn’t get to bed until 4 am, tells me his life changed dramatically after he attended his first meeting of the Latrobe Valley Astronomical Society in 1992 or 1993, where he was introduced to Peter Nelson. Nelson was clearly angling to recruit new blood, and Stubbings, on his own admission, took the bait. He realised that he could make a real contribution to astronomy if he were to accurately record changes in brightness of variable stars and send them on to the proper organisations.

The Royal Astronomical Society of New Zealand had become the recognised centre for variable star research in the southern hemisphere, so Stubbings contacted its then director, Sir Frank Bateson. Armed with the society’s beginner’s manual, he made his first observation in May 1993. After a month he’d made ten. He took to it like a duck to water, and Nelson kept him supplied with charts of additional variable stars to monitor. By 1997 he realised he was observing more outbursts than the “official” ones appearing on the network, so he decided to submit his as well. “There was no point in keeping my outbursts in my logbook,” he says logically. About two months after he had begun to send off his data, he received the following email from Bateson: “You will be pleased to know that your alert notices are being well regarded world wide. Keep up the good work.” The former director of the AAVSO, Janet Mattei, also encouraged him, requesting that the association be copied into all his alerts. Stubbings suddenly found himself centre-stage in a global network.

Rod Stubbings’s biggest asset is his extraordinary memory – the sort of memory that leaves mere mortals dumbfounded. Nelson confirms that Stubbings has “rock-star” status in astronomy circles, but you’d never realise this in meeting him; he’s just an easy-to-talk-to chap who’s as proud of his family, the new paint job on the house and his pedigree black-faced Dorper sheep as he is of his astronomical achievements.

What is it, exactly, that he does so well? Stubbings has memorised some 500 different groupings of stars that he observes during the course of a year. Not only has he memorised their locations, but also their magnitudes. If one star is even 0.1 of a magnitude brighter than before, he notices. Experienced observers can estimate the brightness of a variable star by comparing it with nearby reference stars of known (and constant) magnitude. He insists that it’s not difficult. “I ‘star hop’ with the finder first,” he explains, pointing to the refractor that’s piggybacked onto his Meade DS-16 400 mm Newtonian reflector, “and I see angles and patterns that lead directly to my various star fields. That’s all done from memory. Then I transfer to the big telescope.” The locations and magnitudes of countless guide stars, along with hundreds of variable stars and their associated reference stars, are all in his head, although he admits he does occasionally double-check with a set of star charts before sending out an important alert. No computers, no complicated analysis or calibration, no worries about temperature fluctuations and shifting focus or tracking… just an astoundingly accurate memory.
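
The comparison-star technique Stubbings describes amounts to a simple interpolation that he performs in his head: judge how far the variable’s brightness sits between a brighter and a fainter reference star of known magnitude. A toy version follows (Python; the magnitudes and the two-thirds judgement are invented, not from Stubbings’s charts).

    def estimate_magnitude(mag_bright, mag_faint, fraction_toward_faint):
        # The observer judges, by eye, where the variable sits between two
        # comparison stars: 0.0 means as bright as the brighter star, 1.0 means
        # as faint as the fainter one. Larger magnitude numbers mean fainter stars.
        return mag_bright + fraction_toward_faint * (mag_faint - mag_bright)

    # e.g. the variable looks about two-thirds of the way from a magnitude 8.2
    # comparison star towards a magnitude 8.8 one:
    print(round(estimate_magnitude(8.2, 8.8, 2 / 3), 1))   # 8.6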

Stubbings studies about 150 stars on a single night. “I don’t follow the guidelines for observing,” he admits. It seems that once observers understand a star’s activity pattern, they usually concentrate on observing it only when it’s expected to flare. “I’d rather look at all of mine every couple of days, even several times per night,” says Stubbings. “That’s how I pick things up.” It’s a system that’s certainly worked well for him. Stubbings is responsible for a large number of first-ever visual outburst detections, along with many others that have revealed the true nature of variable stars and led to their reclassification. A list of his publications, awards and most notable observations was recently compiled for a forthcoming AAVSO newsletter. It’s a huge body of work, a significant achievement by any standard. And he’s still a youngish man.

Over time, Stubbings’s attention has progressively shifted from regularly variable stars like Eta Carinae to “cataclysmic” variable stars, whose shifts in brightness were unpredictable but often explosive and nova-like. He likes to recall one of his first such observations, on the eclipsing-type dwarf nova OY Carinae, which occurred around 2 am. “Do I ring somebody at this hour?” he wondered. Would Frank Bateson be awake? Stubbings made the call. Bateson answered immediately – typically, as Stubbings was later to learn. “He was very appreciative,” recalls Stubbings. “The alert triggered [all the necessary] satellite observations.”

Development of the internet during the 1990s streamlined the notification process considerably. The Variable Star Network was set up to create a global professional-amateur network of researchers observing not only cataclysmic variables but also things like black-hole binaries, supernovae and gamma-ray bursts. Stubbings, now recording more than 1400 observations (including thirty to sixty dwarf nova outbursts) each month, emails notifications of outbursts to the American Association of Variable Star Observers, the Variable Star Network and the Royal Astronomical Society of New Zealand each night. Photometrists Nelson and Stockdale would also be alerted by this process and leap into action.

“Last year T Pyxidis went off,” says Nelson. “It’s a recurrent nova that goes off every thirty years or so. And that’s the beauty of the network: Rod alerted us and within half an hour we were all onto it.” Nelson recorded some 10,000 observations of this event, including three hours of data taken over the time that the star brightened. The professionals need as much data as possible, says Nelson, covering “the brightening, the maximum, the decay so they can look at pulsations and orbital humps.” As with most of his work, Nelson’s detailed observations were conveyed to AAVSO as well as to the Centre for Backyard Astrophysics in New York and Taichi Kato in Japan.


ON STUBBINGS’s long list of “firsts,” one particular date stands out in his memory: 15 September 1999. An abnormally bright flare-up of variable star V4641 Sgr (in the constellation Sagittarius) caused him to send out an alert that attracted immediate attention around the world. Information consequently collected from large radio telescopes revealed that V4641 Sgr is a black-hole binary system, a fact previously unrecognised, and at 1600 light years away, one of the nearest black holes to Earth. “This emphasises the scientific value of visual observations in variable star astronomy,” Stubbings firmly points out. He is adamant that amateurs are best-placed to make them. “Professionals have to schedule observing time at large observatories a year or so in advance. These (important) outbursts would go unobserved.”

Stubbings received the prestigious AAVSO Director’s Award in 2002. On 24 January 2012 he clocked up his 200,000th observation. As well as conducting his own observations, he regularly receives requests from astronomers and professional organisations around the globe to monitor specific stars for them, usually to assist with scientific research programs. Perhaps the most poignant request came from New Zealand astronomy legend Albert Jones, now aged ninety-two. Realising that he didn’t have a lot of observing time left, Jones wrote to Stubbings a couple of years ago and asked him to take over the regular monitoring of a number of “his” variable stars. “He was worried that nobody would keep a proper eye on them,” says Stubbings.

Stubbings has been compared with another amateur astronomer noted for his exceptional memory, Reverend Robert Evans of Hazelbrook in the Blue Mountains of New South Wales. Evans, who has received numerous prestigious international awards and been written about by Bill Bryson (in A Short History of Nearly Everything) and Oliver Sacks (in An Anthropologist on Mars), regularly scans more than 1500 galaxies by eye through his portable twelve-inch telescope, and can immediately tell if a new brighter star appears. Since 1980 he has single-handedly discovered more than forty supernovae (giant stars that collapse and brilliantly explode as they die, visible as such for only a few days or weeks); prior to his discoveries, fewer than sixty supernovae had been discovered worldwide. When interviewed, Evans inadvertently (but beautifully) summed up the improbability of one man’s ever achieving such a feat: “There’s something satisfying, I think,” he told Bill Bryson, “about the idea of light travelling for millions of years through space and just at the right moment as it reaches Earth someone looks at the right bit of sky and sees it.” I’m sure Stubbings would feel this way too.

Evans’s world record is likely to remain unchallenged as traditional techniques are superseded by technology. Stubbings is concerned that the move towards automation and digital processing might affect the usefulness of his own work. “Dunno if you’ll get observers like me any more,” he says pensively. “We live in a gadget world now.” Will astronomers stop listening to his alerts? It doesn’t seem likely, not in the short term, anyway. Evans and Stubbings are both delighted by the occasional failures of modern technology. “Automatic detectors can’t always distinguish binary stars,” says Stubbings. “And occasionally they are simply wrong! I believe you cannot undervalue the contribution of visual observing compared to CCD imaging. A CCD camera cannot make instant judgments on a star’s state, or call upon experience the way a visual observer can when directly viewing the stars.” A massive new CCD array being built in South America “will minimise bleeps like the ones Rod mentioned,” says Peter Nelson, “but people like Rod will always be needed. The new array will be programmed to immediately latch onto important alerts. There are lots of smarts happening.”

Of course Nelson, Stockdale and Stubbings are not the only Australian amateurs to have made a significant contribution to global astronomy. There is a long tradition here of noteworthy observations, and the achievements of two early amateurs in particular have been famously recognised: Gale Crater on Mars, the touchdown site of NASA’s probe Curiosity in August 2012, was named after renowned planetary observer Walter Frederick Gale (1865–1945); and the Australian $100 note bears a portrait of legendary astronomer John Tebbutt, who between 1880 and 1889 continually outperformed the state observatories. Exciting achievements continue to this day in backyards across the country: since 1973, the Astronomical Society of Australia has awarded its Page Medal seventeen times for outstanding feats of amateur research. “Amateur” in no way implies second-rate when referring to such distinguished researchers – it’s just that nobody pays them to undertake their work.


BACK at the Sandy Ridge Observatory, we have been admiring Alnitak (a hot blue supergiant) and Rigel (the brightest star in Orion, a blue-white supergiant with a luminosity 130,000 times that of the Sun). Now Morley has bumped up magnification of the Schmidt Cassegrain telescope so that we can have a closer look at that most intensively studied of celestial features, the Great Orion Nebula. If you could spend the rest of your life looking at only one thing, this magnificent nebula might be it. Some 1320 light years distant, Orion has revealed much to scientists about the process of how stars and planet systems are formed from collapsing clouds of hydrogen gas and dust. It’s an engrossing sight, often referred to as a stellar nursery.

But perhaps the best is yet to come. The roof opening is rearranged so that we can take a look at the southern sky. The telescope zooms in on a large, low-density cloud of partially ionized gas sitting on the leading edge of the Large Magellanic Cloud, itself a well-known galaxy easily visible to the naked eye. And what a sight: the Tarantula nebula, a mere 160,000 light years away, looms immediately into sharp focus, revealing events that happened at the time when modern man – Homo sapiens sapiens – first walked on Earth. Fabulously beautiful swirling arms of cloud surround a brilliant star cluster. Morley describes the core as an “extremely bright compact star cluster some thirty-five light years in diameter.” His words are technical and precise but the enthusiasm shines through. “It’s by far the most active starburst region within the realm of ‘nearby’ galaxies,” he says. “Tarantula Nebula would cast shadows on Earth were it as close as Orion.”

With the imminent rise of a gibbous moon over Sandy Ridge, our evening’s viewing comes to an end. We shut down the observatory and trail blindly back across the lawn, eyes watering from the unaccustomed brightness of household lights. My brain’s simply buzzing. It’s been such an experience. Such a privilege, too, to have been allowed a glimpse into this hidden community of dedicated observers, driven by a passion for their subject and an unbounded sense of adventure. A touch of technical genius hasn’t gone astray, either. They say they do it for relaxation, for the excitement of discovery and for the sheer enjoyment, but there’s no doubt whatsoever that the extraordinary commitment of these unpaid astronomers and their meticulously obtained (and generously shared) observations are germane to many of the successes of professional astronomy.

“It’s more than excitement,” confirms Peter Nelson. “It’s really worth doing. We do have an impact.” So next time you read that a black hole has been discovered, or an especially bright supernova, or a new comet – spare more than a passing thought for the observers whose countless hours of nighttime effort may well have triggered that discovery. In the cut-throat world of scientific research, this example of ongoing cooperation and respect between amateur and professional astronomers is a circumstance almost as wondrous as the heavens themselves. •

Germ warfare opens a new front https://insidestory.org.au/germ-warfare-opens-a-new-front/ Thu, 28 Feb 2013 22:29:00 +0000

Overuse of antibiotics is not only creating resistant bacteria but also changing the ecology of the human body, writes Melissa Sweet 


AS RESEARCH projects go, it probably didn’t sound too earth-shattering for the volunteers who offered to help. Apart from demonstrating their good health, they simply had to give blood, stool and saliva samples, as well as have swabs taken from various locations on their bodies.

We will probably never know their names but the contribution of those 242 American men and women, who were willing to share the intimate details of their microbial profile with the world, is having a profound impact on our understanding of health and illness, and is even raising questions about what it means to be human. This research into the microbiome – the viruses, bacteria and other microbes living with us – also puts a whole new slant on some longstanding public health problems, like the overuse of antibiotics – but more on that later.

In June last year, after five years of work by around 200 scientists from eighty universities, the US-based Human Microbiome Project released the initial analyses of those volunteers’ donations. The results paint an extraordinary, though preliminary, portrait of the richness of our microbial life. The researchers found over 10,000 species of microbes living in and on their subjects, with each person carrying about eight million different bacterial genes (compared with 22,000 or so human genes). They described their findings as “the largest and most comprehensive reference set of human microbiome data associated with healthy adult individuals.”

“The more closely we look, the more bacterial diversity we find,” said one of the scientists, Susan Huse, from the Marine Biological Laboratory, when the microbiome “map” was released. “We can’t even name all these kinds of bacteria we are discovering in human and environmental habitats. It’s like trying to name all the stars.”

The research, published in a series of articles in Nature and Public Library of Science journals, found that the composition of our microbial load varies enormously, both between individuals and between sites within the same person. It also found that their functions are surprisingly similar – that many different types of microbial communities can do similar work. Or as one scientist put it, “apparently, there are many different ways to be healthy when it comes to our microbes.”

Just as we unconsciously help the microbes in their quest for survival, so do many of them return the favour, whether by producing beneficial compounds, helping us to digest our foods, or boosting our immune system. By colonising our skin, gut and other surfaces, they help reduce the opportunities for more dangerous bugs to take hold. The research found that most healthy people carry pathogens, or microbes capable of causing disease, prompting some speculation that there may be hitherto unrecognised benefits from such relationships.

At a National Institutes of Health briefing accompanying the release of the findings, Phillip Tarr, director of paediatric gastroenterology and nutrition at Washington University School of Medicine in St Louis, said that research into the human microbiome offers “a whole new way of looking at human biology and human disease.” According to Tarr, “We’ve now been introduced to this biomass in each and every one of us. These organisms, these bacteria are not passengers. They’re metabolically active. As a community, we have to reckon with them much like we have to reckon with the ecosystem in a forest or a body of water.”

With most of us carrying ten times more microbial than human cells (although they make up only a small proportion of our body mass), some researchers now talk of us as “human ecosystems” or “super organisms.” As Amy McGuire, an associate professor of medicine and medical ethics at Baylor College of Medicine in Houston, told the briefing, the ethical, legal and social implications of the emerging field of microbiome science are profound. “There are also very interesting questions about whether the fact that we have more microbial DNA in and on our bodies than human DNA changes how we think about what it means to be human,” she said.

The upsurge of scientific interest in the human microbiome came when scientists working on the Human Genome Project realised that so much of the genetic material they were discovering wasn’t of human origin. They investigated further, helped by technological advances that enabled DNA sequencing at a fraction of the previous cost.

Another lesson from the genome project was the wisdom of establishing a reference database of a “normal” human microbiome. Hence the release of the volunteers’ data last year, with the goal of enabling further investigations. As one of the Nature papers says, “Collectively the data represent a treasure trove that can be mined to identify new organisms, gene functions, and metabolic and regulatory networks, as well as correlations between microbial community structure and health and disease.”

While these are early days in this burgeoning field of research, it seems to signal a profound shift in our relationship with the microbial world. “This is only the beginning,” writes Joy Yang, a researcher at the National Human Genome Research Institute. “We have learned that the bacteria living in and on us are not invaders but are beneficial colonisers. The hope is that, as research progresses, we will learn how to care for our microscopic colonisers so that they, in turn, can care for our health.”


YET we’ve been doing the opposite for many years by waging a long-running war with our microbiome. This has often been unintentional, through shifts in our food supply and way of life, or even in the way we give birth. (It’s been suggested, for example, that the global growth in caesarean deliveries may have reduced the transmission of health-giving microbes from mother to baby.) But much of the warfare has been deliberate, conducted via an arsenal of antibiotics, antibacterial wipes and other efforts to avoid germs. If ever there has been a campaign to demonstrate the wisdom of the statement that “you can no more win a war than you can win an earthquake” (credited to Jeannette Rankin, a pacifist and the first woman elected to the US Congress), then this is it.

From the earliest days of antibiotics, the bugs fought back. In a 1945 interview with the New York Times, the scientist who first observed the antibacterial properties of mould, Alexander Fleming, sounded an alarm about overuse of penicillin and the development of resistance. According to Lyn Gilbert, an infectious diseases physician and clinical microbiologist at Westmead Hospital in Sydney, it was only a short time after penicillin became available for treatment of civilians in Sydney in 1946 that a resistant strain of Staphylococcus aureus was found in about half of the surgical wound infections at the Royal Prince Alfred Hospital.

While much concern about antibiotics overuse has focused on how it promotes the emergence of treatment-resistant organisms, perhaps the collateral damage has been much more profound. “The Menace of Antibiotics” was the title of a plenary presentation at a major infectious diseases conference in San Francisco last October by Martin Blaser, a physician, epidemiologist, and professor of microbiology at NYU School of Medicine.

A former president of the Infectious Diseases Society of America, Blaser holds senior advisory positions with the National Cancer Institute and the National Institutes of Health. His talk covered many of the concerns he has raised in journal articles in recent years, suggesting that antibiotics have affected the microbiome in ways that have had adverse long-term health consequences.

Blaser suggests, for example, that the widespread eradication of Helicobacter pylori (a bacterium associated with stomach inflammation and duodenal ulcers) may be implicated in the rise of conditions such as obesity, type 2 diabetes and asthma. He also argues that the eradication of relatively benign bacteria leaves us exposed to the risk of being colonised by more harmful bugs.

Blaser’s concerns are based largely on experimental and epidemiological evidence and are not proven, but he is sufficiently alarmed to have gone on the record calling for caution in the use of antibiotics during pregnancy and childhood, noting that the average child in developed countries has received between ten and twenty courses of antibiotics by the age of eighteen.

At the Institute for Molecular Bioscience at the University of Queensland, Matthew Cooper also advocates a more judicious use of antibiotics, particularly the broad-spectrum types, and thinks it is sensible to avoid using antibiotics in children unless absolutely necessary. He agrees that the drugs may affect our microbial colonisers in ways we don’t yet understand, but stresses that the implications of the microbiome for health are complex and will not quickly or easily be understood.

Much of the research to date, Cooper points out, is showing associations between characteristics of the microbiome and disease, rather than cause and effect. One exception is a small European clinical trial that recently generated international headlines after finding that faecal transplants were more effective than conventional antibiotics in treating recurrent Clostridium difficile infection (a potentially serious form of antibiotic-associated diarrhoea). An accompanying editorial in the New England Journal of Medicine predicted that the findings would encourage wider trials of “intestinal microbiota therapy” for inflammatory bowel disease, irritable bowel syndrome, bowel cancer prevention and metabolic disorders. Other scientists are now looking at ways to transfer the “good” bacteria in the form of pills.

Cooper has been sufficiently impressed by the emerging evidence in another area to turn himself into a guinea pig. After reading studies suggesting that a high-fibre diet produces a type of bacteria in the gut that is associated with reversal of lung disease in mice, he put himself on a high-fibre supplement for two years to see if it would help his asthma. “I thought, what the hell, have a go,” he says. “I was taking a steroid every day for asthma for twenty years.” Now, he adds, “I am no longer taking asthma medication.”

Cooper stresses that a study like this, with only one subject and no controls or placebos, is meaningless. But his research group is now studying the role of by-products of good bacteria, molecules termed short-chain fatty acids, and how they interact with key receptors in the human gut. These receptors bind to the fatty acids produced by good bacteria in response to a high-fibre diet, and then modulate inflammation in the body – one of the causes of diseases such as asthma.

While any wider significance to Cooper’s self-experiment is not yet clear, he does hold to the adage that “you are what you eat.”


IN HIS small office on the sprawling campus of the John Hunter Hospital in Newcastle, John Ferguson is bolting down a sandwich while giving an overview of recent developments in antibiotic resistance, which include the alarming, the encouraging and the just plain interesting.

An infectious diseases physician and microbiologist, Ferguson has prepared a long reading list for me, including an article from July last year titled “Emergence of Antibiotic Resistance: Need for a New Paradigm,” from a European journal, Clinical Microbiology and Infection. Its French and Spanish authors argue for a rethink of the accepted wisdom that antibiotic resistance has resulted mainly from the use of antibiotics over the past seventy years, citing research showing that antibiotic-resistant genes were around at least 30,000 years ago, having been found in ancient permafrost sediments, in Antarctic waters, and in remote peoples who have had very little exposure to modern antibiotics.

“The emergence and rise of antibiotic resistance observed worldwide cannot be explained only by the increasing modern use of antibiotics in humans,” they say, “but involves a complex interaction in an ecosystem comprising microbial communities, antibiotics and antibiotic resistance genes.”

These points are also taken up by US doctors in an article published in January in the New England Journal of Medicine, which cites the recent discovery of antibiotic resistance among bacteria in underground caves that had been cut off from the surface for four million years. “Remarkably,” the doctors write, “the resistance was found even to synthetic antibiotics that did not exist on earth until the twentieth century. These results underscore a critical reality: antibiotic resistance already exists, widely disseminated in nature, to drugs we have not yet invented.”

That the origins of antibiotic resistance are more complicated than previously thought does not make the modern problem of antibiotic overuse any less significant, says Ferguson. Rather, it reinforces the importance of reducing the exposure of people, animals and the wider environment to antibiotics, particularly given the growing realisation that resistance spreads in many ways. It is not only that microbes evolve in order to survive antibiotic attacks, or even that the drugs encourage a selective advantage for resistant germs; it is now known that resistance genes can move around, both within and between microbial species.

The global movement of animals, food and people also helps resistant microbes spread rapidly around our increasingly connected world. Indeed, many travellers may not appreciate the risk that they will bring home such a souvenir. After Canberra doctors observed many multi-drug-resistant infections in patients who had recently been overseas, they decided to investigate further. They tested 102 people, before and after travel, for treatment-resistant E. coli. Beforehand, 8 per cent showed evidence of such germs; afterwards, the figure was 49 per cent. Most of the resistant infections had probably been acquired from food.

Like many of his colleagues, John Ferguson has been pushing hard for reduced use of antibiotics in hospitals, in the community and on farms, and for more systematic use of basic infection control practices like hand-washing. This is not only important for health services, but for the rest of us, at work, at home and at play. (Ferguson cites, for example, an outbreak of a resistant germ in a well-known football team that may have been spread via shared towels.)

Studies around the world have repeatedly shown that about half of antibiotic use in humans is inappropriate – prescribed, for example, for viral infections such as colds. In Australia, the National Prescribing Service, or NPS, is running a campaign to discourage such use. Similar campaigns in Europe have been credited with reducing use, at least for a while, but one study found that about half of Europeans still mistakenly believe that antibiotics kill viruses and are effective against colds and influenza.

Between mouthfuls of sandwich, Ferguson explains how he has seen the impact of antibiotic resistance develop since he began working at this hospital in 1994. Where once the serious infection methicillin-resistant Staphylococcus aureus, or MRSA, was mainly acquired within the hospital, now most of the patients that he sees with this bug have picked it up in the community. “This is happening across the country,” he says.

Not surprisingly, treating patients with MRSA or other resistant infections tends to be complicated and protracted. Less well-known is the fact that the treatment itself may also put patients at increased risk of contracting other infections. Ferguson points to a US review suggesting that people who have had an antibiotic are at greater risk of going on to develop an infection from a treatment-resistant strain of the food-borne bacteria Salmonella. The review estimates that between 13 and 26 per cent of drug-resistant Salmonella infections result from a previous exposure to antimicrobial agents.

But the rise of antibiotic resistance doesn’t only increase our susceptibility to infections; it also casts doubt over the future of many procedures that rely on antibiotic cover, from cancer surgery to joint replacements and organ transplantation. As the World Health Organization’s director-general, Margaret Chan, said last year, “A post-antibiotic era means, in effect, an end to modern medicine as we know it.”

The costs are also overwhelming. In the late 1990s, for example, a single outbreak of a treatment-resistant infection in an intensive care unit at Westmead Hospital was estimated by one doctor to have cost $1 million. Not only did the infection double the time patients spent in the unit, but a study also found the death rate was twice as high among those who were infected. In the United States, the annual cost of antimicrobial resistance in hospitals is estimated at more than US$20 billion.


WHILE there is no shortage of alarming news, there are also signs of a growing momentum for action, both nationally and internationally. Ferguson cites a heap of work being done by various committees and by the Australian Commission on Safety and Quality in Health Care. New national standards for health services require an explicit focus on wiser use of antibiotics and better infection-control practices. The Commonwealth’s chief medical officer, Chris Baggoley (who formerly ran the Commission), doesn’t need convincing that the issue should be a priority, adds Ferguson.

Public reporting of health service performance – the MyHospitals website lets us compare rates of hand-washing and blood infections involving Staphylococcus aureus among hospitals – has been helpful in focusing the attention of service executives, Ferguson says. A number of health services have managed to improve hand-washing rates and other infection-control measures, reducing antibiotic resistance. “Australia is vastly ahead of where it was five years ago,” says Ferguson.

One glaring gap, though, is the absence of national surveillance and reporting for antibiotic-resistant infections, despite many calls over many years. (It was also a key recommendation of a landmark WHO report last year, The Evolving Threat of Antimicrobial Resistance: Options for Action.) Planning for such a system is under way, but must negotiate the jurisdictional differences and rivalries that so often impede healthcare improvement.

Silos within the health system constrain action more generally across hospitals, primary care, aged care and the wider community. Like many, Ferguson laments the lack of a national communicable diseases agency to help drive action across these divides, and to bring the veterinary and agricultural sectors into the discussion.

No doubt we will be hearing more calls for such an agency, with a Senate inquiry investigating progress on the implementation of a 1999 report from the Joint Expert Technical Advisory Committee on Antibiotic Resistance, better known as the JETACAR report, which found that two-thirds of the antibiotics imported into Australia were used in animals.

Since then, many countries have moved to tighter regulation of antibiotic use in animals. Denmark is often held up as an example of the benefits of strong controls; since it banned the use of antimicrobials as growth promoters in 2000, levels of resistant bacteria in animals there have dropped and productivity has not suffered. According to the recent WHO report, though, the quantities and classes of antimicrobials used in food animals today are still insufficiently documented and controlled worldwide. Maryn McKenna, an American journalist and author of Superbug, who keeps a close watch on developments in antibiotic resistance, has recently reported at her blog on new data raising concerns about antibiotic use in the United States and China.

Australia often pats itself on the back for not allowing fluoroquinolones, the important broad-spectrum antibiotics, to be used in food-production animals, but researchers were surprised a few years ago to discover that 30 per cent of the poultry carcasses tested in Western Australia had E. coli resistant to these drugs. “We believe our results are explained by co-selection due to the use of non-fluoroquinolone antimicrobials during rearing of Australian poultry,” they reported, in a presentation to an international microbiology conference in San Francisco last year.

Peter Collignon, an infectious diseases physician and microbiologist from Canberra who was a member of JETACAR, remains concerned about the use of antibiotics in food production, but will encourage the Senate inquiry to look beyond what happens in healthcare or even in agriculture. Issues such as international trade, and governance in countries such as India and China, also have a direct bearing on our exposure to antibiotic resistance, he says.

“Medical people look at it too simplistically, at what antibiotics doctors are prescribing,” he says. “You need to look at the total picture – environment, agriculture, water, poverty, health and housing. One of the best examples is that TB had a marked decrease before any drugs were available, and it was all to do with nutrition and housing.” If there is to be a national communicable diseases agency, says Collignon, then it must recognise “that human health isn’t just what happens in the health sector.”

Meanwhile, Chris Del Mar, professor of public health at Bond University, and a longstanding advocate for evidence-based healthcare, would like the inquiry to consider tightening controls on doctors’ use of antibiotics. “There was a time thirty years ago when erythromycin could only be prescribed on authority. Maybe we need to go back to that,” he says. “That would bother a lot of doctors, but it works.”

Del Mar and his colleague Paul Glasziou, who are setting up a new NHMRC-funded Centre for Research Excellence in Minimising Antibiotic Resistance from Acute Respiratory Infections, suggest that public education campaigns should strongly discourage people from seeing GPs for colds and other viral infections, and instead promote self-care.

They argue that the “good news” about resistance – that it will diminish if fewer antibiotics are used – is not widely enough appreciated, and that there should also be more focus on “delayed prescribing,” where patients are asked to wait before filling scripts. The new centre will also investigate the potential for alternatives to antibiotics, including the use of corticosteroids, caffeine (yes, coffee!), and non-steroidal drugs to relieve cold and flu symptoms.

Del Mar and Glasziou would also like to see more education about the side effects of antibiotics. A similar point was made recently by Jeffrey Linder, an associate professor of medicine at the Brigham and Women’s Hospital in Boston, in the journal JAMA Internal Medicine. He was commenting on a clinical trial that found antibiotic prescribing rates dropped after some US primary care services systematically introduced decision aids to encourage doctors and patients to think twice about using the drugs for bronchitis.

But Linder said the results could not be considered a success when 60 per cent of patients were still being given antibiotics for a condition where they would at best gain a half-day reduction in symptoms, while 5 to 25 per cent would suffer an adverse reaction, and at least one in 1000 would wind up in an emergency department with a severe adverse event. “We should address our patients’ symptoms,” he said, “but for antibiotics we need to tell our patients that ‘this medicine is more likely to hurt you than to help you.’”

Such blunt messages do not feature in the NPS campaign, which is using social media and other channels to encourage Australians to use fewer antibiotics and become “resistance fighters.” It says that Australians are among the highest users of antibiotics in the developed world, with around nineteen million prescriptions written every year.

Paul Glasziou says much more proactive efforts are needed – that efforts like the NPS campaign, while important, are “a drop in the bucket compared with the size of the problem.” In a health system where services and providers are largely paid to do more rather than less, Glasziou suggests that financial compensation – for example, paying doctors more to prescribe fewer antibiotics – merits investigation.


IT’S early on a Tuesday evening, and Jon Iredell looks tired. He has spent the previous two hours doing the rounds – checking about thirty patients in the intensive care unit at Westmead Hospital, most of whom are on life support and antibiotics.

His phone rings constantly with queries about patients’ management as he explains the delicate and not-so-delicate tactics that have been involved in changing the prescribing culture at the unit over the past decade or so. Back-of-the-envelope calculations by Iredell, a professor and head of the NHMRC Centre for Research Excellence in Critical Infection, suggest that associated reductions in resistance and shorter periods in intensive care have resulted in savings of about $3 million a year.

But it has been tough going. Antibiotic resistance is often discussed as an example of the “tragedy of the commons” – where individuals acting for short-term self-interest undermine longer-term communal interests. (Indeed some experts have called for antibiotics to be included in the UNESCO global heritage list for humanity to raise awareness of their importance for the greater good.)

A large part of Iredell’s job is spent on what might be called “the battlefield of the commons.” His priority is to ensure the most restrictive possible use of antibiotics, and he must negotiate with clinicians whose first priority is their individual patients. “It’s saying to a senior surgeon or physician, you can’t use this drug because it’s bad for the ecology, although they think it’s the best for their patient,” says Iredell. “You have a clash between the private good and the public interest.”

Public peer review – also known as “name and shame” – has been important in helping to shift the prescribing culture, adds Iredell. “The very extravagant prescriber will be exposed to peer criticism.”

While new antibiotics and rapid diagnostics to enable more tailored treatment are desperately needed, Iredell agrees with Collignon that antibiotic resistance needs to be tackled at a global level. Iredell’s nightmare scenario is that we reach a tipping point, where the most virulent bacteria combine with the most resistant genes.

“The real focus needs to be outside Australia,” he says. “The problem is coming from countries where antibiotics are unregulated. It’s a bit like having uncontrolled industrial waste leaking into your rivers. We’re all drinking from the same global pool so if someone is soiling the waters upstream, everyone is affected.”

Parallels are sometimes drawn between the challenges of climate change and antibiotic resistance, both involving fundamental shifts in complex, entrenched systems and behaviours, and concerns about tipping points and sustainability. A recent report for the World Economic Forum includes both these issues among the critical global risks, and also highlights that there are many links between the world’s major challenges.

In a section also relevant for antibiotic resistance, the report examines how cognitive biases, such as the tendency to give disproportionately more weight to immediate costs and benefits than to delayed ones, inhibit action on climate change. “The cumulative effect of such cognitive biases is that we may not pay due attention to, or act effectively on, risks that are perceived to be long-term and relatively uncertain,” the report cautions.


MARTIN Blaser has also drawn comparisons between our macro and micro ecological challenges. A seminal essay he co-authored with the distinguished US scientist Stanley Falkow, published in Nature Reviews Microbiology in 2009, suggested that the cost of progress may be higher than we realise. “We are exploring the outer reaches of the universe and can travel across continents in hours, but we are also warming the Earth and depleting its oceans,” the two wrote.

“These major changes in our macroecology represent the price for the past century’s technological progress. We have also begun to explore the microscopic universe within us. Is this smaller-scale biosphere affected by similar changes? In particular, has the progress that has lowered infant mortality and prolonged lifespan also caused unanticipated alterations in our microecology and, consequently, our health?”

While we await further insights into the mysterious workings of our microbiomes, we can at least be certain that more of the same is not a sustainable option when it comes to how we use antibiotics. Or as a declaration arising out of a 2011 meeting of world experts in France put it: “Resistance of bacteria to antibiotics has reached levels that place the human race in real danger.” •

Unlucky in love

Has the market economy changed the way we love? Anna Cristina Pertierra looks at three new books dealing with the difficult intersection of love, sex and gender

WE LIKE to see love as the purest of feelings, an antidote to the cold calculations of work life, government or finance. In a society where the market rules, personal emotions – of which love must be the most intense – are often portrayed as among the few things that lie beyond economic incentives. People who confuse love and money are derided for being gold-diggers or worse, and when sex is mixed up with the market – in pornography, prostitution or sexualised advertising – it causes great consternation.

What are we to make, then, of sociologist Eva Illouz’s new book, Why Love Hurts: A Sociological Explanation, which tells us that the suffering caused by love as we know it is actually the product of modern capitalism? When life is painful in the modern, secular world, we are taught to look inwards to overcome our problems. Rather than turning to God, or to traditions, we turn to psychologists, financial planners, personal trainers and others who can help us to help ourselves. “Throughout the twentieth century,” writes Illouz, “the idea that romantic misery is self-made was uncannily successful, perhaps because psychology simultaneously offered the consoling promise that it could be undone.” Entire industries of self-help are underpinned by the premise that we must look within ourselves to understand and overcome the flaws that cause us to find love painful.

Like any sociologist worth her salt, Illouz pushes readers to consider how our experience of love might largely be created by the kind of society we live in. Tracing a sort of history of emotions through archives and literature since the Regency era, she argues that in earlier times people’s feelings about love and sentiment were quite different from those we take as self-evident. Although we think of love as entirely spontaneous and natural, the way we speak of love, and the problems that love creates for us, are more a historical formation than a natural occurrence.

Take, for example, this set of clichés: a woman who is sexually active and carefree in her twenties gets to her thirties and starts looking for a serious relationship, suddenly aware of her closing window of fertility. But all she can find is men who claim to be looking for love, and who think of themselves as good guys, but refuse to settle down. Some people might think it has ever been this way, but history and the social sciences suggest that these styles of behaviour are a recent invention. In the wake of a generation of new sexual freedoms, men no longer see women’s sexuality as something that is scarce, and this decreases women’s value in a market of love and sex. There are always more and younger women around the corner, which in itself makes women both more desirable in the abstract and less desirable in the concrete: “The modern situation in which men and women meet each other is one in which sexual choice is highly abundant for both sides; but while women’s reproductive role will make them end the search early, men have no clear cultural or economic incentive to end the search early.”

The detached modern man of this stereotype is probably not deliberately being caddish, says Illouz. He simply tries, in avoiding attachment, to find value in a process of looking for love within which he happens to control the field. Recent decades have brought all sorts of social changes that affect how and when we seek love. Women wait longer to have children, men no longer feel the pressure to marry young, women and men start careers quite late in life – these are not so much natural or biological phenomena as social and political shifts. Modern capitalism changes how we love as much as it changes how we work.


IF OUR experience of love is shaped by, or even created by, capitalism, then a modern emblem of capitalist love must be the world of online dating. This is the subject of Jean-Claude Kaufmann’s Love Online, a sociological meditation on the practices and problems of looking for love and sex via dating sites that form a “hypermarket of desire.” Women and men can browse apparently endless profiles without risk. They can chat online to potential dates and are free to be direct, romantic, sexual or daring without any great consequence. But, as Kaufmann shows, when people come to set up face-to-face dates, they seem overwhelmed by uncertainty. Love and sex are a complicated combination, and deciphering how much of each anybody is looking for is one of the many hazards of online dating. People can turn out to be different from what they promised. Sometimes they don’t turn up at all. Even when a meeting works well, the rules for progressing from an initial coffee or drink can be fraught with miscommunication and unknown intentions. Kaufmann takes us through the problems that both men and women face in navigating the murky waters of cyberdating, although women often seem to have more to lose emotionally in this uneven market.

While internet dating is a recent phenomenon, Kaufmann locates it within a brief history of the modern courtship rituals through which a culture of dating emerged in the twentieth-century United States and Western Europe. Dance halls established spaces where young people could meet more freely and choose their partners. In the twenty-first century, this process of choosing a partner sped up and spread out with the arrival of the internet. “The whole of society has become a huge dance hall where anyone (boy or girl) can ask anyone else (boy or girl) to dance,” writes Kaufmann. “As in the past, the ritual is reassuring because it is so well-oiled and gives the impression that the rules of the game are clear. But what we make of a date is not predetermined and can vary a great deal, more so than ever now that sex is part of the equation and may even be the most important thing about it.”

For Kaufmann, although love online increasingly looks like a hypermarket, it doesn’t really offer all the ease and convenience promised. We remain only too trapped by our own passions and humiliations when we try to build relationships with the real people on the other side of an internet exchange. Internet daters must learn the courting rituals of our age: first sending some messages, perhaps a photo, then having an initial drink, followed by “real” dates. But they are disappointed to find that the new rules of dating solve few of the problems that engulf someone looking for love. As Kaufmann puts it, “the rules relate to the formalities and the setting. They tell us nothing about the content of the game. On the contrary, the issue of sex/love is now more confused than ever. Sex has not become just another leisure activity.” Now that sex has become so commonplace, it is love that Kaufmann sees as truly radical. Love is what people struggle to realise: “We were deluding ourselves when we believed, as we often did in the 1960s and 1970s, that it is sexuality and not feeling that has a revolutionary import.”


KAUFMANN and Illouz want readers to understand the very specific and recent changes that have made men and women what they are today. Readers looking for an explanation grounded in evolutionary theory can turn to The War of the Sexes by Paul Seabright. Although he is an economist by training, Seabright’s work draws from evolutionary biology and evolutionary psychology to explore what he sees as the riddles of modern sexual inequality. Describing the process of natural selection as like moving through a long tunnel, he suggests that “during the passage through that evolutionary tunnel, men who could acquire economic resources were able to coerce or bribe their way into sexual reproduction and left more descendants than those who could not. Those conditions have changed beyond recognition today, but if we are to make economic inequality between men and women a thing of the past, we need to understand the psychological marks that the tunnel has left on us.”

Far from arguing that women are “naturally” less capable of succeeding in the modern economy, Seabright says that the data shows that any sex differences in ability pale in comparison to real-world differences between male and female economic success. Instead, he suggests that a more important difference between women and men (as well as the males and females of other species) is in how they collaborate and compete: how they network.

In modern capitalism, we are supposed to prize efficiency above all else. But biology shows us that what is important – even more important than efficiency – is success, which is achieved through collaboration as much as through competition. Evolutionary theory says we seek success in sexual selection: women seek to make the best choice of partner for a considerable investment in fertilising their eggs, while men must work hard to ensure that their plentiful sperm can have a chance of reaching any eggs at all. Seabright says that this battle of sexual selection has effects in most other arenas of life as well, and he is particularly interested in the arena of paid employment. He argues that modern workplace research provides many examples of where what should be efficient fails to happen. Women, whose labour should be very valuable, continue to be underrepresented at the top end of business. While men fill out the top bracket beyond their due, they also dominate the most marginalised sectors of society, including the homeless and the imprisoned.

Such inefficient differences, for Seabright, are partly explained by the fact that our prehistorical wiring tends to make men and women work within certain kinds of networks in ways that are overlooked in the modern era. Women tend to create strong ties through relationships in which they invest a lot, while men tend to create weak ties across a wider field. While strong ties are important for maintaining a family (you need people to help you in a very committed way when you have mouths to feed), a wider range of weak ties can be much more useful in seeking jobs across a dispersed employment market. To complicate this idea, both women and men tend to prefer working with members of their own sex, which not surprisingly leads women to be shut out of the boys’ clubs that continue to dominate elite groups. We need to understand such differences, says Seabright, if we want to help both women and men reach their full potential economically, creating an economy that is both better and fairer.

The War of the Sexes begins with a wide-ranging discussion of prehistory and evolutionary biology before jumping, quite suddenly, to contemporary differences between male and female economic power. The gap would have been nicely overcome by readings of anthropology and history as careful as those Seabright presents from evolutionary biology and psychology. But this economist does not make the claim so hated by those of us at the more “cultural” end of the social sciences, that all of our social behaviour can only be explained by market mechanisms or by biology. He deeply disapproves of people, from the left or the right, who argue against scientific findings on a political or moral basis, and clearly believes that the answers to explosive debates about sexual difference and gendered abilities can best be found using the superior weapon of science. But he also takes obvious pleasure in revealing findings from biological research that confound conservative views of sexual behaviour, arguing, for example, that “the view that natural selection has made men promiscuous and women monogamous is factually incorrect.”

These are three quite different books: Kaufmann looks at practices of love and romance, Illouz prefers to unpack the structures underneath such practices, and Seabright tends to be more interested in sex differences than in sexuality. But all three concentrate on the difficult intersection of love, sex and gender. They all argue that love, sex and gender are inherently problematic in the contemporary age. All three see that the distance between men and women creates pain and suffering, and perhaps more importantly that such misunderstandings have wide-ranging social consequences well beyond individual psyches – consequences for what we earn, when we work, and how and when we raise our children. Presumably, these consequences also deeply affect people who do not live within the heterocentric models of love and sex that are the focus of all three books (a limitation they acknowledge to varying degrees).

Whether the problem is caused by the evolutionary need to care for children much longer than most other animals or the historical formation of women as emotional and men as detached, all three books aim to unpack the truths and the myths that seem to have left women with the short end of the dating stick, not to mention the working stick. Eva Illouz tries to console the reader by writing, “If there is a non-academic ambition to this book, it is to ‘ease the aching’ of love through an understanding of its social underpinnings.” It is not our own fault love hurts, Illouz tells us; it is inherent to our modern condition.

Understanding exactly why love is guaranteed to be miserable is certainly illuminating. But it hardly makes for cheerful reading for a thirty-something reviewer in the early twenty-first century. •

A networker’s manifesto for open research

Michael Gilding reviews a lively manifesto for an important cause

IN 2009 Tim Gowers – a mathematician at Cambridge University and a recipient of the prestigious Fields Medal – used his blog to invite readers to help him solve a difficult mathematical problem. He dubbed his experiment the Polymath Project.

For seven hours there were no replies. Then a Canadian academic posted a comment, followed by an Arizona high school teacher, then a fellow Fields Medallist from the University of California. Over the next five weeks, twenty-seven people exchanged 800 online comments. They not only cracked the problem; they also solved a more difficult conundrum that included the original as a special case.

The Polymath Project exemplifies the new possibilities of networked science explored by Michael Nielsen in Reinventing Discovery. Nielsen, an expatriate Australian and one-time Federation Fellow at the University of Queensland, has spent most of his career in North America – first as one of the pioneers of quantum computing, and more recently as an advocate of open science. Reinventing Discovery is a manifesto for open science, directed towards breaking the shackles of contemporary scientific culture and the scientific publishing industry.

Nielsen believes that we are on the verge of a new era of scientific discovery facilitated by the internet. Future generations will look back on this era in the same way as we look back on the first scientific revolution of the seventeenth and eighteenth centuries, when organised science transformed human societies. While there is a tension between Nielsen as a chronicler of this transformation and as an advocate of further change, this complicates Reinventing Discovery rather than diminishes it.

The first half of Reinventing Discovery elaborates on how online tools make us smarter. It employs examples such as Microsoft’s online chess match, “Kasparov versus the World,” Linux open-source software and Wikipedia. Nielsen argues that these examples go above and beyond the “wisdom of crowds,” amplifying human intelligence at the limits of human problem-solving ability. (Nielsen has no time for those who argue that the internet reduces our intelligence. This “is like looking at the automobile and concluding it’s a tool for learner drivers to wipe out terrified pedestrians.”)

The key to online tools, Nielsen argues, is making the right connections with the right people at the right time. As it stands, scientific discovery is often constrained by lack of specific expertise, and breakthroughs often depend on fortuitous coincidence. Online tools facilitate “designed serendipity” by creating an “architecture of attention” that directs people’s attention and skills to where they are most needed.

Specifically, effective online tools “modularise” the problem, splitting it into small sub-tasks which can be attacked more or less independently. They encourage small contributions, which reduces barriers to entry and extends the range of available expertise. And they develop a rich “information commons,” allowing people to build on earlier work. Wikipedia provides a neat example of all of these things.

But online tools only work when participants share a body of knowledge and techniques – which Nielsen describes as a “shared praxis.” There are many fields of activity where there is no shared praxis, such as fine arts, politics and the better part of economics. In these circumstances, people are unable to agree on the nature of the problem, and online tools provide no help in scaling up collective intelligence.

Science is an exemplar of shared praxis. Nielsen cites the examples of Einstein, Watson, Crick and Franklin to illustrate the strength of shared praxis in science. Einstein’s theories prevailed, notwithstanding his obscurity, because “in science it’s often the person with the best evidence and best arguments who wins out, and not the person with the biggest reputation and the most power.”

The second half of Reinventing Discovery focuses on science. Again, there are plenty of examples, including the Sloan Digital Sky Survey, the Human Genome Project, Galaxy Zoo and the Open Dinosaur Project. These cases are used to make two main points.

First, open data exponentially expands scientific capacity. The Sloan Digital Sky Survey, or SDSS, illustrates this point neatly. The project has made its massive database freely available, contributing to the dramatic discovery of dwarf galaxies and orbiting black holes. “The SDSS is one of the most successful ventures in the entire history of astronomy,” Nielsen writes, “worthy of a place alongside the work of Ptolemy, Galileo, Newton, and the other all-time greats.”

Second, the internet creates the scope for citizen science on a giant scale. Galaxy Zoo, an offshoot of the SDSS, is a good example. The classification of galaxies is a task that can’t be automated but is too large to be done by professional scientists alone. Galaxy Zoo is a website that recruits amateurs to the job. So far it has attracted more than 200,000 participants, some of whom have made their own discoveries, including a new type of galaxy.

The online tools for open science include open access, citizen science and science blogging. These tools facilitate a new kind of large-scale coordination, independent of traditional hierarchies, private or public. This coordination is motivated by curiosity, rather than money. Nielsen’s quote from another author captures the idealistic spirit of his message: “We are used to a world where little things happen for love and big things happen for money. Love motivates people to bake a cake and money motivates people to make an encyclopaedia. Now, though, we can do big things for love.”


YET Nielsen is not a technological determinist. He does not think that the internet will automatically give rise to a scientific revolution. The “first open science revolution,” he reflects, occurred long after the invention of the printing press. In the seventeenth century, Galileo and his contemporaries jealously guarded their discoveries. The big breakthrough occurred with the invention of scientific journals and peer review. “That system, modest at first, has blossomed into a rich body of shared knowledge for our civilisation, a collective long-term memory that is the basis for much of human progress.”

Nielsen recognises that there are powerful forces opposed to a second scientific revolution. Traditional scientific publishing is immensely profitable. In 2009 the Amsterdam-based academic publishing company Elsevier made a profit of US$1.1 billion on revenue of $3.2 billion, which puts its profitability in the same league as Google’s and Microsoft’s. The increasing commercialisation of public research organisations also creates a major barrier to open access.

But Nielsen thinks the main barrier is the culture of scientists themselves. The current system of career advancement rewards scientific papers in refereed journals and creates a powerful disincentive to share data and build online tools. On this account, attempts to establish wikis in a wide variety of scientific fields – including quantum computing, genetics, string theory and chemistry – have failed dismally, in sharp contrast to the spectacular success of amateur enterprises such as Wikipedia.

The solution lies in building new scientific institutions to replace those that have become fetters on scientific discovery. Nielsen finds inspiration in the first scientific revolution, compulsory schooling, democracy and Galaxy Zoo. “Institutions are what happen when people are inspired by a common idea, so inspired that they coordinate their actions in pursuit of that idea.”

Government intervention is one means to new institutions. In particular, Nielsen wants to see open access and open data policies developed by government and scientific grant agencies. But he also urges scientists to build new systems of incentives. For example, “if we got a citation-measurement cycle going for code, then writing and sharing code would start to help rather than hurt scientists’ careers” and provide a “strong motivation to create new tools for doing science.”

Nielsen believes that there is a lot at stake in the reinvention of scientific discovery. In the fifteenth and sixteenth centuries, he reflects, Easter Island society collapsed because of an “ingenuity gap.” Islanders were “unable to find solutions to the problems they had created.” Contemporary global society faces its own ingenuity gap in fields including HIV/AIDS, nuclear proliferation, looming resource shortages and human-caused climate change. Online tools provide a unique vehicle to amplify our collective intelligence and rise to these challenges.

This is a very good book. Nielsen is especially skilled in explaining complex scientific ideas in an engaging and accessible style. He boldly spans diverse fields of inquiry, from his own specialist field in quantum computing, to astronomy, biology, economics and social science. Although he is passionate about his subject, he does not launch into hyperbole; he is always measured, and quick to identify challenges, problems and limitations. As a result, he brings the reader along with him.

Yet there is a profound tension throughout the book. I have already mentioned the tension between Nielsen’s roles as chronicler and advocate. This is part of a wider tension around shared praxis. There is certainly a shared praxis in the physical and life sciences, which is a prerequisite for the new era of scientific discovery. There is no shared praxis in the study of social institutions, which is in fact pivotal to the argument of the book.

Nielsen is most authoritative when he writes about online collaborations in the physical sciences, his field of dedicated expertise. He is less authoritative when he writes about the task of building new institutions, his cause as an advocate. At this point he fudges the absence of a shared praxis in the study of institutions, and the apparently limited scope for online tools in advancing our understanding of this field.

This gap raises wider questions about the problem-solving capacity of online tools. There is certainly shared praxis in some aspects of work on HIV/AIDS, nuclear proliferation, resource shortages and climate change. But there are also fundamental and contentious issues involving ethics, politics and power. Nielsen’s faith in science causes him to promise too much for the second scientific revolution.

Yet it is difficult not to be enthused by Nielsen’s cause. If you want to get a taste, check out his lecture on Ted.com. Better still, read the book. It is a worthy manifesto for an important cause. •

Genetic injustices

DNA evidence has exonerated nearly 300 prisoners in the United States, but an Australian case highlights its potential to mislead

Why would a law professor write a book about several hundred horrific murders and rapes? It’s not because Brandon Garrett is trying to cash in on the popularity and guilty pleasures of “true crime.” Rather, he wants to use these stories to reveal another sort of truth.

A centrepiece of the book’s earliest case is a presumed confession to a horrific murder in Garrett’s home state of Virginia. Asked by two detectives what he used to tie up his victim, David Vasquez first suggested rope, then a belt, then a clothes line — answers his supposed interrogators rejected in turn. Finally, they simply told him he used a cord from some Venetian blinds:

Vasquez: Ah, it’s the same as rope?

Detective 2: Yeah.

Detective 1: Okay, now tell us how it went, David – tell us how you did it.

Vasquez: She told me to grab the knife and, and, stab her, that’s all.

Detective 2: (voice raised) David, no, David.

Vasquez: If it did happen, and I did it, and my fingerprints were on it…

Detective 2: (slamming his hand on the table and yelling) You hung her!

Vasquez: What?

Detective 2: You hung her!

Vasquez: Okay, so I hung her.

The cops must have thought Vasquez was toying with them, or stupid, or simply crazy. So must the jury, which convicted him of murder. But, as with all 250 people found guilty of horrible crimes in Garrett’s book, the United States criminal justice system now accepts a quite different story: that Vasquez was wholly innocent.

Garrett’s genre of “false crimes” has a worthy lineage. Convicting the Innocent shares its title with a book written by another American law professor, Edwin Borchard, eighty years ago. That was an era when justice officials proudly declared that wrongful convictions in criminal courts were “a physical impossibility.” Borchard’s response was to detail sixty-five tales of innocent people found guilty in the preceding decades in American and (in a few cases) English courts.

Nowadays, the task of convincing Americans that their courts can make the worst of mistakes requires just one, repeated, tale: exoneration by DNA evidence, the common thread running through the 250 cases examined by Garrett. Vasquez’s exoneration came four years after his trial, when pubic hairs from his file were genetically linked to Timothy Spencer, the “Southside Strangler,” who had been convicted of three similar strangulation murders in Virginia in 1987. Vasquez was the first of nearly 290 prisoners (including seventeen facing the death penalty) exonerated so far by DNA, many decades after their alleged crimes. Spencer was the first person executed on the basis of DNA evidence.

Because no thinking American now needs to be convinced that their criminal justice system can and does make terrible mistakes, Garrett sets himself the different task of finding common threads in this extraordinary set of cases, examining available trial transcripts, news reports and court judgments to uncover “systemic” causes of error.

The first half of Convicting the Innocent is devoted to the flawed evidence America’s criminal justice system used to wrongly convict those 250 people. Three-quarters of the cases, including Vasquez’s, involved eyewitness identifications and forensic evidence. Garrett’s study of available transcripts suggested that almost 90 per cent of the eyewitness identifications were transparently marred by procedures that “suggested” identifications to witnesses and by discrepancies between the witness’s initial description and testimony in court. More than 60 per cent of the forensic evidence, meanwhile, involved claims that exceeded or contradicted even the shonky standards of dubious techniques such as hair comparison. Rounding out the categories of flawed evidence are the suspects’ alleged confessions to police and jailhouse informants; almost all were bolstered by claims, evidently false, that they must be accurate because they included details (like the type of cords used to tie a victim’s hands) that only the real culprit could have known.

Flawed evidence alone isn’t enough to explain these wrongful convictions. After all, the criminal justice system is meant to separate the wheat from the chaff. So, in the second half of his book, Garrett turns his attention to flawed court processes, including superficial trials during which the case for innocence was barely mentioned, appeal courts that turned a blind eye to the dangers, and authorities who actively resisted efforts to uncover or act on exculpatory DNA evidence.

Because DNA exonerations tend to occur in particular sorts of cases, Garrett is quick to caution against generalising from the cases he examined. The 250 exonerations were “limited to a small set of mostly rape convictions, mostly from the 1980s, in which the evidence happened to be preserved.” According to the compilers of a new US National Registry of Exonerations, the DNA-based cases make up just one quarter of the miscarriages of justice uncovered there since 1989. The remaining three-quarters include people cleared after key witnesses changed their stories, the real culprit confessed or a court or the executive belatedly acknowledged that the state’s evidence had never been adequate. One possible contender for inclusion is Carlos DeLuna, who, according to yet another law professor’s recently published work, was executed for a murder that was probably committed by a different Carlos (who looked like DeLuna, had been fingered by him as the true culprit from the outset, had committed a string of similar offences and admitted to committing the murder).

Australia’s most famous modern miscarriages of justice, including the cases of Lindy Chamberlain, Andrew Mallard and Farah Jama, would fall outside a similar Australian study to Garrett’s because none were uncovered by DNA evidence. Indeed, Australia has nothing like America’s string of belated DNA-based exonerations. Our best known DNA-revealed error – corrected after post-trial tests of the linen of the accused rapist Frank Button – was labelled a “black day for justice” by the Queensland Court of Appeal, even though (unlike every case in Garrett’s book) it was uncovered before the defendant’s first appeal.

But it is surely noteworthy that contemporary Australian courts still use all four categories of flawed evidence that Garrett found dominated the trials that were overturned in the United States. An example in the first category occurred when Queensland police breached their own procedures by asking three bank robbery witnesses to go to a courthouse and keep an eye out for the robber; all three picked Brunetta Festa, who was being committed for the robbery that day and was the only woman under forty present. The trial judge’s admission of this patently dangerous identification evidence was upheld by the High Court in 2001. The evidence used in 2004 to convict Bradley Murdoch of the notorious murder of Peter Falconio had two kinds of flaw. Not only did it include Joanne Lees’s eyewitness identification (made after she saw his photo in a British paper with a caption revealing that his DNA was linked to the case) but it also featured a Sydney academic’s “body mapping” technique (never explained in court) that “matched” Murdoch to a grainy truck stop video (and has since been rejected by NSW courts as inadmissible for that purpose.)

Three years later, at Glenn McNeill’s 2007 trial for the murder of Janelle Patton on Norfolk Island, the prosecution told the jury that McNeill’s confession to the Federal Police included “information that was not in the public domain.” Not only is such a claim ludicrous in a close-knit island community, but it’s also the case that all the “information” was detailed in a book on the case by ABC journalist Tim Latham published five months before the police first spoke to McNeill. And Australian prosecutors continue to rely on prison informants – the fourth of Garrett’s flaws – in difficult cases, including last year’s trial of Peter Dupas for the murder of Mersina Halvagis, which was partly based on Dupas’s alleged confession to disgraced Melbourne lawyer Andrew Fraser while the pair were gardening at Port Phillip Prison.

Some of the systemic problems identified by Garrett in the United States – decentralised and nakedly politicised courts and prosecutors, hopelessly inexperienced attorneys assigned to defend capital murder cases, and the nation’s tortured and cursory appeals process – do not resonate so strongly in Australia. And some of the reforms he advocates, such as the recording of police interrogations, have been legal requirements throughout Australia for decades.

But the contrast with America isn’t always flattering. The US system for post-conviction reviews by courts, which Garrett excoriates for its slow pace and procedural tangles, is still superior to the systems operating in much of Australia, where defendants who have exhausted their appeals must plea for mercy to politicians if they want to obtain DNA evidence for fresh testing or rely on the results to overturn their convictions.

Nor have either of Australia’s two most populous states followed Britain and some American states in creating independent commissions to comprehensively investigate claims of innocence. The failure of political will was most apparent in 2003, when the Carr government hurriedly suspended a modest “Innocence Panel” it established a year earlier simply because one of the people convicted of murdering Janine Balding asked the panel to arrange DNA testing of some of the evidence against him. It seems that our new foreign minister, like many a hick American prosecutor, thought that the interests of victims of notorious crimes necessitated non-negotiable “closure.” Garrett would no doubt direct Carr’s attention to the Florida prosecutors who spent a decade opposing Frank Lee Smith’s requests for DNA testing in a child murder case. While the prosecutors presumably thought they were protecting Smith’s victims, the person who they were really protecting was the real killer, Eddie Lee Mosley, who committed sixty rapes and seventeen murders, including some during Smith’s lost decade in prison.


THE several hundred Americans who now owe it their freedom are only a tiny part of the story of forensic DNA. The very things that allowed DNA evidence to convince a sceptical American public of the innocence of a fraction of their prisoners – its foundation in mundane lab work and peer-reviewed science, its independence of the biases and memories of eyewitnesses, its longevity and robustness – have also driven its dramatic take-up by police worldwide as a means of convincing courts of the guilt of countless accused. This part of the DNA tale has been told in the many “true crime” books that show how dogged investigators have used emerging technology to zero in on criminals who would otherwise have gone to their graves with their villainy undiscovered.

In Genetic Justice Sheldon Krimsky and Tania Simoncelli offer a different take. Their foreword commences with the story of Lily Haskell, who was arrested at a demonstration against the Iraq war in 2009 and whose DNA profile is now one of more than eight million permanently stored by US law enforcers. This is despite the fact that she has never been charged with, let alone convicted of, any crime, much less a horrible one. The book reviews developments in the United States and (more briefly) five other countries, including Australia, to argue that the once rare use of DNA profiling to catch (and sometimes exonerate) violent criminals “has given way to a massive and ever-expanding system of collecting and permanently retaining DNA for ongoing investigation and use.”

The easy part of Krimsky and Simoncelli’s case is their argument that supposedly strict government regulation of the investigative use of DNA is anything but. Take Australia, where a Model Forensic Procedures Bill was developed at the turn of the century, creating an elaborate regime of orders for “intimate” and “non-intimate” forensic sampling and a complex DNA profile database whose use was regulated by a table of “matching” rules. The Bill was so packed with poorly drafted protocols creating sham protections for defendants (such as a rule requiring everyone who has their hair plucked to be handed one of their own hairs for independent testing) that no Australian jurisdiction adopted it in its entirety. In any case, the sampling regime proved to be irrelevant in practice because nearly all DNA was obtained through people “consensually” swabbing their own mouths; and the matching table has been gradually amended so that virtually all matches are now allowed.

Although the move to allow uncharged people’s DNA profiles to be held in the database is relatively recent in the United States, such holdings have been permitted in Australia from the outset, with any samples taken to test a person’s link to one crime automatically placed on the full database for matching against all unsolved crimes. In England, a significant fraction of the DNA database (the world’s first) is now made up of non-offenders who were only briefly of interest to investigators, such as suspects whose DNA samples were taken to investigate (and often dispel) suspicion and people who volunteered their samples as part of mass DNA screenings aimed at excluding entire sub-populations from a police inquiry.

In fact, few nations prevent the police from asking for DNA samples from anyone they wish or from gathering samples (from cigarettes, cans of Coke and the like) abandoned by people who refuse a request or where the police have insufficient grounds to obtain an order. But perhaps you think the police haven’t got your DNA profile because they never ordered or asked you to provide one (and you don’t smoke or drink Coke)? Think again. Anyone who has been at a crime scene (including many victims of crime) may have their DNA profile unknowingly stored on the database, ready for matching against all unsolved crimes. And, for every profile they’ve obtained, by hook or by (convicted or suspected or even exonerated) crook, the police can also explore the link between any unsolved crime and everyone who genetically “resembles” that profile (that is, all of the close blood relatives of the profile’s “donor”).
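How does a computer decide that a crime-scene profile “resembles” someone on a database? The sketch below is purely illustrative: the locus names, allele numbers and the idea of simply counting shared alleles are simplifications invented for this example, and real database software uses many more markers and statistical likelihood ratios rather than a raw count.

# Illustrative only: toy comparison of DNA profiles to show the difference
# between a full match and the partial match that familial searching exploits.
# Locus names, allele numbers and profiles are invented for the example.

def shared_alleles(profile_a, profile_b):
    """Count how many alleles two profiles share across all loci."""
    total = 0
    for locus, pair_a in profile_a.items():
        remaining = list(profile_b.get(locus, ()))
        for allele in pair_a:
            if allele in remaining:
                remaining.remove(allele)
                total += 1
    return total

crime_scene = {"D3": (15, 17), "vWA": (14, 16), "FGA": (21, 24)}
candidates = {
    "suspect":  {"D3": (15, 17), "vWA": (14, 16), "FGA": (21, 24)},  # full match
    "sibling":  {"D3": (15, 18), "vWA": (14, 19), "FGA": (22, 24)},  # partial match
    "stranger": {"D3": (12, 13), "vWA": (11, 18), "FGA": (19, 20)},  # no overlap
}

for name, profile in candidates.items():
    print(name, shared_alleles(crime_scene, profile), "of 6 alleles shared")

On these toy profiles the counts come out at 6 of 6 for the suspect, 3 of 6 for the sibling and 0 of 6 for the stranger. Relatives typically share many but not all alleles, which is why ranking a database by scores like these tends to surface a donor’s parents, children and siblings – the kind of “resemblance” search described above.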

Having pricked the balloon of rhetoric about close regulation of genetic forensics, the two American Civil Liberties Union associates (one a research fellow, the other the organisation’s Science Adviser) embark on a far tougher sell: that this mother of all genetic sweeps is a bad thing. Not surprisingly, their first argument concerns forensic DNA’s impact on our privacy – not just of our individual DNA but also our bodies, our health, our relationships and our movements. Krimsky and Simoncelli contrast the high levels of protection afforded to genetic privacy in medical and research contexts with the “different playing field” that applies in forensics. They bemoan the focus by the police and the courts on the relatively trivial bodily intrusion of DNA sampling rather than on the informational value of DNA, and predict that public anxiety about genetic privacy will eventually force a reckoning. But that argument ignores the quite different contrast that can readily be drawn between the informational intrusion of DNA profiling (even in its most extreme forms) and the everyday indignities imposed on criminal suspects by police questioning, or body searches, or entry into homes, or tapping of phones, or searching of government records, which are part and parcel of modern investigations.

The manhunt for the killer of backpacker Peter Falconio is a good example. The Northern Territory police relied on tip-offs, a blurry service station photo and Joanne Lees's even blurrier memories to generate a shortlist of thousands of "persons of interest." Because of the legal and political complexities of cross-border investigations, few were forced to provide DNA samples and only a few others were asked politely, despite the police obtaining a DNA profile from a spot of blood on Lees's shirt. Instead, the investigators worked through every man on the list, questioning them personally, investigating their alibis and searching phone and banking records to determine their movements. It was only information from a fellow drug dealer that led the police to refocus on Bradley Murdoch, who'd politely refused to provide his DNA early in the investigation. Scared of spooking him, they sought DNA from his brother and later used a sample repurposed from an earlier investigation into rape charges that Murdoch claimed were trumped up and of which a jury later acquitted him. Those who argue against contemporary DNA profiling face the twin burdens of advocating a return to old-school policing methods such as these and barring the police from some of the unorthodox methods that eventually netted a killer.

Ultimately, Krimsky and Simoncelli don’t advocate a retreat from genetic forensics. Instead, they ask whether more of a good thing is necessarily better. They question the extent of the benefits of DNA profiling, pointing out that the evidence offered by administrators about the utility of their database – the proud recitals of “hits” – tells little about outcomes the public actually cares about, such as the number of matches of interest (as opposed to belated confirmations of already known links), the number of convictions that follow (as opposed to charges dropped because the match proves to be uninteresting) and the number of crimes prevented (a matter about which much is claimed but little can be proved).

Krimsky and Simoncelli also question whether the well-publicised successes in forensic DNA’s heartland – the matching of convicted serious criminals to unsolved sexual or violent crimes – will continue as the emphasis moves to arrestees, minor criminals, and property or drug offences. It is one thing to apply cutting-edge science to Joanne Lees’s t-shirt, but another altogether to gather, analyse and identify all of the samples found in every stolen car or drug lab, each yielding dozens of (typically spurious) investigative leads and generating lengthy backlogs at overburdened, demoralised labs. Rather than revolutionising criminal investigation, the expansion of DNA databasing may simply replicate the failings of regular criminal justice, including the social and racial disparities in its application. In short, a lot of contemporary and future DNA databasing may simply be a very expensive way of rounding up the usual suspects.

Alas, this clever argument will find no purchase among voters and politicians who are quite comfortable with the unprincipled face of criminal justice. Many of the reform “principles” set out in the final chapter of Genetic Justice (mostly requiring court involvement in any envelope-pushing steps, such as mass screenings and familial searches) were recommended in 2003 by Australia’s Law Reform Commission but remain unimplemented. Indeed, the trend here is to oust courts from the system, with parliaments in South Australia and Victoria repealing various court-enforced rules after judges had the temerity to apply them against the police. In the United States, broad DNA databasing laws have been approved by popular referendums and the Ninth Circuit Court of Appeals recently rejected a constitutional challenge by Lily Haskell, the briefly arrested protester.

Indeed, Krimsky and Simoncelli’s complaints about the inequities and inefficiencies of contemporary DNA databasing suggest a quite different way forward: putting absolutely everyone’s DNA profile on an investigative database. Their chapter titled “The Illusory Appeal of a Universal DNA Data Bank” actually sets out a compelling case for such a step, which would remove racial and social disparities and considerably reduce the problem of unidentified profiles and spurious matches. The authors’ main counter-arguments – the logistics of establishing and securing such a depository and the failure to tackle the inequities in the rest of the criminal justice system – are far from overwhelming.

“How many offenders might we tolerate escaping in order to avoid an innocent person being wrongfully condemned?” This familiar question is at the heart of Convicting the Innocent, yet it is also the closing plea of Genetic Justice, whose authors invite us to factor in the plight of innocents like Haskell who have been swept up in the worldwide DNA dragnet. But a case that features in both books suggests a different answer to the one they favour.

In Garrett's book, Darryl Hunt – convicted of a horrific murder on the basis of flawed identification procedures and supposed confessions – is just one of hundreds of such men. Even after DNA profiling proved that the semen on the victim wasn't his, US courts refused to clear him until the true killer, Willard Brown, confessed to acting alone. The details of Brown's capture are recounted in Krimsky and Simoncelli's chapter on familial screening, in which they reveal that police had run DNA from the semen through the North Carolinian database and had obtained a partial match with Anthony, Willard's brother. Willard's confession came after the police gathered DNA from a cigarette he discarded and matched it to the semen on Hunt's supposed victim.

There’s no doubt that the world’s Haskells vastly outnumber its Hunts, but who would baulk at “condemning” any number of innocent people to having their DNA profiles placed on a database if that allowed just one more innocent man to “escape” the true hell of wrongful imprisonment?


THERE'S another story about innocence and DNA that neither book mentions because it's too recent and too distant from America. It's the story of a Melbourne teenager, Farah Jama, who recently spent a year in jail for a rape that he didn't commit (and, indeed, that never happened at all). At Jama's trial in 2008, there was no witness identification, no junk science and no confessions to the police or anyone else. Nor was his wrongful conviction due to lying complainants, dodgy lawyers, sleepy judges or bad juries. Instead, there was just one unsanitised medical trolley at a rape crisis centre that allowed a single intact spermatozoon (and some fragments) to move from one woman's hair to a swab taken from a second woman who had passed out at a nightclub. The presence of a male DNA profile on a vaginal swab was enough to wrongly convince everyone (the police, the prosecution, the defence, Jama and, of course, the woman who'd passed out) that there'd been a horrific rape. And a match to Jama – whose profile, like Haskell's, was in the database despite his having been cleared of the crime that led to his sample being there – was enough to convince a judge and jury that an innocent teenager was a monster with an alibi from a family of perjurers.

Jama’s case highlights two huge problems with DNA profiling that can lead to disasters. The first is that DNA is messy, fiddly stuff. The cells that contain our DNA move from our bodies to other people’s bodies, to surfaces or just through the air. As a result, crime scenes (and victims’ bodies) are typically dotted with various people’s DNA, all mushed together. Labs must do their best to isolate a specific profile through educated guesswork and processes of elimination, subjective choices that complicate the estimates of how rare that profile is in the community. Because DNA, like any complex molecule, is fragile, the profiles in a sample may be only partial ones, albeit still mixed up with others. And because DNA can’t be seen and must instead be “amplified” by the same mechanism that builds life itself, there’s no way to tell whether the bit of DNA that matches a person on a database was in the sample to start with, or whether it just floated there or was introduced at one of the many choke points in the forensic system where evidence from different sources brushes together – places such as a busy crime scene, an evidence bag, a police station, a rape crisis centre or (most disturbingly) a DNA lab.

Messiness is nothing new to crime investigation. What makes it disastrous in the case of forensic DNA relates to its second problem: that most people who use DNA know next to nothing about it. In the courtrooms that are its ultimate consumer, DNA testimony is universally recognised as a punishingly dull affair. Inevitably, everyone snoozes while a lab technician with no background in talking to lay people drones on about alleles and electrophoresis in front of a series of PowerPoint slides. In response, defence counsel resort to scattergun questions that worked once-upon-a-trial or sometimes wheel in experts of their own to go head-to-head in the boredom stakes with warnings about “allelic ladders” and “peak height ratio protocols.”

What about the lab technicians themselves, you might ask. Well, these are people who do hundreds of analyses a week following set procedures, are employed by the police, have learnt to fear defence lawyer tricks and, as a result, often seem to have tunnel vision. When a nervous police officer in the Jama case asked the lab about the possibility of contamination, a technician duly made a perfunctory check of a couple of lab records and gave the all-clear without so much as considering the possibility of contamination before the swab reached the lab.

And the jury? Actually, Jama's jury was terrific. They asked the question no one else did: why had the teenager's DNA been on the database in the first place? The true answer – that Jama had been briefly accused of rape by a woman who had attended the same rape crisis centre, with his semen in her hair, the evening before the woman from the nightclub was examined there – would have cracked the case. But Australia's rules of evidence prevent jurors from learning about (apparently) unrelated allegations, so the judge hurriedly told them that their question was irrelevant. It was sheer serendipity – the chance coincidence that the same doctor treated both women at the rape crisis centre – that led to the belated recognition that Jama was innocent.


GARRETT, Krimsky and Simoncelli are all alive to the potential failings of forensic DNA. Three of the 250 wrongful convictions described in Convicting the Innocent were partly due to sloppy DNA analysis revealed by more accurate or advanced DNA profiling, and a chapter of Genetic Justice is devoted to the many ways that DNA analysis can go awry. But both books underplay the implications of these flaws. Krimsky and Simoncelli argue that the science’s fallibility must be factored in when considering “whether justice is advanced by DNA,” but ignore it when considering the potential for genetic privacy to be invaded by individuals or governments. More dubiously, they claim that the risks of error only affect “inclusions” (matches, typically probative of guilt) and not “exclusions” (non-matches, typically probative of innocence), even though contamination can obviously affect either. Indeed, they had already noted the ease with which an offender could leave a trail of artificial DNA (or just some chump’s cigarette butts) at a crime scene as a genetic alibi.

By contrast, Garrett properly concedes that "it is possible that an erroneous DNA test could lead to the exoneration of a guilty person." But he nevertheless insists that the close scrutiny given to the 250 cases in his study allows everyone to be "quite confident that these convicts are actually innocent." He, of all people, should know better than to make such claims. For instance, neither Willard Brown's DNA on a victim's body nor his confession to acting alone conclusively proves that Darryl Hunt had no hand in her killing.

Neither book's final chapter, setting out the authors' recipes for a safer, fairer criminal justice system, contains any recommendations aimed at avoiding convictions that are founded on DNA errors. Justice Frank Vincent, who inquired into Farah Jama's wrongful conviction and recommended a series of changed evidence-gathering processes, saw it as obvious that no one should ever be convicted on the basis of DNA evidence alone, not least because of the inevitable, albeit often small, possibility that someone else might match any DNA profile.

Vincent’s proposition was tested soon after when Ben Forbes appealed to Australia’s High Court against his conviction for a rape near a Canberra bike path. The prosecution case against Forbes rested solely on a match between his DNA profile and mixed samples from the victim’s bra and jeans, with experts testifying that the chance of a random person having that profile was less than one in a million. While that figure clearly allowed some possibility of other matches in the region, the High Court, a booster of forensic evidence from way back, dismissed Forbes’s appeal on the ground that the prosecution had agreed to keep a more damning (but wholly untested) DNA statistic from Forbes’s jury in return for their experts being allowed to (meaninglessly) label their own evidence against Forbes “extremely strong.” It’s the kind of ruling that makes you wonder whether Justice sometimes peeks from under her blindfold. The seven High Court judges would not have needed to look far. Forbes’s prior conviction for sexual misconduct on a Canberra pathway, inadmissible at his trial, was fully detailed in the court file.
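To see why a one-in-a-million figure still leaves room for coincidence, it is worth doing the arithmetic the Forbes jury never heard. The numbers below are purely illustrative – a back-of-the-envelope sketch that assumes a hypothetical pool of 300,000 people in the region and takes the quoted match probability at face value; they are not evidence from the case:

\[ E[\text{other matches}] = N \times p = 300{,}000 \times \frac{1}{1{,}000{,}000} = 0.3 \]

\[ P(\text{at least one other match}) = 1 - (1 - p)^{N} \approx 1 - e^{-0.3} \approx 26\% \]

On those invented inputs there is roughly a one-in-four chance that at least one other person in the region shares the profile – precisely the "possibility of other matches" that Vincent's proposition was designed to guard against.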

And that’s always the seduction. Yes, we know the risks and we believe in justice, but sometimes we just know the truth too. We know Ben Forbes is a rapist. We know Frank Button and Farah Jama aren’t. We know who Janine Balding’s killers are. And Peter Falconio’s and Janelle Patton’s and Mersina Halvagis’s too. And precisely because we know all that, we also know that there will be more books called Convicting the Innocent written in the decades to come. •

Thus began the Australian occupation of Antarctica… https://insidestory.org.au/thus-began-the-australian-occupation-of-antarctica/ Fri, 24 Feb 2012 04:46:00 +0000 http://staging.insidestory.org.au/thus-began-the-australian-occupation-of-antarctica/

On board the Aurora Australis as it sailed to Commonwealth Bay to commemorate the centenary of Douglas Mawson’s historic expedition, our correspondent witnesses a complex interplay of science and sovereignty


In Antarctica, you can’t just select a date, waltz in, and perform a ceremony. Ice and weather cannot be commanded. You have to submit yourself to the control of the continent, just as Douglas Mawson’s expedition did a hundred years ago. Antarctic logistics are ruled by what is called “the A-factor,” the destabilising ingredient in all Antarctic planning. But commemorations are about nothing if not dates; they are about precision in time and place. They book into our crowded calendars an exact moment for reflection. What happens, then, when you plan a historic commemoration in the continent of uncertainty?

Uncertainty and waiting are the warp and weft of Antarctic history. The men of the Australasian Antarctic Expedition spent a lot of time waiting… waiting for the wind to stop so they could work outside or hear themselves think, waiting agonisingly for the Far Eastern sledging party of Mawson, Belgrave Ninnis and Xavier Mertz to return, waiting for the black speck of their ship, the Aurora, to appear on the horizon to take them home. They were not the first Australians in Antarctica – several, including Mawson himself, had participated in earlier expeditions – but this was the first Australian expedition and the first of any kind to set foot on the Antarctic continent directly south of Australia.

In Antarctica, it can feel like time has not only skipped a beat, but has lost the beat altogether. Time there assumes different rhythms. There is the deeper pulse of the ice ages, the seamless months of eternal light or night, the transcendent otherworld of a blizzard, the breaking up of the sea ice, the exciting return of the Adélie penguins in spring, the schedule of the summer ships, and the intensity of the annual “changeover” at Antarctic stations. A century might signify a hundred generations in Antarctica or just one tick of the glacial clock.

It therefore seemed entirely appropriate that Antarctica itself should dictate the timing of the centennial visit to Mawson’s Huts. Blizzards at Casey station in late December had delayed the return of our ship, the Aurora Australis, and its subsequent departure from Hobart. And there was an added complication. For possibly the first summer in a century, Commonwealth Bay was filled with ice.

Just to the east of the Bay there once existed the huge tongue of the Mertz Glacier (named after Xavier, who died on Mawson's sledging journey and still lies, perfectly preserved, somewhere on its inland slopes). That tongue of ice had seemed a constant attribute of this coastline, a dominating geographical feature that was discernible even on small-scale maps of the continent. In the lee of the glacier tongue, furious katabatic winds funnelled down onto Commonwealth Bay and maintained a polynya, the beautiful Russian word for the belts of open water found within the ring of ice that surrounds Antarctica. In February 2010 a huge iceberg from the Ross Ice Shelf – the size of the Australian Capital Territory and named B9B – was drifting steadily westwards. It collided with the Mertz Glacier tongue and sheared it off, sending it slowly spinning westwards. B9B itself became grounded about twenty-five kilometres offshore from Commonwealth Bay and corralled the sea ice near the coast. Cured hard by the wind and fastened to the land, the sea ice made it impossible for ships to reach Mawson's Huts this summer. However, the Australian Antarctic Division, keenly conscious of the summer's historical significance, willing to await its moment and equipped with three helicopters on board Aurora Australis, was determined to take on the A-factor.

Members of the Australasian Antarctic Expedition in Mawson's Huts, Commonwealth Bay, 1911–14. Frank Hurley/Mitchell Library, State Library of New South Wales

As well as making the pilgrimage to Commonwealth Bay, our voyage was also the most significant Australian marine science expedition of the season. Douglas Mawson would have thoroughly approved. He was first and foremost a scientist and was always trying to fit a bit more science around urgent logistical or strategic goals. Even though he was very anxious to establish his base on the continent as early as possible in 1912, he still found time to gather sea temperature and salinity data on the way south. Just along the edge of where the Mertz Glacier tongue used to lie, our voyage revisited a collecting station of the original Aurora. The sea temperatures gathered at various depths by the expedition using reversing thermometers continue to provide illuminating insights today.

Our voyage leader Robb Clifton reported that "it feels like history is all around us as we do this work." And we made history ourselves by sailing where no one had ever sailed before – over the vast tract of sea floor liberated by the calving of the Mertz Glacier. By colliding with the glacier tongue, iceberg B9B has initiated what the physical oceanographer Steve Rintoul calls a "natural experiment" in sea-ice production. The Mertz polynya, as one of the prime Antarctic sites of sea-ice production, releases salty, dense "bottom-water" that plunges to the ocean depths and drives the engine of ocean circulation. Was it the presence of the vast glacier tongue that made this polynya so active? What might now be the impact of the glacier's calving on salinity, circulation and biodiversity? And how might it relate to a general trend – observed since the 1970s, possibly as a result of increased glacial meltwater – for Antarctic bottom-water to become less salty and less dense? In other words, how might global warming affect ocean circulation? I have no doubt that, if he were alive today, Douglas Mawson would be out there defending the future of his beloved ice. He would be at the forefront of the scientific effort to explain to the general public the dire implications of the climate crisis.


But his ice was not so beloved as Mawson tried to find a way through it to make a landing on the Antarctic coastline in early January 1912. He and the captain of the Aurora, J.K. Davis, were disappointed to find the pack ice so far north. As it grew heavier, they were forced to follow its edge westward, ever westward, looking for an opening to the south and a way finally to the continent itself. By the night of 2 January, Mawson was in despair: his whole expedition seemed in jeopardy, and he was facing personal failure and humiliation. "Things looked so bad last night," he wrote to his fiancée, Paquita, "that I could do nothing but just roll over and over on the settee on which I have been sleeping and wish that I could fall into oblivion."

Suddenly on 8 January 1912, following further days of hope, anxiety and disappointment, they gained a clear prospect of accessible land! It was a day of brilliant sunshine and a party was sent ashore in the whaleboat. Mawson later described it in The Home of the Blizzard: "We were soon inside a beautiful, miniature harbour completely land-locked. The sun shone gloriously in a blue sky as we stepped ashore on a charming ice-quay – the first to set foot on the Antarctic continent between Cape Adare and Gaussberg, a distance of about 2000 miles." What did our revered expeditioners do the moment they set foot on the ice? As Archie McLean recorded, "Mawson and [Frank] Wild explored and the others had a snowball fight."

A century later we had hoped also to land at Commonwealth Bay on this day, but due to the blizzards that delayed the start of our voyage, we were still in the middle of the Southern Ocean surrounded by wheeling albatrosses. There was some concern expressed in the media about missing this day of the landing. But we felt the commemorative aptness of dealing with ice and weather and also remembered that the original landing was not on one day but twelve. From the 8th to the 19th of January, Mawson and his men struggled to land the stores for their first and main base at Cape Denison. Late on that gloriously sunny and calm day of 8 January, the true character of the place – its defining elemental essence – had revealed itself. Winds such as no one had ever known before swept down onto the natural harbour they had found and forced them to retreat frozen to the ship, where they hoped that the Aurora’s anchor would hold.

Over the next few days it dawned on them that they had decided to build their home in an unusually windy corner of the windiest continent on earth and that some of the generating factors were quite local – and further, that the open water that had attracted them there was also to some extent a creation of the relentless offshore winds. A ship is naturally lured into the home of the blizzard. It was no accident that Mawson should land in such a place and thereby condemn his expedition to heroic daily scientific recording in one of the most forbidding places on earth. But harbours and bare rock were so precious, and finding that place had been so hard, that the men were determined to secure their fragile foothold with their canvas and planks and nails.

Throughout mid-January, the men of the Australasian Antarctic Expedition laboured between the blizzards, shuttling between ship and shore to land the stores and the Baltic pine timbers of the hut. On the night of 12 January, the expeditioners spent their first night sleeping on the continent. I find that moment as moving and meaningful as the first landing a few days earlier. “Night” doesn’t have much meaning in Antarctica in high summer; the sun, if you can see it, gently bounces on the horizon. But sleeping on the continent itself signals a commitment. One makes oneself vulnerable, becomes a resident, begins to inhabit the place and starts to become (if one ever can) a local. Lying down on the scarce available rock and submitting to sleep in such an alien and threatening place is to begin that transformation. So, as a storm brewed again on the evening of 12 January, Mawson and Wild went ashore to join the five men working there, pitched the tents, unpacked the reindeer sleeping bags and fired up the Nansen cooker. The seven men spent the first night ashore at Cape Denison, warmed by soup and cocoa. Thus began the Australian occupation of Antarctica.

By 16 January 1912, they were still waiting out storms and unloading stores. And this was the day, a century later, that we finally landed our own party. For five days we had waited on the edge of the sea ice twenty kilometres off Cape Denison for weather that would allow our helicopters to fly. That morning of the 16th, the cloud lifted and the white cliffs of East Antarctica sparkled in the sunlight. The wind was slight, the air crystal clear. We could see the exposed granites of the cape and, a few hundred metres inland, the dark outcrops of the moraine. Brooding above everything was the white brow of the polar plateau that climbs and recedes into a pure infinity against the light blue of the sky. And somewhere in the middle of this small coastal patchwork of white ice, black rock and aerial blue could be seen something else… At first I registered it as a different, surprising, organic colour, a pinpoint of warmth. It was the wind-bleached wood of a hut.


From a distance it seemed like a piece of driftwood scoured pale, lean and delicate by the wind and snagged between ice and rock. It glowed with a fragile lustre. I was immediately struck by its homeliness, even from the outside. Mawson’s book about this place is called, of course, The Home of the Blizzard, which honours (or laments) the ferocious katabatics that are the essence of this bay. But this was the home also of eighteen men. This was their cosy, beloved refuge, and one hundred years later it is still an inviting and reassuring presence. The swale in which it sits is also quite intimate, and the men made this their own, too, inscribing it with the daily religious duty of their scientific observations. I was surprised to find that the place felt to me like a part of Australia, not just in a patriotic sense because of its history, but also because it could almost be a winter hut in the Australian Alps among familiar granite pinnacles.

An aerial view of Mawson’s Hut on 14 January 2012. Dean Lewins/AAP Image

The low door had been dug clear of snow by our advance party. Appropriately, we needed to bow to enter the darkness of this shrine. Inside on this calm day was the Antarctic silence. But more than that, there was stillness. The air smelt musty and organic and the walls gleamed faintly, illuminated by the skylight. There were half-familiar shapes and structures to discern in the gloom: Frank Hurley’s photographs reconstituted themselves before my eyes in ageing wood, metal and paper, half-encrusted with ice. The stove stood in one corner, the acetylene generator that produced lighting sat on a high beam, and all around the walls were the beds. The hut was insulated with a two-storey layer of people. I had walked into a boys’ bunkroom! Eighteen men slept here top to toe for a year, and it still feels private, intimate, domestic.

Outside the huts we gathered for a ceremony. In number we were similar to that which landed a hundred years ago. Surrounded by a voluntary audience of Adélies, the Director of the Australian Antarctic Division, Tony Fleming, read a statement by the prime minister, Julia Gillard. The names of all the men of the AAE were then read out and honoured: the nineteen who served in Adélie Land (this included Sidney Jeffryes who joined them in the second year), the eight who established the Western Base, the five who maintained the station at Macquarie Island, and the men of the Aurora. Tony reminded us of the pre-eminence of science in the planning and practice of the expedition, and of the foundation it thus laid for a modern Antarctic Treaty System where good science is the currency of influence. “I am frequently asked,” explained Tony, “what is the enduring legacy of the Australasian Antarctic Expedition? My answer is unequivocal – an entire continent devoted to peace and science, where nations work together in a spirit of collaboration. What a wonderful legacy they have left us!” Deborah Bourke from the Antarctic Division and David Ellyard, president of the ANARE Club (formed in 1951 by veterans of the first Australian National Antarctic Research Expeditions), raised the Australian flag to applause from the people and squawks from the Adélies. I said some words about the original landing and the way it was recorded in the diaries of the expeditioners.

After the ceremony, we walked across ice and granite scree to the small eminence of Proclamation Hill for another ritual – this time the laying of a time capsule containing Australian schoolchildren's visions of Antarctica in one hundred years' time. Thus our commemoration turned to the future – and we wondered, as the children do, about how the southern ice cap will fare in a warming world. This hill was so named when Mawson returned to Commonwealth Bay in 1931, raised the flag again and asserted British sovereignty on 5 January. Almost twenty years after his expedition, Mawson had already become a tourist to his own history. He was proud to find the huts still standing even though the ice had penetrated them. Inside they were like a "fairy cavern."

In my history of Antarctica, Slicing the Silence, I made a bit of fun of proclamation ceremonies in front of audiences of Adélies on windy, remote Antarctic coastlines. After all, claiming something as slippery as ice is laced with comedy, and narrow nationalism appears inapt on a continent of ice where just being human is so marginal and vulnerable. There’s a slightly irreverent chapter in my book called “Planting Flags.” And now, in January 2012, I was suddenly involved in the ritual myself…

Why would Australians today raise the flag in this international place? There is no doubt that by doing so we are quietly affirming Australian sovereignty over 42 per cent of Antarctica and that the penguins are not the only creatures with a colony here. But this was also a deliberately modest ceremony. No anthem was sung, no cheers called for, no proclamation made, no mention of “territory” by the prime minister, and the emphasis of the speeches was on the science of the Australasian Antarctic Expedition and its continuities with the scientific priorities of the Treaty era. Attention was given to all the young men who were excited by this last frontier, not only Mawson. The two men who died were especially remembered. With typical Australian bashfulness at ceremonies, the formalities were completed quickly and simply. The real commemorative act, we all felt, is in continuing to do science and history in Commonwealth Bay – and across East Antarctica – and helping researchers from other nations to do it too.

When speaking internationally and cross-culturally in Antarctica there is no word more powerful for Australians than “Mawson.” Uttering that word creates a significant space for us in the conversation. Our international Antarctic colleagues expect us to be the leading researchers and custodians of that history. Curiously, perhaps, the scholarly commemoration of Mawson and his legacy has become a critical part of our international obligation in Antarctica. Nationalism is not contrary to the spirit of the Antarctic Treaty, for national endeavour is the means of contributing to the treaty system and there is national pride in becoming an influential party. Quiet, reflective nationalism is the fabric of Antarctica’s successful international governance.


While our ship was still on the Southern Ocean, the historian David Day wrote an opinion piece for the Age and the Sydney Morning Herald in which he questioned such expressions of nationalism in Antarctica. Entitled “Antarctica is no place for politicking: Mawson’s expedition was about territorial gain, not science,” Day’s essay was critical of the commemorative ceremony of 2012 that I’ve just described and argued that it fell into a familiar pattern of Antarctic behaviour: “From Mawson in 1912 to Monday’s ceremony, it has all been done in the name of territorial acquisition and retention, with science acting as a cover.” Science, argued Day, was only “the supposed purpose” of Mawson’s expedition; its real aim was territorial acquisition and economic gain.

As symbolic proof of this priority, David Day offered the following evidence: “As soon as Mawson had erected his huts at what he named Commonwealth Bay, he gathered his companions together on a nearby hill for a formal ceremony on January 30, 1912. Curiously, the ritual received no mention during the commemoration this week.” Thus Day’s commentary perceived a consistent sleight-of-hand across the past hundred years. Just as Mawson’s real strategic priorities allegedly hid behind the “cover” of science, so did this year’s commemoration underplay the true imperial dimensions of Australia’s endeavours down south.

Although I think David Day’s interpretation is wrong in both detail and analysis, as I will explain below, he is right to identify the constant tension between science and politics as characteristic of Antarctic history. The process of commemoration itself took us to the heart of that question about the importance of science in Antarctic affairs, both a century ago and today. A commemoration should be more than a symbolic gesture. It can draw the past and present into a meaningful and active dialogue, and it can thereby become a way of doing history. The very process of commemoration can demand such a detailed engagement with the day-by-day fabric of past experience that it can furnish new insights and understanding. It challenges our ethnographic eye to consider the larger meaning of everyday action. So our commemorative voyage taught us quite a bit about the priorities of the Australasian Antarctic Expedition through a close and sympathetic engagement with their words, actions and setting.

After the ceremonies, I climbed Azimuth Hill just west of the huts where the memorial cross to Ninnis and Mertz stands clearly on the skyline, surrounded by penguin colonies. Beyond it, the ice cliffs of Commonwealth Bay take your breath away. Belgrave Ninnis was swallowed by a crevasse on 14 December 1912 and Xavier Mertz died very early on 8 January 1913 in the sleeping bag next to Mawson during their desperate return from the Far Eastern sledging journey. In November 1913, after their unexpected second winter at Cape Denison, Mawson and the six other remaining men solemnly erected this wooden cross to the memory of their dead friends and their “supreme sacrifice… to the cause of science.” As I sat there among the nesting Adélies, gazing out across the sea ice to the tiny black dot on the horizon which was our ship, I was moved by this choice of words etched in wood which seems so emblematic of how the men of the expedition saw their endeavour. Their friends died not for “the glory of empire” or for “pride of nation,” but in “the cause of science.”

Are these mere words or do their actions support them? Pondering that question beneath the cross, I felt that they rang true. Ninnis and Mertz died on a crazy, unheroic but earnest quest to understand more about Antarctic geography. And the last year of their lives, like those of their companions, was devoted to the daily discipline of survival and scientific recording. The priorities of the expedition were clear, and our commemorative mapping of their daily activities had revealed them to us. No sooner had the huts been built and a "house warming feast" held on 30 January 1912 than daily meteorological recording began – on 1 February. Ninis and Mertz built two Stevenson screens to house the recording instruments, work began on the construction of the Absolute Magnetic Hut and Magnetograph House, a tide gauge was installed, biological and geological work begun, and seals and penguins were butchered for winter stores of meat and blubber. On 6 February, geologist Frank Stillwell recorded that Bickerton "spent the afternoon erecting a 5′ flag pole on top of the main hut and it gives a nice swanky appearance to the homestead." But it was not until the summer was almost over, not until the scientific infrastructure was in place, not until 25 February that Mawson set aside the time to raise a flag on that pole above the hut. The ceremony would have taken place even later had not weather delayed the departure of the exploratory sledging parties Mawson was so keen to despatch.

Therefore Mawson did not conduct his flag ceremony "as soon as [he] had erected his huts," as David Day suggested. Nor did he "[gather] his companions together on a nearby hill for a formal ceremony on January 30, 1912." Neither the date nor the place is correct in this account. Day was misled, as many others have been (including Peter FitzSimons in his recent book, Mawson), by a quirk in the historical record. We are lucky to have so many surviving diaries of the men of the Main Base at Commonwealth Bay, and the only flag ceremony they mention that first summer took place on 25 February – and it was held next to the huts, not on the nearby hill. (The proclamation ceremony on the hill took place nineteen years later, in 1931, as mentioned above.) None of the expeditioners mentions a ceremony on 30 January. The 30th of January was a memorable day for a different reason – it was the day the men had their first sit-down meal in the hut (their "house warming feast"), and also the first day they could play the gramophone.

The reason the mistake has often been made is that biologist Charles Laseron conflated the dates in his memoir, South with Mawson, which was written thirty-five years later, in 1947. The words he used in his book describing a ceremony on 30 January correspond exactly with those in his unpublished diary for 25 February. Scholars who have not been able to consult the primary sources have compounded the error by understandably relying on the easily accessible published account.

The difference in dates is not trivial or pedantic; rather, it goes to the heart of the argument about symbolism and about whether or not science was only “the supposed purpose” of the expedition and a mere “cover” for territorial behaviour. David Day remarked in his article that “Curiously, the ritual [on 30 January] received no mention during the commemoration this week.” But this was not due to some dark historical suppression of Mawson’s political behaviour; it was because the ritual did not take place until later in the establishment of the expedition. In the day-to-day challenge of gaining a physical and emotional foothold on the ice, flag planting was a less urgent priority than survival and science. At the Australasian Antarctic Expedition’s Western Base, a proclamation was not made until almost a year later, on 25 December 1912.

Of course I am not arguing that claiming sovereignty wasn’t important to Mawson. His whole career is testimony to his lifelong conviction that Australia must secure its political and economic interests in Antarctica. The expedition was unusual in putting geographical exploration ahead of the attainment of the South Pole. One had to be resolute and original to resist “the Race to the Pole” in 1910–11. But this is what Mawson did, for he had another vision. He wanted to explore new territory, and especially that vast stretch of Antarctic coastline directly south of Australia. He promoted the expedition not only as a scientific mission but also as an investment in Australia’s long-term security and prosperity. Mawson also saw an opportunity to demonstrate Australia’s frontier vigour on the world stage, “to prove that the young men of a young country could rise to those traditions which have made the history of British Polar exploration one of triumphant endeavour as well as of tragic sacrifice.”

So the Australasian Antarctic Expedition of 1911–14 was a contribution to the British Empire’s embrace of Antarctica, but it was also a distinctively Australian endeavour, a proud initiative of the recently federated nation, driven by this newfound nationalism and by a southern hemisphere sensibility about the need to know one’s backyard, to understand the shared world of stormy sea and swirling, icy air that emanated from the neighbouring Antarctic region. Exploring Antarctica was Australia’s duty, Australia’s “preserve,” Australia’s destiny.

It has often been claimed that the Australian nation was born in 1915 on a war-torn beach far away in Turkey on the other side of the world. But the heroic landing a few years earlier at Cape Denison, Antarctica – a landing also “hampered by adverse conditions” and a landing in Australia’s own region of the globe – deserves our attention and was imbued with similar symbolism and sentiment.


During that ceremony at Commonwealth Bay on 25 February 1912, Mawson used the ritual to express this complex mixture of imperial, national and scientific loyalties. The Union Jack and the Commonwealth (Australian) flag were both raised above the hut. But what the men most savoured in their diaries was not so much the flag or the proclamation but the first church service that Mawson nervously held in the hut, the celebratory dinner that followed, and the speech that Mawson gave that evening. What did he say on what Archie McLean called "this day of days in so far as the history of our stay in this place is concerned"? Cecil Madigan recorded: "He said we were snug & comfortable etc. – we were in a much worse place than any Antarctic expedition had ever landed in – the weather was far worse – it looked as if these winds were constant and sledging would be most difficult. No other expedition had been game to land here. Perhaps it was a terrible region – we were going to prove it. The meteorological results would be very valuable – the magnetic work – the biological work – but of more practical value at present was the geographical work – we must explore."

When the Aurora sailed away from Cape Denison on 19 January 1912, Captain Davis wrote: "They are a fine party of men but the country is a terrible one to spend a year in." The proclamation of territory and the assertions of nationalism were vital to strategy and morale. They had named their new home Commonwealth Bay and at dinner on 25 February Mawson wrapped himself in the Australian flag. But the science was vital, too, for its own sake – and also because, even more than planting the flag, it justified their presence for at least a year on this remote, alien continent and helped secure them to this windy place. These young men, mostly Australian, mostly in their twenties, mostly university-educated, were as eager as Mawson to explore and to apply their fresh scientific curiosity and training to new terrain. Science was their emotional anchorage, their intellectual sustenance, their daily discipline – and perhaps it might keep them sane. •

A world of our own making https://insidestory.org.au/a-world-of-our-own-making/ Fri, 17 Feb 2012 05:10:00 +0000 http://staging.insidestory.org.au/a-world-of-our-own-making/

Without realising it, we seem to have entered a new geological epoch. Brett Evans looks at how we got there and what it means


IT MAY seem hard to believe, but for the first three orbits they were so busy carrying out their scheduled tasks that none of them bothered to take a peek out the window. Only on their fourth trip around the moon did one of them chance to look up and see what they had all been missing. Commander Frank Borman can be heard exclaiming on the mission’s in-flight recorder: “Oh my God! Look at that picture over there!”

It was Christmas Eve 1968 and NASA's Apollo 8 was just returning from the far side of the moon and coming back into radio contact with Mission Control in Houston. What Borman saw made him, and his crew – Command Module pilot Jim Lovell and Lunar Module pilot Bill Anders – put aside their duties and scramble to find a camera. The creative impulse had trumped astronaut discipline.

First in black and white, and then in colour, they clicked away on a Hasselblad like tourists, and captured one of the most profound images in human history. It was the decisive moment par excellence.

From the cramped confines of their tiny craft – and further from home than anyone had ever been before – the three watched as a blue-and-white jewel, partially in shadow and set in a background of deepest space black, slid majestically from behind the desolate surface of the moon. Rising from the lunar horizon was, of course, the Earth; and no one had ever experienced it like this before.

In the end the Apollo 8 mission took 150 photographs of the Earth, but one picture in particular – NASA image AS8-14-2383, a colour snap taken by Anders and later dubbed "Earthrise" – quickly asserted itself as the image of the mission.

The men on Apollo 8 were the first humans to slip the bounds of Earth's pull and give themselves over to the gravitational field of another celestial body. They orbited the moon ten times and were the first of our species to gaze directly upon its far side. The work they did laid the basis for the first moon landing just a few months later in July 1969. There would have been no small step for Neil Armstrong, let alone a "giant leap for mankind," without the bravery of Borman, Lovell and Anders. But for all these achievements, it will be for those rolls of film that the crew of Apollo 8 will be best remembered. They were the first of our kind to see the whole of the Earth – and because of them we all got to see it too.

Before Apollo 8 there had been other pictures taken of the Earth from space, but they were uninspiring, in blurred black and white, captured by remote control from satellites or unmanned rockets, and unavailable to the public. “Earthrise” had been composed and executed by a human eye and a human hand. It was transformative, emotional; it was art.

The year 1968 was a famously tumultuous one in human history. Apollo 8’s crew members had photographed a planet which in the previous twelve months had witnessed the assassinations of Robert Kennedy and Martin Luther King; the Battle of Khe Sanh, the Tet Offensive, and the My Lai Massacre; the brief bloom of the Prague Spring and the student riots in Paris; the black power salutes by two American sprinters at the Mexico Olympic Games; the opening on Broadway of the musical Hair; a pitched battle between police and anti-war protesters on the streets of Chicago; and the election of Richard M. Nixon as the thirty-seventh president of the United States.

But also occurring on Earth in 1968 was something far more momentous than any of these events. It might not have been widely recognised at the time, but the planet itself was in the midst of a great change. And an early intimation of this change was about to appear in an obscure publication produced in Menlo Park, just south of San Francisco.


WHEN "Earthrise" was released by NASA in early 1969, a thirty-year-old countercultural entrepreneur called Stewart Brand had reason to celebrate. Brand was a Stanford-educated biologist and former paratrooper who had "dropped out" to pursue the Haight-Ashbury dream. He befriended Ken Kesey and the Merry Pranksters, organised light shows for the Grateful Dead, and started surfing the early wave of environmentalism. A few years before Apollo 8 blasted off – in February 1966 – he had been sitting, wrapped in a blanket, on the gravelly rooftop of a three-storey apartment block somewhere in San Francisco's North Beach. As he gazed at the city skyline he was suddenly gripped by a vision – which was not surprising, really, as he'd just dropped a hundred micrograms of lysergic acid diethylamide.

“The buildings were not parallel – because the earth curved under them, and me, and all of us: it closed on itself,” Brand wrote of this experience decades later. “I remembered that Buckminster Fuller had been harping on this at a recent lecture – that people perceived the earth as flat and infinite, and that was the root of all their misbehaviour. Now from my altitude of three storeys and one hundred miles, I could see that it was curved, think it, and finally feel it.”

But this was no ordinary acid trip; it actually brought forth a useful idea. What humanity needed, Brand decided during his drug-induced epiphany, was a colour picture of the whole of the Earth from space. “There it would be for all to see, the Earth complete, tiny, adrift,” Brand explained, “and no one would ever perceive things the same way.”

But how could he convince NASA to take such a picture? Not surprisingly, Brand chose a typically sixties method to publicise his cause: he produced a button. And on it he posed a question that was quintessentially sixties in its hint at conspiracy: “Why haven’t we seen a photograph of the whole Earth yet?”

Brand sent his buttons off to NASA, to all the members of Congress, to Soviet scientists and diplomats, and to officials at the United Nations. And he also sent one to the man who inspired the whole quixotic enterprise: Bucky Fuller himself, the part-crank, part-seer inventor of the geodesic dome, whose personal motto was “Dare to be Naive.” Brand then took his message to the people.

“I prepared a Day-Glo sandwich board with a little sales shelf on the front, decked myself out in a white jump suit, boots and costume top hat with crystal heart and flower,” he explained later, “and went to make my debut at the Sather Gate of the University of California in Berkeley, selling my buttons for twenty-five cents.”

For this act of street clowning activism Brand got kicked off the campus. The San Francisco Chronicle reported on the incident, and his one-man campaign was off to a flying start.

Whether Brand’s lobbying effort had any great influence on NASA is probably a moot point. Though NASA had not planned for the crew of Apollo 8 to capture an image like “Earthrise,” it was quick to let the world see it. When the image was published in January 1969 it caused a sensation. From his psychedelic vantage point almost three years earlier, Brand had seen more clearly than even the crew of Apollo 8 themselves what such an image would mean. As he expected, from space the Earth looked fragile, alone and breathtakingly beautiful. It looked like something that needed protection.

The timing of Apollo 8's snap-happy trip around the moon couldn't have been better for Stewart Brand. In late 1968 he was just beginning the project that would make him famous: the Whole Earth Catalog, a sort of DIY handbook for nascent greenies who were into self-sufficiency. It was one of the very first sources of practical information about alternative energy, appropriate technology and organic farming. It was broad-ranging, a little slapdash in execution, but inclusive and fun: if you wanted to build a yurt, the catalogue would tell you how.

The cover of the first Whole Earth Catalog featured a mocked-up picture of the Earth which Brand had originally used on posters in his campaign to persuade NASA that such a photo was worth taking. By the second edition – which came out in the early part of 1969 – Brand was able to use “Earthrise” itself.

In the editorial of the first Catalog, Brand attempted to explain the motivation and function of this strange new publication. It was here that he came up with the striking and now famous line: “We are as gods and might as well get good at it.” In other words, the stewardship of the Earth is in our hands; we’d better not stuff it up.

It was Brand’s promotional genius to pair this evocative phrase with a picture of our only home seen in all its glory and isolation, as if from the perspective of God in heaven.


FROM space, the Nukuoro Atoll in Micronesia looks like an amoeba under a microscope. Munich's International Airport looks like a computer chip. The Betsiboka Estuary in Madagascar looks like the tendrils of a seaweed caught in the tide, the Escondida Copper Mine in Chile's Atacama Desert like a forensic close-up of a bloodstain on a linen shirt. On a clear night the East Coast of the United States looks like a constellation of stars.

Look up the NASA websites Visible Earth, Earth from Space or The Gateway to Astronaut Photography and you’ll find thousands and thousands of breathtaking images of the surface of the Earth – all of them taken by astronauts. Yet if we imagine a similar collection of pictures taken from space at the time of Copernicus 500 years ago, things would look very different.

When the father of modern astronomy was writing On the Revolutions of the Celestial Spheres, many of the events and features visible from space today did not yet exist. There was no haze of pollution rising over Asia, Greenland was still dazzling in its whiteness, there were no massive human-made lakes, and there was less land under cultivation and vastly more natural forest. The great city of Rome in the 1500s, even on the clearest of nights, would not have emitted enough light to resemble a faraway star.

The difference between the Earth of Copernicus and the Earth of our time is down to one thing: the fecund creativity and computing power of the human mind. By the twenty-first century, intelligent life had become as powerful a force upon the Earth as photosynthesis, or the movement of tectonic plates. Our species’ huge brain has enabled us to foment an industrial revolution, build and light cities, cultivate enough food to feed billions, and alter our planet’s atmosphere.

If you possess the technology necessary to take a picture of your home planet from thousands of kilometres out in space, the question arises: have you got the place as a time share with all the other species, or do you own the joint outright? Copernicus lived on a planet dominated by nature; we are living on an increasingly artificial world. In fact some scientists now argue that we are living in a completely new geological epoch. They argue that the Holocene – the warm period of the past ten or so millennia – has been superseded by the Anthropocene – literally, a new age of humans.


IN THE east-wing of a Palladian mansion on Piccadilly a series of meetings will be held over the next several years which could redefine the geological epoch we are said to be living in. Burlington House is home to the Geological Society of London and the Geological Society is home to the International Commission on Stratigraphy, the little-known committee of scientists which decides when one geological epoch ends and the next one begins. The commission is currently in the process of trying to decide whether the Holocene has run its course. Should the Anthropocene join the Carboniferous, the Jurassic, and the Pleistocene on the geological timescale? It may sound like an abstruse academic argument, but it’s not; as the Economist noted in May 2011: “It is one of those moments where a scientific realisation, like Copernicus grasping that the Earth goes round the sun, could fundamentally change people’s view of things far beyond science.”

The Anthropocene is not a new scientific idea. As early as 1873 the Italian geologist Antonio Stoppani had written of the "anthropozoic," in which humans represented a "new telluric force which in power and universality may be compared to the greater forces of the earth." Stoppani expounded this view at the time the Industrial Revolution was just picking up speed. It was from 1763 to 1775, after all, that James Watt had laboured to make the steam engine energy-efficient enough to become economic. But in the modern era the term is most associated with the Dutch Nobel Prize winner Paul Crutzen – the atmospheric chemist who "discovered" the hole in the ozone layer.

“I was at a conference where someone said something about the Holocene. I suddenly thought this is wrong,” he once explained. “The world has changed too much. No, we are in the Anthropocene. I just made up the word on the spur of the moment. Everyone was shocked, but it seems to have stuck.” He first used the term in print (in an article he co-authored with E.F. Stoermer) in 2000.

The key proponent of the Anthropocene in Australia is Professor Will Steffen, executive director of the Climate Change Institute at the Australian National University. When I spoke to Steffen earlier this summer he explained that defining the Anthropocene requires a thought experiment on the part of today’s geologists. They must ask: what would geologists of the distant future find in the sedimentary record of our time that would convince them that an epoch-making change had occurred sometime around the twentieth century?

Of course, anthropogenic climate change is the most obvious manifestation of the Anthropocene. If the geologists of the future examine ice cores – “assuming there will be some ice left,” Steffen jokes, a little darkly – the cores will show not just higher levels of carbon dioxide in the atmosphere but also increased levels of dust particles, evidence of longer, drier periods in some parts of the world.

But, as Steffen points out, there would still be a strong case for redefining the geological epoch even if we had not been changing the Earth’s climate by releasing large amounts of carbon into the atmosphere. Many of our other activities are also altering the Earth in significant ways.

For a start, a lot of plants and animals won’t be turning up in the post-Anthropocene fossil record; many species have become extinct as a result of human predation and habitat change. As humans cut down native forests and increase the amount of the Earth’s surface dedicated to agriculture, many species are dying out for want of a home to live in. At the same time the fossil record will show that domesticated animals, privileged by their usefulness to humans, will have spread across the globe. The bones of cows, pigs and chickens will far outnumber the remains of wild animals.

The human genius for digging and reshaping the surface of the Earth will also be noticeable to the geologists of the future. From the megacities where most of us now live, to the roads and railways that crisscross the continents, we have scarred and transformed the planet. More importantly, humans have intervened in the planet’s water cycle on a hitherto unimaginable scale by constructing tens of thousands of large dams in the last fifty or so years. The huge lakes created by these concrete cathedrals to engineering have all but stopped the flow of sediments to the sea in many cases, leading to the accelerated erosion of the world’s great river deltas. The oceans have also been altered by human hand. They will grow in size as their levels rise owing to ice melt in places such as Greenland. They will suffer from increasing acidification due to climate change. And the marine life that lives within them will be affected by our need for protein; there have already been collapses in some fish stocks because of over-exploitation.

So when did the Anthropocene start? William Ruddiman, a scientist at the University of Virginia, would date it from the beginnings of agriculture eight thousand years ago. Crutzen would date it from the invention of the steam engine. Steffen plumps for a starting point in or around the second world war. Hiroshima, after all, marks the beginning of the nuclear age. From 1945 onwards there will be a “golden spike” of measurable radioactivity discernible to the geologists of the future. The immediate postwar period also marks the beginning of what Steffen calls the Great Acceleration. From the end of the second world war a whole range of human activities grew in intensity. Industrial production, economic growth rates, global population, car ownership, international air travel, for example, all started to climb and just kept on going.

It could be a while before the Anthropocene is officially accepted by science, however; like the processes it studies, geology doesn’t tend to move very fast. It took decades for the discipline to decide on the definition and timescale of the Holocene. The present process could take at least five years of meetings and committees and argument. The commission’s Anthropocene Working Group is currently preparing its case through peer-reviewed papers and conferences. Eventually the working group will have to convince a lot of sceptical geologists.

If the Anthropocene becomes an established scientific fact, what are the implications? Well, for starters it will mean kissing the Holocene goodbye, which is a pity because the Holocene is the period in which human civilisation got its start. According to Steffen, “We like the Holocene very much; the Holocene is the sweet spot for humanity.” But unfortunately, “there is a cogent argument that the Holocene state of the planet is the only one we know for sure – for sure – that humanity can really thrive in. Now it may be that we can adapt ourselves to others, but we don’t know this for sure.” Like everyone else, Steffen is not sure how benign the Anthropocene will turn out to be.

We have been clever enough to change the planet we live on – and maybe clever enough to recognise that we have done it – but will we be clever enough to understand what this new god-like status means? “Well, we have been clever, yes,” Steffen tells me, “but perhaps not wise.” The term Anthropocene is clearly both a putative scientific category, and a warning.


LATE on the afternoon of 1 November 1941, just outside Hernandez, New Mexico, an old Pontiac station wagon skidded to the side of the highway and an agitated man leapt out yelling to his companions, “Get this! Get that, for God's sake! We don't have much time!” The urgent man was a photographer and he had just seen the image that would shortly become his best-loved work. In the near distance, spread before him in the twilight, was a scene of transitory beauty. A church cemetery – its white crosses starkly lit by the fading light of a low-slung sun – was overhung majestically by a rising moon. Though he raced to set up his tripod and camera, the photographer had time to take just a single frame before the light went and the spell was broken. The resulting photograph was named “Moonrise” and sixty-five years later a print of it would sell at Sotheby’s for over $600,000.

“Moonrise” is the best-known work of the great American landscape photographer and pioneer environmentalist Ansel Adams. There has always been a strong relationship between environmental activism and photography: a beautiful picture is worth a thousand pamphlets printed on recycled paper. Adams, for example, collaborated in the creation of the Sierra Club Exhibit Format Series, which helped to generate public opposition to damming the Grand Canyon. Adams believed that people would respond to the natural wonders of the world with awe and understanding if only they could experience them – even if vicariously, through his photographs. “I believe in beauty,” he wrote. “I believe in stones and water, air and soil, people and their future and their fate.”

There is an obvious connection between Adams’s “Moonrise” and NASA’s “Earthrise” – both captured something awe-inspiring and transitory. But has beauty alone ever been enough to change how we think about the world?

Like Adams, his fellow American photographer J. Henry Fair is an artist and an environmental activist, but Fair’s political and artistic strategy is very different. Wanting to record what was going on behind the fences keeping him out of America’s factories, Fair took to the air – in light aircraft and helicopters. From this vantage point, he created pictures that looked like beautiful abstracts. But, of course, as the detailed captions of his photographs reveal, Fair’s images are all too real. What looks at first glance like a Photoshopped composition of rust-coloured reds, chemical greens and high voltage blues is in fact an astronaut’s view of industrial America.

Fair’s style has been dubbed “the toxic sublime”; it records the scars and suppurations created by our paper mills, power plants, coal mines and oil fields. Some of Fair’s most beautiful images are of the by-products left over from the manufacture of fertiliser. From the air they look like the surface of some strange ice planet in another galaxy; in reality, of course, they are vast green-and-white slurries called “gyp stacks,” chock-full of gypsum, phosphorus and radioactive material. From such local disasters comes the fertiliser that allows us to feed a global population of seven billion.

For most of our history as farmers we relied on good old-fashioned faeces to fertilise crops. Then, in the early twentieth century, the German chemist Fritz Haber developed a process that synthesised ammonia from atmospheric nitrogen. Within a few years another German chemist, Carl Bosch, worked out how to scale the process up to an industrial level. In giving mankind the ability to fix nitrogen in massive quantities at an economic cost, the Haber-Bosch process also gave us the Green Revolution, which feeds a third of the world’s population. And according to Will Steffen, the Haber-Bosch process also altered the Earth’s nitrogen cycle to such an extent that it is yet another marker of the Anthropocene. But where does all this fixed nitrogen go when agriculture is finished with it? Very often, down the world’s rivers and out into the world’s oceans, where it fundamentally changes the nature of the marine environment. The famous Mississippi Delta Dead Zone, for example, is just one of dozens of such zones around the world. Fertiliser and effluent surging down the Mississippi have left thousands of square kilometres of water in the Gulf of Mexico in the permanent grip of an algal bloom-induced hypoxia – there is less oxygen in the water and life struggles to survive there.

The modern fertiliser factory may be the pre-eminent symbol of humanity’s grand bargain with the planet. By helping ourselves to perpetuate the success of our species, we change the planet itself; by bringing our science and culture to bear on a problem, we create unintended consequences that then require an even more impressive technological solution. And repairing the nitrogen cycle might be the sort of technological innovation – also known as geoengineering – that comes to define the Anthropocene era.


NOW in his seventies, Stewart Brand lives on a tugboat called Mirene, which he moors at Sausalito, just across the Bay from San Francisco. “Tugboats,” according to Brand, “are the largest thing in the world described as ‘cute’.” Since lobbying NASA all those years ago he has gone on to create a career as a countercultural maven, internet pioneer and contrarian commentator.

In 2009, at the age of seventy-one, Brand published Whole Earth Discipline: An Eco-pragmatist Manifesto. In this work he makes a small but significant change to the aphorism he coined as a young man back in 1968. “We are as gods and have to get good at it,” he says now. As for many of his fellow environmentalists – such as James Lovelock, George Monbiot and Mark Lynas – the alarming prospect of climate change has forced Brand to reassess some of his most cherished and long-held convictions.

In Whole Earth Discipline Brand utters a number of heresies against mainstream environmental opinion. He argues in favour of nuclear power because it is less carbon-polluting. He advocates the use of genetically engineered crops because they will have a smaller impact on the nitrogen cycle. He maintains that megacities enhance sustainability. But Brand’s biggest leap of faith concerns geoengineering. The Anthropocene could be defined as the time when humans began to geoengineer the Earth without fully understanding the consequences of our actions. Intervening in the water cycle, fixing nitrogen far beyond the ability of nature, intensifying the greenhouse effect – these are all examples of unintentional geoengineering. But now Brand, and others who think like him, believe it is time to seriously consider the use of intentional geoengineering. According to a paper published by the Royal Society, this type of geoengineering is “the deliberate large scale manipulation of the planetary environment to counteract anthropogenic climate change.”

Scientists have known for some time that large volcanic eruptions, which spew enormous amounts of sulphur dioxide into the upper atmosphere, can reduce the amount of solar radiation reaching the Earth’s surface enough to cool the planet. It has been estimated, for example, that the eruption of Mount Pinatubo in the Philippines in 1991 cooled the Earth by 0.5 degrees Celsius for over a year.

Some scientists argue we could replicate this naturally occurring effect on the Earth’s albedo – the reflectivity of the planet – by using high-flying aircraft to pump sulphur dioxide gas into the stratosphere. Other even more outlandish schemes propose distributing trillions of reflective discs in space between the sun and the Earth. My favourite involves a fleet of 1500 so-called Albedo Boats sending up huge plumes of sea spray to seed large fluffy white clouds that reflect the sun’s rays away from our warming planet.
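
The physics these schemes trade on can be illustrated with a back-of-the-envelope calculation. The sketch below is a toy zero-dimensional energy-balance model, not a representation of any particular proposal: it ignores climate feedbacks and the gap between the planet’s effective radiating temperature and its surface temperature, and the half-a-percent “brightening” figure is purely illustrative.

```python
# Toy zero-dimensional energy balance: how a small increase in planetary albedo
# lowers the Earth's effective (radiating) temperature. Illustrative only.

SOLAR_CONSTANT = 1361.0  # W/m^2, sunlight arriving at the top of the atmosphere
SIGMA = 5.670e-8         # W/m^2/K^4, Stefan-Boltzmann constant

def effective_temperature(albedo: float) -> float:
    """Temperature at which the planet radiates away the sunlight it absorbs."""
    absorbed = (1.0 - albedo) * SOLAR_CONSTANT / 4.0  # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

baseline = effective_temperature(0.30)     # Earth's present albedo is roughly 0.30
brightened = effective_temperature(0.305)  # an assumed half-a-percent brightening

print(f"cooling: {baseline - brightened:.2f} K")  # roughly 0.45 K
```

On these crude numbers, nudging the albedo from 0.30 to 0.305 cools the planet by a little under half a degree, the same order as the Pinatubo effect described above; whether any engineered scheme could do this reliably, and at what cost, is exactly what is in dispute.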

Maybe Brand is still under the influence of the technological optimism associated with his early mentor, Buckminster Fuller. But he is not alone in calling for geoengineering to be taken seriously. The man credited with coining the term Anthropocene also concedes that such schemes might be necessary in the future. If we can’t lower the emissions of greenhouse gases by changing our behaviour, even a scientist of Paul Crutzen’s standing is willing to contemplate geoengineering as humanity’s Plan B.

Of course, the obvious response to such schemes will be: isn’t this just an example of moral hazard? If polluters think the Earth can be bailed out by a technological fix, won’t they lose any incentive to change their behaviour? And more basically: why should we believe that geoengineering could actually work? Will Steffen, for example, is not convinced. The malignant aspects of the Anthropocene can be overcome, he would argue, but only by international cooperation, the development of sustainable energy sources, and new ways of thinking about the Earth and how it works as a system.

At the heart of the debate about geoengineering is an often unacknowledged difference of opinion about human nature itself. One side sees human behaviour as incorrigible; the other believes we will recognise enlightened self-interest when we see it. One side sees a composition of bright colours and calls it abstract art; the other side sees the same picture and sees an oil spill in the Gulf of Mexico.


AT FIRST glance it looks like a mote of dust caught in a sunbeam. Yet on closer inspection, that tiny blue speck turns out to be the Earth – and the sunbeam really is the light of the sun.

The image, known as the “Pale Blue Dot,” was captured on 14 February 1990 with a remote-controlled camera on the Voyager 1 space probe as it was leaving our solar system. Because of the way the sun’s light was scattered off the surface of the spacecraft, the Earth looks like it is sitting in its own special column of light. At the time, Voyager 1 was over six billion kilometres from Earth; nothing human-made had ever been so far from home.

Travelling at the speed of light, the data for each pixel took nearly five and a half hours to reach NASA. When all the information needed to assemble the completed grainy image eventually arrived on Earth, the astronomer Carl Sagan had reason to celebrate. Sagan had proposed a decade earlier that Voyager 1 should attempt to take such a photograph. His purpose was not scientific, but philosophical. “I thought that – like the famous frame-filling photos of the whole Earth – such a picture might be… useful as a perspective on our place in the cosmos.”
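
That travel time follows directly from the distance quoted above. A quick check, taking the article’s round figure of six billion kilometres at face value:

```python
# Light-travel time from Voyager 1 to Earth, using the distance given in the text.
distance_km = 6.0e9            # approximate Earth-Voyager 1 distance in early 1990
speed_of_light_km_s = 299_792  # kilometres per second

hours = distance_km / speed_of_light_km_s / 3600
print(f"one-way light-travel time: about {hours:.1f} hours")  # about 5.6 hours
```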

Sagan was famously inspired by the “Pale Blue Dot” to write:

The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds. Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.

Sagan was convinced that an image like the “Pale Blue Dot” would put all our Earthly squabbles into perspective. It would help, he hoped, to change the way humanity thinks about itself, and its relationship to the planet it lives on. Even in a geological timeframe, there are decisive moments. As we plunge headlong into the Anthropocene, maybe it’s time to confront the challenges and potential dangers of this new epoch with the same sense of urgency that grips photographers when they reach for their camera. •

At sea with Einstein https://insidestory.org.au/at-sea-with-einstein/ Fri, 16 Dec 2011 01:23:00 +0000 http://staging.insidestory.org.au/at-sea-with-einstein/

Tim Thwaites reviews an oblique introduction to one of the great figures of the twentieth century

AN ABIDING irritation for Australian scientists is the fact that, unlike for sports personalities or film and television stars, their achievements go largely unheralded. It’s not that they aspire to become celebrities, rather that with recognition comes support for their work.

There’s a further annoyance. When scientists are noticed, they are stereotyped as lab-coat-wearing geeks. Who knows why the media thinks it is “quirky” that a polymer chemist is a rabid Essendon supporter, or a medical researcher plays bass guitar in a heavy metal group – but it drives career scientists to distraction.

So there’s a certain irony in the fact that for much of the twentieth century, one of the most recognisable faces, the ultimate photo opportunity, the person whom everyone wanted to meet, the Nelson Mandela of his time, was a scientist, Albert Einstein. What’s more, Einstein’s public persona as the gentle, otherworldly eccentric with the hair going everywhere – the man who wore no socks – is more than a little responsible for the scientist stereotype.

Looking back eighty or ninety years, it’s hard to grasp the scale of Einstein’s celebrity between the two world wars. People turned out in their tens and hundreds of thousands just to see him and hear him speak – even though, as with Stephen Hawking, most didn’t understand a word of it. For all that, Einstein was an intensely private person for whom the unbidden “celebrity” status was hugely irritating, even painful.

The great achievement of Josef Eisinger’s Einstein on the Road is to get beneath that public surface and discover Einstein the man, writing in his travel diaries. Apart from his letters, these diaries appear to be the only truly candid thoughts to which we have access. And the picture they paint is of someone of enormous intellect and humanity in all senses of the word – a person who was immensely warm and generous, but not beyond criticising his colleagues or his fellow Jews; who at times was prescient, but not always right or even consistent in his views, and had prejudices; but who took delight in music-making, the interplay of light and cloud, and the constancy of his friends.

The price the reader pays for this unprecedented intimacy is having to plough through what is, at times, a somewhat repetitive travelogue, and to wrestle with a poorly edited set of notes which are pivotal to the context and understanding of the book – especially for the vast majority of people whose knowledge of 1920s and 30s European history and science is sketchy.

The author, Josef Eisinger, is a physicist who escaped from Nazi-occupied Vienna in 1939 and has lived in North America ever since. Still a student, he was interned in Maritime Canada at the beginning of the war. Later, he was released under the sponsorship of the Mendel family, who lived in Toronto. They were close friends of Einstein and, not surprisingly, Eisinger developed an interest in the man both professionally and personally.

Because of this association, he was contacted a few years ago by an Einstein researcher, who subsequently introduced him to the Albert Einstein archive at Princeton University, where Einstein spent his last twenty years. There, Eisinger found the travel diaries, which had never been published.

Einstein kept a diary only when travelling. But between 1922 and 1933, when he was in his forties, he travelled a lot. These diaries were written for his own use, and provide an almost unparalleled look at his day-to-day thoughts.

They don’t, however, necessarily provide a balanced view of what he was thinking. Close to the end of the text is a discussion recorded by one of Einstein’s friends in Oxford. “The conversation turned to diaries, and Einstein remarked that he had kept a diary on his recent American tour, but the most interesting things he had experienced he had never written down” – a revelation that comes as something of a letdown for the reader.

During his time “on the road” – mostly at sea, in fact – Einstein visited Japan and China and places in between; Palestine and Spain; Argentina, Brazil and Uruguay; the United States and Central America, several times; and England, several times. As well as the opportunity this afforded to interact with foreign colleagues and visit important research institutions, Einstein clearly relished the seclusion of the sea voyages and the time they gave him to think and read. Perhaps more importantly, he was able to get away from Germany, particularly Berlin, where the turn of events in politics and society, and the associated rise of the Nazi party, were disturbing him more and more. In the end, his travels allowed him to set up a future for himself and his family in America.

The book follows each of his journeys. They are enclosed by an initial chapter and an epilogue, which set the scene and provide a dénouement. But there’s more. Before the scene is set, there are – count them – two separate forewords, a preface, acknowledgements, an introduction and a timeline; later, there’s a bibliography, an index and notes. This is a well-documented, though perhaps over-documented, volume.

The greater portion of the book is in the form of indirect reporting of the contents of the diaries, bolstered by explanatory material and supporting evidence from contemporary sources. In his introduction, Eisinger is at great pains to point out that everything he attributes to Einstein – feelings, observations and comments – is based directly on what is in the diaries. The book is at its best, however, when Einstein is quoted directly, expressing his annoyance at local bigwigs (Grosskopfeten) or organisers (Affen, apes or idiots), or his delight at the sunlight in the American desert. Perhaps the material was not available, but I would have appreciated more of these direct quotes; whenever they appear, they have great resonance.

Many themes recur through the book. Einstein’s vast love of music and his great skill as a violinist, for instance, are inescapable. He devotes an inordinate amount of energy to organising opportunities to play chamber music with whoever is available, wherever he is – on the high seas, or in China or Latin America. Clearly, music is an enormous resource, refuge and recreation for his mind. He devotes equal energy to the cause of pacifism, giving impromptu speeches and carefully programmed radio broadcasts.

Einstein’s greatest scientific achievements were the four papers he published in 1905 while he was still working as an examiner in the Swiss Patent Office in Bern, including one outlining his Special Theory of Relativity. He followed this up with his General Theory of Relativity in 1916. After that, although he contributed important observations and debate, his major contributions to physics were done. After 1905, his reputation was assured. But it was only in 1919, when the English physicist Arthur Eddington announced to the Royal Society in London that he had measured the deflection of starlight by the sun’s gravitational field – as predicted by the general theory – that Einstein achieved rock-star status.

In the decade when he was travelling, much of his time was taken up with chasing a unified field theory to underpin physics. This was the Holy Grail he never found. He was constantly devising new approaches, only to recognise their fallacies himself or have colleagues point them out. It’s a fascinating commentary on the science of the time that no one thought any the less of him for getting things wrong. He was allowed to make mistakes.

We tend to think of our era as one drawn together by the power of the internet – that science has never been so global. But this book and Einstein’s travels show that the scientific world between the wars was also “small” and equally active. Einstein seems to have met or corresponded with most of the great figures of physics of his time – and many more interesting people besides. Despite his horror of public engagements, dinner suits and the endless shaking of hands, he relished intellectual debate and exchanges of views.


EINSTEIN also had his biases. He believed that the world, and the science that explained it, should have a beauty – an aesthetic appeal – about it. So despite contributing to its development, he wasn’t happy with the description of light in terms of both particles and waves. And while he predicted the necessity of quantum mechanics, he never warmed to the descriptions of fundamental matter in terms of probability that his colleagues Bohr, Heisenberg and Born provided. “God does not play dice with the world,” he famously declared.

Unlike many, however, he never seemed to let his prejudices and ideology obstruct his vision. If the theory or the science wasn’t to his taste, he didn’t discount it; he simply looked for a better explanation based on evidence. And, in the case of his unified theory, he never found it. Maybe this provides a lesson to contemporary politicians for whom the current state of science on climate change and alpine grazing goes against the grain.

Eisinger is punctilious, almost to a fault, in providing the reader with helpful background information. Do we really need to know the tonnage, top speed, name changes and eventual fate of every vessel Einstein sailed in? There are, however, areas where more information would have been of interest. Here, one suspects, Eisinger’s own sensibilities may have intervened.

It is well known – and quite clear from the material in the diaries – that Einstein had an eye for the ladies. As Eisinger points out a couple of times, he was known not to set great store by monogamy – although he was devoted to his wife Elsa, who accompanied him on most of his trips. Beyond those few comments, though, nothing more is said. A little more information about this side of Einstein’s character, which undoubtedly manifested itself during his travels, would have been of interest, not for prurience so much as to judge the impact, if any, that it had on his life and work.

The worst aspect of the book is the nearly 250 explanatory endnotes in 170 pages of text. To use this information the reader must flip back and forth, once or twice a page, to the information at the back of the book. Because the numbers in the text are placed at the end of sentences, even if the reference is to information at the beginning or in the middle of the sentence, the notes often refer to unexpected aspects of the text. Not only are the notes repetitive, but in several cases the information they contain is provided elsewhere in the text. In one chapter the numbering is incorrect. Some of the information – for instance, the aforementioned shipping news – is hardly essential, and much of the rest could have happily been included in the text itself, without interrupting the reader. This loose editing is all the more surprising given that such care is taken with the overall documentation.

Fewer notes, at the expense of a little more information in the text, and placed at the bottom of the page – in short, a better job of editing – would have contributed to the ease of reading what for the most part is an unusual and interesting introduction to one of the towering characters of the twentieth century. •

The fatherhood myth https://insidestory.org.au/the-fatherhood-myth/ Tue, 26 Jul 2011 02:33:00 +0000 http://staging.insidestory.org.au/the-fatherhood-myth/

Fathers’ groups claim many children don’t know who their real father is. But what does the evidence say?

A few years ago I found myself in an ABC Radio National studio discussing misattributed paternity – those traumatic and sometimes humiliating cases where a child’s biological father is not who he or she thinks he is. I had been invited onto the program, Australia Talks Back, because I was researching the social implications of new DNA-based technologies that were making it possible to uncover mistaken paternity. Most of my research had involved interviews with the entrepreneurs who had established the emerging paternity testing industry in Australia and the United States.

The first question I was asked concerned the extent of misattributed paternity. We don’t know the true figure, I said, for the simple reason that it’s impossible to test a genuinely random sample of the population. There are always people who will refuse to participate in random studies. This isn’t a problem if those who refuse are no different from those who do participate, but it would certainly be a problem for a study of this kind, because women with doubts about who fathered their children would be the most likely to find reasons not to participate.

Almost immediately an irate listener called to ask where on earth the program obtained its “experts.” He referred confidently to a British study which showed that misattributed paternity affects more than 30 per cent of the population. By this reckoning, more than three in ten people are mistaken about the identity of their father. Later in the program an interviewee from a fathers’ rights group reiterated the claim that misattributed paternity is widespread in Australia.

I did many similar interviews around this time, and they all followed the same course. Interviewers wanted to know the extent of misattributed paternity and I invariably said that the answer was unknown. Yet my interviewers invariably found other “experts” who declared unequivocally that misattributed paternity is widespread. Most of them said that 10 per cent of the population was mistaken about the identity of their biological fathers, but sometimes they said the figure was as high as 20 per cent or even 30 per cent.

I was a little annoyed by this experience, so I set about reading every piece of available research on the subject, including the British study that allegedly showed a rate of 30 per cent. I also kept close track of new findings in the field, and did my own research. In the process I found out quite a bit about how an urban myth was born and transmitted around the world, and how it helped create a new industry.


The stubborn figure of 30 per cent comes from the published transcript of a symposium on the ethics of artificial insemination that was held nearly forty years ago, in 1972. “We blood-tested some patients in a town in south-east England,” Dr Elliot Philipp told the symposium, “and found that 30 per cent of the husbands could not have been the fathers of their children…”

At this point Dr Philipp was interrupted by a judge, who observed that “surely the figure of 30 per cent must be a minimum?” The judge clearly understood that while blood tests could definitively exclude paternity, they could not definitively establish it. This is why experts in paternity testing generally speak of an “exclusion rate” rather than a “non-paternity rate” or “misattributed paternity rate.”

Dr Philipp agreed that the figure was “a minimum.” He then explained how he came to be doing the tests. His team was “screening some female patients by testing their husbands for their blood groups” as part of a study about the formation of antibodies. The results surprisingly showed that “30 per cent of the children could not have been fathered by the men whose blood group we analysed.” Another participant asked about the number of people who were tested. Dr Philipp replied, “Not large – between 200 and 300 women – but large enough to give us a real shock.”

As we’ve seen, this brief conversation took on a life of its own, despite the fact that Dr Philipp never published the findings of his study. As a result, his precise tests and his population sample were never identified. One participant at the symposium described the sample as “highly biased,” but we can only guess what this means. It might mean that the sample consisted of unmarried mothers, who were an easy target for medical studies at the time. Later studies leave no doubt that misattributed paternity is much more likely for unmarried mothers than married mothers.

More generally, the fact that the findings were never published means that they were never independently evaluated by other experts. Ironically, during the 1940s, 1950s and 1960s there had been refereed studies about paternity in Britain and the United States, based on blood testing and published in reputable journals. None of them came close to 30 per cent.

Dr Philipp’s findings survived the test of time simply because they are shocking. Talkback radio gave them a new lease of life, and so has the internet.


Since Dr Philipp’s study, testing for paternity has become much more sophisticated. DNA techniques invented in the 1980s mean it is now both cheaper and more accurate – so much so that the accuracy of these tests is now more than 99.99 per cent, a pretty reliable basis for establishing paternity.

As a result, an industry dedicated to paternity testing has emerged, pulling in people from all kinds of related businesses. One of the entrepreneurs I interviewed had his start in a small bird-sexing business in the outer suburbs of Melbourne. One day, he recalled, “the phone rang and I picked it up.” When the person on the other end asked whether he tested for paternity, he replied, “No, we don’t.”

The next day there was another call: “Do you do testing of paternity?”

“No, we just do animals.”

The next day there was another call: “Do you do testing of paternity?”

“Can you hold on one minute?” the entrepreneur asked. He put down the phone and said to his business partner, “Can you believe this is the third phone call I’ve had this week? Somebody wants paternity testing.”

And his partner said, “We’ll do it!”

Most of the demand for paternity tests comes from mothers who want to establish paternity in order to secure child support. Some of it comes from fathers – often divorced or separated – who want to disprove paternity in order to avoid child support. In the United States more than 400,000 paternity tests are carried out every year. In Australia the figure is about 10,000, but only half of them involve Australian citizens; the balance are “export” cases, with offshore customers sending their specimens to Australian laboratories for testing.

These figures translate into more than five times as many tests per 1000 births in the United States as in Australia. One of the main reasons for the difference is that in Australian law the “marital presumption” that a husband is the father of his wife’s children also applies to de facto couples. (This was introduced without fanfare back in 1975.) In the United States the marital presumption doesn’t apply to de facto couples, which means that the paternity of children born outside marriage must be established legally. This routinely leads to paternity testing.
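
The arithmetic behind that comparison is straightforward, although it needs annual birth totals, which the article does not give. The figures below are approximate numbers of my own for around 2008 and should be read as assumptions rather than as part of the original research.

```python
# Rough check of "more than five times as many tests per 1000 births."
# Test counts are from the article; the annual birth totals are assumed round figures.
us_tests = 400_000
au_tests = 10_000 / 2          # half of Australia's ~10,000 tests are export cases
us_births = 4_200_000          # assumed approximate US births per year
au_births = 290_000            # assumed approximate Australian births per year

us_rate = us_tests / us_births * 1000   # about 95 tests per 1000 births
au_rate = au_tests / au_births * 1000   # about 17 tests per 1000 births
print(f"ratio: about {us_rate / au_rate:.1f} to 1")  # a little over 5
```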

In 2008 the non-paternity rate reported by paternity testing laboratories in the United States was 25.9 per cent. My survey of a selection of Australian laboratories for the same year arrived at a non-paternity rate of 23.7 per cent.

The problem with these figures is obvious. The participants are not a random sample of the population. On the contrary, they are a group of people who have doubts about the paternity of a child or children. The main thing we can say on the basis of these figures is that about three-quarters of people who have some reason to doubt paternity will find that their doubts are unfounded.
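
That selection effect is worth making concrete. The sketch below shows that an exclusion rate of roughly 25 per cent among people who order tests is entirely consistent with a population-wide rate of only a few per cent, provided those with genuine grounds for doubt are far more likely to be tested. Every number in it is an illustrative assumption, not a measurement.

```python
# Why a ~25% exclusion rate among test customers says little about the population rate.
# All inputs are illustrative assumptions.
population_rate = 0.02        # assume 2% of children are misattributed
test_if_misattributed = 0.50  # assume half of those cases prompt a test
test_if_attributed = 0.03     # assume 3% of correctly attributed cases are tested anyway

excluded = population_rate * test_if_misattributed
confirmed = (1 - population_rate) * test_if_attributed
exclusion_rate_among_tested = excluded / (excluded + confirmed)

print(f"{exclusion_rate_among_tested:.1%}")  # roughly 25%, from a 2% population rate
```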

Perhaps the most surprising thing about the figures is that the non-paternity rate in Australian laboratories appears to be lower than the rate in American laboratories. Given that Americans are five times more likely to have a paternity test at all, this suggests that the extent of misattributed paternity is much lower in Australia than the United States.


For the best available evidence about the extent of misattributed paternity we need to turn to medical research. This research is motivated by a medical condition or treatment, such as cystic fibrosis or bone marrow transplantation; the discovery of misattributed paternity is an unintended consequence. Occasionally researchers – like Dr Philipp – report their unintended findings; fortunately, most of them report their findings in more detail than he did.

Some scholars have aggregated all this evidence to produce an average “non-paternity rate.” The problem with doing this is that the quality of reports varies widely, as illustrated by Dr Philipp’s findings. It makes more sense to identify the best studies – those that fully explain how they came to their conclusions.

The best British study, published in 1991, suggests a non-paternity rate of about 1 per cent. A 1992 French study indicates a rate of 2.8 per cent. A 1994 Swiss study has a maximum rate of 0.78 per cent. A 1999 Mexican study comes in at 11.8 per cent. And the best North American study, published in 2009, proposes a rate between 1 and 3 per cent. There are no published Australian studies.

These figures could all be distorted by the problem of who is willing to participate, which I mentioned earlier. Even so, there are some striking differences between countries. The rate of non-paternity in Mexico is especially high and the rate in Switzerland is especially low. Almost certainly there are underlying cultural differences in how marriage, sexuality and parenting are organised in these countries, which shape these different rates.

Following this logic, it seems likely that the Australian rate is in the same ballpark as rates in Britain and the United States. Australia has more in common with these countries in terms of how family relationships are organised than with other countries, such as Mexico and Switzerland. By this reckoning, the rate of misattributed paternity in Australia would be somewhere between 1 and 3 per cent.

More precisely, the rate of misattributed paternity in Australia is probably closer to 1 per cent. By most measures family relationships in Australia have more in common with those in Britain than in the United States. That figure is also consistent with the evidence from the paternity testing industry that Australian rates are lower than those in the United States.

There is one other source of evidence on misattributed paternity. A succession of large-scale representative sex surveys were launched in rich countries in the wake of the HIV/AIDS crisis. They included questions about multiple sexual partners, which are a necessary condition of misattributed paternity, and so they might provide a vehicle for independent estimates of misattributed paternity.

I have done my own calculations based on British surveys in 1990 and 2000, for which raw data is available. They indicate an underlying non-paternity rate for children born in 1990 of somewhere between 0.7 per cent and 2 per cent. The estimated rate differs widely according to the marital status of the mother. For the offspring of married women, the rate is between 0.3 per cent and 0.6 per cent. For cohabiting women, it is between 1.1 per cent and 2.7 per cent. For other women – single, divorced, separated or widowed – it is between 2.3 per cent and 8.1 per cent.
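
The way such subgroup rates roll up into an overall figure can be sketched as a weighted average. The per-group rates below are the ones quoted above for births in 1990; the shares of births by the mother’s marital status are illustrative assumptions of mine, not the weights used in the actual calculation.

```python
# Weighted average of subgroup non-paternity rates (births in 1990).
groups = {
    # name: (assumed share of births, (low %, high %) from the text)
    "married":    (0.70, (0.3, 0.6)),
    "cohabiting": (0.20, (1.1, 2.7)),
    "other":      (0.10, (2.3, 8.1)),
}

low = sum(share * rates[0] for share, rates in groups.values())
high = sum(share * rates[1] for share, rates in groups.values())
print(f"overall rate: {low:.1f}% to {high:.1f}%")  # roughly 0.7% to 1.8%
```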

My calculations for the survey in 2000 indicate an underlying non-paternity rate for children born in that year of somewhere between 1.3 per cent and 3.4 per cent. The non-paternity rate rose between 1990 and 2000 for two main reasons: first, a higher proportion of women (married, cohabiting or other) with two or more sexual partners, and second, a higher proportion of births outside of marriage.

Published data from the 2001 large-scale sex survey in Australia suggests that Australian rates are in the same range as those in Britain. Australian women are slightly more likely to have had two or more sexual partners in the previous year, which implies a higher non-paternity rate. They are slightly less likely to have children outside of marriage, which implies a lower non-paternity rate. In other words, the two patterns roughly cancel each other out.

At the very least, these surveys indicate that the extent of misattributed paternity is increasing in rich countries such as Australia, largely because of the weakening hold of marriage on sexual behaviours. Yet the increase is taking place from a low base. The evidence from sex surveys is pretty much the same as the evidence from medical research. It shows that estimates of 10 per cent, 20 per cent and 30 per cent non-paternity rates are massively inflated.

The really interesting question, then, is how the urban myth of rampant misattributed paternity persists, despite all the evidence to the contrary.

At least three parties have promoted the myth. The first of these are the fathers’ rights groups who have mobilised around DNA paternity testing as part of a broader campaign against the child support system. They believe that the system is stacked against them, and that paternity testing can correct this bias a little. They cite high rates of non-paternity to support their claims of widespread paternity fraud at the expense of fathers. They are especially active on the internet, which provides a medium for the rapid spread of such claims.

Second, there are the DNA paternity testing laboratories and their agents. The industry in the United States is especially large, and includes specialist laboratories with dedicated media units and call centres, and independent brokers who recruit customers on the internet and on-sell the tests to laboratories. The US industry can’t do much to increase demand for the tests among single mothers, but it can take active steps to increase demand in the other main market, alienated fathers who are often paying child support.

These steps include promoting the view that misattributed paternity is widespread. Brokers and laboratories promote this view through the internet (including links with fathers’ rights groups) and the media (including live television shows where paternity disputes are played out in front of a studio audience).

Finally, evolutionary psychologists provide intellectual credibility for inflated non-paternity rates. Evolutionary psychology explains human behaviour in terms of our genetic code formed in deep ancestral time. It prides itself on its scientific approach, in contrast to what it calls the “standard social science paradigm.” Specifically, evolutionary psychologists argue that while men’s short-term sexual strategy is based on obtaining large numbers of partners, women’s strategy involves obtaining men of “high genetic quality.” Closely connected to this, men are “hard-wired” to take care that they do not raise the progeny of other men.

Evolutionary psychologists believe that high non-paternity rates provide independent evidence of their theory of human behaviour. They are responsible for very badly designed studies that arrive at high estimates of misattributed paternity. They also coordinate meta-studies that treat all existing research as having equal merit. Their studies provide academic legitimacy to claims of extensive misattributed paternity.


After I did that research on the true extent of misattributed paternity, I had several articles on the topic published in top-ranking international journals. But my finding that the extent of misattributed paternity is tiny is neither shocking nor, it seems, newsworthy.

Bad news travels fast; good news more slowly. No wonder that fathers’ rights groups, the paternity testing industry and evolutionary psychologists find an audience for their inflated estimates. No wonder that a snippet of conversation lives on as an urban myth. •

A miracle of politics and science https://insidestory.org.au/a-miracle-of-politics-and-science/ Fri, 04 Dec 2009 01:54:00 +0000 http://staging.insidestory.org.au/a-miracle-of-politics-and-science/

As the world talks about climate change, the Antarctic Treaty shows how politics and science can work together with enduring results, writes Tom Griffiths


FIFTY YEARS AGO this week a remarkable agreement, the Antarctic Treaty, was signed in Washington. It was created not only by strategic national politics but also by genuine idealism and – of special relevance to Australian politicians coming to terms with climate change – by a bipartisan respect for the integrity of good, international science. As we count down to the United Nations Climate Change Conference in Copenhagen, it is worth reflecting on the negotiations that led to this enduring political document.

Forged in the Cold War era, the treaty has proven to be a resilient and evolving instrument for international cooperation. Its main object is to promote the peaceful use of Antarctica and to facilitate scientific research south of 60° latitude. The key provision of the treaty (Article IV) neither recognises nor denies any existing territorial claims to Antarctica. In polar parlance, such claims are “frozen.” This political compromise emerged from a period of escalating national rivalry over Antarctic sovereignty.

In the early twentieth century Antarctica – “the last continent” – became the proving ground of nations, an additional site of European colonial rivalry, and the place for one last burst of continental imperialist exploration, which had been such a trademark of the nineteenth century. The heroic era of Antarctic exploration – associated with the names of Scott, Amundsen, Shackleton and Mawson – was “heroic” because its goal was as abstract as 90° south, its central figures were romantic, manly and flawed, its drama was moral (for it mattered not only what was done but how it was done), and its ideal was national honour.

In the 1920s, a more pragmatic geopolitics quickened in Antarctica, and the heroics down south became harder-edged and more territorial. There was, as the Adelaide Advertiser declared in 1929, “A Scramble for Antarctica” that echoed the famous “Scramble for Africa” amongst European powers in the late nineteenth century. From the 1920s, commercial whaling intensified along the edges of the ice and seven nations consistently asserted territorial claims to sectors of the continent: Argentina, Britain, Chile, France, New Zealand, Norway – and Australia, which in 1933, under the Australian Antarctic Territory Acceptance Act, formalised the transfer from Britain to Australia of sovereignty over 42 per cent of Antarctic ice.

Assertions of Antarctic possession continued to escalate and the status of claims remained unresolved – an uncertainty that became known as “the Antarctic problem.” In early 1939 an expedition from Adolf Hitler’s Germany bombed Antarctic ice with hundreds of cast-iron swastikas, each carefully counterbalanced so that it stood upright on the surface. In the early 1940s, Britain, Chile and Argentina contested possession of the Antarctic Peninsula and nearby islands. Stamps, post offices, maps and films were weapons of war in a region which, depending on your nationality, was known as Tierra O’Higgins, Tierra San Martín, Palmer Land or Graham Land. In February 1952, Argentine soldiers fired machine-guns over the heads of a British geological party trying to land at Hope Bay on the Antarctic Peninsula. The following summer, in retaliation, British authorities deported two Argentines from the South Shetlands and ordered troops to dismantle Argentine and Chilean buildings on the islands. But British and Argentine crews still had good enough relations at Deception Island to hold soccer matches, although they disagreed as to who was the home team.

Immediately after the second world war, in 1946–47, America’s great polar explorer, Richard Byrd, led the largest ever expedition south (his third), called “Operation Highjump.” The “operation” involved 4000 personnel, a dozen icebreakers and an aircraft carrier. An official Navy directive of 1946 identified the expedition as a means for “consolidating and extending United States potential sovereignty over the largest practicable area of the Antarctic continent.” In 1950, the Soviet Union announced its renewed interest in Antarctic exploration, occupation and sovereignty. It began to seem that the Cold War might find its way to the coldest part of the planet.

In 1948, the US government proposed an international trusteeship for Antarctica consisting of the seven claimant states and the United States. The apparent idealism of this terra communis was tempered by the proposal’s goal of excluding the Soviet Union from the power bloc. The claimant nations rejected the proposal because it required the renunciation of sovereignty. The Chileans, however, suggested a compromise (known as the Escudero Plan) which allowed claims to be suspended rather than renounced. By 1950 Chile and the US, united by a desire to exclude the USSR, had agreed on this revised plan for internationalisation. But a significant world event was about to change Antarctic politics.

The International Geophysical Year of 1957–58 (known as IGY) was the biggest scientific enterprise ever undertaken, and the launching of the Soviet satellite Sputnik on 4 October 1957 was its most visible achievement. Building on the tradition of International Polar Years (held previously in 1882–83 and 1932–33), IGY made a focus of Antarctica as well as those other regions – outer space and the ocean floor – made newly accessible by technology. Tens of thousands of scientists from sixty-six nations took part at locations across the globe. In Antarctica, twelve countries were involved: the seven claimant nations plus Belgium, Japan, South Africa, the United States and the Soviet Union. For fifty years the main motives for Antarctic work had been national honour and territorial conquest; now, scientific work and international cooperation became the priorities.

IGY was such a resounding success that it cried out to be institutionalised. It was also clear that any management regime for Antarctica had to include the Soviet Union. Intensive diplomatic activity following IGY culminated in a draft for an Antarctic Treaty which incorporated the compromise of the Escudero Plan. Military activity and testing of any kind of weapons were prohibited south of 60°, information was to be shared, and inspections of other nations’ bases allowed at any time. On 1 December 1959, the Antarctic Treaty was signed in Washington by the twelve nations that had participated in IGY. Science as an international social system had never before revealed itself to be so powerful.

Australia argued especially for non-militarisation, freedom of scientific research and the freezing of territorial claims, with the Australian external affairs minister, Richard Casey, helping to persuade the Soviets to accept this last, crucial provision. The Russians were initially opposed to any mention of claims at all, but a meeting between Casey and Nicolai Firubin, the Soviet deputy foreign minister – held near a Queensland beach in March 1959 – brought agreement.


BY THE TIME delegates of the twelve IGY nations gathered in Washington fifty years ago, myriad working party meetings had prepared the ground for a successful conference. Yet there was still uncertainty in the air. It was not simple. The Antarctic Treaty is a relatively brief, eloquent document, but it took the momentum and euphoria of IGY, the eighteen months of working parties and then another six weeks of demanding, cooperative work in Washington to create it. And it required courage and goodwill – and some planetary consciousness – from all parties.

Participants were walking a kind of tightrope, as the leader of the French delegation explained: “Each day that went by could bring about the failure of the Conference, but each day that passed brought to us a strengthened hope of success.” And there was indeed hope and optimism in the air – and a sense of history too. Antarctic history itself was an inspiration. Delegates to the Washington conference felt a humble continuity with the courage of explorers past. As one representative put it, “we can justly feel that it is an exceptional success to have been able to conquer so many obstacles which, to us, seemed as insurmountable as those with which the daring explorers of Antarctica had to cope.”

We look back on this achievement of 1959 and see it as remarkable that such a document of peace should emerge from the period of the Cold War. They took pride in this surprising result at the time, too. In fact that anxious political context gave the meeting some of its edge and momentum. They wanted to set a new path. And people enjoyed the wordplay about temperature just as we do now: there were references to the Cold War and the need to exclude it from the coldest continent; they marvelled that a Cold Peace might break out in Antarctica; there was a lot of talk about warmth, and about thawing. The Soviet premier, Nikita Khrushchev, upon assuming power in 1958, had repudiated aspects of Stalin’s regime and had been willing to travel to the United States. His visit occurred just the month before the Antarctic conference, and there were plans for President Eisenhower to visit Moscow. There was keen anticipation about a Paris summit conference in May 1960 of “the Big Four”: Khrushchev, Eisenhower, Britain’s Harold Macmillan and France’s Général de Gaulle. Détente was in the air.

And Antarctica itself delivered a lesson. In the course of the Antarctic conference, there was a fatal accident down south. A tractor carrying three New Zealanders fell down a hidden crevasse – one man was killed and two were badly injured. A US party in the area lent valuable assistance in rescuing the injured. The event was a reminder to delegates that “even in today’s conditions,” Antarctica is perilous and human cooperation is vital. The event confirmed what the delegates felt – that even as they launched Antarctica into a new political era, the physical place remained humbling and the continuities with the heroic age were strong.

It was fortunate that these crucial meetings and negotiations about the Antarctic Treaty took place in a brief period of reduced east–west tension. A major determinant for success in the treaty negotiations was that neither the United States nor the Soviet Union, having established presences on the continent, was prepared to withdraw and leave the field to the other. A few months after the treaty was signed, Soviet and American relations disintegrated dramatically when an American U-2 spy plane was shot down over the Soviet Union. Eisenhower refused to apologise for the spying and Khrushchev stormed out of the Paris summit. The construction of the Berlin Wall commenced just after the first Antarctic Treaty Consultative Meeting was held in Canberra in July 1961. A year later, the world held its breath during the Cuban missile crisis. The Cold Peace, one might say, was established during a brief interglacial in the Cold War.

Today we recognise, with hindsight, that the Antarctic Treaty was not only a significant achievement in its own right, but that it was also a precursor, a model for the rest of the world and for the future. It was the first disarmament treaty of the Cold War. It became an inspiration for the governance of other places – for management of the sea and outer space. The people who created the Antarctic Treaty foresaw some of this in 1959. They believed that it could be the start of something new. This was part of the motivation, the incentive, the inspiration for what the Argentine delegate called “the transcendental provisions of the treaty.” As the leader of the Chilean delegation reported, “Someone said, during a debate, that we were drafting a document that could mean the beginning of a new era for the world.” There is this wonderful sense amongst the delegates of rising above themselves and of their nationalities, of surprising themselves with what could be achieved through sympathetic strategy and patient talk. Even the shape of the 1959 conference expressed this transformation, beginning as it did with recitations of past national achievements and ending with hopes for a common future. There were, at the end of the meeting, as the signing took place, expressions of surprise as well as extreme satisfaction. The non-nuclear provision, in the words of the Argentine delegate, went “beyond the greatest expectations.”

Remarkably, Antarctica is a part of our globe where strategic idealism has sometimes triumphed over short-term politics. Perhaps its political culture has been formed at the threshold of two ages – one of competitive nationalism and the other of cooperative internationalism. Antarctic history offers us both the latest phase of imperial partition and the first expression of planetary awareness. Politically as well as geophysically, then, Antarctica seems, as Alan Henrikson put it, “the last place on earth, the first place in heaven.” The deep black Antarctic sky with its crystal clear air really does appear to bring other worlds closer. The Antarctic Treaty – born not just in, but out of, the Cold War – is a beautiful, workable, enduring document, both delicate and robust. It truly does seem an expression of both heaven and earth. Its creation fifty years ago reminds us that politics driven by respect for science can sometimes achieve miracles. •


Photo: Tom Griffiths
