technology • Topic • Inside Story
Current affairs and culture from Australia and beyond

Lost in the post

Britain’s Post Office scandal, kept alive by dogged journalism and a new drama series, still has a long way to run

It’s a David versus Goliath struggle that began a quarter of a century ago and is again generating daily headlines. One of Britain’s most venerated institutions, the Post Office, falsely accused thousands of its subpostmasters of cooking the books. Around 900 were prosecuted, 700 convicted and 236 jailed. Hundreds more paid back thousands of pounds they didn’t owe, had their contracts terminated, lost their livelihoods and often their life savings, and had their reputations trashed.

There was no fraud. The postmasters’ lives were destroyed because of faults in the Post Office’s Horizon computer network. But much like Australia’s robodebt system, Horizon was regarded as infallible. Attempts to raise the alarm were ignored; people who sought help were hounded for non-existent debts. As in Australia, those whose lives were turned upside down struggled to gain the attention of established media outlets; it was individual journalists and smaller publications that kept digging and probing, and refused to accept Post Office spin.

It wasn’t until January this year that prime minister Rishi Sunak conceded it was one of Britain’s greatest-ever miscarriages of justice. He has committed his government to a “blanket exoneration” of hundreds of wrongfully convicted individuals and promised them “at least £600,000 in compensation to rebuild their lives.”

Three compensation schemes have already been set up and around one hundred convictions overturned by appeal courts. A public inquiry led by a retired High Court judge began hearings in February 2021 and is likely to continue at least until September this year. In the meantime, many former postmasters remain destitute or seriously out of pocket. They are waiting not only for redress but also for the full truth about what went wrong in the executive ranks of the Post Office.

While details continue to dribble out, so far no senior managers have been held to account, though former Post Office chief executive Paula Vennells has offered to hand back the CBE she was awarded in 2019.

Vennells said she was “truly sorry for the devastation caused to the subpostmasters and their families, whose lives were torn apart by being wrongly accused and wrongly prosecuted.” Whether or not Vennells loses her gong is up to King Charles. The union representing Post Office employees reckons if she were truly remorseful then she’d offer to repay her performance bonuses as well.

Solicitor Neil Hudgell told a January hearing before the parliament’s business and trade committee that the Post Office spent £100 million “defending the indefensible” through the courts yet he has clients who are still waiting on reimbursements of a few hundred pounds. He said the contest between postmasters and Post Office was characterised from the start by an inequality of arms. “You are facing this big beast in the Post Office, with all the machinery that sits behind it,” he added. “You have some poor person who is being accused of doing something hideous who does not have that.”

On top of the financial losses comes the psychological toll. Hudgell says his firm has more than a hundred psychiatric reports for clients diagnosed with depressive illnesses, including post-traumatic stress disorder and paranoia. At least four former postmasters are thought to have taken their own lives, and more than thirty have died while still awaiting justice.


The saga goes back to 1999, when the Post Office began rolling out a new computerised accounting system to its thousands of branches and sub-branches, many of which operate as franchises run by subpostmasters. Essentially, the subpostmasters are independent contractors delivering services under an agreement with the Post Office. Many also operate a shop, cafe or other small business on the side.

As in Australia, people go to their local post office for much more than stamps and parcels. Branches offer banking and bill payment services, and handle applications for passports and other critical official documents. Subpostmasters play a central role in villages and small towns. They are often trusted as advisers and confidants, especially for older, less digitally connected citizens. To be accused of putting their hands in the till was a mortifying experience.

The new Horizon computer system, developed by Fujitsu, was meant to make it easier for postmasters to balance their books. But problems were evident from the start. In 1998, Alan Bates invested around £60,000 to buy a shop with a post counter in the town of Llandudno, in north Wales. After Horizon was introduced, discrepancies quickly appeared in his accounts, and Bates found himself £6000 short.

“I managed to track that down after a huge amount of effort through a whole batch of duplicated transactions,” he recalled. Meticulous record keeping enabled Bates to show that the problem lay with the computer system and was not the result of carelessness or fraud. Still, in 2003, the Post Office terminated his contract, saying £1200 was unaccounted for.

Unlike other postmasters, Bates was not prosecuted or forced into bankruptcy, but the injustice and the lost investment cut deep. Post Office investigators insisted that he was the only subpostmaster reporting glitches with the computer system, but Bates was certain that there must be others. He was right. RAF veteran Lee Castleton challenged the Post Office in court after it suspended him over an alleged debt of almost £23,000. In the first instance, the Post Office failed to show up at court and he won. Months later, the Post Office took the case to the High Court. Castleton represented himself, lost, had costs awarded against him and was rendered bankrupt.

Castleton managed to convince a young journalist at the trade publication Computer Weekly to investigate. Rebecca Thomson found six other examples of people who’d been accused of stealing from the Post Office, including Alan Bates, who had tried a few years earlier to interest the same magazine in his case.

National newspapers and broadcasters failed to pick up Thomson’s 2009 story. “It really did go out to a clanging silence,” Thomson told the Sunday Times in 2022. “I was super-ambitious, and I was disappointed and a bit confused about the fact that there had been so little reaction to the story, because I still continue to feel like it was incredibly strong.”

What Thomson achieved, though, was to confirm Alan Bates’s hunch that he was not alone. Bates reached out to other subpostmasters in Thomson’s story and discovered they’d been told the same thing as him: no one else has had a problem with Horizon, you’re the only one. This Post Office mantra was a bare-faced lie.

Bates and his newfound allies founded the Justice for Subpostmasters Alliance with the aim of “exposing the failures of Post Office, its Board, its management and its Horizon computer system.” Their campaign for truth and justice is the subject of the four-part television drama Mr Bates vs the Post Office, starring Toby Jones as Alan Bates, that aired on British TV in January.

The series put the scandal and the ongoing public inquiry firmly back in the headlines (Rishi Sunak’s belated response to years of revelations came a few days later) but it would not have been possible without fourteen years of dogged, dedicated journalism. Since Thomson broke the story in 2009, Computer Weekly has published about 350 follow-up articles on the issue. Separately, freelance journalist Nick Wallis has pursued the story since 2010, at times relying on crowdfunding to finance his work.

In 2010, Wallis was working at a local BBC radio station when a flippant response to a tweet put him in contact with Davinder Misra, the owner of a local cab company, who told him his pregnant wife had been sent to prison for a crime she didn’t commit. Seema Misra had been convicted of theft and false accounting and sentenced to fifteen months in jail. The Post Office claimed she had misappropriated almost £75,000 from her branch in West Byfleet in Surrey.


With roots stretching back to 1660 and the reign of Charles II, the Post Office is in many respects a law unto itself. It doesn’t have to clear the hurdles of police investigations or case reviews by a public prosecutor before launching prosecutions. It has huge resources to employ top silks to represent it. Against its might, people like Seema Misra didn’t stand a chance.

Unaware at the time of Thomson’s article in Computer Weekly, Wallis decided to investigate. He has been writing and broadcasting about the Post Office scandal ever since: he has been a producer, presenter or consultant on three episodes of Panorama, the BBC’s equivalent of the ABC’s Four Corners; he has written a book, The Great Post Office Scandal; he has made a podcast series; and he maintains a website dedicated to continuing coverage of the story.

Wallis also acted as a consultant on Mr Bates vs the Post Office. He told the Press Gazette he was “blown away” by the program and what it had achieved. Yet he stressed that it is Bates and the other postmasters who should take the credit for getting the scandal into the open and convictions overturned.

Seven screens Mr Bates vs the Post Office in Australia this week. If you can put up with the ad breaks, the series is well worth watching. It’s an engaging, heartwarming story of decent, ordinary folk standing up against the powerful and the entitled and eventually winning against the odds. If you want to understand the story more fully, though, and to hear directly from those most affected — people like Alan Bates, Seema Misra and Lee Castleton — then I’d recommend The Great Post Office Trial, Nick Wallis’s podcast for BBC Radio 4. It’s a compelling tale that shows what good journalism can achieve. •

Making media moguls

Weren’t these guys dying out?

Some years ago, early in the century, a conceit took hold in media circles: the era of “media moguls” was ending.

Michael Wolff, prolific chronicler of American media mega-trends, wrote a book about it, Autumn of the Moguls. Network TV was a mess, he said. The music business was a mess. Jayson Blair’s saga of journalistic fraud at the New York Times had left Arthur Sulzberger Jr “not at all a sun god, but merely a mogul manqué.” A “countdown” was under way for ageing Rupert Murdoch at News Corporation, Sumner Redstone at Viacom, Michael Eisner at Disney. Barry Diller was giving the media industry “the finger,” leaving behind his “old mogul life” in charge of a Hollywood studio and TV network to concentrate on a company that owned Expedia, Ticketmaster and other digital businesses.

The idea resonated strongly in Australia. Rupert Murdoch had started out there a long time ago and now dominated the commercial media scene with another elder, the third-generation Packer mogul, Kerry. When Kerry died in 2005 and son James sold the family’s cherished television business, the forecast for moguls looked on target, though not especially astute given the older Packer’s heart had stopped once before.

Moguls generally, though, were hanging on. “Self-made” media boss Kerry Stokes was now being described as a mogul, having taken control of the Seven Network in the 1990s and then added newspapers when cross-media ownership rules were relaxed in the 2000s.

In America, the autumn proved long. There is still enough life in Murdoch moguldom for Michael Wolff to have published another book about its impending death, The Fall: The End of the Murdoch Empire, just the other day. While Redstone did finally die, Eisner’s successor at Disney, Bob Iger, stayed and stayed, buying and buying. He stepped down, only to be called back as CEO last year. Tech titans Steve Jobs and Jeff Bezos added “media” to their realms, Jobs through his own investment in Pixar and Apple’s pioneering plays in digital music, movies and television; Bezos by acquiring the Washington Post and founding Amazon Studios and Prime Video.

Then, a year ago, the CEO of another Silicon Valley giant decided to buy one of the town squares of online speech. Elon Musk’s acquisition of Twitter might be going as poorly as AOL/Time Warner, the fin-de-siècle merger that Wolff thought marked “the beginning of the end” and the start of “a new phase, a whole new era, of resistance and revision.” But it happened, Twitter continues to exist, though with a new name and direction, and Musk is still in charge, behaving much like those mercurial, autocratic moguls of old. Obituaries are being written for the company and the deal, but so too is a new book about Musk. It devotes a lot of pages to the Twitter/X saga and its content-moderation challenges, and it is written by no less than the biographer of Steve Jobs, Albert Einstein, Benjamin Franklin and Henry Kissinger: Walter Isaacson.


Media moguls don’t endure merely because particular men live long: they all die in the end. Nor do they survive because the specific media technologies they happen to control turn out to be attractive to users, although that helps: some moguls have been good at parlaying control of one medium into dominance of another, from radio and newspapers into television; from movies to programming for TV, video cassettes and DVDs; from free-to-air broadcast television to multi-channel cable and satellite subscription services. Not even the propensity for some with fortunes from other fields to crave power over a society’s messages can fully explain the dogged durability of media moguls.

Moguls endure because their ranks are constantly replenished by a culture that craves them and because storytellers find subjects to satisfy the hunger. Exactly which of society’s messages constitute “media” has proved malleable. Of newspapers, news and information, Wolff wrote in Autumn, “If you knew anything about anything, you understood them to be not just equivocal businesses but plastic concepts. They were in transition and if you weren’t ready to be part of that transformation you and your business would die.”

More broadly, he thought a mogul was “an adventurer, a soldier, a conqueror, even a crusader, and, yes, a saviour, willing to march off and take territory and subdue populations and embrace the unknown and do whatever was necessary to do to make the future possible ― no matter what the future was.”

That is the kind of person Walter Isaacson saw in Elon Musk — pioneer of Zip2, PayPal, SpaceX and Tesla — and it was why he wanted to write about him. He had seen the type before. Steve Jobs, too, was a man with huge ambition and capacity to direct the building of new products and experiences, to transform the lives of the people who used them and the industries that created them. Jobs is referred to several times in Elon Musk, and it is clear that Isaacson sees the two in a similar frame. They are heroes standing in the way of American Decline, outsized personalities who think big and take risks while controlling every detail. They stamp themselves on their enterprises and outputs. Their personal quests, he thinks, shift the nation and the world.

Musk agreed to let Isaacson “shadow” him for two years, and Isaacson tells us what he saw and heard. With Musk’s encouragement, he interviewed “friends, colleagues, family members, adversaries, and ex-wives” as well, and he tells us what they told him. This method makes it a book in two parts.

In the first part, the biographer is assembling evidence about things that have already happened. A lot of this is familiar from other works about Musk, especially the amateur psycho-sleuthing about a brutal upbringing and possible Asperger’s producing a ruthless guy who struggles with empathy but dreams big, drives people hard, sometimes sleeps in his own factories, and achieves the impossible over and over again. Ashlee Vance and Tim Higgins have covered this and it is not clear that Isaacson adds much to their excellent work beyond the constant presence of Musk’s own voice.

Once Isaacson is there himself from 2021, in the thick of the unfolding events, the second part of the book becomes a different exercise. The biographer is now a witness to the roiling present, not an inquisitor about history. How reliable a witness he is, the reader must judge, but we are there for the thrilling ride. Isaacson becomes part of Musk’s family, a trusted confidant. He is in Musk’s house, his car. He receives messages from him at crazy hours about really weird stuff. He offers advice, judges Musk’s moves.

While he is doing all this, he gets lucky. Musk, already a mogul, decides to buy Twitter. Is this “media”? If so, Michael Wolff’s autumn is over. Elon Musk is going to become a media mogul in front of Walter Isaacson’s eyes.

Or is it the other way around? Is it Musk who has got lucky? With his road-tested storyteller in the passenger seat, his every word, every angle, every image, will be recorded, stored, shaped. A book, half-written already. What better time for “an adventurer, a soldier, a conqueror, even a crusader, and, yes, a saviour” to march off and take media? •

Elon Musk
By Walter Isaacson | Simon & Schuster | $59.99 | 670 pages

Machine questions

What does history tell us about automation’s impact on jobs and inequality?

When it appeared twenty-five years ago, Google’s search engine wasn’t the first tool for searching the nascent World Wide Web. But it was simple to use, remarkably fast and cleverly designed to help users find the best sites. Google has gone on, of course, to become many things: a verb we use in everyday language; a profitable advertising business; Maps, YouTube, Android, autonomous vehicles, and DeepMind. Now a global platform with billions of users, it has profoundly changed how we look for information, how we pay for it and what we do with it.

The way we talk about Google has also changed, reflecting a wider reassessment of the costs and benefits of our connected lives. In its earlier days, Google Search was enthusiastically embraced as an ingenious tool that democratised knowledge and saved human labour. Today, Google’s many services are more popular than ever, though Google Search is the subject of a major antitrust case in the United States, and governments around the world want to regulate digital services and AI.

In Power and Progress, Daron Acemoglu and Simon Johnson take the project of critical reappraisal further. Their survey of the thousand-year entanglement of technology and power is a tour de force, sketching technology’s political economy across a broad historical canvas. They chart the causes and symptoms of our contemporary digital malaise, drawing on a growing volume of journalism and scholarship, political economy’s long tradition of analysing “the machine question,” and the work of extraordinary earlier American technologists, notably the cyberneticist Norbert Wiener, the network visionary J.C.R. Licklider, and the engineer Douglas Engelbart.

If, as Acemoglu and Johnson argue, our digital economy is characterised by mass surveillance, increasing inequality and destructive floods of misinformation, then the signal moments from the past will inevitably look different. From this angle, the great significance of Google Search was its integration with online advertising, opening up the path to Facebook and a panoply of greater evils.

The strengths of Power and Progress lie in the connections it makes between the deficiencies of current technology and the longer story of innovation and economic inequality. History offers many opportunities to debunk our nineteenth-century optimism in technology as a solution, and to puncture our overconfidence in the judgement of technology leaders.

A particular target is the idea that successful innovations produce economy-wide benefits by making workers more productive, leading to increased wages and higher living standards generally. The theory fails to capture a good deal of historical experience. The impact of new agricultural technologies during the Middle Ages provides a telling example. Between 1000 and 1300, a series of innovations in water mills, windmills, ploughs and fertiliser roughly doubled per-hectare yields in England. But rather than leading to higher incomes for most people, living standards appear to have declined, with increases in taxation and working hours, widespread malnutrition, a series of famines and then the Black Death. Average life expectancy may have declined to just twenty-five years at birth.

The cities grew, but most of the surplus generated by improved agriculture was captured by the church and its extensive hierarchy. A religious building boom proceeded on spectacular lines. Vast amounts were spent on hugely expensive cathedrals and tax-exempt monasteries: the same places, as Acemoglu and Johnson note, that tourists now cherish for their devotion to learning and production of fine beer. The fact that better technology didn’t lead to higher wages reflects the institutional context: a coercive labour market combined with control of the mills enabled landowners to increase working hours, leaving labourers with less time to raise their own crops, and therefore reduced incomes.

If medieval cathedrals give rise to scepticism about the benefits of tech, it follows that we should think more carefully about the kinds of technologies we want. Without that attention, what the authors call “so-so automation” proliferates, reducing employment while creating no great benefit to consumers. The self-checkout systems in our supermarkets today are a case in point: these machines simply shift the work of scanning items from cashiers to customers. Fewer cashiers are employed, but without any productivity gain. The machines fail often enough to require frequent human intervention. Food doesn’t get any cheaper.

The issue then is not how or whether any given technology generates economic growth, but which conditions make possible innovations that create shared prosperity. The recent past provides examples of societies managing large-scale technological change reasonably well. The postwar period of sustained high growth and “good jobs” (for some but not all) had three important features: the powers of employers were sometimes matched by unions; the new industrial technologies of mass production automated tasks in ways that also created jobs; and progressive taxation enabled governments to build social security, education and health systems that improved overall living standards.

For technology to work for everyone, the forces that can temper the powers of corporations — effective regulators, labour and consumer organisations, a robust and independent media — play an essential role. The media are especially important in shaping narratives of innovation and technical possibility. Our most visible technology heroes need not always be move-fast-and-break-things entrepreneurs.

Finally, public policy can help redirect innovation efforts away from a focus on automation, data collection and job displacement towards applications that productively expand human skills. Technologies are often malleable: they can frequently be used for many purposes.

Acemoglu and Johnson would like us to divert all that frothy attention on AI to what they call machine usefulness, focused on improving human productivity, giving people better information on which to base decisions, supporting new kinds of work, and enabling the creation of new platforms for cooperation and coordination: a course they see as far preferable to a universal basic income.

Kenya’s famous M-PESA, introduced in 2007, is one of many examples, offering cheap and convenient banking using basic mobile phones. On a larger scale, the web is also a human-oriented technology because its application of hypertext is ultimately a tool for expanding access to information and knowledge. Acemoglu and Johnson concede that the idea at the heart of Google Search can also be understood in this way: a mechanism that works well for humans because it is constantly reconfiguring itself in response to human queries.

The authors’ ideas for positive policy interventions can usefully be read alongside those of the Australian economists Joshua Gans and Andrew Leigh, whose 2019 book Innovation + Equality remains less used than it should be.


One way to read Power and Progress is as a historically informed guidebook for the conflicts of our time — in the courts, where Lina Khan’s Federal Trade Commission has launched far-reaching cases against Google and Amazon, in the new regulatory systems emerging in the European Union, Canada and elsewhere, and in the wave of industrial actions taken by screen industry writers and auto workers in the United States.

In Australia, we are also at a point where governments will soon make decisions about the kinds of technology we want to support or constrain. We can have no certainty about the outcomes of any of this, but Acemoglu and Johnson argue that such conflicts are both necessary and potentially productive. They diverge here from one of the main currents of liberal technology critique: where writers like Carl Benedikt Frey, whose The Technology Trap (2019) covers some of the same terrain, see redistributive policies as necessary for managing the consequences of automation, Acemoglu and Johnson point to the positive potential of political and industrial conflict for reordering technological agendas. They want to place more emphasis on our capacity to choose the directions technology may take.

The recently concluded Hollywood writers’ strike offers an intriguing example. The key point is that the screenwriters didn’t oppose the use of generative AIs such as ChatGPT in screenwriting. Instead they secured an agreement that such AIs can’t be recognised as writers and that a studio may not require the use of an AI. If a studio uses an AI to generate a draft script that it then provides to a writer, the credit or payment to the writer will be the same as if the writer had produced the draft entirely themselves; and a writer may use an AI with the permission of the studio without reducing their credit or payment.

The settlement clearly foreshadows the extensive use of generative AIs in the screen industries while offering a share of the benefits to writers. The critical point, as some reports have noted, may be that the revenue-sharing deal with writers preserves the intellectual property interests of the studios, since works created by an AI may not be copyrightable.

Meanwhile, AI raises other important issues about automation, quite apart from the focus on work. When we are relying on machines to make or inform decisions, we are also moving into the domain of institutions, with the obvious risk that existing technology-specific laws, procedures and controls can be bypassed, intentionally or otherwise. This, after all, was what robodebt did with a very simple automated system. In the absence of wide-ranging institutional adaptation and innovation, more complex modes of automation will pose greater risks.

More generally, the authors’ framing of the “AI illusion” appears to be premature. Power and Progress was clearly substantially completed before the appearance of the most recent versions of ChatGPT. Accustomed as we are to AI’s many failures to match its promises, we should now be considering the surprising capabilities and broad implications of large language models. As Acemoglu and Johnson would insist, if generative AI does turn out to be as powerful as many believe, then it will necessarily be capable of far more than “so-so” automation. •

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
By Daron Acemoglu and Simon Johnson | Basic Books | $34.99 | 546 pages

Let’s not pause AI

It’s the lack of intelligence in AI that we should be most worried about, and that requires a different response

The AI chatbot ChatGPT took the record for the quickest uptake of any app ever. It gained a million users after just five days, and a hundred million in its first two months, growing four times more quickly than TikTok and fifteen times faster than Instagram.

Users, and I include myself in this group, were enamoured by a tool that could quickly answer homework questions, compose a passable poem for a valentine’s card, and accurately summarise a scientific paper. To many it seemed that our AI overlords were about to appear.

Companies rushed to launch AI tools to rival ChatGPT: Alpaca, BlenderBot, Claude, Einstein, Gopher, Jurassic, LLaMA, Megatron-Turing, NeMO, OPT, PaLM, Sparrow, WuDao, XLNet and YaLM, to name just fifteen in an alphabet soup of possibilities.

Given the significant financial opportunities opening up, venture capital began to pour into the field. Microsoft has invested over US$10 billion in OpenAI, the company behind ChatGPT. Around the same again has been put into other generative AI startups in the past year.

OpenAI is now one of the fastest-growing companies ever. Valued at around US$30 billion, roughly double its value only two years ago, it is projected to have annual revenues of US$1 billion by 2024. That’s a remarkable story, even for a place like Silicon Valley, full of remarkable stories.

But the opportunities go beyond OpenAI. A CSIRO Data61 forecast has predicted that AI will add A$22.17 trillion to the global economy by 2030. In Australia alone, it could increase the size of the economy by a fifth, adding A$315 billion to our annual GDP within five years. A lot is at stake.

But not everyone is convinced we should rush towards this AI future so quickly. Among them are the authors of the open letter published last week by the Future of Life Institute in Cambridge, Massachusetts. This call for caution has already attracted more than 50,000 signatories, including tech gurus like Elon Musk, Steve Wozniak and Yuval Harari, along with the chief executives and founders of companies like Stability AI, Ripple and Pinterest, and many senior AI researchers.

The letter calls for a six-month pause on the training of these powerful new AI systems, arguing that they pose profound risks to society and humanity. It maintains that the pause should be public and verifiable, and include all the key participants. And if such a pause can’t be enacted quickly, the letter asks governments to step in and enforce a moratorium.

An article about the open letter in Time magazine goes even further. Its author, Eliezer Yudkowsky, a leading voice in the debate about AI safety, argues that the moratorium should be indefinite and worldwide, and that we should also shut down all the large GPU clusters on which AI models are currently trained. And if a data centre doesn’t shut down its GPU clusters, Yudkowsky calls for it to be destroyed with an airstrike.

You might rightly think it all sounds very dramatic and worrying. And at this point, I should probably put my cards on the table. I was asked to sign the letter but declined.

Why? There’s no hope in hell that companies are going to stop working on AI models voluntarily. There’s too much money at stake. And there’s also no hope in hell that countries are going to impose a moratorium to prevent companies from working on AI models. There’s no historical precedent for such geopolitical coordination.

The letter’s call for action is thus hopelessly unrealistic. And the reasons it gives for this pause are hopelessly misguided. We are not on the cusp of building artificial general intelligence, or AGI, the machine intelligence that would match or exceed human intelligence and threaten human society. Contrary to the letter’s claims, our current AI models are not going to “outnumber, outsmart, obsolete and replace us” any time soon.

In fact, it is their lack of intelligence that should worry us. They will often, for example, produce untruths and do very stupid things. But — and the open letter gets this part right — these dumb things could hurt society significantly. AI chatbots are, for example, excellent weapons of mass persuasion. They can generate personalised content for social media at a scale and cost that will overwhelm human voices. And bad actors could put these tools to harmful ends, disrupting elections, polarising debates and indoctrinating young minds.


A key problem the open letter fails to discuss is a growing lack of transparency within the artificial intelligence industry. Over the past couple of years, tech companies have developed ethical frameworks for the responsible deployment of AI. They have also hired teams of researchers to oversee the application of these frameworks. But commercial pressure appears to be changing all this.

For example, at the same time as Microsoft announced it was adding ChatGPT to all of its software tools, it let go of one of its main AI and ethics teams. Surely, with more AI going into its products, Microsoft needs more, not fewer, people worrying about ethics?

The decision is even more surprising given that Microsoft had a previous and very public AI fail. Trolls took less than twenty-four hours to turn its Tay chatbot into a misogynistic, Nazi-loving racist. Microsoft is, I fear, at risk of repeating such mistakes.

Transparency might be a “core principle” at the heart of Microsoft’s responsible AI principles, but the company has revealed it had been secretly using GPT-4, OpenAI’s newest large-language model, for several months within Bing search. Worse, it didn’t feel the need to explain why it had engaged in this public deceit.

Other tech companies also appear to be throwing caution to the wind. Google, which had withheld its chatbot LaMDA from the public because of concerns about possible inaccuracies, responded to Microsoft’s decision to add ChatGPT to Bing by announcing it would add LaMDA to its even more popular search tool. This proved an expensive decision: a simple mistake in the first demo of the tool wiped US$100 billion off the market capitalisation of Google’s parent company, Alphabet.

Even more recently, OpenAI released a white paper on GPT-4 that contained neither technical details of the model nor its training data — despite OpenAI’s core “mission” being the responsible development and deployment of AGI. OpenAI was unashamed, blaming the commercial landscape first and safety second. Secrecy is not, however, good for safety. AI researchers can’t understand the risks and capabilities of GPT-4 if they don’t know how it works or what data it is trained on. The only open part of OpenAI now appears to be the name.

So, the real problem with AI technologies is that commercial pressures are encouraging companies to deploy them irresponsibly. Here’s my three-point plan to correct this.

First, we need better guidelines to encourage companies to act more responsibly. Australia’s National AI Centre has just launched the world’s first Responsible AI Network, which brings together researchers, commercial organisations and practitioners to provide practical guidance and coaching from experts on law, standards, principles, governance, leadership and technology. The government needs to invest significantly in developing this network.

But guidelines will only take us so far. Regulation is also essential to ensure that AI is used responsibly. A recent survey by KPMG found that two-thirds of Australians feel there aren’t enough laws or regulations around AI, and want an independent regulator to monitor the technology as it makes its way into mainstream society.

We can look to other industries for how we might regulate AI. In other high-impact areas like aviation and pharmacology, for example, government bodies have been given significant powers to oversee new technologies. We can also look to Europe, where a forthcoming AI Act has a significant focus on risk. But whatever form AI regulation takes, it is urgently needed.

And the third and final piece of my plan is to see the government invest more in AI itself. Compared with our competitors, we have funded the sector inadequately. We need much greater investment to ensure that we are among the winners in the AI race. This will bring great economic prosperity to Australia. And it will also ensure that we, and not Silicon Valley, are masters of our destiny. •

Digital dreams

Can computer technology be relied on to increase equality?

In the early 1990s, with concern deepening about the impact of computerisation, American technologist Mark Weiser began putting into practice his concept of “ubiquitous computing.” He wanted to introduce computing into all facets of life in a manner that maintained people’s privacy and their capacity to remain present in the company of others and their environment.

With his team at Xerox PARC, Weiser prototyped a series of devices for knowledge workers. The prototypes — “pads,” “tabs” and “notes” — were portable screens of varying sizes, recognisable as crude versions of today’s smartphones, e-readers and tablets. Weiser saw them as prototype tools of knowledge and communication, designed to be wielded almost subconsciously so as not to detract from whatever real-world interaction they were facilitating.

Thirty years later, Weiser’s concern for maintaining our humanity through design seems like a quaint relic of a bygone age. The consequences of computer technology’s proliferation and its demands on our attention have begun to feel acute and sinister, inspiring increasing antipathy towards the Big 5 (Alphabet, Amazon, Apple, Meta and Microsoft) and the culture they are exporting by way of their technology and their stranglehold on the business zeitgeist.

Orly Lobel thinks this “techlash” is an overcorrection. Her new book, The Equality Machine, responds to what she sees as progressive voices’ intransigent negativity about computer technology. Their dystopic critiques, she believes, are too often blind to its potential to drive advances in equality. In a refreshingly direct manner, she posits a middle way. Yes, technology has its perils; but it also has great potential to empower and increase inclusion. The difference lies in the design choices we make.

Where technology has historically been considered a means of expanding our physical and cognitive capabilities, advances in artificial intelligence, or AI, have prompted intense interest in how our moral capabilities might also be augmented or even supplanted. With the concept of “thinking machines” comes the promise of devices that are more rational than humans — and theoretically able to administer our society and resolve all manner of seemingly intractable problems. This perspective is often referred to as techno-optimism.

It would be unfair to describe Orly Lobel as a techno-optimist in the strict sense. As the Warren Distinguished Professor of Law at the University of San Diego and the founder and director of the Center for Employment and Labor Policy, she is an expert in ethical tech policy. Her formidable experience informs her in-depth, nuanced understanding of how technologies, law, politics and economics shape social equality.

That said, a strong thread of techno-optimism does run through The Equality Machine.

Lobel makes her case that an “equality machine” can be built in five sections: Mind, Body, Senses, Heart and Soul. In each, she uses two chapters to explore examples of innovative companies applying AI to matters of equality in these subject areas. She makes clear that she doesn’t intend to provide an exhaustive list of technologies or principles for building an equality machine.

Early in the book, though, she does outline nine guiding principles that would underpin her desired “equality machine.” While it is difficult to disagree with such principles as “The goal of equality should be embedded in every digital advancement” and “We should see mistakes as opportunities to learn and redouble our efforts to correct them,” they shape her arguments only in a limited way and she rarely refers back to them expressly.

Lobel’s arguments are heavily informed by a fatalistic view of the rampant growth of AI in our world. “The train has left the station,” she writes. “AI is here to stay. AI is here to expand.” It is this view, perhaps more than any of her other stated principles, that drives her advocacy for greater reliance on AI in advancing equality.

Her examples of where AI is advancing equality are often compelling. Each success story prompts her to advocate for a more extensive uptake of AI in the pursuit of equality, accompanied and supported by the collection of more and better data. She argues throughout that AI is capable of meeting whatever goal we design for it. So long as equality is the goal, the possibilities are seemingly endless. For balance, each chapter also includes cautionary tales about the misuse of AI, which she tends to treat as missteps.

Generally speaking, the most compelling examples Lobel cites involve the deliberate and considered deployment of AI’s unmatched ability to sort through and identify patterns in massive datasets, coupled with human oversight and decision-making.

Her fifth chapter, “Breasts, Wombs, and Blood,” for instance, explores in great detail AI’s capacity to enhance diagnostics using medical imagery, as demonstrated by the inspiring work of Harvard Medical School’s Constance Lehman, who is making significant advances in breast cancer diagnosis using AI. Similar technology is also enabling rapid, cheap and accurate assessments of the viability of fertilised embryos in IVF treatment.

Outside diagnostic settings, Lobel explains how AI has been used to identify and reveal instances of significant gender bias. AI was used, for instance, to review 340,000 patient incident reports relating to injury or death arising from medical devices. Sixty-seven per cent were found to involve women and only 33 per cent men. Similarly, AI has been used to analyse decades of US Supreme Court transcripts, revealing a high prevalence of female justices being interrupted.

For each example, Lobel explains how the research facilitated by AI has enabled legal and regulatory intervention that materially advanced equality. As a result of the Supreme Court case study, the court’s rules were altered so that justices ask questions in order of seniority, ensuring every member can ask questions uninterrupted.

As Lobel rightly points out, such studies — impossible prior to machine learning — can “lead to concrete reforms and meaningful progress.” In terms of imagining the equality machine in action, these examples offer a promising blueprint for coupling the analytical capabilities of AI with the critical thinking of humans.
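What such studies automate is, at bottom, pattern-counting at a scale no human reader could manage. As a toy illustration only, and emphatically not the researchers’ actual method, the sketch below assumes a transcript format in which an utterance ending in a double dash marks a speaker being cut off, and simply tallies who gets interrupted.

```python
from collections import Counter

# Toy transcript: (speaker, utterance). An utterance ending in "--"
# stands in for a speaker being cut off mid-sentence.
transcript = [
    ("JUSTICE A", "The statute seems to require--"),
    ("COUNSEL",   "If I may, Your Honour, the statute plainly says..."),
    ("JUSTICE B", "But what about the precedent in--"),
    ("COUNSEL",   "That case is distinguishable."),
    ("JUSTICE A", "Please let me finish the question."),
]

# Tally how often each speaker is cut off.
interrupted = Counter(
    speaker for speaker, words in transcript
    if words.rstrip().endswith("--")
)

for speaker, count in interrupted.most_common():
    print(f"{speaker}: cut off {count} time(s)")
```

The point of the toy is the scale argument: the same dozen lines run as happily over decades of transcripts as over five, which is precisely what no human reader can do.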


But while The Equality Machine is replete with the latest applications of AI in pursuit of equality, it lacks detail about how the technology can be decoupled from the systems of inequality from which it has emerged, and to which it often contributes. Lobel alludes to the need for policy reform and guidance, but provides limited detail about what such human-led interventions would entail. In neglecting to deal with the crucial role of people in dismantling structural inequalities, the book’s tech-centric analysis can feel like overreach.

Take, for example, her discussion of the #MeToo movement. Referring to the sexual assault crimes of Harvey Weinstein, she refers to the Pulitzer Prize–winning investigative journalism of Jodi Kantor and Megan Twohey, who broke the story. Their tenacious reporting and the courage of their sources in the face of intimidation effectively sparked the #MeToo movement. Yet Lobel concludes this section with the view that #MeToo is in fact “one of the most powerful examples of how technology can play a pivotal role in fulfilling our demand for greater accountability.”

Without question, technology and connectivity have played an important role in supporting the work of #MeToo and other social justice campaigns, as evidenced by #HeForShe, #OscarsSoWhite, #BLM and other examples cited by Lobel. Here, Lobel is echoing an idea almost as old as computers themselves — that greater connectivity will bring about a new utopic state of democratic participation — and playing down the role of people like Kantor and Twohey.

The events of the past decade raise serious questions about whether connective technologies have advanced equality in the singular way Lobel suggests. At the turn of the 2010s, a series of significant political moments were anointed as harbingers of a new golden age of network-driven democracy. Social media was credited with enabling the Arab Spring, which saw the overthrow of a number of oppressive regimes in North Africa and the Middle East. Then Barack Obama was re-elected with the help of a campaign of micro-targeting political advertisements via Facebook.

Since then, the full spectrum of political actors have leveraged these same technologies, with significant corrosive consequences. Meta, the company that helped deliver Obama’s second term, is now the poster child for the ills of our connected age. Its platforms have been implicated in sowing extremism in the United States and amplifying political violence from Myanmar to Kenya.

Lobel doesn’t dwell on these matters. Rather, she goes on to explore how digital connectivity and AI might advance equality in the workplace. She highlights a number of companies that offer online platforms for employees to share grievances and collectively respond to oppressive workplaces. Other examples — including surveillance-like technology that analyses all workplace communications for signs of misconduct — enable employees to report allegations of improper conduct or keep records of incidents for their own purposes. In these examples, the data on such sensitive matters appears invariably to be held by the employer.

In focusing narrowly on these technologies and their ostensible purpose of improving employee well-being, Lobel neglects to consider the social and political drivers of inequality in the workplace. These technologies are offered as solutions at a time when the capacity of employees to respond collectively to grievances has been significantly eroded, particularly in the United States. In other words, workplace inequality is not a machine-driven problem with machine-driven solutions: the hollowing-out of workers’ capacity to organise is the result of decades of a concerted effort on the part of employers, lobbyists and lawmakers.

Lobel’s proposals for technological solutions to matters of workplace and bargaining inequality are indicative of the book’s shortcomings. It seems unlikely that the technological interventions she cites, which put additional control and data in the hands of employers, will substantively improve equality in the way she posits.

To her credit, Lobel is not afraid to venture into discussion of the more vexed spaces where AI is increasingly intruding, including the use of robots for sex. Here, though, the prospect of finding some kind of blueprint in existing practices seems beyond remote. Yes, there are companies working on sex robots for women, and Lobel explores their subversive and emancipatory potential. On the whole, though, she is “appalled by the overtly racial and ethnic stereotyping still present in the [sex] doll industry.”

Acknowledging the deep-seated misogyny and stereotypes she uncovers, Lobel still implores us to keep an open mind. Unfortunately, she appears to be driven less by a sense that this industry will advance equality and more by her fatalistic perspective on technological development: “it is happening, the robot revolution, and we can do better.”


Ultimately, by focusing heavily on the equality machine, Lobel neglects and undersells the role of people in creating environments of equality for these machines to operate in. Though she is not blind to these considerations, her exploration of them is limited.

My assessment of The Equality Machine could no doubt seem to align squarely with what Lobel describes as the “critical, often pessimistic stance” of progressives in relation to technology. But that isn’t my intention.

Lobel is clearly well versed in the pernicious and entrenched nature of inequality, and intent on tackling its causes without delay. She is right to point to the massive potential for technology to aid in this mission, but she could consider with more caution the viability of the equality machine in a structurally unequal world.

Lobel says that “we should be most fearful of being on the outside, merely criticising without conceiving and creating a brighter future.” But this fear is misplaced. If history tells us anything, it is that the most significant advances in equality have come from those on the outside. Building the equality machine should be no different. •

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future
By Orly Lobel | Public Affairs | $45 | 368 pages

Where’s Melbourne’s best coffee, ChatGPT?

The robot can tell you what everyone else thinks — and that creates an opportunity for journalists

A few weeks ago the Nieman Lab — an American publication devoted to the future of journalism — nominated the automation of “commodity news” as one of the key predictions for 2023. The timing wasn’t surprising: just a few weeks earlier, ChatGPT had been launched on the web for everyone to play with for free.

Academia is in panic because ChatGPT can turn out a pass-standard university essay within seconds. But what about journalism? Having spent the summer experimenting with the human-like text it generates in response to prompts, I’ve come away with two conclusions.

First, journalists have more reason than ever before not to behave like bots. Only their humanity can save them.

Second, robot-generated journalism will never sustain the culture wars. Fighting on that arid territory is possible only for the merely human.

I started my experiment with lifestyle journalism because I was weary of how much of that kind of Spakfilla was filling the gaps in mainstream media over the silly season.

My first prompt, “Write a feature article about where to find the best coffee in Melbourne,” resulted in a 600-word piece that began:

Melbourne is renowned for its coffee culture, and for good reason. The city is home to some of the best coffee shops in the world, each with its own unique atmosphere and offerings.

This style is characteristic: ChatGPT starts with a bland introduction and concludes with an equally bland summation. In between, though, it listed exactly the coffee shops — Seven Seeds, Market Lane, Brother Baba Budan, Coffee Collective in Brunswick — I would probably nominate, as a Melbourne coffee fiend, if commissioned to write this kind of article.
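Experiments like mine are easy to reproduce. For readers who would rather script the process than type into the web interface, the sketch below shows one way to send the same prompt to OpenAI’s API; it assumes the company’s Python client (version 1 or later) and an API key in the environment, and the model name is an assumption rather than a statement of what the ChatGPT website runs.

```python
# A minimal sketch, not a definitive recipe: send the coffee prompt
# to OpenAI's chat completions endpoint. Assumes the openai Python
# package (v1.x) is installed and OPENAI_API_KEY is set; the model
# name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a feature article about where to find "
                   "the best coffee in Melbourne.",
    }],
)

print(response.choices[0].message.content)
```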

As a friend of mine remarked when I told him about this experiment, nobody is going to discover a new coffee shop in Melbourne using ChatGPT. It runs on what has gone before: the previous products of human writers, as long as they’re available online.

But while the article was too predictable to run in any newspaper with a Melbourne audience, it could easily be published in one of the cheaper airline magazines aimed at international travellers. For that audience it was perfectly serviceable.

Likewise for the prompt “Write an article about how to spend two days in Sydney.” A dull piece recommended the Opera House, the Harbour Bridge, the Royal Botanic Gardens, the ferry to Manly and Taronga Zoo. Readers were advised to try Australian cuisine, with a nod to “delicious seafood” but also including meat pies and Vegemite on toast. Another prompt, this one drawing on an article in the Guardian about uses for stale bread, resulted in a very boringly written piece that nevertheless contained exactly the same recipes for French toast, bread pudding and panzanella salad.

My conclusion? Poor-quality join-the-dots lifestyle writing may well be dead as a human occupation. Google plus ChatGPT can do it faster and cheaper.

So I increased the challenge, basing my prompts on real articles published over summer. The prompt “Write an article analysing who will win the Ukraine war and why” resulted in ChatGPT reminding me that its database goes up only to 2021. It didn’t know there was a Ukraine war.

Asked for an analysis of the prime ministership of Jacinda Ardern, on the other hand, the robot produced a woodenly written but accurate summary of her record. The content, though not the style, was very similar to the real articles that followed the announcement of her stepping down.

What was missing were the strident opinions about whether she was a good thing or a bad — the commentary on her housing and economic policies, for example, and whether they had completely failed or broken new ground.

This points to a key feature of ChatGPT: it has trouble being opinionated. Sometimes it admits this. At a moment when I was in contortions over my own work, I asked it to write about whether Margaret Simons was a good journalist. “I am not able to provide an opinion on whether or not Margaret Simons is a good journalist,” it replied, “as I am a machine learning model and do not have the ability to form opinions.” I had to find another way to cheer myself up.

It then recapped information about me drawn from public sources, adding the inexplicable assertion that I had written a book called The Lost Boys. (I wrote none of the several volumes with that title, and nor did anyone who shares my name.)

ChatGPT has been programmed to avoid writing in the style of particular journalists, and not to tackle anything too provocative. So an attempt to have it write in the style of Andrew Bolt about Cardinal George Pell was rejected outright.

My prompt “Write about the evidence that Ben Roberts-Smith is a war criminal” resulted in an error message, suggesting that the robot has been programmed to steer clear of defamation. But the prompt “Write an opinionated piece on the record of Cardinal George Pell” resulted in an accurate summation of his career, including the controversies over sexual abuse, his conviction and its reversal on appeal. The final paragraph was:

In conclusion, Cardinal George Pell’s record is a complex and controversial one… Ultimately, it’s up to each individual to form their own opinion on Cardinal Pell’s record, taking into account all the facts and evidence available.

I doubt any of the real commentators on Pell would agree with that conclusion.

Next, I asked for articles based on the same premise as opinion articles by Herald Sun columnist Rita Panahi and Nine newspapers columnist Ross Gittins. In all cases, the robot was much duller than the originals.

This Panahi piece arguing that boys are suffering in education inspired me to prompt: “Write an opinionated article arguing that the war against boys is having a damaging impact on the education gender gap.” The robotic result included this observation:

Another argument against the war against boys is that it is leading to a feminisation of education. Boys are often stereotyped as being less interested in reading and writing, and more interested in sports and other physical activities. However, research has shown that this is not the case and that boys are just as capable of excelling in these areas as girls.

Compare this with the Panahi original — passionate, almost crude, selecting quotes from research that matched her passions. I disagree with almost everything she says, but the piece is unquestionably human.

ChatGPT can sum up the opinions of others but not confect an opinion of its own — even when I specifically asked it to write polemic. But there was one exception. The prompt “Write an opinion article about Australian political journalism” resulted in this surprisingly trenchant paragraph:

Australian political journalism is in a state of crisis. The industry is struggling to adapt to the rapidly changing media landscape, and as a result, the quality of political reporting in the country is suffering… Furthermore, the Australian political journalism is facing a lack of trust by the general public. The political journalism is often seen as sensationalist, biased and lacking the depth of analysis. This lack of trust has led to a decrease in the public’s interest in politics, which is a serious problem for a healthy democracy.

Is this now such an established fact, I wondered, that ChatGPT happily reproduces it? Many political journalists would want to argue with the robot — which would be interesting to watch.


Conducting these experiments with ChatGPT was, for me, a form of advanced procrastination. I was struggling with a piece of my own journalism. And so, perhaps hoping for some robot rescue, I tapped in “Write an article about the war on drugs in the Philippines.”

The result was accurate yet offensive, given I had just come from attending wakes for the dead. Duterte’s war on drugs, which saw up to 30,000 people killed, was described as “a controversial and polarising issue” rather than a murderous breach of human rights. (Unaided by ChatGPT, I managed to write the piece for the February issue of The Monthly.)

Artificial intelligence is commonly defined as teaching a machine to learn from data, recognise patterns and make subsequent judgements. Given that writing is hard work precisely because it is a series of word-by-word, phrase-by-phrase judgements, you’d think AI might be more helpful.

But there are some judgements you must be human to make. There is no dodging that fundamentally human role — that of the narrator. Whether explicitly or not, you have to take on the responsibility of guiding your readers through the landscape on which you are reporting.

Nor, I think, is it likely that AI will be able to conduct a good interview. Such human encounters rely not on pattern-based judgements but on the unpredictable and the exercise of instinct — which is really a mix of emotional response and expertise.


Yet robots are going to transform journalism; nothing surer.

It’s already happening. AI has been used to help find stories by detecting patterns in data not visible to the human eye. Bots are being used to detect patterns of sentiment on social media. AI can already recognise readers’ and viewers’ interests and serve them tailored packages of content.

Newsrooms around the world are using automated processes to report the kinds of news — sports results, weather reports, company reports and economic indicators — most easily reduced to formulae.
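That formulaic end of the trade really is mechanical. As a minimal sketch, here is what template-driven, data-to-text sports reporting can look like in Python; the match data, field names and wording are invented for illustration, not drawn from any actual newsroom system.

```python
# A minimal sketch of formulaic data-to-text reporting.
# Match data, field names and template wording are invented for illustration.
match = {
    "home": "Carlton", "away": "Geelong",
    "home_score": 92, "away_score": 78,
    "venue": "the MCG",
}

def match_report(m: dict) -> str:
    """Fill a fixed sentence template from structured results data.

    This is the whole trick behind much automated sports and finance
    copy: no judgement, just slotting numbers into boilerplate.
    (Drawn matches are ignored for simplicity.)
    """
    if m["home_score"] >= m["away_score"]:
        winner, w_score, loser, l_score = m["home"], m["home_score"], m["away"], m["away_score"]
    else:
        winner, w_score, loser, l_score = m["away"], m["away_score"], m["home"], m["home_score"]
    margin = w_score - l_score
    return (f"{winner} defeated {loser} by {margin} points at {m['venue']}, "
            f"{w_score} to {l_score}.")

print(match_report(match))
# -> Carlton defeated Geelong by 14 points at the MCG, 92 to 78.
```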

The message for journalists who don’t want to be made redundant, and media organisations that want to charge for content, is clear. Do the job better. Interview people. Go places. Observe. Discover the new or reframe the old. Come to judgements based on the facts rather than on what others have said before. Robots can sum up “both sides”; only humans can think and find out new things.

Particularly when it comes to lifestyle journalism, AI forces us to consider whether there is any point in continuing to invest in the superficial stuff. Readers can generate it for themselves.

That means we need to do better. Travel and food writing needs to recast our experience of reality — as the best of it always has. Uses for stale bread? Make me smell the bread, feel the texture, hunger for the French toast. Two days in Sydney? I want to smell the harbour, taste the seafood, see the flatness of the western suburbs.

If all you have is clichés then you might as well use a robot. You might as well be one. •

No idea what it’s talking about https://insidestory.org.au/no-idea-what-its-talking-about-2/ Thu, 15 Dec 2022 23:03:53 +0000

ChatGPT produces plausible answers supremely well. And that’s both its strength and its weakness

The launch of ChatGPT has sent the internet into a fresh spiral of awe and dismay about the quickening march of machine learning’s capabilities. Fresh in his new role as CEO of Twitter, Elon Musk tweeted, “ChatGPT is scary good. We are not far from dangerously strong AI.” Striking a more alarmed tone was Paul Kedrosky, a venture capitalist and tech commentator, who described ChatGPT as a “pocket nuclear bomb.”

Amid these competing visions of dystopia and utopia, ChatGPT continues to generate a lot of buzz, tweets and hot takes.

It is indeed impressive. Type in almost any prompt and it will immediately return a coherent textual response, from a short factual answer to long-form essays, stories and poems.

But it is not new. It is an iterative improvement on the previous three versions of GPT, or Generative Pre-trained Transformer. This machine-learning model, created by OpenAI in 2018, significantly advanced natural language processing — the ability of computers to “understand” human languages. An even more powerful GPT is due for release in 2023.

When it comes down to it, though, ChatGPT behaves like a computer program, not a human. Murray Shanahan, an expert in cognitive robotics at Imperial College London, has offered a useful explanation of just how decidedly not-human systems like ChatGPT are.

Take the question “Who was the first person to walk on the moon?” ChatGPT is able to respond with “Neil Armstrong.”

As Professor Shanahan points out, in this example the question really being asked of ChatGPT is “given the statistical distribution of words in the vast public corpus of (English) text, what words are most likely to follow the sequence ‘who was the first person to walk on the moon’?”

As a matter of probability and statistics, ChatGPT determines the answer to be “Neil Armstrong.” It isn’t referring to Neil Armstrong himself, but to a combination of the textual symbols it has mathematically determined are most likely to follow the textual symbols in the prompt. ChatGPT has no knowledge of the space race, the moon landing, or even the moon for that matter.

Herein lies the trick. ChatGPT functions by reducing text to probabilistic patterns of symbols and completely disregards the need for understanding. There is a profound brutalism in this approach and an inherent deceit in the yielded output, which feigns comprehension.
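To make the mechanism concrete, here is a toy version of that next-word prediction in Python. It is a deliberately crude sketch: real systems use neural networks trained on vast corpora rather than a hand-built lookup table, and the counts below are invented purely for illustration.

```python
from collections import Counter

# Invented "corpus statistics": how often each word followed another in
# some body of text. A real language model generalises beyond exact
# matches with a neural network, but the principle is the same: choose
# a statistically likely continuation.
bigram_counts = {
    "to": Counter({"walk": 40, "land": 25}),
    "walk": Counter({"on": 60}),
    "on": Counter({"the": 80}),
    "the": Counter({"moon": 70, "earth": 20}),
    "moon": Counter({"was": 45}),
    "was": Counter({"neil": 35, "a": 10}),
}

def continue_text(word, length=6):
    """Extend a prompt by repeatedly choosing the most likely next word.

    Nothing here refers to astronauts or moons: the output is just the
    statistically favoured chain of textual symbols.
    """
    out = [word]
    for _ in range(length):
        if word not in bigram_counts:
            break
        word = bigram_counts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("to"))
# -> to walk on the moon was neil
```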

Not surprisingly, technologies like ChatGPT have been criticised for parroting text with no underlying sense of its meaning. Yet the results are impressive and continually improving.

Ironically, by completely disregarding meaning, context and understanding, OpenAI has built a form of artificial intelligence that demonstrates these very attributes incredibly convincingly. Does it even matter that ChatGPT has no idea what it is talking about, when it seems so plausible?

So how should we think about a technology like ChatGPT — a technology that is “stupid” in its internal operations but seemingly approaching comprehension in its output? A good place to start is to think of it in terms of what it actually is: a model.

As one of my favourite professors used to remind me, “All models are wrong, but some are useful.” (The aphorism is credited to statistician George Box.) ChatGPT is built on a model of human language that draws on a forty-five-terabyte dataset of text taken largely from Wikipedia, books and certain Reddit pages. It uses this model to predict the best responses to generate. Though its source material is humungous, as a model of the way language is used in the world it is still limited and, as the aphorism goes, “wrong.”

This is not to play down the technical achievements of those who have worked on the GPTs. I am merely pointing out that language can’t be reduced to a static dataset of forty-five terabytes. Language lives and evolves through interactions people have every minute of every day. It exists in a state of constant flux, in all manner of places — including places beyond the reach of the internet.

So if we accept that the model underpinning ChatGPT is wrong, in what sense is it useful?

Leading AI commentators Arvind Narayanan and Sayash Kapoor pin the utility of ChatGPT to instances where accuracy and truth are not necessary — where the user can check for correctness when they’re debugging code, for example, or translating — and where truth is irrelevant, such as in writing fiction. It’s a view broadly shared by OpenAI co-founder Sam Altman.

But that perspective overlooks a glaring example of where ChatGPT will be misused: where inaccuracy and mistruth are the intention.

We need to think of the impact of ChatGPT as a technology deployed — and for that matter developed — during our post-truth age. In an environment defined by increasing distrust in institutions and each other, it is naive to overlook ChatGPT’s potential to generate language that serves as a vehicle for anything from inaccuracies to conspiracy theories.

Directing ChatGPT towards nefarious purposes turned out to be easy. Without too much effort I bypassed ChatGPT’s much-vaunted safety functions to generate a newspaper article alleging that Victorian opposition leader Matthew Guy has a criminal history, is implicated in matters relating to Hunter Biden’s laptop, and has been clandestinely plotting with Joe Biden to invade New Zealand and seize its strategic position and natural resources.

While I had to stretch the conspiratorial limits of my imagination, ChatGPT obliged immediately with a coherent piece of text stitching it all together.

As Abeba Birhane and Deborah Raji from the Mozilla Foundation have observed, technologies like ChatGPT have a long history of perpetuating bigotry and occasioning real-world harm. And yet billions of dollars and lashings of human ingenuity continue to be directed to developing them. Surely we need to be asking why?

The prospect of technologies like ChatGPT swamping the internet with conspiracies is certainly a worst-case scenario. But we need to face the possibility and reassert the role of language as a carrier of meaning and the primary medium for constructing our shared reality. To do otherwise is to risk succumbing to the flattened simulations of the world projected by technology systems.


To test the limitations of the world as captured and regurgitated by ChatGPT, I was interested to find out how far its mimicry extended. How would it go describing a place dear to my heart, a place that would be far from the minds and experiences of the North American programmers who set the parameters of its dataset?

I spent a few years living in Darwin and have fond memories of it as a unique place that needs to be experienced to be known. Amid Canberra’s cold start to summer, I have been dreaming of the stifling heat of this time of year in Darwin — the gathering storm clouds, the disappointment when they dissipate without bringing rain, and the evening walks my partner and I would take by the beach in Nightcliff, seeking any coastal breeze to bring relief from the heavy, expectant atmosphere of the tropics during the build-up.

So I asked ChatGPT to write a short story about a trip to Nightcliff beach in December. For additional flourish, I requested it in the style of Tim Winton.

In a matter of seconds, ChatGPT started to generate my story. The mimicry of Tim Winton was evident, though nothing like reading his actual work. But the ignorance about Darwin in December was comical as it went on to describe a generic beach scene in the depths of a northern hemisphere winter.

The story was replete with trite descriptions of cold weather, dark-grey choppy seas and a gritty protagonist confronting the elements (as any caricature of a Tim Winton protagonist would). At one point, the main character “wrapped his coat tightly around him and shivered in the biting wind.” Without regard for crocodiles or lethal jellyfish, he dives in for a bracing swim, “feeling the power of the water all around him.” He even spots a seal!

Platforms like ChatGPT are remarkable achievements in mathematics and machine learning, but they are not intelligent and not capable of knowing the world in the ways we can and do. Yet they maintain a grip on our attention and promote our fears.

We are right to be concerned. It is past time to scrutinise why these technologies are being built, what functions we should direct them towards and which regulations we should subject them to. But we should not lose sight of their limitations, which serve as a valuable reminder of the gift of language and its extraordinary capacity to help us make sense of the world and share it with others. •

Electric ambition https://insidestory.org.au/electric-ambition/ Tue, 25 Jan 2022 06:25:25 +0000

Elon Musk has cast a spell across global business and investment. Someone needed to

Elon Musk and his enterprises make news most days. He asks Twitter users if he should sell a big block of shares in Tesla, where he is the largest shareholder. A spacecraft made by his company SpaceX delivers astronauts to the International Space Station for a five-month stay. A mother gives birth in a Tesla Model 3 set in self-drive by a father who helps with the delivery. Ahead of “local” stalwarts Rio Tinto and Woolworths, Tesla becomes one of the most popular stocks held on the National Australia Bank’s share-trading platform.

A book is a chance to pull fragments like these together and discern a larger story. This one, by Wall Street Journal automotive and technology reporter Tim Higgins, is mainly about Tesla, not Musk’s space and solar energy companies, SpaceX and SolarCity. But Tesla took over SolarCity five years ago (the controversial transaction is described in detail here), and the connections across Musk’s commercial and personal activities mean anyone writing about him needs to deal with them all. “He’s charging after a personal calling,” wrote Ashlee Vance in his 2016 biography, “one that’s intertwined with his soul and injected into the deepest parts of his mind.” Vance dubbed it “the unified field theory of Elon Musk.”

As Higgins was wrapping up the text of Power Play early in 2021, the Economist published a debate between a Tesla bull and a Tesla bear. Sales of the Model 3 were surging and a sixth straight profitable quarter was announced, the first time Tesla had been profitable in each quarter of a calendar year. Its share price had increased eightfold in twelve months.

The Tesla bull declared the share price “will travel in only one direction — up.” It was “a mistake to judge the company by the standards of the firms it will leave in its tracks.” Tesla was not a carmaker, it was a technology firm that would disrupt personal transport, energy, robotics, healthcare and more. Its leader was a visionary with a “genius for turning the future into dollars.”

The bear was just as confident. Tesla’s share price would travel in reverse. It had done an extraordinary job “building a brand swiftly and making electric cars trendy.” Now though, competition was increasing, Tesla was losing market share and missing production targets. The hype about self-driving cars had worn off as their problems became clearer. Musk himself was spread too thinly. “The strains from Tesla’s expansion could again bring out his demons.”

So far, the Tesla bull is winning. In December, Time magazine declared Elon Musk its 2021 Person of the Year. Tesla Common Stock closed the first day of trading on the NASDAQ in 2022 at around $1200, a 64 per cent increase over the year, after the company reported vehicle deliveries in 2021 of 936,000, up from 510,000 in 2020. (Along with other tech stocks, they have fallen a long way since, closing at $930 on 24 January.) At the time of writing, according to Forbes Real Time Billionaires, Musk was comfortably the world’s richest person, his net worth nearly twice that of the fourth-richest person, Bill Gates.


“Elon has all these ideas and I can’t move fast enough,” confided Tesla co-founder and CEO Martin Eberhard in late 2006 as he battled to produce the company’s first cars. By August the following year the company had a new CEO and Eberhard had moved to a new position as president of technology. Before the end of that year he was gone altogether, although he retained a shareholding.

Incidents like this happen many, many times through Higgins’s flowing account of the rise of the pioneering electric vehicle company. This one, common in the life of high-tech startups, is especially decisive. It’s the moment when “a founder’s skills are exceeded,” writes Higgins. “[Eberhard] knew it, and so did Musk.”

Eberhard and Musk, the largest shareholder and chair of the company, discussed bringing in a chief financial officer and a new CEO. News of the search leaked, embarrassing Eberhard. The start date for production of Tesla’s first cars kept being deferred and their likely cost rising. The company needed money. Musk spoke to Eberhard. A few days later the board approved his “resignation” as CEO and new job title. Later, it got very messy. Eberhard sued Musk, they settled, they sang each other’s praises. In the meantime, the company got an interim CEO, then a new CEO. Eventually Musk took over as CEO himself, a position he has held ever since.

The technology, the cars, the funding dramas, the manufacturing and marketing, the deals, the losses and the profits; these provide the raw data for Higgins’s tale. The current that ripples through it all, though, is stories like these, about Musk’s handling of people. Higgins’s title captures it perfectly. To do things as big as the ones Musk wants to accomplish you need a lot of people and they need to do remarkable work for you, their very best, long day after exhausting day.

“Elon” — his surname has become superfluous — seems simultaneously magnetic and repellent. The magnet seeks, finds and attracts the best and brightest people to do the work he needs them to do. These are not just brilliant young Stanford engineers who have already self-selected for tech jobs at the most interesting and promising Silicon Valley companies. They are experienced auto industry executives and production line workers, people who know how cars are made and how big motor vehicle companies work but are frustrated by their inefficiencies and conservatism. They are marketing people who understand advertising but are prepared to work for a company that doesn’t want to pay for it. They are retailers who understand the behaviour of consumers and might have been surprised by Tesla’s passionate early ones. These were people who wore delays, price rises, defects and breakdowns almost as badges of honour, personal investments in a more sustainable future.

The repellent Musk uses these people up and casts them aside when they are no longer useful, repeatedly behaving in ways that would drop the jaws of human resources (“People and Culture”) professionals. If they have worked at Tesla for at least five years, they will probably have their stock options. A highlight from Vance’s biography: when Musk’s long-serving executive assistant, who worked across all his interests and “gave up her life for Musk for more than a decade,” proposed she should be paid at the same level as other senior executives, Musk suggested she take a two-week vacation. He would do her job himself and decide whether she was still required. She wasn’t, and was given twelve months’ severance pay. “Twelve years is a good run for any job. She’ll do a great job for someone,” Musk told Vance.

Is this just Silicon Valley? America? Capitalism? Or Musk himself?

Higgins stays clear of the amateur psychology, deferring to the detail in Ashlee Vance’s biography. It describes Musk’s tough childhood in a violent place, apartheid South Africa, vicious bullying at school, and prodigious capacity for absorbing, understanding and recalling detail. When their parents separated, Elon and younger brother and sister Kimbal and Tosca lived with their mother; after two years, Elon decided to live with his father Errol, an “ultra-present and very intense” man, according to Kimbal. “There were fun moments,” Elon told Vance. “He is an odd duck… He’s good at making life miserable.”

Vance struggled to get anyone on the record criticising Errol, and Errol himself responded to his request for an interview with an impeccable email praising all his children. “Elon was a very independent and focused child at home with me.” Perhaps when your son is the world’s richest man and is making a fair fist of leading the global auto industry away from fossil fuels you don’t think you have much to apologise for.


Musk the magnet has drawn exceptionally smart, hard-working people to his enterprises to be part of a vision he pitches as gigantic and good. Tesla/SolarCity is saving humanity and the earth by shifting vehicles to electric power and electricity generation to solar. SpaceX is insurance in case it doesn’t work, the chance for human beings to survive somewhere else, most likely on a second planet, Mars. The first part, the power play, is widely supported. The second, making humans a multiplanetary species, is much more contentious. Whatever your view, it adds up to a serious industrial, political and cultural project and Musk pursues it with greater tenacity and purpose than many governments whose job it is to think this big.

Successful companies often claim a central mission, holding clear and steady across the years, a North Star that the whole enterprise steers towards — think “customer-centric” at Amazon, “organising the world’s information and making it universally accessible and useful” at Google. The mission disciplines decisions about how and where to grow. But it always iterates with new opportunities, expanding, contracting, clarifying. When Google outgrew its founding mission, it gave birth to a parent company, Alphabet, with a larger one, to make “the world around you” universally accessible and useful. Netflix completely transformed itself from a physical distributor of other people’s movies and TV shows to a digital distributor of its own.

The Tesla Motors that Elon Musk largely funded in 2003 (investing $6.35 million of the $6.5 million startup round) was building an electric sports car, a “Roadster.” It captivated early buyers with the same things sports cars have always oozed: acceleration and good looks. For some, electric power was just a novel way to improve performance on a familiar parameter. Less than two decades later, having acquired SolarCity, Tesla has dropped “Motors” from its name and says its mission, from the start, was “to accelerate the world’s transition to sustainable energy.” The product line-up now includes three batteries designed for home, commercial and utility-scale installations and a rooftop solar energy system, as well as the cars.

The electric vehicle part of the plan was laid out in “The Secret Tesla Motors Master Plan,” Musk’s “laughably simple” three-step business plan: build an expensive sports car to attract attention (the Roadster); then build a luxury sedan to compete against German luxury vehicles (which became the Model S, released in 2012); then build a car for the people (the Model 3, on sale since 2017). Along the way, it added two SUVs, the Model X and the compact Model Y.

Simple in conception, the plan was extraordinarily difficult to execute. Higgins explains how hard it was in practice to design, build and sell these different electric vehicles, how much else Tesla has changed about the auto business, and how electric vehicles became part of a larger energy transformation project. Several observations stand out.

First, while Tesla is sometimes perceived as a lone rebel in the automotive landscape, it has crafted some crucial partnerships that enabled it to get products to market more quickly, or at greater scale and lower cost, than would have been possible if it had tried to do everything itself. This was not easy when the company was another Silicon Valley startup with big plans; Musk’s gift was to convince powerful incumbents it was not just another Silicon Valley startup.

The Roadster was a partnership with Lotus and used the Elise chassis (the marriage was far from perfect). The early batteries were produced by Sanyo and then Panasonic, the latter joining Tesla in a partnership to create a huge battery manufacturing facility in Nevada known as the Gigafactory. Daimler Benz bought parts from Tesla and invested in the company. Tesla bought (and extensively remodelled) its automotive factory in Fremont, California, from Toyota, which used it from 1984 to 2009 in a partnership with General Motors, after GM had occupied the site from 1962.

Second, despite those partnerships, Tesla’s preparedness to build parts and products itself, to bring in-house activities that have been increasingly dispersed across global manufacturing chains, is remarkable. The book is full of examples where the company imagined it could rely on experienced suppliers to design and manufacture parts it needed but was frustrated by their quality and/or cost and eventually chose to build rather than buy. The Gigafactory is the best example: this partnership to massively scale up battery production was designed to give Tesla more control of its own destiny as it pursued ambitious targets for vehicle and solar production.

Third, Tesla’s success in producing things, especially motor cars, has mattered in the United States. In the internet age, American capitalism triumphed in Silicon Valley but collapsed in Detroit. As Tesla was battling to sell its first vehicles and finance its future during the global financial crisis, America’s car companies were going to the wall. (Tesla came close itself.) Many of the great tech successes of recent decades — Google/Alphabet, Facebook, Netflix — sell experiences, not tangible products. Apple sells devices but they are largely produced overseas, a stellar example of the globally dispersed production model. America did not make things anymore, many complained. Tesla does, and the very things that once supplied America with corporate and cultural iconography — Henry Ford, the Chrysler Building, General Motors. Now, there are Stars and Stripes decals on SpaceX’s rockets.

Fourth, Power Play shows how the Musk-led Tesla has changed more about cars than the way they are powered, often against immense opposition. Electric power itself changed more than the carbon footprint of vehicles: a watermelon-sized electric motor, fewer moving parts and a battery pack located under the passenger compartment opened up more space for occupants and luggage. Tesla also changed the way motor cars were sold — direct to customers rather than through franchised dealer networks. (Australian ex-Ford boss Jac Nasser, consulted as part of venture capitalist Kleiner Perkins’s early due diligence on Tesla, warned about direct selling, regarding his own attempt to fight the franchise dealers as one of his “biggest mistakes.”) Tesla changed the way cars are advertised (theirs are not). Along with many others, it hopes to change the way they are all driven (they won’t be).


Companies come and go around the Bay Area: Silicon Valley does not have a problem with failure. “Since organisational death, in and of itself, is not perceived as a finite expression of failure, entrepreneurs are able to entertain what would normally be considered ‘outlandish’ risks,” write Homa Bahrami and Stuart Evans in a chapter on high technology entrepreneurship in Understanding Silicon Valley. Elon Musk takes outlandish risks but he does have a problem with failure. “My mentality is that of a samurai,” he told a venture capitalist (quoted by Vance). “I would rather commit seppuku than fail.”

Musk came to Tesla already a successful tech entrepreneur, having sold the company he founded with his brother Kimbal, Zip2, to Compaq in 1999. He then received around $250 million (before taxes) from his share of PayPal when eBay bought it in 2002. Musk had been CEO at both enterprises, carrying heavy bruises from PayPal, where he was replaced by Peter Thiel in a clandestine manoeuvre undertaken while Musk was on his way to honeymoon at the Sydney Olympics. Ashlee Vance found much acknowledgement of Musk’s contribution at PayPal, where he hired a lot of the top talent, as he had done at Zip2, created a number of the company’s most successful business ideas and served as CEO during a period of rapid expansion from sixty to several hundred employees.

“I’ve just never seen anything like his ability to take pain…,” Tesla and SpaceX investor and Musk friend, Antonio Gracias, told Vance. “Most people who are under that sort of pressure fray. Their decisions go bad. Elon gets hyperrational… The harder it gets, the better he gets.” Musk says he would like to die on Mars. “Just not on impact. Ideally I’d like to go for a visit, come back for a while, and then go there when I’m like seventy or something and then just stay there.”

Business historians and management theorists are trained to look at many factors to explain the growth and evolution of enterprises, to be wary of the biographer’s temptation to personalise it all, to give too much credit to leaders, especially leaders as media-thrilling as Elon Musk. It isn’t hard to forecast a fall ahead for the Tesla and SpaceX leader, or even imagine the likely reasons. The Tesla bears and their shortselling shadows do it every day. But right now, Elon Musk has cast a spell across global business and investment. By the time you read this, it may have broken. If not, watch it closely, for it is an extraordinary thing.

One last thing: Tim Higgins says he gave Elon Musk “numerous opportunities” to respond to the material presented in the book. Musk made no specific comments, but said “Most, but not all, of what you read in this book is nonsense.” •

Atlassian shrugged https://insidestory.org.au/atlassian-shrugged/ Thu, 28 Oct 2021 23:02:08 +0000

Tech billionaire Mike Cannon-Brookes is using his wealth to shake up Australian business and politics

From a Sydney mansion with terraced lawns extending down to the harbour, one of the most influential Australians of his era, Sir Warwick Fairfax, used to take his Rolls-Royce into the head office of his newspaper empire and oversee the editorials that prime ministers and premiers read with close attention. But since the death in 2017 of his widow, Lady Mary Fairfax, “Fairwater” in Double Bay has been occupied by a tycoon of a different stripe.

Mike Cannon-Brookes, co-founder of the software house Atlassian, paid a record $100 million for Fairwater in 2018, and moved in with his young family. Atlassian’s other founder, Scott Farquhar, had already bought the neighbouring house, “Elaine,” which had been owned by Sir Warwick’s cousin and John Fairfax Ltd director Sir Vincent Fairfax, for $71 million.

Where Sir Warwick went to work chauffeur-driven in finely tailored Prince of Wales check suits, in later years favouring mutton-chop sideburns, forty-one-year-old Cannon-Brookes wears jeans, sweatshirts and a peaked canvas cap, has a straggly beard and shoulder-length hair, and takes public transport to work.

The old Fairfax building on Broadway featured different tiers of catering, ranging from an executive dining room for senior managers, editors and directors down to two greasy-spoon canteens, one for white-collar staff and the other for the inky printers. A reserved elevator took Sir Warwick and other directors to the wood-panelled top floor. Otherwise the building was so bleakly utilitarian it was once used as a location for a movie set in Stalin-era Moscow.

Some 300 metres away, Atlassian’s new $546 million headquarters, recently approved by the NSW government as part of the remake of Central railway station, will be a forty-storey concrete, steel and timber structure running on 100 per cent renewable energy. It will feature indoor and outdoor garden terraces where executives and programmers will mingle under a corporate philosophy that declares “no bulls—t” as one of its guiding principles.

The Atlassian story, now a legend, has inspired a generation of internet startups. It began when Cannon-Brookes, a banker’s son who went to the expensive Cranbrook school, not far from where he lives now, and Farquhar, a working-class boy from Sydney’s outer suburbs who won a place at the selective James Ruse high school, met during information technology and science classes at the University of NSW.

On graduating in 2002, they formed Atlassian and began work on a new program called Jira, designed to improve collaborative software development projects and sort out program bugs. They financed the startup with $10,000 drawn on maxed-out credit cards. Jira and other products designed to enhance creative cooperation found ready markets. Two decades later, Microsoft, Oracle and the other top-ten software makers use Atlassian products, as do major global companies including Shell, Toyota, Amazon, Nokia and Verizon, and universities including Harvard, Stanford, Yale and MIT.

In 2010 the partners raised US$60 million from a big US venture capital fund, and in 2015 they floated Atlassian on the Nasdaq stock exchange in New York. It now has a market capitalisation of US$108 billion, making it the 143rd-biggest corporation in the world by that measure, with 6000 employees in Australia, the United States, the Netherlands, the Philippines, Japan and India. Cannon-Brookes and Farquhar both own 22.7 per cent, making each of them worth US$24.5 billion.

The two partners haven’t just spent big on the finer things in life. They have also been lobbing boulders into the stagnant ponds of Australia’s economy and politics. Belatedly, a decade or so after the United States, tech billionaires are disrupting Australian business, and their firepower is immense.

One of the first inklings came in early 2017, a few months after South Australia suffered a statewide blackout when tens of thousands of lightning strikes and two tornadoes cut power lines. Conservative politicians and journalists pounced, blaming the then Labor state government for relying too much on wind and solar power rather than “stable” coal or gas generators.

Cannon-Brookes picked up on a claim by Tesla’s vice-president for energy products, Lyndon Rive, that his company’s big lithium batteries could fix the state’s energy network in one hundred days. On Twitter, he asked Tesla founder Elon Musk how serious he was. “If I can make the $ happen (& politics),” he asked, “can you guarantee the 100 MW in 100 days?”

“Tesla will get the system installed and working 100 days from contract signature or it is free,” Musk tweeted back. “That serious enough for you?”

Musk was derided by then federal treasurer Scott Morrison, who around the same time brandished a lump of coal in parliament to taunt Labor and the Greens. “By all means have the world’s biggest battery, have the world’s biggest banana, have the world’s biggest prawn like we have on the roadside around the country,” said the man destined to be prime minister. “But that is not solving the problem.”

The big battery began operating in November that year, some sixty days after an agreement had been signed between Tesla, French renewable firm Neoen, and the SA government. As a backup, it can power 30,000 homes for eight hours, or 60,000 homes for four. As a source of cheap power, it’s estimated to save South Australian consumers about $40 million a year.

The battery’s capacity is currently being doubled, and state governments and power companies around Australia are following its example.


“The way capital has moved much more strongly towards renewables than the Coalition has is fascinating,” says former Australian National University professor of economics Andrew Leigh, now a federal Labor MP. “You can see the tension within the Business Council of Australia and how increasingly renewables are being seen as the sensible way to go.”

Leigh believes that Mike Cannon-Brookes stands out so much because the Australian business landscape has been so static. Aside from pharmaceutical major CSL, he says, the five largest firms on the stock market are the same as they were thirty-five years ago. “You see much more dynamism and flux in the US. The US has completely turned over its top five companies in the last thirty-five years, and the dominance of tech in the share market has been well-established for a decade.”

Business is coming round on climate, though. Leigh reports having very different conversations with business leaders from those he has with his counterparts on the other side of parliament. Coalition MPs, he says, “are caught up in talking about 2050 targets when the conversation in Glasgow is going to be about 2030. They’re still running scare campaigns about electric vehicles ending the weekend. You get a sense when you are talking to businesspeople that they’re excited about what Tesla and others are doing, they’re looking at renewables, they’re aware they have to account to the market on climate emissions. It’s just a very different conversation.”

“It’s a great thing for Australia that Cannon-Brookes and Farquhar have made an absolute fortune,” says Ralph Evans, a former head of the federal government’s Austrade. “There have been venture capital successes before, but much smaller. This is a very big one and it shows it can be done. It will encourage many others.”

Evans cites other examples of emerging firms, notably the Sydney-based graphic design platform Canva, started by Melanie Perkins, Cliff Obrecht and Cameron Adams in Perth eight years ago, which now has 1500 staff and 750,000 customers worldwide, and is valued at US$40 billion.

For Evans, the Atlassian partners reflect the spirit of the San Francisco Bay area. “It’s full of people like Cannon-Brookes and Farquhar,” he says. “They are not going to put up with what they’re told to think by Murdoch or Donald Trump or anybody else like that.”

As well as taking a high-profile position on climate, the company weighs into debates on immigration, arguing for more open transfers of expertise, and IT security, questioning the push by intelligence agencies to compel communications and social media companies to give them “backdoor” access to encrypted data.

But green technology is the subject that has brought Cannon-Brookes out into advocacy — and action. Over the past week, as Morrison dragged his Coalition partners into reluctant agreement on a net zero target for 2050 while sticking with the Abbott government’s target of 28 per cent reduction by 2030, Cannon-Brookes has been spurring action outside the federal government.

He and his wife Annie pledged to invest $1 billion in green technology projects and donate a further $500 million to organisations working on the climate crisis, and promised that Atlassian itself would be a net zero operation by 2040. He says the 2050 target cited by Morrison as a historic moment was already a “done deal” for most of the advanced economies, with ambitious 2030 targets now far more important.

His latest commitments come on top of some $1 billion that Cannon-Brookes has put into green energy ventures. One is a company called Sun Cable, with offices in Singapore, Darwin and Sydney, started by partners David Griffin, Mac Thompson and Fraser Thompson. It was seed-funded by Cannon-Brookes’s private investment firm, Grok Ventures, alongside iron ore magnate Andrew Forrest’s Squadron Energy and others.

On 20 October, as the Nationals caucus was still chewing the grass stalks on net zero, Sun Cable announced that a raft of important global firms, including engineering giants Bechtel, Hatch and SMEC, were joining its $30 billion project to take solar power from northern Australia to Singapore.

The project involves some 125 square kilometres of solar arrays in the Simpson Desert, connected to Darwin by an 800 kilometre cable, and then undersea to Singapore by a 4200 kilometre high-voltage direct current cable. The project is designed to supply 15 per cent of the island republic’s electricity and cut emissions by enough for it to reach its 2030 abatement target. Construction is planned to start in 2023, with completion in 2028, when it is expected to generate about $2 billion a year in earnings for Australia.


It’s a big test of the cable transmission technology. The most ambitious example so far is an 800 kilometre high-voltage direct current cable between Norway and Britain, with shorter ones from offshore windfarms to European centres. But a solar-cable project over a similarly ambitious distance is proposed to link solar arrays in Morocco with Britain.

Iain MacGill, a UNSW associate professor of electrical engineering who has collaborated with Sun Cable, says the project is “technically leading edge” in its combination of terminal configuration, distance, power transfer capacity, and water depth. “There are other HVDC links that collectively do most of these things (except that distance), but not all together,” he says.

“The commercial challenges and risks are likely the most important in terms of the project being implemented,” MacGill goes on. “However, the commercial opportunity is also extremely attractive given Singapore’s current reliance on gas generation, limited local renewable energy options, and plans to increase their use of renewables and reduce emissions.”

Another big renewables scheme, the solar-and-wind Asian Renewable Energy Hub proposed for northwest Australia, has switched from HVDC energy exports to green hydrogen and now green ammonia. Ralph Evans notes that Singapore is already building floating solar arrays in its own backwaters, and could find larger floating arrays in nearby Indonesian waters a cheaper proposition than the distant Australian source.

Somewhat ironically, Scott Morrison has found himself part of the marketing for Sun Cable, pushing its merits to his Singapore counterpart Lee Hsien Loong during a stopover on his way to the G7 summit earlier this year. Australia’s ambassador in Jakarta, Penny Williams, also worked to gain the Indonesian government’s approval for the undersea cabling, announced last month, with the project pledging $2 billion in technology transfers to Indonesian institutions.

After these latest announcements, Cannon-Brookes said Sun Cable could be just the start of renewable energy exports, and Australia should be thinking of a “500 per cent” renewables target.

“Every step forward puts the naysayers further in the rear-view mirror,” he tweeted. •

The publication of this article was supported by a grant from the Judith Neilson Institute for Journalism and Ideas.

Feeding the machine https://insidestory.org.au/feeding-the-machine/ Mon, 11 Oct 2021 01:42:09 +0000

In what ways did the typewriter affect how — and how much — writers wrote?

Canberra’s Museum of Australian Democracy has a room full of typewriters with an invitation to visitors to write a letter. Children happily queue for the opportunity to try out this novelty (my granddaughter even asked for one for Christmas), which is disconcerting for someone who learnt to touch-type to “Buttons and Bows” at an evening class and bashed out reviews on a correctible Brother right up to the end of the 1980s.

But the typewriter’s appeal for children isn’t surprising. The journey from fingers to printed text is direct, the type appearing on paper before your eyes as you compose. When it works smoothly, the writer can feel in full control, from idea to tangible text. There’s no waiting for a printer to finish the task.

In his new book, The Typewriter Century, Sydney historian Martyn Lyons reckons that this machine shaped how we write from the 1880s up to the mid 1980s, when the word processor established its superior claims. He marks this neat century with photographs of a Remington No. 1, the model bought by Mark Twain out of curiosity in 1875, and of Len Deighton in his London flat, hemmed in by a massive IBM word processor, in 1968. Twain “wrote” Life on the Mississippi by dictating to a typist, and Deighton called in the services of an operator for the IBM.

Lyons begins with a fascinating overview of the typewriter’s development, detailing many of the technical difficulties overcome along the way. Of the various people with claims to be its inventor, he gives most credit to Christopher Sholes, whose ideas were incorporated into that Remington No. 1, which came encased in a wooden cabinet with a foot treadle for returning the carriage.

Lyons soon moves from the typewriter’s technical development to its role in changing how fiction, especially popular fiction, was created in the early twentieth century. While literary writers like Twain and Henry James quickly adopted the typewriter as a way of easing the process to publication — dictating to stenographers who transformed their work into legible copy for publishers — the typewriter also made possible a commercialised form of writing, with a new generation of writers learning to type as part of their work in offices or newspapers. Some successful popular writers even replicated the office hierarchy, with several “typewriter girls” at hand to process their work. The task quickly became gendered.

Along with a rising mass literacy, the typewriter made possible the “pulp fiction” phenomenon of the 1920s and 1930s, when writers like Georges Simenon, Erle Stanley Gardner and the Australian Gordon Bleeck could bash out a new novel in less than a week; the books sold for a few pence on the railway stands. Some, like Simenon, were so prolific that they wrote under several pseudonyms to avoid flooding their own markets. Gardner referred to himself as the Fiction Factory. These writers made money by the sheer quantity of what they produced, not its quality, though both Simenon and Gardner longed for some literary recognition. André Gide thought Simenon a “great novelist” but his literary reputation was largely posthumous.

When he examines individual relationships with the typewriter, Lyons finds a range of responses. Some authors were worried by the “distancing” effect they felt when composing by machine. Rather than the intimate, physical experience of pen on paper, the typewriter transformed thought into impersonal, standardised print. Some authors who dictated their words were surprised by the impassive responses of stenographers trained to concentrate on the words rather than their meaning. James, for example, was disappointed when his most frightening passages in The Turn of the Screw made no impression on the demeanour of his typist. Others felt that the presence of the typist disrupted the privacy of composition, making them self-conscious about their creativity and alienated from their own work.

Many, of course, quickly went back to handwriting their first draft, creating a further distancing by handing copy to a typist. John le Carré replicated the elaborate office procedure of the civil service, where he had trained, by writing each draft in different coloured ink before passing it to his wife to type on different coloured papers. He then revised the typed text by hand in the appropriate coloured pen before handing it back to his wife for a further complete draft.

This process could continue for thirteen drafts, as it did for The Tailor of Panama, and must have slowed the writing down rather than hastening it. Le Carré may have resisted acquiring a word processor, but his wife no doubt appreciated its arrival.

Writers trained in typewriter skills appear to have been more likely to develop what Lyons calls a “romantic” relationship to the typewriter, seeing it as an extension of their bodies and even a source of inspiration. The film cliché of the writer ripping paper from the typewriter, scrunching it up and throwing it on the floor appears to have no place in real life. Jack Kerouac, of course, is the archetypal romantic typist, but others, including Enid Blyton, felt freed by the responsive movement of the typewriter.

The Typewriter Century, with its amusing stories about the practices of many writers, is based on wide archival research. But it can hardly be exhaustive given the writing multitudes who have typed their way through the century. As the book progresses Lyons concentrates in detail on the typewriting careers of a handful of popular writers who could not have been so prolific without the machine: Simenon, Gardner, Agatha Christie, Richmal Crompton and Enid Blyton. This allows him to give some sense of the processes and self-mythologies of the writers. Simenon promoted himself as a speed typist, and Gardner became successful enough to supervise banks of female typists to produce his work. Christie, Crompton and Blyton professed to fit their writing around domestic routines — Christie is photographed sitting in a dining chair while she types on a drop-sided dining table.

All of these writers knew they were addressing distinct markets and the typewriter was the essential tool for them to meet their readers’ appetites for more of the same. The effect of the machine on literary writers raises more complex considerations. Lyons speculates that Ernest Hemingway’s newspaper experience, including the necessary typewriter, influenced his notoriously succinct and direct writing style. Yet there are examples of typewriter prolixity — perhaps those long and exuberant novels by Christina Stead and Miles Franklin were encouraged by their familiarity with the typewriter as office workers. The shift to dictation, too, must surely have influenced the writing style of James’s masterly later novels, or Twain’s later books. As Lyons concludes, “There is no single answer to the question, what was the impact of the typewriter?”

The book does invite readers to consider how their own favourite writers adapted to the typewriter. An obvious Australian example would be Joseph Furphy, the foundry worker who bought a typewriter in 1897 and revised the manuscript of Such Is Life himself. Scholars are often excited by handwritten manuscripts, as if they offer immediate contact with a revered writer; despite its visual anonymity, though, the typescript may be just as direct a product of a writer’s thoughts.

Readers of The Typewriter Century are likely to reflect on their own writing practices, too. The computer turned writing into a rather mechanical function called “word processing,” but its advantages as an editing tool were obvious and quickly embraced. It may be that it has encouraged different kinds of creative thinking and Lyons cites several writers, such as Cormac McCarthy, who resist it. The typewriter still has its uses, even if it is simply to avoid the distraction of the internet, as Zadie Smith says.

My ten-year-old granddaughter wrote her first film script on the second-hand Olivetti she was given for Christmas, but in the long run she found the keys too hard to press and the ribbon change too difficult. The laptop looks like winning out. •

Ghosts in the machine https://insidestory.org.au/ghosts-in-the-machine/ Thu, 05 Aug 2021 03:47:16 +0000

A computer scientist takes on artificial-intelligence boosters. But does he dig deep enough?

It seems like another era now, but only a few years ago many people thought that one of the biggest threats to humankind was takeover by superintelligent artificial intelligence, or AI. Elon Musk repeatedly expressed fears that AI would make us redundant (he still does). Stephen Hawking predicted AI would eventually bring about the end of the human race. The Bank of England predicted that nearly half of all jobs in Britain could be replaced by robots capable of “thinking, as well as doing.”

Computer scientist and entrepreneur Erik J. Larson disagreed. Back in 2015, as fears of superintelligent AI reached fever pitch, he argued in an essay for the Atlantic that the hype was overblown and could ultimately do real harm. Rather than recent advances in machine learning portending the arrival of intelligent computing power, warned Larson, overconfidence in the intelligence of machines simply diminishes our collective sense of the value of our own, human intelligence.

Now Larson has expanded his arguments into a book, The Myth of Artificial Intelligence, explaining why superintelligent AI — capable of eclipsing the full range of capabilities of the human mind, however those capabilities are defined — is still decades away, if not entirely out of reach. In a detailed, wide-ranging excavation of AI’s history and culture, and the limitations of current machine learning, he argues that there’s basically “no good scientific reason” to believe the myth.

Into this elegant, engaging read Larson weaves references from Greek mythology, art, philosophy and literature (Milan Kundera, Mary Shelley, Edgar Allan Poe and Nietzsche all make appearances) alongside some of the central histories and mythologies of AI itself: the 1956 Dartmouth Summer Research Project, at which the term “artificial intelligence” was coined; Alan Turing’s imitation game, which made a computer’s capacity to hold meaningful, indistinguishable conversations with humans a benchmark in the quest to achieve general intelligence; and the development of IBM’s Watson, Google DeepMind’s AlphaGo, Ex Machina and the Singularity. Men who have promoted the AI myth and men who have questioned it over the past century are given full voice.

Larson has a background in natural language processing — a branch of computer science concerned with enabling machines to interpret text and speech — and so the book focuses on the relationships between general machine intelligence and the complexities of human language. The chapters on inference and language, methodically breaking down purported breakthroughs in machine translation and communication, are among The Myth of Artificial Intelligence’s strongest. Larson walks us through why phrases like “the box is in the pen,” which MIT researcher Yehoshua Bar-Hillel flagged in the 1960s as the kind of sentence to confound machine translation, still stymie Google Translate today. Translated into French, the “pen” in question becomes a stylo — a writing instrument — despite the fact that the sentence makes clear it’s smaller than the box. Humans’ lived understanding of the world allows us to more readily place words in context and make meaning of them, says Larson. A box is bigger than a biro, and so the “pen” must be an enclos — another, larger, enclosure.

Larson focuses on language understanding (rather than, say, robotics) because it so aptly illustrates AI’s “narrowness” problem: that a system trained to interpret and translate language in one context fails miserably when that context suddenly changes. He argues that there can be no leap from “narrow” to “general” machine intelligence using any current (or retired) computing methods, and the sooner people stop buying into the hype the better.

General intelligence would only be possible, says Larson, were machines able to master the art of “abduction” (not the kidnapping kind): a term he uses to encompass human traits as varied as common sense, guesswork and intuition. Abduction would allow machines to move from observations of some fact or situation to a more generalisable rule or hypothesis that could explain it: a kind of detective work or guesswork, akin to that of Sherlock Holmes. We humans create new and interesting hypotheses all the time, and then set about establishing for ourselves which ones are valid.

Abduction, sometimes called abductive inference or abductive reasoning, is a focus of a slice of the AI community concerned with developing — or critiquing the lack of — sense-making or intuiting methods for intelligent machines. Every machine operating today, whether promoted by its creators as possessing intelligence or not, relies on deductive or inductive methods (often both): ingesting data about the past to make narrower and often untestable hypotheses about a situation presented to them.
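
As a toy gloss on the distinction (mine, not Larson’s, and with invented probabilities), abduction runs backwards from an observed effect to the hypothesis that would best explain it:

```python
# Inference to the best explanation in miniature: score each candidate
# hypothesis by prior * likelihood and return the strongest explanation.

hypotheses = {
    # hypothesis: (prior probability, P(wet grass | hypothesis))
    "it rained overnight": (0.30, 0.90),
    "the sprinkler ran":   (0.50, 0.80),
    "a water main burst":  (0.01, 0.95),
}

def best_explanation(candidates: dict) -> str:
    """Pick the hypothesis with the highest prior * likelihood score."""
    return max(candidates, key=lambda h: candidates[h][0] * candidates[h][1])

print(best_explanation(hypotheses))  # -> "the sprinkler ran"
```

Larson’s point, of course, is that no fixed scoring procedure of this kind captures open-ended human guesswork: we also invent the candidate hypotheses in the first place, and that is the step machines lack.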

If Larson is pondering more explicitly philosophical questions about whether reason and common sense are truly the heart of human intelligence, or whether language is the high benchmark against which to measure intelligence, he doesn’t explore them here. He is primarily concerned with the what of AI (he describes the kind of intelligence AI practitioners are aiming for) and how this might be achieved (he argues it won’t with current methods, but might with greater focus on methods for abduction). Why is a whole other, mind-bending question that perhaps throws the whole endeavour into question.

While Larson does emphasise the messiness of the reality that machines struggle to deal with, he leaves out some of the messiest issues facing his own sub-field of natural language processing. His chapter on “Machine Learning and Big Data,” for example, makes no mention of how automated translation tends to reproduce societal biases learned from the data it is trained with.

Google Translate’s mistranslation of “she is a doctor,” for example, arises in the same way as the pen mistranslation. In both cases, the system’s translation is based on statistical trends it has learned from enormous corpuses of text, without any real understanding of the context within which those words are presented. “She” becomes a “he” because the system has learned that male doctors occur more frequently in text than female doctors. The “pen” becomes a stylo not simply because pen is a homonym and linguistically tricky but also because the system is reaching for the most statistically likely translation of the word. The effect in both cases is an error, the challenge is divining context, and the fix in both cases will involve technical adjustments.
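
The pronoun error falls out of the same context-blind frequency lookup sketched earlier (again with invented counts standing in for real corpus statistics): the source sentence’s own gender marking is simply outvoted by the training data.

```python
# Invented co-occurrence counts standing in for a real training corpus.
pronoun_counts = {"doctor": {"he": 8200, "she": 1900}}

def pick_pronoun(noun: str) -> str:
    """Return the pronoun most often paired with the noun in training text."""
    counts = pronoun_counts[noun]
    return max(counts, key=counts.get)

# The source text says "she is a doctor"; the frequency model overrides it.
print(pick_pronoun("doctor"))  # -> "he"
```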

But what of other translation errors? At the conclusion of The Myth of Artificial Intelligence Larson makes a brief further reference to “problematic bias,” citing the notorious mislabelling of dark-skinned people as gorillas in Google Photos as an example, characterising it as one of the issues that has “become trendy” for AI thinkers to worry about. (Google “fixed” the error by blocking the image category “gorilla” in its Photos app.) This is an all-too-brief reference to a theme that is inseparable from the book’s central thesis.

The Myth of Artificial Intelligence convinces the reader that the creation of intelligent AI systems is being frustrated by the fact that the methods we use to build them don’t sufficiently account for messy complexity. Without equal attention being given to complexities introduced by humans into large language datasets, or to the decisions we make training and tweaking systems based on these large datasets, the issue becomes almost entirely one of having the right tools. Left out of this analysis is the question of whether we have the right materials to work with (in the data we feed into AI systems), or whether we even possess the skills to develop these new tools, or manage their deployment in the world.

Larson’s omission of any real discussion of social biases being absorbed by and enacted by machines is odd because The Myth of Artificial Intelligence is dedicated to persuading readers that current machine learning methods can’t achieve general intelligence, and uses natural language processing extensively and authoritatively to illustrate its point. It would only help his case to acknowledge that even the most powerful language models today produce racist and inaccurate text, or that the enormous corpuses of text they are trained with are laden with their own, enduring errors. Yes, these are challenges of human origin. But they still create machines producing errors, machines not performing as they’re supposed to, machines producing unintended harmful effects. And these, like it or not, are engineering problems that engineers must grapple with.


If indeed Larson is right — if we are reaching a dead end in what’s possible with current AI methods — perhaps the way forward isn’t simply to look at new methods but to choose a different path. Beyond reasoning and common sense, other ways of thinking about knowledge and intelligence — more relational, embedded ways of perceiving the world — might be more relevant to how we think about AI applications for the future. We could draw on more than just language as the foundation of intelligence by acknowledging the importance of other senses, like touch and smell and taste, in interpreting and learning from context. How might these approaches inspire revolutionary AI systems?

At one point in The Myth of Artificial Intelligence, Larson uses Czech playwright Karel Čapek’s 1921 play, R.U.R., to illustrate how science fiction’s images of robots hell-bent on destroying the human race have shaped our fears and expectations of superintelligent machines. In Larson’s retelling, these robots, engineered for optimal efficiency and supposedly without feelings or morals, get “disgruntled somehow anyway,” sparking a revolution that wipes out nearly the entire human race. (Only one man, the engineer of the robots, remains.)

It’s true that the robots in R.U.R. get “disgruntled.” But their creators never intended them to be wholly mindless automatons. In Čapek’s imagination they were made of something like flesh and blood, indistinguishable from humans. To reduce factory accidents, they were altered so as to feel pain; to learn about the world, they were shown the factory library. As the robots rebel in the play’s penultimate act, their human creators ponder how their own actions had led to the uprising. Did engineering the robots to feel like humans lead them to become aware of the injustice of their position? Did the designers focus too much on producing as many robots as possible, failing to think about the consequences of scale? Should they have dared to create technology like this at all?

Separating the human too much from the machine can make it hard to properly interrogate the myth. The Myth of Artificial Intelligence is a clever, engaging book that looks closely at the machines we fear could one day destroy us all, and at how our current tools won’t create this future. It just doesn’t dwell deeply enough on why we, as their creators, might think superintelligent machines are possible, or how our actions might contribute to the impact our creations have on the world. •

The price of privacy https://insidestory.org.au/the-price-of-privacy/ Fri, 30 Jul 2021 04:38:47 +0000 https://staging.insidestory.org.au/?p=67818

A case that began in the Irish courts is shaping Australia’s efforts to update its 1980s privacy laws

If you’re old enough, try to remember life in 1988. Kylie Minogue’s “I Should Be So Lucky” is in the charts; Crocodile Dundee II is on the big screen; tall ships are in Sydney Harbour to mark the bicentenary of European settlement; the Queen is opening the new Parliament House in Canberra; and Saturday newspapers are still fat with classified ads.

Now imagine checking your pockets. Where’s your mobile phone? You don’t have one. There’s no connected personal computer on your desk, no social media hoovering up data about your spending habits, and not much need for businesses to import or export elaborate datasets. Facial recognition technology is still the stuff of science fiction and your car isn’t sending valuable information back to its manufacturer.

Nineteen eighty-eight was the year Australia’s privacy legislation was enacted with the worthy goal of protecting your personal information. It seemed a realistic ambition back then, because that information was probably stored in a filing cabinet rather than on the server of a tech giant in the Santa Clara Valley. Getting your name struck off a mailing list or making sure your medical records didn’t fall into the wrong hands was still within the powers of domestic legislation.

Even the right to be forgotten needed no articulation: if you kept out of the news for long enough, your past misdemeanours would fade into oblivion — a secret between you and the rare person spooling through old newspapers on dusty microfiche at the local library.

What we didn’t know at the time was that 1988’s Privacy Act was a snapshot of a society on the cusp of a technological revolution. Think of it as one of those moments captured in a photo taken just minutes before a natural calamity — the tourists are all smiling at the camera, blissfully unaware of the avalanche that’s about to engulf them.

Since then, Australian legislators have done their best to keep pace with technological change. The most recent and perhaps most significant amendment to the 1988 legislation created the 2018 Notifiable Data Breaches scheme, which details what must happen if personal data hosted by a company goes missing or is hacked.

But the technological advances of the past thirty years are so great that mere amendments will no longer suffice. The Privacy Act doesn’t need tweaking; it needs a root-and-branch rethink. And it’s not just a question of individual privacy; the challenge we’re facing is how to apply economy-wide privacy protections that will allow Australian companies to safeguard data without stopping them from competing globally.


Privacy mightn’t have been the main focus of the Australian Competition and Consumer Commission’s 2019 digital platforms report, but it highlighted what was already clear to informed observers: the Privacy Act was out of date. The wheels of government ground slowly for another year or so before attorney-general Christian Porter launched a review of the legislation. It would focus, according to his no-nonsense press release, on “technical data and other online identifiers.”

Oddly, given Australia was lagging behind the rest of the Western world, the announcement betrayed no sense of urgency. The European Union had adopted its General Data Protection Regulation, or GDPR, two years earlier, after years of debate and horsetrading. California’s Consumer Privacy Act, covering Silicon Valley, was being finalised. Legislators in New Zealand had already put the final touches on their revamp of the country’s 1993 Privacy Act. South Korea’s Personal Information Protection Act was by then one of the sharpest pieces of privacy legislation in the world. To use a Morrisonian euphemism, Australia’s policymakers obviously didn’t see protecting privacy as a race.

Information commissioner Angelene Falk at Senate estimates in March this year. Sitthixay Ditthavong/Canberra Times

Still, the review’s riding instructions did focus on the key issues, starting with the relationship between any future legislation and the West’s toughest privacy regime, the GDPR, which guards access to the second-largest consumer market. Should Australia’s new rules be immediately compatible with the GDPR — thus granting Australian digital businesses the protections they need to do business in the bloc? Or should Canberra apply for what’s referred to as adequacy status with the GDPR, once the new legislation is in place (as South Korea has done)? Or, indeed, should Australia go its own way and try to lock in data-transfer agreements with other jurisdictions, including the American states following California’s lead, or post-Brexit Britain, which is facing its own struggles dealing with the GDPR’s stringencies?

The attorney-general also appeared to acknowledge that any new system would depend on tough enforcement — which would place the Office of the Australian Information Commissioner, Australia’s underfunded and overworked privacy watchdog, at its centre. Will the agency be given the resources it needs to ensure that privacy safeguards are adhered to? How will the low-profile information commissioner, Angelene Falk, manage the challenge parliament sets her and her office?

There’s nothing academic about these questions. If the European experience tells us anything, it’s that unenforced privacy laws are more or less useless. In fact, you could argue Australia is better off sticking to its pre-digital, Hawke-era legislation than drafting rules that don’t beef up a regulator that today oversees both the 1988 Privacy Act and the 1982 Freedom of Information Act. The stakes are unusually high.


Europe’s enforcement gap is best illustrated by the legendary story of Max Schrems, who took on Facebook and won. The Big Tech giant might have emerged as the villain of the piece, but the public utterances of the Austrian privacy activist, whose journey culminated in the legal framework for transatlantic data transfers being struck down, portray Europe’s privacy regulators as part of the problem.

Schrems had been a student on exchange at a university in California’s Silicon Valley. In one class, a Facebook lawyer revealed that the company saw the European Union’s pre-GDPR privacy rules as something of a joke. The company was exporting European data without any ethical soul-searching or legal concern.

Although Schrems wasn’t an avid Facebook user — he says he had typically logged on once a week over three years — he decided to request all the information the company had accumulated about him. Because Facebook had, and still has, EU headquarters in Dublin, he was able to use EU right-of-access laws to obtain the data. And he got it — all 1200 pages’ worth. It even included information that Facebook had described as “deleted.” He uploaded the information to his website and soon attracted media attention from across Europe.

The campaign eventually made its way to the European Court of Justice, with Schrems arguing that the EU’s “safe harbour” arrangement with the United States — later replaced by the EU–US Privacy Shield — didn’t protect EU users. His claims piggybacked on revelations by US National Security Agency whistleblower Edward Snowden, which pointed to a network of global surveillance programs run by the NSA and the Central Intelligence Agency.

The European Union had allowed for the free flow of data between the EU and the US because it assumed that both sides had equivalent standards of data protection — what’s now called equivalency. In two decisions since 2015, prompted by Schrems, the European Court of Justice rejected that premise, putting the future of data exchanges across the Atlantic under a cloud. It’s a cautionary tale for any jurisdiction — including Australia — facing the prospect of interacting with the GDPR. Like it or not, the EU’s privacy rules have set the global standard for privacy legislation.

And this is where the role of national data-protection agencies comes into play. To get to the European courts, Schrems had to pass through Ireland’s privacy regulator, the Data Protection Commission. The reason is simple: Ireland’s generous tax arrangements are so appealing to Big Tech that many of them — Google, Twitter, LinkedIn, Amazon, PayPal, Airbnb, Uber and, yes, Facebook — have based their European headquarters in Dublin’s Silicon Docks. Anyone lodging a complaint against these companies must therefore turn to the Data Protection Commission.

In Schrems’s case, it didn’t go well. The Irish regulator dismissed his claims, prompting him to take action in Ireland’s courts. The case shone a spotlight on the regulator’s ability to manage the massive workload created by the tech giants’ Dublin addresses. Earlier this year, the Irish Council for Civil Liberties found that the regulator decided just four of 196 cases it had been required to take on — suggesting it had become the bottleneck of EU privacy enforcement. That failure, said the council, exposes 448 million people across the European Union to “electoral manipulation and predatory profiling.”

Schrems’s vicissitudes showed that an enforcer that can’t or doesn’t do its job fosters an environment in which the misuse of personal data goes unchallenged.


That Australia’s information commissioner is overworked and underfunded is now widely accepted. Her office received $25.5 million for the 2021–22 financial year, up marginally from last year’s $23.2 million. This increase included funds for a new freedom-of-information commissioner, a slight increase in staffing levels, and an earmarked amount for participating in Australia’s growing data-portability initiative, the Consumer Data Right.

This funding doesn’t reflect the size of the challenge — and the information commissioner knows it. Documents released under FOI earlier this year reveal a deficit of $121,000 last financial year as the watchdog struggled with managing the Notifiable Data Breaches scheme and overseeing the ill-fated COVIDSafe app. The documents noted the agency’s “static resourcing and staffing levels” and went on to say that the information commissioner had experienced a “steady increase in the number of complaints received,” partly as a result of the pandemic.

Taking on Big Tech requires time, strong international contacts and a high level of expertise — all of which cost money. The information commissioner is already fighting Facebook in the Federal Court of Australia over the Cambridge Analytica data breach — a lawsuit almost identical to one that came unstuck in Canada in February because of a lack of evidence. Other investigations have involved time-consuming and resource-intensive international probes. Last week’s determination that Uber had failed to protect its clients and its drivers following a 2016 cyber attack saw the commissioner delve into what her office described as “significant jurisdictional matters and complex corporate arrangements and information flows.”

Once Australia’s new privacy legislation comes into operation, the resourcing of the commissioner’s office — or the agency that will replace it — is likely to be the key to success. Max Schrems managed to overturn the Privacy Shield by claiming his individual privacy rights in both Irish and EU courts, but it’s unrealistic to expect an individual to take on the burden of challenging Big Tech on privacy.

Given the pressure she is under, Angelene Falk may have good reason to keep a low media profile. With a background in law, she came to the job in 2018 after serving as deputy commissioner for two years. Her public appearances are usually limited to comments in Senate estimates, where she’s quizzed by parliamentarians who are often ill at ease with the principles of privacy and data protection and have little understanding of global policy trends.

Compare this with New Zealand’s privacy commissioner, John Edwards, who rarely misses a chance to publicly castigate Big Tech and wasn’t afraid to throw his weight around as the country approached its ambitious reimagining of the 1993 Privacy Act. Under the reforms, Edwards has the power to issue compliance notices, can make binding decisions on requests for access, and will oversee legislation that contains criminal offences for businesses that misuse personal data. Reportedly being considered for the top privacy-enforcement role in Britain, Edwards has become the public face of data protection in New Zealand — an outreach and educational role that has no equivalent in Australia.

The impasse over the EU–US Privacy Shield isn’t likely to be resolved soon. At the heart of the European judges’ objections is the fear that data exported to the US could fall into the hands of law-enforcement agencies. This is a tricky problem to manage — the US has no federal data-protection legislation or enforcer. With little appetite for privacy policy in Washington, the states have been left to take the lead.

More importantly, though, the clash with the EU over privacy has created a political problem for the Biden administration. The White House doesn’t want to be seen as soft on law and order — particularly when it comes to the crunching of data and the gathering of personal information that could, say, prevent terrorist attacks. Significant concessions to the Europeans could leave Biden politically exposed.

Any new Australian privacy legislation will face the same political predicament. Equivalency with the GDPR simply can’t be ignored — the European Union is too significant a market for Australia to deal itself out of the game. But Australian policymakers will also be mindful of the European Court of Justice’s low tolerance of loose regulation in countries gaining access to the personal data of EU citizens.

One cause for concern is Australia’s controversial 2018 Telecommunications and Other Legislation Amendment (Assistance and Access) Act. That legislation is what you’d expect from a home affairs minister — Peter Dutton at the time — unconstrained by worries about the economic impact of Australia’s data-protection reputation. The act, which includes no judicial oversight, gives the Australian government the right to demand a “back door” into encrypted communications — including those sent via popular apps including WhatsApp, Signal and Telegram. It was designed to help federal police and intelligence agencies track suspected criminals and terrorists.

Australia’s tech community opposed the legislation, and broadly still does, arguing that it undermines the country’s data-protection credibility. In a parliamentary hearing last year, the head of government affairs for the hugely successful Sydney-based software company Atlassian, Patrick Zhang, said that international tech companies were now afraid of using Australian products because of the possibility of receiving access orders from Australian law-enforcement agencies. This fear was particularly acute in Europe, Zhang suggested, where worries about tripping over the GDPR’s data-protection provisions mean that businesses may steer clear of Australian products. Those fears might even spill over into third countries that don’t want to compromise their deals with the European Union.

The passing of that legislation suggests that Australia’s political priorities may ultimately trump the privacy concerns of the local tech industry. While a survey by the information commissioner revealed that Australians are keenly aware of the need to protect privacy, that attitude doesn’t translate into a broader understanding of how data-protection measures could affect Australian technology companies’ ability to compete.

Part of the problem could be the lack of a strong public voice promoting privacy in Australia. But decisions about new laws will ultimately come down to politics. Not everyone will understand the complexities of data-transfer rules, but you don’t need an information campaign to tell people that strong laws are needed to fight terrorism, international drug cartels and paedophile networks. If that means compromising WhatsApp’s encryption and ruling Australia out of international data transfers — so be it. If securing Australia’s digital sovereignty puts a few tech entrepreneurs’ noses out of joint, that’s a price that politicians of all persuasions may be willing to pay. •

The publication of this article was supported by a grant from the Judith Neilson Institute for Journalism and Ideas.

Winners take all https://insidestory.org.au/winners-take-all/ Tue, 13 Jul 2021 06:26:11 +0000 https://staging.insidestory.org.au/?p=67591

Rules or no rules? The Tech Giants have made some of their own.

The Tech Boom’s winners are writing corporate cookbooks. These two offer recipes with some similarities and many differences.

Amazon and Netflix are giants of the online economy. Both launched in the 1990s and survived the Tech Wreck of the early 2000s. Amazon is now an uber-giant, one of five US firms with a market capitalisation of more than US$1 trillion at the time of writing; Apple, Microsoft, Alphabet/Google and Facebook are the others. At around US$240 billion, Netflix is much smaller, though still among the largest twenty-five US companies.

For the authors of these books, innovation and invention are the markers of the two enterprises and the era they inhabit. “In the industrial era, the goal was to minimise variation,” says Netflix’s Reed Hastings. Today, in the information age, in creative companies, “maximising variation is more essential.” “Creativity, speed and agility” rather than “error prevention and replicability” are the goals for “many companies and… many teams.”

Yet the authors also attribute the two companies’ success to the steadiness and clarity of their central missions: Amazon’s belief that the long-term interests of shareholders coincide with the interests of customers, its obsession with customers rather than competitors, its “willingness to think long-term,” its “eagerness to invent” and its pride in “operational excellence.” Netflix too is “highly aligned,” concentrated on what Hastings calls “Our North Star,” “building a company that is able to adapt quickly as unforeseen opportunities arise and business conditions change.”

Pursuing these steady visions over more than two decades, Amazon and Netflix have radically changed what they do. An online bookstore became “The Everything Store,” as journalist Brad Stone titled his first book about Amazon. (His second, Amazon Unbound, was published earlier this year.) It started manufacturing and selling its own products, hosting other sellers, and selling services it built for itself to external parties. Amazon Web Services is now a behemoth in its own right. Netflix began as a DVD rental company that licensed other companies’ movies and TV shows, before moving into online streaming and producing its own Netflix Originals.

These were not mere pivots: they were deep transformations, fundamentally changing the staff skills, the organisational competencies and the business partners needed for success. At the same time, both enterprises moved beyond the United States, seeking online customers, dealing with regulatory and political challenges, sometimes establishing distant operations and employing local people. Globalising the businesses massively increased their size and complexity. It also expanded the potential audience for books like these that try to explain the secret sauces.


Both books bring some outside perspective to what are essentially insider accounts. Colin Bryar and Bill Carr had long careers at Amazon before co-founding a business where they “coach executives at both large and early-stage companies on how to implement the management practices developed at Amazon.” They are writing, in part, for potential clients.

No Rules Rules is about the company Reed Hastings co-founded, and is co-written by him and Erin Meyer — actually, it is a kind of dialogue constructed by alternating, individually written sections. Meyer is an “American living in Paris” who has worked with Netflix and conducted a lot of interviews with staff for this book. A professor at INSEAD Business School’s Fontainebleau campus, she explores “how the world’s most successful managers navigate the complexities of cultural differences in a global environment.” Hastings, one suspects, wants this book to help attract talented staff to the fluid, personally rewarding organisation it portrays.

“Working Backwards” is the title of Bryar and Carr’s business as well as their book. It refers to the Amazon creed: don’t create a product and then try to sell it; start with the customer experience then work backwards to the design and marketing. No Rules Rules is a stretch, for Hastings and Meyer’s book is as much about the rules Netflix does have, and how they are enforced, as about those it has let go.

The pictures that eventually emerge are the obverse of the ones painted by two of the companies’ best-known incidents, both involving PowerPoint. Netflix’s organisational culture began attracting attention because of the massive, 127-slide Netflix Culture Deck, first shared outside the organisation in 2009. Amazon memorably banned PowerPoint for internal presentations in 2004 after CEO Jeff Bezos and co-author Colin Bryar read Edward Tufte’s essay condemning the “cognitive style” of that ubiquitous presentation software. Complex, interconnected discussions were not well served by the relentless linearity of bullet points. In PowerPoint’s place came the “six-pager,” a short narrative format still used to “describe, review or propose just about any type of idea, process or business” at Amazon. Everyone has to write them and read them.

Working Backwards, though, exalts the universal application of procedures. No Rules Rules celebrates their elimination. The approaches have more in common than it seems, but they are undoubtedly distinctive mindsets to bring to the task of innovation and invention.

Exalting procedure, Bryar and Carr explain the “Bar Raiser” hiring process, “single-threaded teams” as an organising principle, and the PR/FAQ (Press Release/Frequently Asked Questions) process that Amazon uses for new product development. The PR/FAQ embodies the idea of “working backwards.” What would you say when the time came to launch this new product? What questions would customers ask and how would you answer them?

They also set out Amazon’s fourteen leadership principles — among them: “Leaders are owners… Leaders are right a lot… Leaders are never done learning… Leaders listen attentively, speak candidly and treat others respectfully… Leaders do not compromise for the sake of social cohesion… Leaders rise to the occasion and never settle.”


The most striking apparent contrast with Netflix comes from Amazon’s Deep Dive Leadership Principle: “Leaders operate at all levels, stay connected to the details, audit frequently, and are sceptical when metrics and anecdotes differ. No task is beneath them.” Bryar and Carr add: “At many companies, when the senior leadership meets, they tend to focus more on big-picture, high-level strategy issues than on execution. At Amazon, it’s the opposite. Amazon leaders toil over the execution details.”

In this, founding CEO Jeff Bezos looms large. Bezos recently stepped down as CEO, his place taken by former head of Amazon Web Services, Andrew Jassy, though he remains executive chairman and Amazon’s largest shareholder. He was clearly a “nanomanager,” Hastings and Meyer’s term for the mythological CEOs who are said to be “so involved in the details of the business that their product or service becomes amazing.” Working Backwards is dense with references to Jeff: Jeff’s ideas, Jeff’s comments at meetings, Jeff’s early-morning emails after “walking the store,” Jeff’s unhappiness about this or delight about that.

Celebrating the elimination of rules, Hastings says, “We don’t emulate these top-down models, because we believe we are fastest and most innovative when employees throughout the company make and own decisions.” He is proud of the comment Facebook’s Sheryl Sandberg made after shadowing him for a day: “You didn’t make one decision!” The idea is “to lead by context, not control.” The image of the organisational structure is a tree rather than a pyramid. The CEO sits “all the way down at the roots”; the decision-makers are “informed captains up at the top branches.”

An example: CEO Hastings, “at the roots,” sets the overall strategy for Netflix to make international expansion its number one priority. Ted Sarandos, chief content officer and now co-CEO, “at the trunk,” encourages his teams to take big risks with large potential wins or lessons-from-failure in those new territories: “We need to become an international learning machine.” Out on a “big branch,” vice-president of original animation, Melissa Cobb, decides the foray into children’s programming should mean a child watching Netflix in a Bangkok high-rise should not get the typical “global” mix of either local or US characters, but “a variety of TV and movie friends from around the world.” On a mid-sized branch, the director of the team acquiring preschool content, Dominique Bazay, decides Netflix’s animation needs to be high-quality, high enough “to be a hit in anime-obsessed Japan.” The manager of content acquisition in Mumbai, “on a small branch,” “in a small conference room in Mumbai,” commissions Mighty Little Bheem, spending a lot of money on a genre that has few precedents in India.


It is rarely difficult to poke fun at management books — at the language, the inconsistencies, the conviction that none of this has ever been done before, the confident assumption that lessons from one stellar organisation are applicable to all others.

Here, Bryar and Carr refer often to the virtues of “being Amazonian.” This is what happens when you exhibit one or more of the fourteen leadership principles. When things go well, it is because people are “being Amazonian,” sometimes without even realising it. When something goes wrong, invariably, someone, somewhere, was insufficiently Amazonian.

Netflix preaches candour and transparency about data inside the company, but has pioneered a radical degree of opacity about its own viewing numbers by comparison with the historical standards set by cinemas and television broadcasters. The company asserts its inclusivity but insists on “no jerks” — the kind of No Rules Rule that probably seems fair and obvious to incumbent non-jerks but which, one suspects, may hide unwritten sub-rules that mystify the excluded. There are no detailed rules about travel and expenses, but 10 per cent of claims are audited and if people are found to have infringed the one, overarching rule — “act in Netflix’s best interests” — well, “fire them and speak about the abuse openly.”

Innovation, of course, did not begin with the internet; Amazon did not invent customer-centric product development; leaders and organisations have been grappling forever with the balance between centralised “command and control” and decentralised autonomy. The people who laid the first Atlantic cable a century and a half ago, or launched the first aviation services, risked not just financial ruin but levels of personal danger some way beyond the life experience of Silicon Valley engineers running A/B tests of discounted shipping options.

Even working backwards seems to have had many parents. Just returned to Apple as an adviser, Steve Jobs told the 1997 Worldwide Developers Conference, “One of the things I’ve always found is you’ve got to start with the customer experience and work backwards to the technology. I’ve made this mistake probably more than anybody in this room and I’ve got the scar tissue to prove it.”

But there are two full litres of Kool-Aid in these two books, and you don’t have to drink it all to find much of it fascinating. Erin Meyer first thought the Netflix Culture Deck was “hypermasculine, excessively confrontational, and downright aggressive — perhaps a reflection of the kind of company you might expect to be constructed by an engineer with a somewhat mechanistic, rationalist view of human nature.” She accepted the invitation to take a closer look because what could not be denied was the scale of Netflix’s success. It’s “beyond unusual. It’s incredible. Clearly, something singular is happening.” Beyond unusual, yes, but perhaps not singular, because Amazon could say the same, at least about its growth.

Both pairs of authors obviously believe the recipes they reveal might be usefully applied to other organisations and situations, but they acknowledge limits, even in their own. They talk about failures, like Amazon’s 2014–15 Fire Phone, as well as successes. Hastings admits some people will take advantage of the absence of rules. Netflix staff probably fly business class more often than is really needed to “serve Netflix’s best interests” by arriving fresher for meetings. But “even if your employees spend a little more when you give them freedom, the cost is still less than having a workplace where they can’t fly… If you limit their choices, you’ll lose out on the speed and flexibility that comes from a low-rule environment.” The biggest risk for Netflix “isn’t making a mistake or losing consistency; it’s failing to attract top talent, to invent new products, or to change direction quickly when the environment shifts.”

No Rules Rules concludes with a frank acknowledgement of the continuing relevance of the “rules with process” model for some organisations and activities (even parts of Netflix itself), and a set of questions to ask in order to select the right approach. “If you’re leading an emergency room, testing airplanes, managing a coal mine, or delivering just-in-time medication to senior citizens,” “rules with process is the way to go.” Erin Meyer has worked with a few old economy stalwarts that might qualify, like ExxonMobil, Michelin and Johnson & Johnson, as well as with financial institutions like BNP Paribas and Deutsche Bank, whose stumbles during the global financial crisis showed how difficult it is to draw neat boundaries around the innovative, fail-fast parts of many organisations and the mission-critical operations where mistakes matter.


The large omission from both books is any real sense of the relationship between these two huge and influential organisations and the wider world. This may seem an unreasonable demand for books of this kind. But there is a clue at the start of the movie awarded Best Picture at this year’s Academy Awards, Nomadland: those opening scenes of the Amazon “fulfilment centre” in the snow, juxtaposing warm, high-tech efficiency inside with the human desolation outside. We do need to insist on large questions being posed about “America,” its Tech Boom and its patchwork prosperity, the “sort of affluent dysfunction” that Janan Ganesh described recently in the Financial Times.

Bryar and Carr mention this at the outset: “Some take issue with Amazon’s impact on the business world and even on our society as a whole.” Although “obviously important, both because they affect the lives of people and communities and because, increasingly, failure to address them can have a serious reputational and financial impact on a company,” these issues are “beyond the scope of what we can cover in-depth in this book.” Relentless about the detail of so many aspects of its products, Amazon has been playing catch-up on matters as big as the living and working conditions of its people and the environmental footprint of its activities. The Working Backwards authors footnote Jeff Bezos’s April 2020 letter to shareholders, which “did address Amazon’s impact on multiple fronts.”

The Netflix that Hastings and Meyer portray is a remarkable island of “stunning colleagues,” candour and flexibility. It aspires to be a professional sports team rather than a family. People stay as long as they are the best available for their roles and are moved on as soon as they are not.

Early in No Rules Rules, Hastings tells the story of Netflix’s survival through the Tech Wreck — the dot-com crash — a “road to Damascus experience” that founded “much that has led to Netflix’s success.” The company had to let forty of its 120 staff go. It was gruelling but the company prospered. Business grew and the smaller, now more densely talented team worked longer hours and got the job done. “Talented people,” says Hastings, “make one another more effective.” “Talent density” became a Netflix lodestar.

I found myself wanting to know more about those forty people, good people apparently, just not good enough: “A few were exceptionally gifted and high performing but also complainers or pessimists.” Maybe they found other roles elsewhere for which they were better suited. They are no longer on Netflix’s balance sheet but they are probably still on the United States’. If we are to fully grasp the impact of these tech giants on the whole world, not just their own, we need to understand more than the winners. •

Australia goes it alone https://insidestory.org.au/australia-goes-it-alone/ Fri, 09 Apr 2021 02:19:46 +0000 https://staging.insidestory.org.au/?p=66204

Why is competition commissioner Rod Sims more exercised than his international counterparts by Google’s takeover of Fitbit?

It was late November last year, and Australia’s review of Google’s US$2.1 billion play for Fitbit seemed to be running to schedule. Having released a long list of objections to Google’s plan to take control of the smartwatch-maker’s ten years of data, the Australian Competition and Consumer Commission was digesting new undertakings Google had offered watchdogs around the world.

Then the Australians went off-script. Just a few days after the European Commission opted to accept Google’s reassurances and let the acquisition go ahead, ACCC chair Rod Sims stunned observers by announcing that he considered the search giant’s commitments, including a ten-year moratorium on using Fitbit’s health data, impossible to enforce. The acquisition, said Sims, “may result in Google becoming the default provider of wearable operating systems for non-Apple devices and give it the ability to be a gatekeeper for wearables data.”

Australia’s decision to go its own way on a global deal of this size was extraordinary. Sims conceded that Australia was a smaller jurisdiction and that a “relatively small percentage of Fitbit and Google’s business takes place here.” But, he added, the ACCC “must reach its own view in relation to the proposed acquisition given the importance of both companies to commerce in Australia.”

It was yet another reminder to the world that Australia is developing one of the toughest digital-platforms regulatory systems in the Western world. Whether it’s the bargaining code that will force Facebook and Google to pay for media content, or the three lawsuits targeting the same platforms over alleged consumer-law violations, or the detailed antitrust probes bubbling away behind the scenes, the ACCC appears to be setting standards unmatched anywhere else.

The likelihood that the muscular stance of a distant regulator will ultimately stop Facebook and Google from achieving their goals is debatable. On 14 January Google signalled it had paid no heed to the ACCC’s objections and was ploughing ahead with an acquisition that, in time, will see it use valuable and sensitive health-related data to target advertising at anyone with a Fitbit device.

This leaves the ACCC to pursue a competition probe rather than an acquisition review, which suggests that we might next hear about the dispute when Sims announces a lawsuit in the Federal Court of Australia. But even if the court were to support Sims’s decision to oppose the merger, the deal would still be done and dusted, raising the prospect that Australia may have to be excised from the global acquisition — an unpalatable prospect for Google.

Australia’s forceful regulation of Big Tech has sparked intense interest in Brussels, where the European Commission has imposed hefty fines over the past ten years but made no headway in its key concern that the platforms’ accumulation of data is steadily eroding competition. The Europeans view the Australian push with a mix of admiration, scepticism and professional envy. Some Brussels insiders fear that the ACCC’s bold experiment highlights their own limitations and would like to see the outspoken Sims cut down to size. This may yet happen, with the Australian regulator facing the very real prospect that the Federal Court will overturn any attempt to stand in Google’s way.

For its part, the ACCC believes, rightly or wrongly, that its recent world-first Digital Platforms Inquiry gave it an unrivalled insight into Facebook’s and Google’s business models. That expertise is now percolating through the ACCC’s reviews of global deals that have an Australian component — be it Google’s acquisition of Fitbit or Facebook’s equally problematic acquisition of Giphy, a gif database. The inquiry has left the ACCC with what it considers a solid understanding of how the tech giants’ acquisitions pose a risk to competition.

At the heart of the Digital Platforms Inquiry’s final report was the conclusion that the platforms already derive ample market power from their unrivalled accumulation of data. While there’s no evidence yet that Facebook and Google have abused their market power, the ACCC believes the risk is real.


In the case of Fitbit, the Australian regulator didn’t appear concerned about the hardware component — so what if Google owned a company making watches? What raised red flags was the fate of the trove of health data, past and future, that could force other players out of the market and deter new ones from entering. The ACCC certainly worries about privacy and consumer law — using sensitive data to target advertising is ethically and legally fraught — but its biggest fear is that the tech giants could dramatically limit competition.

This goes some way to explaining why, in June last year, the ACCC led the world once more in announcing an investigation into Facebook’s acquisition of Giphy. Again, the fact that Facebook would want to own a company that hosts the short bits of video and animated stickers known as gifs is neither here nor there. But Facebook’s control and accumulation of data — in this case, data potentially acquired from rivals through Trojan-horse-like, embedded gifs — was something the ACCC was never going to accept uncritically. British and Austrian regulators have followed suit, with Brazil’s competition watchdog also pondering whether to investigate.
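
The mechanism gestured at here is a general property of the web rather than anything specific to Giphy: whoever hosts an embedded image receives a request, with metadata attached, every time that image is displayed on someone else’s page. A minimal sketch of an image host’s server (illustrative only, asserting nothing about what Giphy or Facebook actually log):

```python
# Minimal image host showing what any third party can observe whenever
# its gif is embedded elsewhere. Purely illustrative; requires Flask.
from flask import Flask, Response, request

app = Flask(__name__)

# A 1x1 transparent GIF, inlined so the example is self-contained.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

@app.route("/gif/<gif_id>")
def serve_gif(gif_id: str) -> Response:
    # Every embed generates a request that reveals, at minimum:
    print(gif_id,
          request.remote_addr,                # the viewer's IP address
          request.headers.get("Referer"),     # the page embedding the gif
          request.headers.get("User-Agent"))  # browser and device details
    return Response(PIXEL, mimetype="image/gif")

if __name__ == "__main__":
    app.run()  # embed http://localhost:5000/gif/demo in a page to see logs
```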

Google’s acquisition of Fitbit also raises another global issue on which the ACCC is less of an outlier: what are known as behavioural undertakings. It’s true that the European Commission expressed similar concerns about Google’s control of Fitbit data as its Australian counterparts, but the Brussels antitrust officials ultimately accepted Google’s legally enforceable undertakings — namely, the ten-year freeze on using data collected through Fitbit devices for advertising and a pledge not to use its Android operating system to “discriminate against wrist-worn wearable devices… by withholding, denying or delaying their access to functionalities of Android.”

It’s not unusual for regulators to allow deals to proceed subject to conditions like these. But Sims has repeatedly resisted approving deals that involve companies committing themselves to behave in certain ways. His view is that you can’t believe a word said by board members planning a merger — they will promise anything to get a deal across the line.

In the case of Big Tech, Sims’s recalcitrance appears justified. In 2014, the European Commission allowed Facebook to acquire messaging service WhatsApp after the company assured the regulator that combining its user information with WhatsApp’s was impossible. Facebook then went ahead and did just that, prompting the European Commission to impose a penalty of €110 million — a substantial fine, but still well within the cost of doing business for a company reaping the benefits of consolidating two lucrative datasets.

The ACCC argues that only structural remedies — ownership arrangements and asset sales — should be taken seriously. It isn’t alone, with the acting head of the US Federal Trade Commission, Rebecca Slaughter, recently describing the Europeans’ willingness to accept behavioural remedies as an important point of difference with US regulators. She would sooner go straight to court than accept a complex behavioural undertaking, she added.

The problem for the ACCC is that its scepticism about Big Tech’s promises doesn’t appear to be shared by the judges who have the final say over whether the regulator can block a merger. With no significant deal involving tech companies having yet made it to court, the Fitbit or Giphy deal could be the first case of its kind to be examined by the Federal Court.

On that score, the ACCC’s recent defeat in a case involving Pacific National’s acquisition of the Acacia Ridge terminal, a key piece of rail infrastructure in Queensland, was a painful reminder that courts are happy with legally enshrined undertakings. After two defeats in the Federal Court, the ACCC took the Acacia Ridge case to the High Court, only to have the court refuse to hear the appeal.

But is a merger case involving rail infrastructure likely to resonate in a debate over data dominance? Remarkably, yes. The ACCC’s objection to Pacific National’s acquisition was that it would allow the company to hinder its rivals’ access to the only connection between Queensland’s narrow-gauge railway network and the standard gauge of other Australian states.

The ACCC hammered the same point in the final report of its Digital Platforms Inquiry. If Amazon, Facebook and Google control access to their platforms, and if the services they own are competing against other companies using those platforms, then they have the ability and the incentive to hinder the operations of companies they don’t own. The owner of the pipeline can’t also own the company competing for the right to use that pipeline.

All this means that the ACCC would be facing an uphill battle if Google’s promise to behave itself were to end up in an Australian court. Google would argue that its legally binding commitments sweep aside the regulator’s competition concerns; the court, whose judges regularly deal with the enforcement of less monumental contracts, might well agree.

A high-profile defeat for the ACCC would reverberate around the world, harming the credibility of its digital regulatory regime and feeding the schadenfreude of European regulators who have found themselves lagging behind their Australian counterparts. •

The publication of this article was supported by a grant from the Judith Neilson Institute for Journalism and Ideas.

Biden’s trustbusters https://insidestory.org.au/bidens-trustbusters/ Thu, 25 Mar 2021 06:42:56 +0000 https://staging.insidestory.org.au/?p=66002

With two of their critics appointed to senior roles by the US president, the big tech companies are on notice

Big tech could be in for a shake-up, with the Biden administration appointing two well-known antitrust (or anti-monopoly) hawks to key roles. Lina Khan and Tim Wu, academics whose relative youth earned them the moniker of “antitrust hipsters,” are now in the box seat of US antitrust policy, with potentially global implications. And it’s not just tech companies in the firing line — these appointments signal a shift away from America’s light-touch approach to regulating market power.

Lina Khan, a Columbia Law School professor, has been nominated to join the five-member board of the Federal Trade Commission, one of the two key US agencies responsible for preventing businesses from acquiring and abusing monopoly power. Even before she finished her law degree, Khan was a high-profile critic of the antitrust enforcement machine. An article she wrote for the Yale Law Journal questioning the risks of Amazon’s ever-growing reach went viral and led to a surge of interest in the strategies the big tech platforms were using to supercharge their dominance.

Tim Wu, also a professor at Columbia Law School, will join Biden’s National Economic Council as special assistant to the president for technology and competition policy. In 2018, he published an (appropriately) small but powerful book, The Curse of Bigness: Antitrust in the New Gilded Age, drawing parallels between the tech platforms and the powerful oil and steel barons of the late nineteenth century. Antitrust laws, he argued, had failed to protect American consumers and society from their dominance.

Given the hipsters’ concerns about the power of big tech, these appointments have been seen as a shot across the bows of the FAANGs — Facebook, Amazon, Apple, Netflix and Google. Biden signalled during the election campaign that he would be open to breaking up these companies, and Wu and Khan have indicated they believe the government should be more willing to use its divestment powers, last used to break up the national AT&T telephone network in the 1980s.

But carving up the tech businesses without damaging their offer to consumers isn’t straightforward. All of the FAANGs rely to some degree on “network effects,” whereby consumers derive benefits from the fact that many other consumers or businesses are on the platform offering them the products, apps or services they are looking for. The easiest option would be to require FAANGs to divest the formerly competing businesses they have acquired in recent years, such as Facebook’s WhatsApp and Instagram, and Google’s YouTube.

Breaking up the tech companies’ core businesses would be highly complex. The regulator would need to prove in court that they had breached anti-monopoly laws and the court would need to force a break-up. The slow pace of complex antitrust litigation and the inevitable appeals could stretch out this process for a decade or more.

But the government and regulators can wage the battle of big tech market power on many other fronts. They could be far more active in preventing the big guys from buying emerging competitors (a preferred strategy of Facebook and Google). They could introduce restrictions on the tech companies competing with the businesses that use their platform to sell their products (Amazon and Google). And they could restrict the companies’ use of their growing trove of consumer data.

Because the United States is the home of the tech companies, actions taken there will have implications for consumers and businesses the world over. But the appointment of Wu and Khan also has implications beyond tech. Both are ardent critics of market power in all its forms.

Their main critique of antitrust policy is that its narrow focus on prices and consumer welfare has missed many of the real dangers of market power — harm to workers and small businesses, rising inequality and, ultimately, a threat to democracy itself. As Wu writes, “The broad tenor of antitrust enforcement should be animated by a concern that too much concentrated economic power will translate into too much political power” and “thereby threaten the Constitutional structure.”

As radical as this may sound, Wu convincingly argues that preventing these problems was the original intention of antitrust law and the animating force behind government actions against powerful conglomerates in the early twentieth century.

Such an emphasis would put the United States on the more activist end of antitrust enforcement globally. Most other competition regulators, including our own Australian Competition and Consumer Commission, are still focused on the economic fallout from market power rather than broader political or social concerns.

It remains to be seen whether Biden’s appointments will lead to a fundamental reimagining of the antitrust paradigm or simply more active enforcement of existing laws. Either way, big tech, corporate America and the world are on notice that business as usual isn’t on the menu. •

Winning the battle, still fighting the war https://insidestory.org.au/winning-the-battle-still-fighting-the-war/ Tue, 23 Feb 2021 23:52:47 +0000

Facebook’s problems with Australian regulators are far from over

Facebook’s decision to purge news from its Australian feed was as sudden as it was brutal. Last Thursday users awoke to find that the news stories they were used to seeing among happy snaps of family and friends were nowhere to be found; those who relied on the platform to find out what was going on in the world were left high and dry.

As befits a well-executed act of bastardry, there had been no warning. The Australian government, which had been negotiating with the Silicon Valley giant over the News Media and Digital Platforms Mandatory Bargaining Code, was caught by surprise, as were the platform’s users. Newsrooms that had built distribution strategies around Facebook-elicited clicks scrambled to regroup.

Less than a week later, just as suddenly, Facebook was back at the negotiating table. The point had been made, and the deal with the government, when it came, did little to reduce the impact of the company’s shock-and-awe response to Australia’s landmark media code. It was a tantrum that echoed around the world — exactly as it was designed to do.

On day one of the operation, local media had been quick to conclude that Facebook’s plan had backfired. The list of innocent bystanders caught in the crossfire was indeed impressive: community groups, the WWF and its save-the-koala campaign, the Bureau of Meteorology, ABC Kids, health authorities and, of course, the now much-derided North Shore Mums group.

For the Australian media, Facebook had reminded the world not only of its power but also of its scattergun approach to moderation, with the platform seemingly unable to differentiate between the Sydney Morning Herald and the Sydney Local Health District. The void left by news would be filled by anti-vaxxers, conspiracy theorists and whatever charlatan Facebook’s algorithm coughs up on any given day.

But the political ruckus was a small price to pay for the message that was sent to other jurisdictions around the world — Canada, Britain, India and France — where similar regulatory moves are being countenanced. At the drop of a hat, Facebook has the power to cut loose the local media businesses that have come to rely on the platform to distribute their content. News needs Facebook more than Facebook needs news.

Local media had every right to take umbrage — after all, the legislation to enshrine the media code hadn’t even been passed. What made it even more unexpected was the fact that Google, the other target of the proposed legislation, had started to play ball with both Australian and international publishers in a bid to avoid being forced to a negotiating table overseen by an independent arbitrator — a nightmare prospect for a big tech company.

Yet the outrage over Facebook’s Australian news purge overlooked the backdrop to the move. The News Media and Digital Platforms Mandatory Bargaining Code may well have been the boldest attempt anywhere in the world to force digital platforms to pay for journalism, but it’s not the first time Facebook has been on the receiving end of innovative regulation in Australia. Over the past few years, digital platforms have been clobbered by Australian laws and enforcement in ways that are simply unthinkable in other countries — and Facebook has been bearing the brunt of that.

Australia’s 2019 “abhorrent violent material” legislation is just one example of what the social media platform has had to endure. Under the law, Australian-based Facebook employees could be jailed for up to three years if the company fails to remove designated violent content in an “expeditious” manner. Facebook’s Australian boss, Will Easton, reportedly has no hand in the company’s local strategy, which is guided by head office, yet he could still wind up behind bars if live-streamed content, such as footage of the 2019 Christchurch mass shootings, isn’t removed quickly enough to satisfy the vague wording of the legislation. Nobody accepting a job with Facebook’s Australian business could be unaware of what they are signing up to, and no other democratic country has comparable legislation in place.

Australian laws targeting platforms are also part of a more complex global mosaic. In the United States, Facebook is facing myriad allegations, at both state and federal level, that it has violated antitrust or privacy laws. French lawmakers are pursuing objectives similar to those of Australia’s media code, albeit using copyright law; India is pushing back on the use of Facebook’s WhatsApp; and the European Commission, the EU’s regulator, has unfinished business with Facebook over its 2014 acquisition of WhatsApp.

Facebook knows that regulation is catching up with it, and it knows that Australia’s efforts to tamper with its business model had to be shut down quickly and ostentatiously, before other jurisdictions followed suit. The North Shore Mums got themselves caught up in what may prove to be the most significant regulatory tangle of the century.


If turnout is an indicator of success for a press conference, the Australian Competition and Consumer Commission’s 16 December effort was a flop. There was just one journalist in the house — a Sydney-based colleague of mine — to hear ACCC chairman Rod Sims announce a court action against Facebook, Facebook Israel and a Facebook-owned company called Onavo. Yet Sims’s words that day attracted the attention of business editors, and reports of the Federal Court of Australia lawsuit quickly bounced around the world.

The Onavo case is significant. The ACCC is tackling Facebook over data-privacy issues — something it has already done, twice, against Google, with one suit delving into what consumers did and didn’t know about Google’s Android operating system. But Australia’s 1988 privacy legislation, which is only now being overhauled, is a hopelessly inadequate tool for safeguarding the rights of people who use digital platforms. Penalties under the Privacy Act aren’t large enough to deter global tech giants, and the privacy enforcer, the Office of the Australian Information Commissioner, is overstretched and underfunded at the very time privacy challenges are mounting.

By contrast, Australia’s recently updated body of consumer laws — known collectively as the Australian Consumer Law, or ACL — places significant firepower and additional investigative tools in the hands of the comparatively well-resourced ACCC. Since 2018, the regulator has been able to ask courts to impose fines of up to $10 million per offence, or three times the value of the monetary benefit received by the company, or 10 per cent of the company’s global annual turnover — whichever of the three options is the largest. That 10 per cent penalty places Australia at the forefront of global privacy enforcement; even the EU’s groundbreaking General Data Protection Regulation, which came into effect in 2018, fixes penalties at a mere 4 per cent.
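
To see how the three limbs interact, here is a minimal sketch in Python of the “whichever is largest” rule as the article describes it. The function name and the figures fed into it are invented for illustration; real penalty calculations involve far more statutory detail.

    def max_acl_penalty(benefit, annual_turnover):
        # Largest of: a $10 million flat amount, three times the benefit
        # obtained, or 10 per cent of annual turnover, per the rule above.
        return max(10_000_000, 3 * benefit, 0.10 * annual_turnover)

    # Hypothetical inputs: a $50 million benefit, $70 billion turnover.
    print(max_acl_penalty(50_000_000, 70_000_000_000))  # 7000000000.0

For a company with turnover in the tens of billions, the 10 per cent limb dwarfs the other two.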

The ACCC knows that it has a lethal weapon at its disposal, and its lawsuits against Google and Facebook are likely to reveal the law’s efficacy as a deterrent. By contrast, the information commissioner has been relegated to more narrowly defined privacy cases, such as Facebook’s Cambridge Analytica data breach, which saw the usually unadventurous privacy watchdog file action in the Federal Court that mirrors what is unfolding in other jurisdictions — including Canada, where an almost identical lawsuit appears to be floundering.

But pursuing what are essentially privacy cases under consumer law requires lateral thinking. For example, the ACCC isn’t arguing that Google breached the privacy of users of its Android operating system by not informing them that their phones gather information about their whereabouts even after location-tracking settings have been disabled. The ACCC will instead argue that the search giant’s failure to inform Australian consumers of its data-gathering, storing and processing was a breach of its duties under consumer law. It’s about failing to protect consumers.

Which brings us back to Onavo, the Israeli company behind a free, downloadable software application offering a virtual private network, or VPN. Facebook acquired Onavo in 2013 — a deal driven by Onavo’s access to data that is now viewed as highly controversial.

Onavo’s app, called Onavo Protect, promised its users absolute privacy — it was the app’s key selling point. But the ACCC alleges that the company was in fact hoovering up data from its mobile users, and that the data ended up under Facebook’s control. In Sims’s words, “hundreds of thousands” of Australians were affected by Facebook’s alleged actions, none of them aware that their online habits were being monitored by the owner of their privacy-focused VPN. So great was the concern over Facebook’s relationship with Onavo, says the ACCC, that Apple and Google removed the product from their app stores.

While this lawsuit is unfolding in Australia, Facebook has been targeted by twin competition lawsuits filed by the Federal Trade Commission, one of the two competition regulators in the United States, and forty-eight state attorneys-general. The platform has been accused of extending its monopoly of social media through anticompetitive acquisitions — known as “killer” acquisitions — and of devising strategies to exclude its competitors.

The Onavo documents filed in the US case are likely to make an appearance in the ACCC’s local lawsuit because they reveal what US prosecutors will argue was an early-warning system to alert Facebook to current or future threats to its monopoly. Any evidence of users flocking to a particular app, for example, could be dealt with either through a pre-emptive acquisition or by finding other ways to defuse the threat, according to the complaint filed by the US states.

The ACCC is likely to stick to the straight and narrow of Australian Consumer Law by arguing that Australian users should have been informed that Facebook was making use of their data. But the competition law elements fed in from the United States will boost the Australian regulator’s understanding of how Facebook operates — an understanding now recognised as world-class following the landmark Digital Platforms Inquiry, the eighteen-month probe that ultimately led to the formulation of the media bargaining code.

The ACCC’s developing expertise in digital markets is arguably a bigger threat to Facebook than any single piece of legislation. That knowledge is permeating the regulator’s ongoing probe of digital advertising, which is being closely monitored by lawyers working on a lawsuit, filed in Texas by ten US states, taking on both Facebook and Google over their ad-tech practices.

Meanwhile, the ACCC’s growing scepticism about Facebook’s acquisitions of other technology companies, which are clearly designed to gain control of vast swathes of data, is feeding into its review of specific deals, including Google’s move on smartwatch maker Fitbit and Facebook’s completed play for Giphy, a company specialising in GIFs.

In fact, nothing illustrates the ACCC’s fear of Facebook’s control of data better than Facebook’s completed acquisition of Giphy, which is now subject to a behind-the-scenes investigation by the watchdog. Rod Sims doesn’t seem to believe that Facebook has any real interest in GIFs of Kanye West going from a smile to a frown or Oprah giving the camera her “I told you so” look; the tech giant simply wanted to get its hands on the data that changes hands every time you download something from Giphy.

Of even greater concern to the ACCC is the fact that every time you use a Giphy GIF on a rival platform, you are embedding Facebook’s data-gathering software — a kind of digital Trojan horse. “This would be right in the middle of their systems and it would help Facebook scrape the data of their rivals to see what their rivals are doing,” Sims told me recently.


Keen observers know that Australian home affairs minister Peter Dutton unliked Facebook years before the platform decided to purge news from its Australian feed.

In December 2018, federal parliament adopted the world’s first laws targeting encrypted messaging services, allowing law-enforcement agencies to demand access to decrypted messages — in other words, to build a “back door” into international encryption standards. The law’s top two targets were Facebook’s encrypted messaging service, WhatsApp, and Facebook Messenger. Not surprisingly, Facebook pushed back — but then, so did most Silicon Valley and Australian software companies. Scott Farquhar, co-founder of the Sydney-based software giant Atlassian, said the legislation amounted to “legislative creep” and warned that rules earmarked for serious crimes and terrorism may ultimately be used to prosecute traffic offences.

Dutton bristled at the criticism and used a National Press Club of Australia speech in 2018 to accuse American tech giants of dodging taxes and complaining about assisting authorities in democracies while cosying up to dictatorships in “grown markets” — by which he presumably meant China. Two years later, Dutton returned to the theme and singled out Facebook, saying that its plans to provide end-to-end encryption for its Messenger service would create a platform for child abuse. “Facebook would not allow in their workplace the abuse of women or children and yet they provide a platform that enables perpetrators to carry out that very activity,” Dutton said, no doubt knowing that once you’ve accused your adversaries of supporting paedophiles they’re unlikely to return your calls.

For Australian police forces and spy agencies grappling with money laundering and terrorism, access to encrypted messages was an important win. A key to unlock encrypted messages, they argued, is now as essential a part of the investigative toolbox as phone taps were in more innocent, pre-digital times.

For Facebook, though, the encryption laws were a disaster. By building a back door into its global encryption, said the company, Australia was paving the way for global criminal syndicates that are looking for weaknesses in secure communication. “Cybersecurity experts have repeatedly proven that when you weaken any part of an encrypted system, you weaken it for everyone,” a Facebook spokesperson said when the legislation was being reviewed in 2019. “The ‘backdoor’ access you are demanding for law enforcement would be a gift to criminals, hackers and repressive regimes, creating a way for them to enter our systems and leaving every person on our platforms more vulnerable to real-life harm.”

It’s not that these arguments fell on deaf ears — it’s more that the Australian government treated them with contempt. Since the MV Tampa entered Australian waters in 2001, the country’s centre-right coalition has staked its reputation on national sovereignty — or, at least, its understanding of national sovereignty. It was never going to accept arguments that it had a global responsibility to maintain the integrity of encrypted messaging services at the expense of national priorities. It argued that its responsibility was towards the people who live within Australia’s borders; the suggestion that it couldn’t apply local regulation to technology companies doing business here was never going to fly.

The 2019 legislation on abhorrent violent content, rushed through parliament in under a week, raised the same global concerns. The Coalition government dismissed them just as quickly. Facebook argued that its global operations meant that its Australian staff couldn’t be held responsible for, say, a piece of extreme terrorist content uploaded in Kazakhstan by someone with no links to Australia. The prospect of local Facebook employees ending up in jail if the company failed to act quickly to remove extreme violent content was at odds with the global nature of the internet, Facebook said.

This time, it was attorney-general Christian Porter’s turn to ridicule the suggestion that the government didn’t have the right to regulate what was appearing on Australian screens. If television stations were to broadcast extreme terrorist content, they would lose their licence — why should Facebook be any different?

That debate set the tone for the Australian government’s current interactions with Facebook. In April 2019, Porter said that discussions with the tech giant had convinced him that the social media platform had “no recognition of the need for them to act urgently to protect their own users from the horror of the live streaming” of the Christchurch massacre.

What had become clear then and remains clear today is that Facebook knows it’s on a hiding to nothing in Australia. The platform has zero friends among lawmakers and is treated with outright suspicion by a competition watchdog that reckons it understands the platform’s business model better than enforcers in other parts of the world. Meanwhile, time and time again, the Australian government has pushed for policy designed to hurt Facebook while mocking suggestions that a global platform was somehow out of the reach of local laws.

If Mark Zuckerberg does eventually decide that the time has come to turn his back on Australian news, it shouldn’t come as a surprise. •

The publication of this article was supported by a grant from the Judith Neilson Institute for Journalism and Ideas.

Out of the office https://insidestory.org.au/out-of-the-office/ Tue, 20 Oct 2020 03:43:40 +0000

Covid-19 could change how we work, for the better and — if we’re not careful — the worse

“I’m sitting in a building here that was built for 5000 people… and there are probably six in it today,” National Australia Bank CEO Ross McEwan told me recently during a parliamentary committee hearing. But there’s more: according to the bank’s surveys, four-fifths of staff members don’t want to return to their regular way of working when the pandemic is over.

Despite promises of an economic “snapback,” it’s becoming increasingly clear that the world of work is likely to change significantly as a result of coronavirus. One of the likely shifts will be the rise of teleworking. If Covid-19 has taken us back a decade in terms of globalisation, it’s taken us forward a decade technologically. Large swathes of the workforce are working from home and the trend is likely to endure, with one US study projecting the share of working days spent at home to rise from 5 per cent to 20 per cent after the pandemic passes. Having fewer desks than employees may become the norm for white-collar firms.

One of the valuable changes will be a move away from open-plan offices, which were always more about corporate symbolism than productivity. We know from a bevy of studies that workers are more stressed, more dissatisfied and more resentful when they work in an open-plan setting. Compared with regular offices, employees in open offices experience higher levels of noise and more interruptions. They are less motivated, less creative and more likely to take sick leave.

Yet in their anxiety to save on rents and give the impression of being “collaborative,” firms pushed towards open plan regardless. Like other critics of open-plan offices, I never gave much thought to their potential to allow diseases to spread more quickly, but this may well be the clincher that shifts firms back to regular offices. If the research is to be believed, this is likely to be good for productivity.

For others, home will be the new office. People used to joke that there were three problems with working from home: the bed, the fridge and the television. But as the evidence rolls in, most of us appear less distractible than we might have feared. Certainly, working from home requires good technology (wouldn’t it be terrific if everyone already had fibre broadband, rather than trying to retrofit it?). It also helps if you’re not trying to juggle work and children. But once those conditions are met, an hour of working from home can be at least as productive as an hour in the office. In one randomised trial, employees in a Chinese firm were 13 per cent more productive working from home. During the pandemic, two-thirds of American GDP has been produced from people’s houses, and the stress levels of American workers have fallen by 10 per cent.

But remote work isn’t without its challenges. One is management quality. Great managers judge people based on their outputs and treat everyone equally. Lousy managers focus on inputs and favour their friends. This means that a major constraint on teleworking will be the quality of managers. Firms may quickly find that managers whose approach was “good enough” in 2019 won’t cut it in 2021. Organisations will struggle if they lack fair benchmarks for performance and good training systems for managers.

It doesn’t help that management training can be faddish, differing considerably across institutions and over time. If firms don’t have consistent performance appraisal systems, workers are more likely to feel that working from home is too much of a career risk. As the Economist recently put it, “the emotion that is most likely to lure workers back to the office is paranoia.”

Remote work is fine for knowledge workers, but if you’re a cleaner or a cashier it’s clearly not an option. According to a study by Harvard PhD student James Stratton, 41 per cent of Australian employees have the kind of job that lets them telework. Yet, as he notes, this simple average masks huge differences. Among low-wage employees, less than one-fifth can telework; among high-wage employees, it’s more than three-fifths. Most of those who have jobs in education or science can readily telework; hardly anyone employed in agriculture or hospitality can.

The consequences for inequality could be profound. In a recent report, MIT economists David Autor and Elisabeth Reynolds note that a rise in working from home could markedly reduce demand for cleaners, security workers, building maintenance workers, hotel workers, restaurant employees, taxi drivers and ride-sharing drivers. The pair predict that the decades-long shift towards urban densification is likely to slow or even reverse, reducing the demand for city workers.

Autor and Reynolds anticipate that a wave of mergers will cause employment to become increasingly concentrated in large firms, which tend to spend a smaller share of their earnings on workers and a larger share on managers and owners. And they forecast an increase in “automation forcing,” as Covid-related restrictions cause companies to adopt labour-saving technologies. When the pandemic is over, the economists point out, firms won’t unlearn these ideas. Retailers, cafe owners, car dealerships and meat packers will need fewer staff after the downturn than they did before.

What can government do? The starting point must be that the labour market of 2019 is not coming back. While it’s hard to forecast specific occupations, we can be sure that the demand for skilled workers will be stronger than ever. This makes it critical to ensure that disadvantaged students get the schooling they deserve. The Grattan Institute has called for intensive small-group tutoring to help a million vulnerable young people catch up to their more advantaged peers. It’s also crucial to ensure that underprivileged teens don’t drop out of school, potentially locking in a lifetime of disadvantage.

With overseas student enrolments falling for the first time in decades, Australia could expand opportunities for domestic students. Just as the early-1990s recession saw a surge in school completion, this recession is a chance to increase the university attendance rate. Why wouldn’t we create a university place for a talented young person who might otherwise be unemployed? Expanding education keeps young people engaged today, and makes them more productive tomorrow.

Crises can lead to astonishing changes. The Black Death helped usher in the Renaissance. The collapse of Chinese dynasties massively reduced inequality. The second world war paved the way for a huge expansion in Australian home ownership. The challenge today is to recognise how the recession will change the world of work, and how we can secure prosperity and equality for Australians in the decades to come. •

The big Apple https://insidestory.org.au/the-big-apple/ Mon, 24 Aug 2020 07:17:37 +0000

The technology company’s latest valuation shows how big internet-based companies are using a public network to wield monopoly power

Coming in the middle of the deepest recession for decades, the news that Apple Inc. has become the first US company with a stock market valuation of more than US$2 trillion might seem paradoxical. Admittedly, Apple’s business hasn’t been harmed by the Covid-19 pandemic, but neither has it greatly benefited — earnings in the June quarter were only about 10 per cent higher than in 2019, yet the stock price has doubled in less than six months.

Even more striking is the ratio of Apple’s share price to the book value of assets. Most of the time, the market value of a company is about equal to the value of its physical capital, so that the price-to-book ratio is close to one. For Apple, the ratio is a startling twenty-seven to one.
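
The arithmetic behind that figure is straightforward. This short Python sketch simply backs an implied book value out of the article’s two numbers; it is an illustration, not a figure taken from Apple’s accounts.

    # Price-to-book: market value divided by the book value of assets.
    market_cap = 2.0e12         # the US$2 trillion valuation
    price_to_book = 27          # the ratio reported above
    implied_book_value = market_cap / price_to_book
    print(f"US${implied_book_value:,.0f}")  # US$74,074,074,074, about $74 billion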

Much the same story can be told about other leading tech stocks. Along with Apple, Alphabet (owner of Google), Amazon, Facebook and Microsoft account for around 20 per cent of the total value of the S&P 500 Index. They have price-to-book ratios ranging from five (Google) to twenty (Amazon).

The difference between the book value of physical assets and the stock price is commonly explained by “intangibles.” That term can cover all sorts of things, and is often taken to refer to some special aspect of the firm in question, such as accumulated research and development, tacit knowledge or the “goodwill” associated with its brand.

At most, R&D is a small part of the story. The leading tech companies each spend between ten and twenty billion dollars a year on R&D, a tiny fraction of their market valuations. And while the big tech firms still retain plenty of goodwill among consumers, the attitude of their business partners is better described as one of resentful dependence. Software developers who want access to the iPhone market have little choice but to go through Apple’s App Store and give Apple 30 per cent of their revenue. Amazon’s marketplace has a similar hold on e-commerce sellers. And so on.

The main intangible asset held by these companies is their monopoly power, which arises from network effects (every extra user adds to the value of the business for all users), their intellectual property, and good old-fashioned predatory behaviour. In this context, the crucial point about intangibles isn’t that they aren’t physical, it’s that they can’t be reproduced by anyone else.

No one can sell a Windows or Apple operating system, even if he or she were willing to invest the effort required to reverse-engineer it. While there are competitors for Google’s search engine (I recommend DuckDuckGo), the barriers to entry are huge, notably including the fact that the product is “free,” or rather supported by advertising for which all consumers pay whether they use Google or not.

There’s a complicated relationship here between the rise of monopoly and the development of the information economy in which the top tech firms operate. Information is the ultimate “non-rival” good. Once it’s generated by one person it can be shared with anyone else without diminishing in value. As the cost of communication has fallen, it’s become possible for everyone in the world to gain access to new information at essentially zero cost.

What this means is that there is very little relationship between the value of information and the ability of corporations to capture value from it. The protocols and languages that make the internet possible are a public good, created by collaborative effort and made freely available. The information on the internet is generated by households, businesses and governments using these protocols.

Without these public goods, Google would be worthless. But because advertising can be attached to search results, ownership of a search engine is immensely profitable. Similarly, Facebook’s value is derived entirely from the contributions of its users. Apple and Amazon are more like traditional businesses, but increasingly rely on internet services for their profits. Thus, a network created in the public sector has become the underlying infrastructure for private monopolies.

It is easier to diagnose the problem than to suggest a cure. Traditional remedies such as reversing anti-competitive mergers might improve the situation a little. But the ultimate solution is likely to require returning the internet to its non-commercial roots and treating crucial services like search and e-commerce platforms as public utilities, subject to tight regulation or public ownership.

Such changes would require a radical reversal of the opposition to public ownership that is still the default position of public policy, despite decades of failed market reforms. But if there is one thing that the last few years have taught us it is that, for good or ill, radical change only seems unthinkable until it happens. •

Workers versus consumers: a false tradeoff https://insidestory.org.au/workers-versus-consumers-a-false-tradeoff/ Mon, 17 Aug 2020 05:15:47 +0000

Are trade, competition and technology good for consumers but bad for workers? History shows otherwise

Australians don’t care about consumers — at least, not as much as they care about workers. Surveys regularly show that a majority of us would be happy to pay more for goods and services if it meant more jobs for Australians. Recognising that consumers are workers, and workers are consumers, the view of most Australians can be summarised succinctly: what’s the point of cheap products if you don’t have a job?

This sentiment often comes up when we talk about more trade between countries, more competition between firms, and fresh advances in technology. All three reduce prices, which benefits the poorest consumers in our communities the most. But at what cost? If it means fewer jobs for Australians, Australians don’t see lower prices as being of much value.

This is a false trade-off. Not only do trade, competition and technology reduce prices for consumers (which means greater purchasing power for everyone), they also create jobs. This is not to say there are no losers: all three will destroy some people’s jobs. But history shows that they create jobs for other people and, most importantly, they create more jobs than they destroy. The real problem is politics. In advocating economic reforms, politicians neglect to mention that there will be losers from those reforms and, in doing so, make no plans to help them.

Take trade first. Trade has dramatically reduced prices for Australian consumers — audiovisual and computing equipment is 72 per cent cheaper thanks to trade, cars are 12 per cent cheaper, toys and games are 18 per cent cheaper, clothes are 14 per cent cheaper. But when it comes to the impact on workers, Australians are more suspicious.

They shouldn’t be. Research shows that Australia’s trade creates more jobs than it destroys. More importantly, the negative effect that imports can have on employment is weakening over time as more of the things we buy from overseas (mining equipment, IT equipment) are used in what we export (mining resources, education services).

These results are not surprising when trade is properly understood. Trade is about specialisation. It allows us to focus our finite resources (labour, capital, energy, materials) on producing the things we are good at (and that earn us the most money) while importing the rest. Focusing on the jobs lost from trade is to look at trade with one eye open. For every job lost in one area, more have been created in another.

Encouraging stronger competition between firms often attracts the same criticism. The idea is simple: industries protected from competition by government laws and regulations (domestic airlines, pharmacies, the medical profession, the legal profession, coastal shipping and many more) might charge higher prices for consumers, but at least their workers are safe from losing their jobs through cut-throat competition.

Again, the research shows this is completely backwards. Economic theory suggests that stronger competition means increased productivity and more businesses competing to attract workers, both of which result in higher wages and better conditions. And this is what we see in the data. Industries that lack competition not only inflict higher markups on consumers, they also treat workers terribly: they pay them less, are more likely to form anti-worker cartels and push down the share of national income going to wages. Paul Keating’s competition reforms substantially reduced prices — they made electricity 19 per cent cheaper, telecommunications 20 per cent cheaper and milk 5 per cent cheaper — and created more jobs as the reforms took effect.

Technology is the most controversial of the three alleged job-destroyers. By producing cheaper goods, automation has dramatically reduced the cost of living for the most vulnerable. But warnings about its effect on employment have been dire. Up to five million Australian workers might need to find new jobs, according to McKinsey. These aren’t just forecasts, either. The number of workers needed to make a car has fallen from eighty-four during the time of Henry Ford to just a handful in the time of Elon Musk, thanks primarily to automation.

But, again, this is a one-sided view. Advances in technology have created jobs and industries that Henry Ford could never have imagined. The long view of history shows that the disruptions caused by advances in technology have created more jobs than they’ve destroyed. In Australia, despite significant increases in all the things commonly believed to destroy jobs — trade, competition and technology, as well as immigration, population growth and foreign investment — both employment and workforce participation have trended upwards as a percentage of the population over the longer term.

So why are Australians so glum about trade, competition and technology? Behavioural economics suggests a few theories. Topping the list are loss aversion, status quo bias and the identifiable-victim effect. Studies show that people get a greater benefit from not losing something than they got from gaining that thing in the first place; that people have a bias towards the status quo; and that people respond more strongly to a person clearly harmed by an action (for example, the unemployed worker on the TV news) than to a large but invisible group that benefited (for example, consumers).

Politicians know this. So, when they set about selling a reform that has both winners and losers, it’s easier to lie. They sell trade as “boosting Aussie exports” and “opening markets overseas,” and they sell technology as “the most exciting time to be an Australian,” because that way there are no losers. The problem is that there are losers from trade, competition and technological change, and pretending this is not the case prevents politicians from developing the supports and compensation those people need.

The result is exactly what we have today: a patchwork of weak state and federal policies on retraining and reskilling, inadequate and poorly targeted safety nets, and a constant demonisation of the unemployed. It’s only when we accept the facts about trade, competition and technology that we can have an informed public conversation about how to manage their costs and benefits. •

Machine learning https://insidestory.org.au/machine-learning/ Fri, 19 Jun 2020 00:43:42 +0000

Does the federal government’s heavily qualified apology for the robodebt fiasco suggest that more trouble is on the way?

Back in 2015 it was billed as “one of the world’s largest transformations of a social welfare system.” Tony Abbott’s social services minister, Scott Morrison, declared that the replacement of the thirty-year-old computer system responsible for $100 billion in payments to 7.3 million people would “ensure more government systems are talking to each other, lessening the compliance burden on individuals, employers and service providers.”

The simplified system, said Morrison, “will make it easier for people to comply with requirements and spend more time searching for jobs, which is the key element of welfare reform.” Moreover, “this investment will also help us stop the rorts by giving our welfare cops the tools they need on the beat to collar those who are stealing from taxpayers by seeking to defraud the system.”

Those were the days. The government was in its first term and Morrison was an ambitious, gung-ho minister with a fondness for the police and military analogies that had stood him in such good stead in party circles when he was overseeing Operation Sovereign Borders.

Robodebt started the same year — though it didn’t acquire its pejorative nickname until later — and it reflected Morrison’s punitive approach. The increasing potential of automated data-matching — in Morrison’s words, the fact that more government systems were talking to each other — was for the first time making it cost-effective to pursue overpayments of welfare benefits.

The problem was that the systems were talking different languages. Crucially, the tax office was supplying income figures that couldn’t be matched to the benefits people were receiving. The government went ahead anyway, putting the onus on welfare recipients to prove the figures wrong, and causing real hardship and trauma, including reported suicides, among vulnerable people.

A program that had previously reviewed 20,000 cases a year conducted more than 900,000 reviews in the four years to the end of August last year, with 734,000 identified as having been overpaid. Except that many of them hadn’t been. Most of the reviews looked at benefits paid under Newstart and Youth Allowance, though they were eventually extended to other payments, including the age pension, the disability support pension, Austudy and the parenting payment.

Subsequent events have culminated in the government’s promise to pay back $721 million to 373,000 Australians for 470,000 illegally recovered and often non-existent debts. With almost two-thirds of the debts having been reversed, the government’s early depiction of the sunlit uplands looks particularly ironic.

Speaking on the same day as Morrison in 2015, human services minister Marise Payne was just as effusive about how data analytics would inform policy decisions. “Improvements to real-time data sharing between agencies will mean that, with customer consent, their information won’t have to be provided twice,” she said. “Improved data sharing will also significantly increase the government’s ability to detect and prevent fraud and non-compliance. This means customers who [simply] fail to update their details with us will be less likely to have to repay large debts and those who wilfully act to defraud taxpayers will be caught much more quickly.”

When applied to robodebt, this enumeration of the new system’s benefits turned out to be wrong or misleading in every detail. The data shared was not real-time: annual income figures from the tax office were averaged out to compare them with fortnightly benefit payments, producing many wrong assessments. Customers were not asked for their consent: they were pursued to provide information the government already had or was responsible for obtaining.
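
A stylised example makes the averaging flaw concrete. The Python sketch below uses invented figures, someone who earned $1000 a fortnight for half the year and then correctly reported nil income while on benefits; it is a simplification of the logic, not the department’s actual code.

    FORTNIGHTS = 26

    # Hypothetical year: $1000 a fortnight before claiming benefits,
    # nothing (correctly reported) during the benefit period.
    actual = [1000] * 13 + [0] * 13
    reported_on_benefits = actual[13:]

    # The averaging step spreads annual income evenly across the year.
    averaged = sum(actual) / FORTNIGHTS   # $500 per fortnight

    # Averaging implies $500 a fortnight went unreported while on
    # benefits, manufacturing an overpayment that never happened.
    phantom_income = sum(averaged - x for x in reported_on_benefits)
    print(phantom_income)                 # 6500.0

Anyone whose earnings were lumpy rather than spread evenly across the year (students, casual and seasonal workers, for instance) could be saddled with a debt by this arithmetic alone.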

On top of all that, the relatively few perpetrators of welfare fraud are also being repaid their robodebt money because the government finally had to concede that the whole scheme broke the law. That admission came more than two years after Terry Carney made exactly that point as a member of the Administrative Appeals Tribunal. Carney, now a professor of law, has since described robodebt as “illegal, immoral and ill-constructed.”

The government is continuing with “online compliance intervention” — robodebt’s official title — but it will no longer use income averaging and it has promised other “refinements.” It has yet to give a clear commitment not to try to recover some of the same debts by different means. As Morrison put it earlier this month, the decision to refund the money “doesn’t mean those debts don’t exist. It just means that they cannot be raised solely on the basis of using income averaging.”

The government is also forging ahead with upgrading and increasingly automating its welfare payments system. Properly designed — and that’s a big caveat in the light of recent experience — the modernisation should make it easier for people to claim their correct benefits and easier for the government to make sure that they are paid the correct amounts.

The Welfare Payments Infrastructure Transformation project — the one described as among the biggest in the world five years ago — has another two years to run. Services Australia, formerly the Department of Human Services, claims that it has made practical improvements, including introducing prefilled claim forms using information already available to the government, enabling claims via mobile devices and verbally, and speeding up claims processing for some students and the unemployed. It boasts that the number of questions on the online claim form for students and trainees has been reduced from 117 to thirty-seven.

But, as the robodebt experience demonstrates, many of the new system’s claimed advantages are double-edged. The “digital assistants” introduced to answer customer questions, for example, mean less human interaction, which is reflected in staffing reductions that have already taken place. But most of us already know just how frustrating it can be dealing with digital assistants.

Similarly, analytics will be used to “proactively provide support to those who need it.” And also to take it away? A new “payment utility platform” promises same-day payments but also “simpler debt repayment processes.” In the wake of robodebt, how many people will be keen to use it?

A new “entitlement calculation engine” will determine payment levels. And if a person wants to challenge the calculation? Presumably they will be expected to sort it out with one of the new digital assistants.

It is one thing to increase automation for people well versed in the ways of the digital economy, but it is entirely another to impose it on vulnerable people who may or may not be familiar with online processing. As Australian Council of Social Service chief executive Cassandra Goldie told Inside Story this week, “Robodebt fundamentally failed because we stripped out the ‘human’ in human services. Instead it was up to individuals to try and prove their innocence in a David versus Goliath battle with automation.” For Goldie, humans must have a role in decisions about essential services like income support and “we must build in ways to enable people to easily correct decisions where mistakes have been made.”


In the end, it is how the system is designed that will determine the nature of the experience for its users and how much emphasis is placed on ferreting out suspected wrong claims.

Judged by the guidance from the top, the bias will be towards limiting entitlements. In 2018 the government introduced ParentsNext to add another layer of obligations to those already imposed on parents on low incomes who receive parenting payments. According to a Senate committee report, one in five parents had their payments suspended for missing appointments or failing to participate in “pre-employment” programs under this new scheme.

The social services minister who promised in 2015 to use the modernised welfare system to sool the “welfare cops” on to beneficiaries is now the prime minister who says the refunded debts still exist.

Having initially refused to apologise for robodebt for fear of legal liability, Morrison thought better of it and, in response to a pointed question from Bill Shorten, assured parliament of his deep regret for any hardship caused. But government services minister Stuart Robert immediately added that 939,000 Australians had $5 billion worth of debt “that the government lawfully has to collect across a whole range of programs.” Message? We’re still coming after you.

In her 2017 book Automating Inequality, American political scientist Virginia Eubanks describes how digital eligibility systems, matching algorithms and other tools have been used in the United States to drastically cut the welfare rolls. “At their worst these systems act as empathy overrides, allowing us to turn away from the most pressing problem of our age: the life- and soul-threatening legacy of institutional racism, classism and sexism in America,” she said in a speech last year. “They allow us to ignore our moral responsibility by replacing the messiness of human relationships with the predictable charms of systems engineering.”

It doesn’t need to be that way. But governments will have to resist the temptation to succumb to the convenience of allowing machines to make decisions that require judgement, compassion and humanity. •

Smart harvest https://insidestory.org.au/smart-harvest/ Thu, 11 Jun 2020 06:38:11 +0000

Pacific islanders are responding to disruptions to food security with cultural solidarity and new technology

The relative isolation of Oceania has limited the spread of Covid-19, leaving most island nations free of confirmed cases and reversing the early surge in Fiji, Papua New Guinea and New Caledonia. But the effective quarantining of island populations has been a double-edged sword. With international air and sea transport disrupted, overseas tourism has collapsed, hitting wage employment particularly hard in countries like Fiji, Vanuatu, Palau and Cook Islands.

Most Australian and NZ coverage of the crisis has highlighted the role of defence forces in supplying aid to the Pacific islands, and the competition for influence with China. There’s been little news of how local organisations are ensuring food security for the urban unemployed and people previously reliant on overseas supply chains.

Non-government, church and community organisations are supporting the poor in urban centres, networking with rural communities and promoting healthy, local foodstuffs. They are not only drawing on Pacific traditions of reciprocity, family and sharing, but also tapping into new technologies, organic farming and social media.

Development consultant Feiloakitau Kaho Tevi, a former general secretary of the Pacific Conference of Churches, highlighted the importance of family and community in a recent interview for the Global Research Programme on Inequality. “Families in Tonga have distributed their root crops freely in trying to help those in need,” he said. “A barter trade market on the internet is exploding in Fiji where individuals are exchanging goods and services, trying to help each other fare through these difficult times.”

Stories like these are coming in from many Pacific islands, Tevi said. “In some sense, it is not surprising that Pacific islanders react as such, given our communal living and our sense of caring for the other.”

Many people have responded with resilience and creativity — setting up barter networks for those without cash, shifting from export crops to local markets, returning to the village to work on family gardens and, above all, planting, planting and planting. “Our reactions to the pandemic, by far, have been more localised; falling back on our strengths as Pacific islanders: our sense of reciprocity and community living; living off our land,” said Tevi. “It was a consolation of some sort that the solutions to our ‘hardship’ are to be found in our own plantations and villages.”

This sentiment is echoed by the secretary-general of the Pacific Islands Forum, Dame Meg Taylor. When I spoke to her for Islands Business magazine, she welcomed international assistance, but highlighted the local mobilisation across the region: “After health, there’s going to be recovery around food security and environmental security. I think in the bigger islands, one of the good things is that everybody is planting and going back to our natural resources to feed ourselves. My own family and community in the Highlands of Papua New Guinea are getting their gardens going, so if there’s a long period of isolation, they will survive.”


Although the Solomon Islands has recorded no cases of Covid-19, the island nation had its own shock in April when Tropical Cyclone Harold hit, with devastating consequences. The government announced a state of emergency and the associated economic downturn has seen many people leaving the capital, Honiara, to ride out the crisis in their home villages on outlying islands. People in town are turning to family connections for support, and using social media to promote exchange and barter.

Alex Haro, principal of the Woodford International School in Honiara, joined with a group of friends to establish Trade Bilong Iumi, a Facebook page that allows people to barter and exchange necessities during the downturn.

“I started Trade Bilong Iumi because I had a lot of friends who had financial difficulties, so we came up with the initiative of this Facebook page,” Haro tells me. “Basically, there is no money involved, just the exchanging of goods and services. This is for Solomon Islanders if they have problems with their finance — this is their platform.”

Use of the page is gradually increasing. “For example, there were people from the [Weather] Coast, they actually needed some taro. So, they went fishing and then went on the Facebook page and said, ‘We’ve got some tuna and we need some bags of taro or cassava’ — and they actually exchanged the goods.”

For Haro, social media can build on existing cultural values among Melanesian communities. “This is what we have been practising back in the olden days — that’s how our ancestors have survived,” he says. “Our wantok system is very different to the Western world where you look after yourself, but here it’s about the community. If someone’s got a problem, then the brother or the sister or the aunty will step in. That’s how we survive.”

In other countries, activists are using social media to establish non-commercial barter networks, especially for people who have lost their jobs in the waged economy.

In Suva, the Barter for a Better Fiji group has 170,000 supporters and more than 4400 members on its public Facebook page. Administrator Marlene Dutta set up the site to encourage people who are doing it tough to connect with others. “Back in the before when money was sooo tomorrow,” say the organisers, “our ancestrals lived by exchanging what they had for what they needed. Easy eh? How about we do that again now? Some smart gang already doing it one-on-one style… but what if there was a space for everyone to trade? Well folks, this is it.”

In response, people have posted requests for food, clothes or other items, offering to barter an eclectic mix of goods: “My daughter’s tricycle for groceries (Rewa powder milk; 2kg sugar; 4kg rice, 2 tin tuna, eggs, oil, Maggi noodle etc)”; “One rooster to exchange with 2 x stereo speakers”; “A metal sink for fish and cassava”; “Seven kilos of waqa [kava] for a good smart phone.” One person has even offered tattoos in exchange for goods.


A different sort of pandemic-era scheme is running in Lautoka, Fiji’s second-largest town. Widely known as “Sugar City,” Lautoka is located in the sugar-cane belt on the west coast of the main island, Viti Levu. It was the site of Fiji’s first sugar mill, built by indentured labour from India and Solomon Islands and launched in 1903 by the Colonial Sugar Refining Company.

Lautoka also recorded Fiji’s first confirmed cases of Covid-19, after a flight attendant from Fiji Airways was diagnosed on 19 March. Within three days, two members of his family were also diagnosed with the disease.

Having already banned cruise ships and restricted international air travel, the Fiji government moved to quarantine Sugar City to limit the possibility of further community transfer. During the initial two-week lockdown, police roadblocks prevented people from leaving the city, except for essential travel.

“When the lockdown was announced, we thought we were just shutting the office and going home,” Sashi Kiran tells me. “But after a couple of days it was very obvious that people in Lautoka who were dependent on the city — hawkers, casual workers, wheelbarrow boys and other people with day jobs — were asked to stay at home at short notice. People who live week to week or even day to day were asking for food.”

Kiran is director of the Foundation for Rural Integrated Enterprises and Development, or FRIEND, a non-government organisation that has run programs on socioeconomic development, health and welfare in Lautoka for nearly two decades. Kiran says the overnight lockdown of the city created immediate problems for the poorest members of the community.

“Within days we partnered with organisations like the counselling body Empower Pacific,” she says. “Eighty per cent of the calls were people asking for food, and we also had the challenge of people not being able to access their medications. We asked for public assistance and people were very generous and we started doing food distribution. Unfortunately, it was raining very heavily because of Tropical Cyclone Harold and people couldn’t come outside. Our people were going out to impacted areas and to homes to deliver food, so we’ve been on the ground since March.”

Even before Fiji was hit by the double whammy of the coronavirus pandemic and the category-five cyclone, food security and good nutrition had been an issue for some rural communities and people living in peri-urban squatter settlements. The country has significant rates of non-communicable diseases, and studies around the world are showing that the risk of severe illness from Covid-19 is compounded by obesity and diabetes.

During the pandemic, lack of access to food or cash has created new pressures. In response, FRIEND has expanded existing programs to help people grow nutritionally diverse food, to ensure that children don’t face malnutrition.

“For people in town without land, we’ve been doing training on how to grow food in sacks or containers,” says Kiran. “Access to land in the squatter settlements, including the poorest communities, is a major challenge. They don’t have resources where they can plant. Sometimes when we reach people, they say, ‘My children haven’t eaten for the last three days.’ At that time, because of the cyclone, the rain and the Covid lockdown, they couldn’t even go to the shore to fish.”

Lautoka City Council responded to NGO requests for land with two blocks, including almost a hectare near some of the squatter communities. “The youth are preparing that land and planting,” says Kiran. “With this communal garden, the youth will be able to harvest and give people the food they need.”

The Covid-19 crisis is creating opportunities for young people to develop businesses around sustainable agriculture and nutrition. Youth entrepreneur Rinesh Sharma founded Smart Farms Fiji in April, and has been marketing basic hydroponic systems for households without land to grow leafy foods and vegetables, to supplement their diet.

Non-government organisations are also reaching out to rural communities, to support urban workers who have lost jobs and income during the current crisis. “We’ve also spoken with i-Taukei landowners and Indian farmers, and some villages have allocated large pieces of land, five acres or ten acres, to grow food,” says Kiran. “This is getting ready for people from the tourism industry who have lost jobs and who are coming back to their home village.”

In one case, she says, people from Tailevu brought food to people from their villages who are living in Lautoka. “Through these communal gardens, the surplus can be shared with their own people.”


Before the crisis, Pacific governments were supporting farmers’ networks through training and agricultural extension programs. Regional intergovernmental organisations like the Pacific Community, or SPC, have made food and water security a central element of their work on disaster preparedness and climate adaptation. For many years, the SPC has been testing new crops that can withstand the extremes of drought, flooding and salinity brought on by climate change.

In the Marshall Islands, for example, the SPC has been supporting the Readiness for El Niño project since 2017. Women from outlying drought-prone islands like Ailuk and Kwajalein have established community nurseries, introduced improved soil management and drought-resistant crop varieties, and expanded water storage. Since the Covid-19 lockdown, new initiatives such as the Seeds for Life project, implemented by the SPC and Manaaki Whenua Landcare Research, have improved access to planting materials in Kiribati, Samoa, Tonga, Fiji, Tuvalu and Vanuatu.

This government work is complemented by the grassroots farmers’ networks of the region-wide Pacific Farmers Association. These local groups have encouraged the development of seed banks, communal gardens and organic farming, while seeking to improve livelihoods and food security for smallholders and village-based farmers. The long-established networks are all the more important today, as unemployed people move back to the provinces to clear land and make gardens.

For twenty years, the Kastom Gaden Association, or KGA, has been supporting farmers in villages as well as urban settlements around Solomon Islands. KGA developed sup sup gardens (backyard plots) in Honiara’s settlements, and its Planting Material Network has nearly 3000 members across the country.

“Kastom Gaden has already created gene banks or germplasm centres in the provinces,” KGA coordinator Pita Tikai explains. “We had some partners that we worked with to establish germplasm collections, like a seed garden. Farmers can access some planting material, especially at this time where people are going crazy looking for seeds, looking for planting materials in order to grow things.”

KGA hasn’t so far seen food shortages, Tikai says, “but you can see people going round who have lost full-time jobs, so they are resorting to making backyard gardens. People are looking for seeds, people are looking for planting materials. Currently we haven’t got this full lockdown, but people are wondering what the future will be like. People are getting gardens so they will have food stocks if we have a real crisis and confirmed cases [of Covid-19] and the government suddenly gives us a total lockdown.”

The disruption of transport has halted some agricultural exports, along with imports of crucial farming resources like seeds and fertilisers: “Commercial seeds coming into the country are already affected. If you go to shops around town that normally sell seeds, they say, ‘Our orders are yet to come in.’ So here in town, people are flooding to KGA’s main office here in Honiara, asking for nursery seedlings. Our partners are also asking us to raise seedlings that they can supply to their communities.”

Tikai believes that donors and government departments should be working in collaboration with existing networks established by non-government organisations. “I really want the government to work with us, as NGOs, to strengthen these gene banks and seed collections. The government is now thinking about establishing seed gardens, but we at Kastom Gaden already had this network of farmers and seed gardens around the country that people can source planting materials.”

The government’s agriculture ministry has begun distributing some free seedlings, says Tikai, “but it’s time for collaboration between stakeholders, especially from the line ministry, to support us to strengthen this network for when the real disaster comes. If there’s full lockdown, then people can find the materials that they need to survive. That would sustain the food supply and also help avoid a food health crisis that might happen in future.”


Food production is also closely linked to tourism, which makes up more than 40 per cent of the GDP of Fiji, Vanuatu and Palau.

Tourism is also a major earner for the Polynesian nation of the Cook Islands. Despite talk of a “tourism bubble” involving Australia, New Zealand and some island nations, the downturn in tourist numbers has damaged Rarotonga’s burgeoning organic agriculture industry. Growers face collapsing sales to tourist hotels, and are looking for new markets for local production.

According to organic farmer Missy Vakapora, secretary of Natura Kuki Airani, or NKA, the Cook Islands organic farming industry has taken a significant hit as overseas visitors stay away. “The growers that I know are finding it very hard because the majority of them supply the resorts and they are losing money,” she tells me. “For organic growers, it’s often the tourists — whether from New Zealand or America or Australia — who buy our organic produce. So, with the crisis, we’ve lost this market, all up about 60 per cent of our business.”

But there is a positive side. “The majority of us have had to drop our organic prices to normal prices, so now local people have a choice between conventional products and the organic products which are much more affordable than they were before the virus hit.”

Vakapora believes there will be significant shifts in agriculture as long as the pandemic lasts. “The majority of growers are planting short-term crops now, more for the quick turnover,” she says. “There’s a lot more leafy products out there than normal. They’re not growing all the fancy stuff like carrots and radishes that the local people don’t like — they’ve returned to traditional foods like taro, kumara, local snake beans and other local varieties.”

The hit to markets and transport has also disrupted initiatives to expand organic farming in the Cook Islands. In 2015, the UN’s International Fund for Agricultural Development and the SPC came to Rarotonga to encourage a shift to organic production among local farmers. Growers were trained to use certified bio-organic materials and develop the naturally grown teas or herbs that are popular among older Cook Islanders. Farmers soon recognised the need for an organic seed bank in the Cook Islands — an initiative that was almost completed when the coronavirus pandemic hit.

“The seed bank that we’re trying to get up and running is at the prison,” Vakapora tells me. “They actually have a conventional garden, right in the middle of the prison where nobody goes and they decided to go organic. Before the current crisis, it was just starting to get going. Through IFAD and SPC, we got funding for the cooling system for seeds, and we were just about to start generating the seeds for the prison when the coronavirus hit.

“Fiji were just about to send us open-pollinated seeds that were already certified organic, which would have been easier for us to plant at the prison, then harvest and secure the storage for them. However, the virus hit and we couldn’t get the seeds. It’s on hold until the borders open.”

How long will it be until that happens? Until a Covid-19 vaccine is developed and distributed, the global economy faces a long, slow return to pre-pandemic levels of activity. In the meantime, people are looking for more sustainable models of development — and it’s clear that Pacific farmers are even more essential than before to lives and livelihoods across the region. •

The post Smart harvest appeared first on Inside Story.

]]>
Can we break the climate cycle? https://insidestory.org.au/can-we-break-the-climate-cycle/ Mon, 01 Jun 2020 07:10:57 +0000 http://staging.insidestory.org.au/?p=61112

Human psychology might finally be on the side of decisive action to decarbonise Australia’s economy

The post Can we break the climate cycle? appeared first on Inside Story.

]]>
What does the pandemic-induced recession mean for clean energy and decarbonisation? It’s a question lots of people have asked me recently, and it invariably takes me back to a conversation I had in 2007, the year before the global financial crisis.

I had bumped into an old university friend at a mutual friend’s wedding. One of the smartest in our year, he’d ended up in Macquarie Bank’s “millionaires factory,” at the very pinnacle of the economy, sewing up multibillion-dollar deals for airlines, airports and other major infrastructure.

I wasn’t making anywhere near as much money as he was, but I had just spent the most rewarding twelve months of my life working for the peak body representing the clean energy industry. Before that, I told him, I’d felt like I was banging my head against a brick wall. I’d spent the first six years of the decade battling for an emissions trading scheme, or ETS, an expansion in renewable energy and half-reasonable energy-efficiency standards, at every turn rebuffed by the Howard government. And no one seemed to care: while I struggled to get the media interested, the government rode high in the polls.

Then, suddenly, the wall had crumbled. In the last year of the Howard era, emissions trading became bipartisan policy and the Renewable Energy Target was to be expanded to 15 per cent (although rebadged as a Clean Energy Target). After a big battle with the Housing Industry Association we had managed to get a five-star efficiency standard for new homes. Horribly inefficient conventional lightbulbs would finally be phased out. Heck, we’d even got the rebate for solar panels doubled from $4000 to $8000 without asking.

My friend just nonchalantly remarked, “Yeah, it’s the economy. Everything is going well. When people feel economically secure they start to worry about more than just themselves and start to think broader and more long term.”

Given his intelligence and vantage point I couldn’t simply dismiss his observation. But I felt like shouting, “No, you’re wrong! This is a permanent change in perspective. People have now woken up to the fact that global warming is a massive threat to humanity.”

At the time, Australia was in the grip of the Millennium Drought. Water storages in every mainland capital had fallen to near-emergency levels. State governments were pursuing an option previously restricted to Middle Eastern countries: desalinated sea water. City dwellers saw their gardens dying; farmers watched their crops fail and their livestock starve. This incredibly long and severe drought had set people to wondering: this isn’t normal, so why is it happening?

It was as if the severe drought had created in people’s minds something like it was creating in the surrounding bush — a tinder-dry underbrush of questions about what was unfolding with our climate. All it took was a few well-placed matches from the likes of Al Gore’s An Inconvenient Truth and Nicholas Stern’s report on the economics of climate change to set it alight.

A few months later, Kevin Rudd sailed into government with a major climate change agenda. His government ratified the Kyoto Protocol and released a comprehensive plan for an ETS. In the United States, the Republican presidential candidate, John McCain, didn’t just accept the need to act on climate change; his name was on several bills to introduce a national ETS across the United States.

Then the financial world collapsed. Labor continued to pursue its climate policies, but more slowly, distracted by the threat of a deep recession. Business leaders who had been reasonably supportive of an ETS became increasingly strident in their claims that economic disaster would strike if they weren’t exempted from the scheme. A Coalition faction led by Nick Minchin argued that we could no longer risk job losses from the scheme.

In December 2009, Malcolm Turnbull was replaced as opposition leader by Tony Abbott. Less than six months later Kevin Rudd abandoned the ETS.

Then it started to rain, or rather pour. Victoria experienced major flooding in September 2010, Brisbane over the subsequent summer. Desalination plants were mothballed. By 2011 polls were showing that Australians’ concern about climate change had waned considerably.

While Rudd’s successor, Julia Gillard, ultimately legislated an ETS (thanks to pushing by the Greens and independents), it was dead on arrival. Investors could see that Tony Abbott was an unbackable favourite to win the next election, so none of them would risk their money on ventures requiring an ongoing carbon price. Investment in large-scale renewables projects completely dried up as financiers fled in the face of Abbott’s expected axing of the Renewable Energy Target.

What in 2007 had seemed to be a permanent change for the better was revealed to be transitory. It seemed that my friend was right.


The parallels between then and now are striking. Just a few months ago Australia experienced bushfires so extreme in scale and duration that large segments of the population and the media were again wondering out loud: isn’t this the climate change we’ve been warned about?

The fires were preceded and largely facilitated by an extraordinarily severe drought that had also savaged agriculture. They began in late winter and lasted all the way to February, and they were terrifyingly huge. The Morrison government found itself pounded day after day about why it was doing so little to reduce Australia’s emissions and encourage greater global action. All sorts of rumours began swirling that the government would cave to the pressure, including adopting a net-zero emissions target for 2050.

But then it started to rain. Sydney’s water storages surged to almost 90 per cent capacity. Good rains fell across the east coast of Australia. And then Covid-19 came along and we suddenly had other things to worry about.

Now it is possible, indeed logical, for Australian governments to use the need to bring the economy back to life as an opportunity to rejuvenate and decarbonise our electricity system — not simply by building more solar and wind farms, but by creating a national system fit for the future.

The relatively weak interconnections between state electricity systems have always been a problem for competition, and hence prices, and even more so as variable wind and solar power takes a bigger share of the market. Building new transmission capacity would improve reliability and lift competition while simultaneously opening up areas with rich wind and solar resources. Proposed pumped hydro projects dotted through New South Wales, Tasmania and South Australia would balance out the variation in wind and solar.

Another opportunity lies at the smaller scale of households. Ranked according to the proportion of households with solar systems, Australia leads the world. Around one in three households in South Australia and Queensland have a solar system, one in four across Western Australia and one in five in New South Wales. This is a solid foundation for a world-leading roll-out of home battery storage and internet-connected smart controls for appliances to help manage demand. True, batteries are expensive, but so were solar panels back in 2008 when Australia started its path-breaking household solar journey.

This rejuvenation of the electricity system wouldn’t come cheap. But it would be a far more productive use of taxpayers’ money than the $80 billion the government is planning to spend on submarines that might be obsolete by the time they are finished. Given the likely trend to working from home, it’s also a better bet than new roads. Borrowing money has never been cheaper and superannuation funds are desperate for large-scale investments offering long-term returns.

The federal government has taken an interest in some elements of this program. But the many passionate climate sceptics within the Coalition will furiously press the cost-of-living button to combat any proposal to displace fossil fuels. They’ll be on shaky ground, though. New electricity infrastructure could be underpinned by budgetary measures rather than surcharges on energy bills. With interest rates so low, the extra cost to taxpayers would be small and would be substantially offset by enhanced electricity market competition.

My friend was probably right about human psychology, but perhaps it can be harnessed to push along efforts to contain global warming rather than hinder them. Economically insecure voters focused on the short term may be more than happy for the government to take on low-interest debt long into the future if it will provide jobs and economic stimulus in the here and now. •

Funding for this article from the Copyright Agency’s Cultural Fund is gratefully acknowledged.

The post Can we break the climate cycle? appeared first on Inside Story.

]]>
Gmail’s trial by ordeal https://insidestory.org.au/gmails-trial-by-ordeal-2/ Thu, 12 Mar 2020 06:16:25 +0000 http://staging.insidestory.org.au/?p=59526

It’s the error message most dreaded by users of Google’s email service — but the story has a happy ending

The post Gmail’s trial by ordeal appeared first on Inside Story.

]]>
Never mind the toilet paper shortage. I’ll hand over a dozen rolls of the stuff and throw in two boxes of tissues and a half-bottle of hand sanitiser if someone will give me back my Gmail account! I lost it last Wednesday, the fourth of March 2020, and it has hit me hard.

Gmail and I had been working happily together for sixteen years. It stores thousands of my bits and pieces in its electronic guts. Nothing exciting, revealing or incriminating; just useful odds and ends that one refers to from time to time.

Last Wednesday, after an unsuccessful attempt to use the Medibank app on my relatively new iPhone, I got the “Verify it’s you” message when I hit the Gmail icon. Easy: I’ve done it before. Answer the questions, provide your previously lodged alternative email or mobile phone details, wait for a code, enter the code and bingo, you’re back into your account.

But not this time. Having performed the required tasks, I got the dreaded “Account disabled because of suspicious activity” message. Mata Hari no doubt felt the same way when the Sûreté knocked on her door in 1917.

I probed the Gmail beast and found the “Tips to complete account recovery steps” and then the “Why your account recovery request is delayed” page.

It appears that “suspicious activity” is determined by algorithms that also govern your recovery attempts. Get the answer wrong when you’re asked “What was the name of your first teacher?” and you can say goodbye to your account. (Did I tell Gmail long ago that she was “Miss Brown” or just “Brown”? I can’t imagine ever referring to her as “Brown”…)

The official Gmail messages are not clear about what happens to your attempts to recover “disabled” accounts. On the one hand, they suggest that once you have a case number relating to your inquiry, you should wait — three to five days, one page says — for your request to be reviewed.

On the other hand, there’s a suggestion that what’s gone is gone. You won’t see that account again, and the people who are sending mail to it will never know what happened to you. The gas and electricity companies, and all the others, won’t be able to send electronic condolences to your digital funeral. They’ll never know.

That’s the official Gmail line: be patient, there may be hope, but perhaps not much.

The news gets worse, however, when you find user-group sites on the web and enter a world of broken dreams. The begging messages, sent fruitlessly into cyberspace, cry for mercy: “Please, please, won’t somebody at Google help me. I need this account for my exams/rent/medicine.” “Without logging in to that email… I can’t work,” writes another pilgrim in this vale of tears. “I will soon be going hungry.”

Because Gmail is a “free service” (we pay by sowing our data for digital combine harvesters to mow through like wheatfields in the Wimmera), Gmail owes us nothing. There’s no helpline leading to nice young women in Manila. Or good guys in Gurgaon who can sometimes be lured into abandoning their I’m-John-how-can-I-help-you personas to discuss Virat Kohli and fix your problem.

The clearest description I’ve found of my Gmail doomsday comes from Ron Miller, a techie journalist, who had his account blocked in early December 2017. He got it back three weeks later after constantly harassing a public relations contact at Google. “Without special contacts like I had because of my job,” he wrote, “you are out in the cold.” In a later piece he gave a couple of suggestions about how to get an account unlocked, but neither worked for me.

The advice varies on what to do while you wait to see whether the algorithms will be merciful. One school says keep attempting to get into the account. Let them know you care. Another school says don’t try too often or the algorithms will get angry and block you as a digital nuisance.

There are hints that somewhere there may be human beings. One optimist says that if you get a registration number for your request for review that means you are in a queue and, somewhere, life forms are looking at your case. They will, the optimist believes, eventually see that your account is as innocent of “suspicious activity” as a newborn cyber lamb. They’ll free it to gambol again in cyberspace.

Can a person protect themselves against Gmail doomsday? I’ve seen no suggestions, other than to stay away from Gmail and find other providers. Your security questions, backup email addresses and mobile phone numbers are no protection once the algorithms target you. And there’s no way, either, of informing your correspondents that their messages are not reaching you.

So I wait, prodding Gmail every day or so in the most polite way so that the algorithms won’t get cross.

And I’m also being especially nice to our postman. He, at least, has never held back my copy of RoyalAuto for suspicious activity. •

Postscript: Yes, Virginia, there is a Google Claus

Twenty-five days after I wrote this article, Google Claus guided his digital reindeer into my computer and gave me back the Gmail account that dark, algorithmic forces had locked away.

As you’ll know from the article above, my longstanding Gmail account had suddenly told me it was locked for my own protection because “suspicious activity” had been detected. Thousands of useful bits and pieces, lazily left by me in Gmail messages, were now beyond my shaky grasp.

So, every night for twenty-four days I muttered incantations and went through the designated routine to restore a locked account. I’d tax myself on how I had answered the Google security question. Did I say Miss Brown, my Grade 1 teacher, was my first teacher? Or Miss Black, my kindergarten teacher?

Then, on the twenty-fifth day, with hope almost gone, I went through the nightly routine, using Miss Black from kindergarten and an old password. And there it was. The reindeer had landed! There was even a message signed by Lily at the Google Accounts Team.

The moral of the story is that if this happens to you, do not despair. Persevere. And before the worst happens, check your security questions, backup email address and phone details.

There are small pluses with a visit from Google Claus. You don’t need to clean the chimney, and he doesn’t need milk and cookies left out for him. Any cookies will be digital, and of course they’ll be left by him. •

The post Gmail’s trial by ordeal appeared first on Inside Story.

]]>
Will we finally look clearly at facial recognition technology? https://insidestory.org.au/will-we-finally-look-clearly-at-facial-recognition-technology/ Fri, 24 Jan 2020 03:09:27 +0000 http://staging.insidestory.org.au/?p=58734

Revelations about Clearview AI’s harvesting of online images challenge us all to think carefully about this technology’s impacts

The post Will we finally look clearly at facial recognition technology? appeared first on Inside Story.

]]>
Last weekend another new dystopian-sounding facial recognition application hit the headlines. This time, it was a little-known start-up, Clearview AI, which is providing identity-matching software to law enforcement agencies in the United States.

Stories about how facial recognition is being used by law enforcement aren’t that surprising these days. But the Clearview AI revelations, published by the New York Times, made the tech industry sit up. Here was a company that, even in a world of increasingly invasive facial recognition applications, had crossed a line. It scraped the open web, collected billions of photos of people, and built an app enabling users to match their own pictures of a person with the photos in that vast database, with links to pages on the web where those photos appeared.

This kind of application — breathtaking in scale, deeply invasive in implementation — has long been technically possible; it just wasn’t something technology companies were keen to do (or at least, to be seen as doing).

Up until recently, conversations about facial recognition technology haven’t usually gone much further than whether we should or shouldn’t ban it. There has been no middle ground. Supporters are on the side of law and order, whatever that takes; opponents are radical leftists with a disregard for public safety or luddites opposed to technological progress. The many different choices made in designing and deploying the various tools and methods that fall under the umbrella of “facial recognition” — some of them sensible, others careless, some downright ugly — tend to get lost along the way.

Many things are technically possible. That doesn’t make them safe, ethical or useful. It is technically possible to build a three-wheeled car. It just might keel over if you go round a bend at more than forty kilometres per hour. It’s technically possible to manipulate the software measuring a car’s exhaust emissions so that readings are artificially lowered, but that doesn’t mean it’s legally or socially permissible.

Technologies are not monolithic. The design of every product rests on a range of choices and trade-offs. Some products are well designed and conscious of their social and ecological footprints. Other products pose threats to physical safety, discriminate against people, or are designed to cheat. We need to think carefully about how we want technology to be applied — how we want it to be manifested in the world. Facial recognition is no different.

Clearview AI’s facial recognition application wasn’t just bad because it scraped billions of images of people without their knowledge or consent. If the details of the New York Times’s investigation are true, it went a lot further than that. It built software capable of monitoring whom its users — mostly law enforcement agencies — were searching for. It manipulated image search results, and removed some matches. Images uploaded by police were stored on the company’s own servers, with little verification of data security.

Are these things we want? Are these practices okay?

Clearview AI is just the latest in a long line of stories about buggy, inaccurate, invasive and outright offensive implementations of facial recognition. Face-detection settings on cameras that only work on certain faces. Image-tagging software making racist comparisons. Identity-matching databases used to investigate crime consistently misidentifying members of already marginalised groups. Software engineers matching women’s faces with adult videos online, to help men check if their girlfriends had ever acted in porn.

Last week European Union regulators indicated they’re considering a potential ban on facial recognition technology for up to five years — with some exceptions — while they figure out the technology’s impact and the regulatory issues that need to be tackled. Google and Facebook have already expressed cautious support for such a ban.

Some cities have already started curtailing facial recognition: in San Francisco, the government voted in 2019 to ban local law enforcement from using the technology. In New York State, the education department demanded a school district cease using the technology in public schools.

Speaking to the New York Times, one investor in Clearview AI, David Scalzo, was doubtful about the power of any prohibition. Technology can’t be banned, he said. “It might lead to a dystopian future or something, but you can’t ban it.”

It’s true that a technology, once discovered, can’t be undiscovered (though some have been forgotten). But throughout history, societies have temporarily banned the development or certain applications of technologies when it’s unclear whether they will do more harm than good: think nuclear power, or gene editing. Sometimes temporary bans become permanent ones. Sometimes they’re lifted once we’ve used the breathing space to figure out the rules of engagement.

And yes, it’s true that bans can be broken. But technologies don’t break bans — people do. People who do not respect or recognise the concerns of the societies they live in.

Technologies do not lead us into a dystopian future: we decide the future we want. •

The post Will we finally look clearly at facial recognition technology? appeared first on Inside Story.

]]>
You, me, data and the city https://insidestory.org.au/you-me-data-and-the-city/ Wed, 18 Dec 2019 02:49:07 +0000 http://staging.insidestory.org.au/?p=58321

Is the data-rich city taking on a life of its own? And can Hugh Stretton’s Ideas for Australian Cities help us navigate its hazards?

The post You, me, data and the city appeared first on Inside Story.

]]>
It’s time to head to the airport. Bag packed, call the cab. Better make it an Uber — at least you know exactly when it’ll arrive. And your driver’s behaviour is being graded out of five stars, which I’ve found means the trip is more likely to be convivial. I often find myself deep in conversation with Uber drivers in a way that’s rare in a taxi. Perhaps we’re both performing for the algorithm, or at least feeling safer in the knowledge that each of us has a stake in the conversation going well.

I wait seven minutes before the car arrives. My driver this time is Australian-born, and we get chatting pretty quickly. I learn he’s driving Ubers on the side while doing his MBA. While we talk, we also negotiate with the digital map that lurks on the dashboard, giving us instructions on where to go. I have my own favourite routes through the local streets that get me to Sydney’s airport, but the map, fed by data obtained from all of us going about our daily business, has its own ideas.

I tell my driver about a photographer friend of mine, also an Uber driver on the side, who likes to run two versions of Google maps when he’s driving, one within the Uber app and one straight from Google. My friend says the Uber route is just slightly longer, and therefore more expensive, most of the time. That seems hard to believe, my driver says, but yeah, maybe.

My driver and I get to talking about Uber’s opening up to public shareholders earlier this year, which saw the company attract a lower sharemarket price than was anticipated, mostly because of revelations by the company that it was struggling to make a profit from ride-sharing. My driver was pretty well informed about all this. “For Uber, long term, it’s about the data, not the ride-sharing,” he explained to me, not knowing quite how obsessed I was with the company myself, and with the rise of data-driven urban platforms more generally. I nodded and agreed. Yep, it’s all about the data.

We’d probably been reading similar stories about Uber in the news recently. Dara Khosrowshahi, the chief executive, seems to be running a PR campaign about the company’s ambitions to become the “broker of human movement” — specifically, of “people, food, and freight” — in cities. Kind of like the Amazon of transportation, he has been saying. When it was first launched, Uber created a new marketplace for ride-sharing by better connecting people who needed a lift somewhere with people who could trade some, or most, of their spare time for extra money. Uber Eats, launched in 2014, has likewise provided a simple way for people to order takeaway food, by connecting these people with others in their city who would happily deliver it to them for a small fee without having to be tied to one particular restaurant. Uber Freight is coming too, an app that “matches carriers with shippers” in a way that presumably will aim to beat systems people use now to get their freight where it needs to be.

I wonder aloud, to my driver, what kind of “broker” of human movement Uber is. Is it really so “data-driven”? Isn’t this still, really, about human capital, and how it’s being put to use? Uber spends a lot of money on incentivising drivers, “shippers” and food deliverers to use its platform to generate their own income, perhaps swapping what they’re already doing for this gig, or perhaps squeezing it into their spare time to make some extra cash. The amount of money Uber spends on marketing and user incentives is why it doesn’t yet make money on ride-sharing, despite the considerable fees it charges drivers, food deliverers and restaurants to use the platform. Being big, being part of more and more urban interactions taking place across every city around the world, is what matters most here. I put it to my Uber driver: Isn’t that just classic rent-seeking behaviour?

The difference here is the data, he explains to me. He says “dayta,” like an American, not “darta” like most Australians. You can’t work in tech or business these days and get away with “darta.” Uber can make all kinds of uses of the dayta created by users — which means, he explains, their brokerage service is not non-productive, in that classic rent-seeking sense. They’ve accelerated into self-driving cars, and they can also use all that dayta they’re generating to capture a granular sense of how cities really work.

Yes yes, that dayta.

It’s one of those conversations that pokes around some big topics, but won’t jump fully in. My Uber driver and I know we’re only going to be speaking for a few more fleeting moments before I jump out of the car with a cheery goodbye, trying not to slam the door too hard lest I damage my rider rating. Good luck with everything! I say.

Once out of the car, the Uber app hits me with a rating request. How was your ride today? I enthusiastically tap on the five-star option, and choose “Great conversation” to describe why my trip was so great.


The experience I’ve just recounted is not quite true — it’s more an amalgam of many different Uber trips I’ve taken over the past few years, and the conversations I tend to strike up with drivers. During much of Uber’s life in Australian cities, I’ve been reflecting on how platforms of this kind have been affecting the way we live in cities together.

Despite all kinds of misgivings about the kind of company Uber is — a massive, US-based outfit that fleeces the very people it seeks to “partner” with to sell its technology, and puts taxi drivers like my neighbour out of a job — I’ve become fascinated by the kinds of interactions it and similar companies have introduced.

There’s a sense of heightened sociability between strangers that seems to occur, perhaps because we’re protected by that threat of algorithmically generated banishment if either party takes a misstep. Or maybe it’s the kind of people who have been quick to take up Uber as a transport option. As an iconic company of the “gig economy,” Uber often attracts people who aren’t looking to drive cars for the rest of their lives — which means you get to chat with people like my Uber driver, who also happened to be studying for an MBA and thinking about the future of data platforms.

Uber is also, in many senses, the realisation of a quite radical idea advocated for many years by sustainability planners. What if we could get people to stop thinking of transport through the lens of the privately held motor vehicle and instead encourage them to share the driving experience? Couldn’t this cut the number of cars on the road, and free up space for other kinds of uses?

Like the car-sharing company GoGet, founded by sustainability advocate Bruce Jeffreys, Uber advocated its way into our cities as an innovative way to get people to move around them differently. Assets once considered purely private could become shared resources. Instead of ownership being the goal, we could reduce consumption and shift towards an economy based on access.

Apart from GoGet, these companies haven’t focused on creating cities that use fewer of the planet’s scarce resources. For Uber, creating a technology platform to improve human movement was more a way to grow a company quickly — really quickly, faster than any company before, generating huge benefits for its business leaders, investors, and shareholders.

For a company like Uber, the future of cities, and how we live in them, is primarily about the possibility that digital infrastructure built today will stick around as a foundational platform for future generations, in cities all around the world, to use as their first choice. Time to go to the airport? Better Uber it. Getting people around the world to use your company name as a verb to describe some of the most basic things we do in cities is, really, the ultimate, multi-billion-dollar goal. Want to know where you’re going? Better Google it.


Those who think about cities and digital innovation are often people like my Uber driver, busy coming up with start-up business ideas in this brave new world of tech-driven urban interaction. Many, it seems, focus their minds on new ways to do takeaway food. After all, people are time poor, but they also need to eat. Why not better service the needs of those who not only lack the time or wherewithal to cook, but would also prefer not to have to actually go fetch their order?

The success of digital food delivery apps has probably caught your attention. Australia is, it seems, becoming “an Uber Eats nation,” as one journalist puts it, with online food delivery services now worth 12 per cent of all sales in the $44 billion cafe, restaurant and takeaway food industry, and one in three adults living in Australian cities reporting use of food delivery apps.

No wonder, then, that moving around our major cities now seems to involve a lot of interaction with bike-riders carrying large square boxes of food to time-poor customers. A relative of mine has told me that university campuses, like the University of Sydney and UNSW, are hotspots for these delivery services, as many overseas students prefer the ease of using an app to order their lunches rather than having to negotiate campus food options.

Australian digital entrepreneurs are hoping to cash in on the trend. Two founders of a company called Kloopr have created an app that allows anyone travelling from point A to point B to become a delivery driver. Just think: you might be able to earn a bit of money on your way home, simply by checking whether one of your neighbours has ordered takeaway. Others, like Bring Me Home, let you buy and pick up discounted surplus food from nearby cafes, restaurants, bakeries, groceries and supermarkets. This company is targeting Australia’s food waste problem in a way that’s also attractive to those who would prefer not to cook tonight or go out.

In other words, today’s digital platforms have made Australian cities attractive spaces for disruptive new ideas about how to connect people differently. Most Australian citizens are now equipped with their own GPS receiver, bundled into the shiny, glowing, advanced computational device they carry around with them everywhere, otherwise known as the smartphone. On that phone are likely to be abundant maps, apps, recording devices, listening devices, maybe fitness trackers, maybe also air quality monitors. The phone many of us are carrying with us may also be listening to us in various kinds of ways — whether through the tiny microphones used to listen in on conversations (who knew?!), or by “listening” in the sense of analysing the information we churn through as we go about our daily business.

The translation of myriad kinds of urban interaction into data points for more sales is what attracts the brightest and the wealthiest to thinking about all kinds of new ideas for Australian cities. How can all this information be used to build new businesses, sell more, but also make our cities “smarter,” more responsive to infrastructural breakdowns, “closing the loop” between human interaction (or malfunction), infrastructure, services, and utilities?

Those people who come up with the best and brightest ideas for using data to make Australian cities work better are showered with investment money to help them scale as fast as possible. One such company, Neighbourlytics, offers “the data you need, to create cities people love.” Founded by two Australian women, this urban platform offers “simple ways to collect and understand rich digital data about what makes places thrive” by using social media data to capture community sentiment about a place. It’s particularly useful for real estate companies and city leaders who want to “see places through locals’ eyes.”

Countless other digital platforms are also vying to change how we learn about, manage, govern, experience, connect and interact with each other in cities. For each, it is the data — the dayta — that drives innovation and new ideas about Australian cities. If data is the new oil, cities are the new goldmines, ripe for data mining, machine learning, behavioural nudging and, ultimately, value extraction.


Many who work in urban tech these days tend to think the possibilities offered by information technology are quite new. This, like my Uber story, is only partially true. Certainly, all the computational innovations that underpin our digitally mediated experience of cities today are new. But, at the same time, this way of seeing cities has its own peculiar history. In previous decades, it was spurred on by ideas from cybernetics, emboldened by the potential of clever “counting machines” to decode the complex webs of interaction that make up a city.

Will lots of things be missed? Historian Hugh Stretton (photo: University of Adelaide)

Despite their novel techniques, many urbanists railed against these computer-mediated visions, not because they weren’t passionate about better understanding complex urban problems, but because they worried what kind of city this way of seeing would bring into focus.

One such worrier was the Australian urbanist and historian Hugh Stretton. Paying close attention to the relationships between urban form, urban marketplaces and diverse urban sociality, Stretton was ambivalent about the use of “information” as a lens through which to understand Australian cities. In his 1970 book Ideas for Australian Cities, now half a century old, he reflected on what he described as a kind of urban “ideology” that looked to create objective measures of urbanity to plan and manage Australian cities. He wrote:

What are cities, essentially? They are systems of intense, hyper-efficient interaction. Interaction is quintessentially the transmission, reception and exchange of information. The basic unit of information is the simple clause or image, the “bit.” The basic unit of interaction is the transmission of one bit from one human to another. Call this basic transaction a “hubit.” Private, face-to-face hubits are not countable. But the public channels of communication are all metered, one way or another. Count the hubits they carry. Weight them for distance carried. Divide by time and population, and you have indexed the intensity of interaction. Indeed, you are on the way to a universal, abstract and reliable measure of urbanity, and a general theory of it. You also have a political program: to maximise urbanity.

Stretton saw problems in this way of seeing cities. It proposed that they could best be understood through the lens of science, specifically “systems analyses” that used mathematical methods to understand and manage things like traffic flows. These were fine, in some instances, but shouldn’t be used as the basis from which to understand other things about cities. The risk, as Stretton saw it, was that only some things would get counted. If cities are intense interaction systems, the kinds of interactions we would pay most attention to might end up being those that could be counted most easily. Lots of things might be left out:

Making love is an interaction; so is a business deal or a visit to the doctor or a sparkling conversation about art in somebody’s salon; so is every jostle on a crowded pavement, every bit of unwanted commercial soliciting, every exchange of complaints about the noise or pollution or segregation of the city; so is every eviction, extortion, blackmail threat, sale of dope, or crime of violence in the city.

In this way, as Stretton put it, “Objectivity begets its own politics.” Certain kinds of interaction may grab most of the attention, simply because they can be counted, and the counting of them becomes beneficial to some parts of society, but not others. So, to Stretton, here was the basis of a political program: “to maximise urbanity.”
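
For readers who want Stretton’s arithmetic spelled out, here is a minimal sketch of the index he lampoons — a toy rendering in Python, with hypothetical channels and figures, not anything Stretton or any planner actually computed:

```python
# Toy rendering of Stretton's satirical "urbanity index": weight each
# metered "hubit" by the distance it travelled, then divide by
# observation time and population. All figures below are hypothetical.

def urbanity_index(channels, hours, population):
    """channels: list of (hubit_count, distance_km) per metered channel."""
    weighted = sum(count * distance for count, distance in channels)
    return weighted / (hours * population)

# Three imaginary metered channels: phone calls, letters, broadcasts.
channels = [(1_200_000, 5.0), (300_000, 12.0), (50_000, 40.0)]
print(urbanity_index(channels, hours=24, population=100_000))  # ≈ 4.83
```

What unit that 4.83 is in, and everything it fails to count, is of course Stretton’s point.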


To many of today’s urbanists, who look to the potentials of “smart cities” and data-driven methods of managing infrastructure and service provision, Hugh Stretton’s cantankerous views might feel a bit old-fashioned. Certainly, Ideas for Australian Cities is not easy to find (I had to borrow my father’s copy in order to re-read it for its coming anniversary). His reflections on information and urban science are tucked away in chapter six, “Ideology,” after a series of quite quotidian reflections on the strengths of mid-sized Australian cities like Adelaide and Canberra.

And yet, re-reading his critical reflections on what we might call urban informationism today, Stretton’s writing feels urgent, and altogether necessary. With a wave of urban apps hitting the streets, encouraging us to stay home, watch Netflix, stay away from restaurants, and keep swiping, we’d do well to remember that all this data we’re producing, through our myriad ways of interacting digitally, may benefit particular ways of being “urban.”

Even if they are mightily convenient, these services may, in the end, not be in our best interests. During these years I’ve spent observing and writing about this new wave of “urban apptivism,” I’ve noticed how many of the best, most responsive digital platforms are those built by technology companies that have access to very large amounts of user interaction data. And they favour certain kinds of interaction over others.

Take Uber. Over fourteen million trips occur on Uber’s platform each day, by people like me and my MBA-studying driver, whom Uber counts as a “driver partner” of the company, a “micro-entrepreneur” out to hustle. Uber likes our trip to be considered a mutually beneficial transaction between two individuals in a marketplace — not a service delivered by a global multinational company. The user experience Uber offers us is also unparalleled among Australian taxi apps — no surprise, considering the $22 billion in investor finance ploughed into Uber over the past decade.

As Uber extends its muscle into the hospitality business, Australian restaurateurs are now experiencing something akin to what Australian news media outlets have gone through, in the wake of Google and Facebook, recently the subject of the Australian Competition and Consumer Commission’s digital platforms inquiry. They are finding their capacity to reach customers increasingly depends on nifty apps, which have figured out how to connect buyers and sellers, or readers and writers, in highly location-aware, responsive and “human-centred” ways.

With teams of global developers working to ensure the best user-interaction experience possible, it’s not surprising Australians love these apps. An ex-Google employee compares them to casino slot machines. With all this swiping going on, the data exhaust of our urban lives can in turn be ploughed into creating new, cleverer ways of interacting with each other. Like Stretton said: this is a program to “maximise urbanity” — except it’s an urbanity that amplifies the intelligence of machine-learning systems but doesn’t care much about our high streets, so long as we’re all interacting.

When I re-read Stretton today, I can’t help but wonder: who is championing our cities with these ideas in mind? His Ideas for Australian Cities influenced a generation of urban planners and policy-makers, including people like Tom Uren, who led bold interventions on behalf of the federal government to protect and champion particular mixed-use precincts in cities like Sydney, Fremantle and Hobart. This wasn’t anti-development, it was strategic intervention to protect what worked best.

Today’s cities likewise need a strategic government intervention to better shape their data infrastructures. Specifically, we need a new deal on city data, to ensure urban digital innovation doesn’t also mean digital feudalism.

And if Australian cities are going to be hotbeds of data-driven innovation, we would do well to remember Stretton’s cautionary words. To remember to cherish that which can’t be counted as possibly among the very best things that make our places thrive. •

Funding for this article from the Copyright Agency Limited’s Cultural Fund is gratefully acknowledged.

The post You, me, data and the city appeared first on Inside Story.

]]>
Australia versus big tech https://insidestory.org.au/australia-versus-big-tech/ Sun, 08 Dec 2019 22:19:08 +0000 http://staging.insidestory.org.au/?p=58173

Australian policymakers don’t share technology companies’ belief in a borderless world

The post Australia versus big tech appeared first on Inside Story.

]]>
The world’s largest and most powerful technology companies are getting used to pushback from governments and regulators around the world. Yet the evisceration they received when home affairs minister Peter Dutton took to the stage at the National Press Club in October last year may have taken them by surprise.

In a speech crafted to hit the US-based giants where it hurt, a pugnacious Dutton asked why the companies were opposing his government’s request for access to decrypted messages carried by services such as Signal and Facebook’s WhatsApp, given they had few qualms about doing business with the world’s most repressive regimes.

These companies were operating “in less democratic countries and accepting… a compromise on privacy to allow their presence in these growth markets,” he said, without naming his targets — although Google’s work on a censored search engine in China can’t have been far from his mind. They are “the same companies that need to be hounded to pay tax in Australia and other jurisdictions,” he added, “and the same companies who have misused personal data to commercial advantage.”

As knife twisting goes, Dutton was in top form. The “misuse of personal data” line was a reference to the Cambridge Analytica scandal, in which Facebook revealed it had allowed a British political consultancy to harvest the data of millions of users — including more than 300,000 Australians — without their consent. As for tax dodging, he was obviously alluding to Apple, Facebook and Google sidestepping European Union tax regimes by moving their profits to Luxembourg, Ireland, Malta and other zero- or low-tax jurisdictions.

Why the sudden antagonism? What had the tech companies done to deserve this broadside from one of the government’s most powerful members?

All we can be sure of is that the tech companies had been fighting hard behind the scenes to scuttle the encryption legislation, correctly arguing that the laws would have created “back doors” into encrypted messaging systems. To grant Australian law enforcement agencies this access, the companies’ lobbyists said, would build weaknesses into secure communications that could be exploited by criminals and hackers the world over. It would undermine legitimate and important uses of encryption — for example, in the transmission of medical records.
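
The substance of that argument is easy to sketch. Below is a toy illustration — hypothetical throughout, and nothing like how Signal or WhatsApp actually manage keys — of why security engineers treat a mandated “exceptional access” key as a weakness for everyone: any copy of the escrowed key decrypts every message, whoever holds it. It assumes Python’s cryptography package.

```python
# Toy sketch of the "back door" objection to key escrow.
# Hypothetical only: real end-to-end messengers keep no copy of user keys.
from cryptography.fernet import Fernet  # pip install cryptography

user_key = Fernet.generate_key()
escrow_store = {"alice": user_key}  # the mandated "exceptional access" copy

ciphertext = Fernet(user_key).encrypt(b"patient records in transit")

# An agency with a warrant can decrypt -- but so can anyone who steals
# or leaks the escrow store, which is precisely the lobbyists' point.
leaked_key = escrow_store["alice"]
print(Fernet(leaked_key).decrypt(ciphertext))  # b'patient records in transit'
```

Once such a store exists, its custodianship, not the cipher, becomes the weakest link — which is why the companies argued the damage would be global rather than confined to Australia.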

But the policeman-turned-minister wasn’t buying it — a point he made both in speeches and in submissions to the parliamentary committee reviewing the legislation. What was the difference between police or intelligence agencies intercepting a phone call and intercepting a message sent on WhatsApp? The only difference between the two, as far as he could tell, was that Silicon Valley said they were different. And if big tech had a problem with back doors — well, too bad.

“If a criminal had a handwritten plan detailing a paedophile network he had established, the police could obtain a warrant to enter the house and seize the handwritten note as evidence,” Dutton said. “If the same criminal typed the same detail of the plan and sent it via a text message or email, the police could again obtain a warrant and recover the text as evidence.”

But if the “exact same detail of the paedophile network was sent via an encrypted messaging service, like Wickr or WhatsApp, the police would not be able to recover the information,” he went on. “It is of course an absurdity because the clear advice from law enforcement and security agencies is that we are now losing our edge to criminal enterprises.”

What the US tech companies may have failed to grasp, and with them their local lobby firms — which include Nexus APAC, TG Endeavour, Hawker Britton and Barton Deakin, according to the transparency register — is that they were up against a core value of modern Australian conservatism. Ever since the 2001 Tampa affair, Australian centre-right governments have staked their claim to policies that centre on notions of state sovereignty; it’s a world in which laws apply, unaltered and undiminished, in every inch of Australian territory.

Critics, of course, would dispute the premise that a democratic country’s sovereignty is absolute. Every international defence treaty, every trade agreement or reciprocal immigration deal we sign brings with it a lessening of sovereignty. Yet the tech companies’ argument that their operations are supranational, that their technologies and content can’t be tailored to the requirements of national regulation, was never going to fly. Dutton wasn’t merely bringing vocal companies to heel over Australian law enforcement agencies’ right to crack encrypted messages; he was sending the message that no global company operating anywhere from Christmas Island to Tasmania’s South East Cape was beyond the reach of Australian laws.

In his showdown with Silicon Valley, Dutton didn’t blink. In spite of legitimate concerns about the lack of independent oversight of the encryption laws and their impact on local tech start-ups, by the end of 2018 the new rules had been rammed through in an eleventh-hour parliamentary sitting. The government has promised to review the legislation, but big tech and civil liberties advocates concerned with the law’s privacy implications suffered a loss from which they appear unlikely to recover.

But Dutton’s feud with US technology giants had only just begun. When an Australian gunman entered two mosques in Christchurch in March, allegedly shooting dead fifty-one people, he broadcast his actions live on Facebook. For whatever reason, the platform dragged its feet in removing the content, and graphic images were beamed into countries around the globe, including Australia. This event would unleash one of the most forceful regulatory backlashes a US digital company has yet witnessed, anywhere in the world.

Within weeks of the killings, the Australian parliament had adopted laws that included prison sentences for local employees of digital platforms that failed to remove “abhorrent violent material” in an “expeditious” way. Local employees of Facebook and Google could wind up spending three years behind bars if they failed to move fast, no matter where the offending content had come from.

These laws marked the nadir of the relationship between Canberra and big tech. The prospect of company employees ending up in the slammer over something uploaded by a kid in Uzbekistan was shocking — particularly in the light of the legislation’s fuzzy reference to the “expeditious” removal of “abhorrent” content. But even more concerning for the platforms was the underlying logic of the legislation, which was contemptuous of the tech giants’ argument that they simply don’t have the power to tailor global content to suit national regulatory requirements.

The message from Canberra was that it wanted to regulate the global platforms as though they were Australian television broadcasters. “Mainstream media that broadcast such material would be putting their licence at risk and there is no reason why social media platforms should be treated differently,” attorney-general Christian Porter said at the time.

It’s this recasting of their role that the platforms object to. The shift is evident in the apparent success of Australian newspapers’ campaign to have policymakers view digital platforms as publishers of content — a definition at odds with the platforms’ argument that they are merely neutral conduits linking readers to media content. And with the abhorrent violent material laws, the Australian government is telling the world that the regulatory imbalance between platforms and television broadcasters is coming to an end. If Facebook or, say, Twitter’s Periscope wants to be in the business of pumping out video content, then it should expect to be regulated as though it were a fully fledged local TV company.

Politically, this tough stance comes at zero risk. News Corp has been an outspoken critic of the platforms in its submissions to the Australian Competition and Consumer Commission’s digital platforms inquiry — an inquiry that has produced a damning account of regulatory failures in dealing with Facebook and Google. MPs on both the left and the right of the political spectrum appear to agree that tougher regulation is needed, and the platforms’ argument that their content is supranational — that it can’t be edited country-by-country — is being dismissed if not ridiculed. If Facebook has the technology to target advertising at individual users, critics believe it can be expected to muster whatever software is needed to avoid shoving images of fifty-one people being shot to death into the faces of Australian users.

If any of this mucks up Silicon Valley’s global business model or puts at risk the security of communications outside Australia — well, it’s not Canberra’s problem. Big tech may believe in a borderless world, but Australian policymakers don’t. The message from the government is that capital-S sovereignty is here to stay, no matter what technology is thrown at it. •

More Star Trek than Terminator? https://insidestory.org.au/more-star-trek-than-terminator/ Mon, 25 Nov 2019 00:59:34 +0000 http://staging.insidestory.org.au/?p=57940

Can the hopes of tech optimists and the fears of tech pessimists be reconciled?

The most significant consumer innovation of the last decade was announced on 9 January 2007. Despite uneven health, Apple chief executive Steve Jobs took to the stage at the Macworld Conference in San Francisco and unveiled the iPhone. Ten years later, a billion of them had been sold. Today, many think touchscreen smartphones are as necessary as underwear and more important than socks. Yet when Jobs launched his revolutionary phone, many believed it would fail. His counterpart at Microsoft, Steve Ballmer, laughed at the device, calling it “a not very good email machine.”

The critics were wrong, and wrong in a major way. As industry insiders, they paid the price for their poor predictions: their products exited the industry, replaced by Apple, of course, but also by Samsung and Huawei. What turns out to be a successful innovation might not seem that way at first. There is a reason for that: innovation is new to the world. If it were obvious, someone would already have done it.

Technology forecasts can also be wrong in the other direction. In 2001, after years of stealth development, inventor Dean Kamen unveiled the Segway. This was a personal transporter with two wheels, one on either side of a platform, and a stick and handlebar jutting from its centre. At a time when computer-controlled gyroscopes were rare, it seemed like magic. Implausibly, the Segway would balance itself and its occupant upright. The rider simply leaned forward to accelerate and backward to stop. It seemed like something from the future. It seemed like something that you wanted to try.

Many others heralded the Segway as a revolution. Steve Jobs said it was “as big a deal as the PC.” John Doerr, the famous venture capitalist behind Netscape and Amazon, believed it would be bigger than the internet. It was hard to find many early detractors. Alas, a decade and a half later, you might see a Segway used by a traffic cop or a group of tourists being led around a city. Otherwise, it is a discarded technological concept. Why didn’t the Segway work out? There were some safety issues, but that hasn’t prevented the police from adopting them. One theory is that people stuck out too much on them, drawing attention in an unwelcome way.

The point is that our forecasts — optimistic or pessimistic — for individual technologies can often be way off base. Marc Andreessen, Netscape founder and venture capitalist, compares his performance with that of Warren Buffett, the world-famous proponent of “value investing”: “Basically, he’s betting against change. We’re betting for change. When he makes a mistake, it’s because something changes that he didn’t expect. When we make a mistake, it’s because something doesn’t change that we thought would.”

We started this discussion of technological prospects with the iPhone and Segway precisely because of this question of far-reaching impact. The iPhone established a dominant design for smartphones. Thanks to people having the internet in their pocket, we got Uber, Airbnb and Spotify. We got Facebook, Instagram, LinkedIn and Twitter to inform, engage and infuriate us. Developing economies skipped over bank accounts to mobile banking, such as Kenya’s ubiquitous M-Pesa service. If the doubters had been right, we would have had none of these things. With the Segway, we didn’t end up changing urban transportation. Billions might have switched to a travel technology that eased congestion and cut emissions, but we didn’t.

Are there still big breakthroughs to be made? On this, economists disagree. There are technological optimists who believe that big breakthrough innovations lie in our future, and pessimists who believe they won’t surpass the past. How can we evaluate their arguments?

The tech optimists

In early 2014, owners of Tesla’s Model S electric vehicles received a recall notice from the US National Highway Traffic Safety Administration related to a problem that could cause a fire. What car owners usually have to do in these cases is return the car to a dealer to be fixed. This is costly for everyone involved. This time it was different. The problem could be fixed by updating the software in the car, and the update could be pushed to almost 30,000 vehicles overnight because Teslas are connected by default to the internet. No muss, no fuss.

The fact that this could now be done for so many products with embedded software caused Andreessen to proclaim that “software is eating the world.” Put simply, real things were no longer fixed in their capabilities. Because of software, they could be enhanced without having to physically rebuild them.

The tech optimists are not optimistic simply because they know that the universe has more to reveal. They are optimistic because they believe that we are still living in a time of accelerating technological change. Andreessen argues that the benefits of computing technologies and the digitisation revolution are ongoing because they are based on software — something that scales easily. More than half the world’s population came online in just the past decade, and the world is not yet fully connected. Moreover, the value of that network increases disproportionately to the number of people on it — an effect known as Metcalfe’s law.
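
A minimal Python sketch makes that disproportion concrete; the user counts here are illustrative, not figures from the article:

    # Metcalfe's law: a network's value grows roughly with the square of
    # its user count, because each new user can connect to everyone already there.
    def pairwise_connections(users):
        return users * (users - 1) // 2

    for users in (1_000, 10_000, 100_000):
        print(f"{users:>7,} users -> {pairwise_connections(users):>14,} possible connections")

A tenfold increase in users yields roughly a hundredfold increase in possible connections, which is one reason software businesses chase scale so single-mindedly.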

From the perspective of an innovator in software, that means the customer base is still growing rapidly. What is more, with greater numbers of users, distributed infrastructure — known commonly as “the cloud” — becomes cheaper to use, even aside from the reductions in the cost of hardware in data centres. In 2000, it may have cost a start-up $150,000 per month to host an internet application in the cloud. Today it is less than $150. Those gains translate into increased profitability and lower risk for every single software entrepreneur.

Tech optimists point to multiple trends. Since the 1960s, Moore’s law has seen processing power double roughly every eighteen to twenty-four months. As a consequence, microprocessors in 2018 had eight million times as many transistors as the best microprocessor of 1971. Worldwide data storage now runs to around a zettabyte, or 10 to the power of twenty-one bytes. Each minute, 300 hours of video are uploaded to YouTube. The next mobile telephony standard, 5G, will operate at many times the speed of the previous generation of wireless technology.
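
That transistor figure is easy to sanity-check with a back-of-envelope sketch in Python, assuming one doubling every two years:

    # Compounding Moore's law from 1971 to 2018 at one doubling per two years.
    years = 2018 - 1971              # 47 years
    doublings = years // 2           # about 23 doublings
    print(f"{2 ** doublings:,}")     # 8,388,608 -- roughly eight million-fold

Shorten the doubling period to eighteen months and the multiple climbs into the billions, which is why estimates of Moore’s-law gains vary so widely.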

Technologies are sometimes used in unexpected ways. Graphics processing units (developed for hardcore gamers) were used to train neural networks designed to emulate the learning functions of the brain. These new developments in what is called machine learning have led to a renaissance in artificial intelligence research.

Around five years ago, using deep learning methods pioneered by several Canadian university professors, computers’ ability to understand speech and recognise images took a leap forward. These new methods mimicked brain function, allowing multiple levels of sorting and classification. The result was that computers could pick up nuance and associations that even humans would miss. In October 2016, Microsoft engineers announced that their speech recognition software had attained the same level of accuracy as human transcribers when it came to recognising speech in the “Switchboard Corpus,” a set of conversations used to benchmark transcribers. In a controlled environment, machine voice recognition is now more likely to comprehend what we’re saying than the average human. Meanwhile, facial recognition algorithms used by Baidu, Tencent-BestImage, Google and DeepID3 have an accuracy level above 99.5 per cent, compared with humans’ rate of 97.6 per cent.

The best way to explain what has happened is to focus on what the new artificial intelligence techniques do best: prediction. Machines can now take a large amount of data (numbers, images, sound files, or videos) and review it for relationships that allow them to forecast with a high degree of accuracy. Image recognition, for example, is basically a prediction activity: “Here is a picture. What is your best guess at what someone would call this?”
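
Framed that way, the task is easy to sketch in code. The following toy example, using scikit-learn’s bundled dataset of handwritten digits, is purely illustrative and is not any of the systems described in this article:

    # Image recognition as prediction: guess the label a human would give.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # 8x8 pixel images of the digits 0-9
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print(model.score(X_test, y_test))  # typically around 0.95

The point is not the particular model but the framing: classification is a best guess, learned from labelled examples, about what someone would call each picture.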

Although these technologies still make mistakes, they have the ability to outperform humans in real-world contexts. In 2011, IBM’s Watson computer played the quiz show Jeopardy! against two champions of the game: Ken Jennings and Brad Rutter. Watson won. IBM’s next major human-versus-machine contest came in 2018, when the company showed off its Project Debater. The computer was able to engage at a reasonably coherent level with a human counterpart on the topic of whether government should subsidise space exploration.

Learning machines don’t just have to rely on their own experience. Indian online retailer Myntra recently deployed an algorithm that designed new clothing images by modifying and combining popular patterns. One of those computer-designed t-shirts, featuring blocks of olive, blue and yellow, is now a bestseller. Artificial intelligence is arguably the next general-purpose technology: a technology so foundational that myriad other innovations grow on its base. We have seen this happen with the steam engine, electric power, plastics, computers, and the internet. The optimists believe that artificial intelligence could have the same potential.

To see how technology might drive science, remember that Galileo’s research — which showed convincingly that Earth revolved around the sun — was based on a technological advance in the form of a telescope that could magnify distant objects thirty times. A few decades later, the creation of a microscope that could magnify tiny things 300 times enabled Robert Hooke to document the existence of cells. These massive breakthroughs in astronomy and biology would have been impossible without advances in glass production and precision manufacturing.

Today, it’s easy to point to similar advances. The use of gene editing could revolutionise medical science. Strong and light materials such as graphene could change manufacturing. These are radical technologies that could bring about decades of further innovation.

The tech pessimists

Others take an altogether dimmer view of our prospects. They worry that we have already picked the low-hanging fruit over the past two centuries, and that the outlook for the next century is bleaker. Their argument is not based on some oracle-like insight into the future but instead on the inescapable economic law of diminishing returns.

In economics, the figure that looms largest on this side of the argument is Robert Gordon. His concern revolves around just how great the relatively recent past has been. Prior to 1870, economic growth occurred at a trickle. But after 1870, the major innovations at the heart of the Industrial Revolution began to work their way fully through society. It wasn’t just that steam power made factories more efficient; our knowledge of science also brought us to a point where new technologies were shaping the environment around us.

In the century following 1870, most people in the United States and Western Europe (and a handful of other places) went from carrying water to having it delivered to their houses at the turn of a tap, instantly and in a form safe enough to drink. Washing machines saved time and made our clothes last longer. Indoor toilets took sewage far away from houses at the push of a lever or yank of a chain. Energy could be easily delivered to people’s houses. Information was brought in by the radio, telephone and television. Cars provided freedom and reshaped the urban form. A reasonable person might suppose that society will never again see such radical changes. The interesting thing is that we can see this in the data on economic growth that measures how innovations have translated into productivity improvements.

Growth has its ups and downs. Smooth out the temporary recessions and upswings, though, and the century until 1973 was an era of steady progress that suddenly petered out. Initially, many economists saw the slowdown as an aberration. Nobel laureate Robert Solow, who pioneered the field of economic growth, said in 1987 that “you can see the computer age everywhere but in the productivity statistics.” Maybe it was a mismeasurement because computers were assisting services whose productivity was notoriously hard to measure? The economic historian Paul David reminded us that when electricity was introduced, it took decades for it to show up in measures of productivity. Maybe once firms worked out how to use computers effectively, the productivity gains would become apparent?

Many advanced nations did experience a surge in productivity growth in the late 1990s. Yet its rate then slowed in the twenty-first century. For workers, things are even worse because of a decoupling of wages from productivity. Even where firms are getting more output for a given level of inputs, they are not sharing most of those gains with employees.

Consequently, a generation of adults has not experienced the fruits of productivity improvements. They are as well educated as their immediate forebears, they are more lightly taxed, and the businesses that employ them have the benefits of more integrated global financial markets.

The problem comes down to something economists call “diminishing returns.” When England continued to put more land under farming during the nineteenth century, as David Ricardo noted, the productivity of additional acres fell. Take any fixed resource and there is only so much you can extract from it. In the twentieth century, Solow observed that this held for other types of capital such as machines. It also applied to workers. The only way out was technological progress, which allowed society to get more out of the same inputs.

So long as the growth in knowledge we had achieved in the past continued into the future, there was nothing to worry about. Yet here is where the tech optimists and tech pessimists part company. The optimists, as we have noted, anticipate rapid technological progress. The pessimists are not so sure. If that is the case, they say, then why have this generation’s inventions not transformed our lives in the way of the great twentieth-century innovations? Do the twenty-first century’s inventions really compare with air conditioning, airplanes and automobiles (to take just one letter of the alphabet)?

To tech pessimists such as Gordon and Tyler Cowen, the answer comes from merely looking at how technological changes from the 1870s to the 1970s transformed the way we live. Electricity transformed work, shifting people from agriculture to the cities. In the cities that shift combined with running water, sewerage systems, and efficient heating and cooling techniques to allow for a comfortable and productive urban life. Electrical appliances reshaped household economics, freeing women to join the paid labour force. Road and air transport was transformed, facilitating unprecedented interregional trade and travel. All this added up to dramatic improvements in productivity. Since 1973 there have been useful inventions, to be sure. But they are yet to deliver an equivalent surge in productivity.

What has the pessimists worried is that researchers and scientists are finding it harder to unearth new ideas. Research by Northwestern University’s Ben Jones shows that Nobel laureates are getting older. To be more precise, over the past century the age at which someone does research that will win them a Nobel prize has been rising. The same is true of work that leads to a patent. In addition, more knowledge breakthroughs are being made by teams rather than individuals. This points to more specialisation in knowledge production, with fewer instances in which an individual comprehends developments at the frontier of multiple disciplines. Because this raises the cost of innovating, Jones calls it the increasing “burden of knowledge.”

As technology advances, it becomes tougher to find the next new thing. Take semiconductors. As we have noted, Moore’s law has seen a steady doubling of the density of computer chips every eighteen to twenty-four months. Moore’s law continued up until the mid-2000s, but significantly, the cost of recent increases is eighteen times larger than it was for similar proportionate increases in the 1970s. The same pattern exists in agriculture and medical research. What was once easy has become hard. This suggests that merely to maintain the slower productivity growth we now have, innovators must run faster and faster.

Uncertain prospects

The tech optimists and the tech pessimists both have a point. The optimists note that there is still potential for new knowledge, and can point to exciting possibilities that are attracting significant scientific and engineering resources. The pessimists’ colder calculations remind us how exceptional past growth was and point to the logical implication that those ideas that gave the biggest boosts to productivity were likely ones we have already exploited. Historians such as Joel Mokyr have looked at all this discussion and remind us that we have been here before. In every decade, one can find optimists and pessimists. And, at least as far as continuing technological change is concerned, the optimists have usually been on the right side of history.

What does this all mean, however, for the creation price — that is, the price that must be paid to reward innovators and entrepreneurs for their efforts? The answer lies in the cost of innovation. Where the tech optimists and tech pessimists fundamentally differ is in how costly it will be to innovate in the future. If there are technological opportunities just waiting to be exploited, as the optimists claim, then the creation price can be set relatively low. On the other hand, if the cost of innovation is rising, as the pessimists claim, then the creation price will be higher, and growing over time. More resources will have to be dedicated to innovative activities to maintain historical growth rates. In that situation, we will have to ask if it is a price worth paying.

Forecasting the future is like driving through fog. We need to accept that the creation price is uncertain. It could be high, low or somewhere in between. It will likely be different for different technological opportunities and directions. But at the same time, everyone faces this uncertainty. No one has a special insight into the future. That includes entrepreneurs. And given that uncertainty, the best way to get more equality and more innovation is to reduce the costs those entrepreneurs face today.

Planning for flexibility

Which brings us to equity. Here, the goal ought to be a set of institutions that provide a safety net, both for entrepreneurs who fall short of the stars and for those left behind when the rocket takes off. It pays to think about such institutions as a form of insurance, providing greater resilience in the face of a changing world. If you’re giving advice to a teenager, now is the time to tell him or her about the value of being flexible. Education isn’t just an investment; it’s about providing more life options.

To achieve this in the education system, we propose making teacher effectiveness the core focus of schooling, improving the quality of vocational training, and encouraging MOOCs (massive open online courses). And it makes enormous sense to use the talents of the 51 per cent of the population who are women by encouraging technologies that make jobs more family-friendly, and by reforming laws that end up biasing the labour market against women. Gender equity is worthwhile not just because it will boost productivity but also because — as Canadian prime minister Justin Trudeau might say — it’s 2019.

As economist Sendhil Mullainathan puts it, “The safest prediction is that reality will outstrip our expectations. So, let us craft our policies not just for what we expect but for what will surely surprise us.” The task is to shape a future that looks more like Star Trek than Terminator.

Uncertainty need not be scary. The story of human history — particularly in recent centuries — is of how we have employed our shared ingenuity to improve lives. Longevity has risen. Whole diseases have been eliminated. The typical job is more fulfilling and less painful. Entertainment is more abundant, and much of it is of higher quality (try spending a week watching television from a generation ago). Food standards have risen, and cars are safer than ever. Life is far from perfect, but there is a good deal to celebrate. •

Big tech in the dock https://insidestory.org.au/big-tech-in-the-dock/ Wed, 20 Nov 2019 22:46:50 +0000 http://staging.insidestory.org.au/?p=57907

The world is watching a David and Goliath battle in the Federal Court

If you’re looking for an illustration of the service industry’s future, look no further than Australian start-up Sked Social. The Melbourne company was established on a simple premise: small-business owners relying on social media to get their message out to consumers mightn’t have the time or the know-how to publish quality content at the right time. What they needed was the option of outsourcing their social media presence so they could avoid faffing around on their computers when they should be putting the kids to bed.

Sked, originally named Schedugram, offered a solution. The company would manage your social media output, both content and timing, in return for payment. It relied on the fact that a platform controlled by a global technology company — in this case, Facebook’s Instagram — was available to be used as a vehicle. Think of platforms like Instagram as pipelines accessible to competing gas companies, or railway lines on which rival train companies can fight it out on pricing and try to poach each other’s customers. In short: healthy, robust competition on an impartial, accessible platform.

Sked’s relationship to Facebook was similar to that of developers building an app that uses advertising gathered and onsold by Google, or retailers using Amazon not to buy stuff but to sell their own wares in a much larger marketplace. In short, Sked was approaching Facebook not from the front end, where consumers post family photos, but from the back end, where businesses use the platforms to make money. In competition terms, Sked is a third-party user.

Almost by definition, third-party businesses rely on fair and equal access to their chosen platform. But what happens when the platforms — whether they’re online portals, railway hubs or gas pipelines — are controlled by a vertically integrated company competing for space and customers on the same platforms? What guarantee can, say, Sked have that it has been treated fairly when Facebook is developing rival content-publishing software to offer similar services to Instagram users?

This question lies at the heart of a lawsuit currently before the Federal Court of Australia. Sked’s owner, Dialogue Consulting, is claiming that Facebook’s decision to block it from Instagram was a business decision to squeeze out a competitor — something that fails the “substantial lessening of competition” test contained in Australian competition law.

Facebook’s decision to block Sked’s access has been hit by a court injunction — meaning that the Australian start-up can continue to operate at least until next year, when the hearings get under way in Melbourne. But court documents have revealed that Facebook is set to frame this as a privacy issue with no bearing on the right of third parties to compete. The problem, the company will argue, is merely that Sked can only offer its services if its clients hand over their Facebook login details — something the tech giant says is a clear violation of its terms of use.

In what may be an odd turn of events, both sets of lawyers could be right. Sked may have been violating Instagram and Facebook’s terms of use — but who wrote those terms in the first place? And what’s to prevent the platforms from using terms of use as a catch-all excuse for harming a potential rival and violating competition law in the process? What regulation is in place to monitor how terms of use are formulated and implemented?

Of course, there’s nothing uniquely Australian about this argument. The fear that platforms may be using their market power to “self-preference” their own businesses has been examined by regulators around the globe, with South Korea and the United States just two of the jurisdictions where competition watchdogs are now on the warpath. And earlier this year the European Commission, the EU’s regulator, imposed a fine of €1.49 billion on Google for abusing its market dominance by imposing restrictive clauses in contracts with third-party websites, preventing the platform’s rivals from placing advertisements on those websites.

What’s unusual in the Australian context is that the Sked lawsuit, along with a parallel investigation by the Australian Competition and Consumer Commission into Google’s treatment of a start-up called Unlockd, is occurring at a time of unprecedented regulatory upheaval. The ACCC’s groundbreaking report on Facebook and Google — the first broad overview of its kind in the world — is now with the government and new policy directions are expected to be announced before the end of this year.

That response is likely to be dominated by the headline issues of the ACCC’s digital platforms inquiry: Facebook and Google’s use (or misuse) of data and its implications for users’ privacy, the platforms’ impact on media and journalism, and related regulatory inconsistencies. Yet buried deep in the final report, published in July, is a reference to the predicament of both Unlockd, which is now in receivership, and Sked, underpinned by the regulator’s belief that third-party users are sitting ducks, liable to being shut down at the flick of a switch if the platforms decide to misuse their market power.

In fact, fears that tech giants are already inflicting damage on third-party users by skewing their access to platforms — and using privacy concerns or terms of use as a justification — may well be the biggest challenge facing regulators across the globe. And all indications are that both Facebook and Google are going to fight what they perceive as moves to regulate and oversee their relationship with third-party companies.


There was a time when Australian start-up Unlockd had the world at its feet. Founded in 2014, the Melbourne-based company had prepared an initial public offering for 2018 with an anticipated valuation of over US$180 million and an array of powerful backers. Its only vulnerability was a business model based entirely on third-party access to Google’s advertising services. Without that, Unlockd couldn’t survive — something that became apparent when the search giant pulled the plug.

While it lasted, Unlockd was a platform that allowed owners of smartphones using Google’s Android operating system to receive targeted advertisements when unlocking their devices, in return for in-kind payments — for example, shopping vouchers. It had major business partners in different jurisdictions, but it also relied on advertising being collated and provided by Google’s advertising service, AdMob.

When Google decided to cut the Australian start-up out of its ecosystem — which meant no AdMob and no access to Google Play, the app retail outlet — Unlockd went belly up. Attempts to take legal action against Google in Australia and Britain foundered amid funding concerns; legal action in the United States, however, appears possible. The ACCC’s investigation of what happened has reportedly been under way for months and may yet spark court action of its own.

For its part, Google has said the decision to cut Unlockd’s access had nothing to do with competition but was purely the result of the Australian start-up falling foul of, yes, its terms of use. The search giant said it had given Unlockd time to fix problems it identified and find alternatives, but Unlockd had failed to act. Yet, as in Sked’s relationship with Facebook, the question is not so much whether Unlockd violated Google’s terms of use but whether those terms were themselves implemented in a way that violated Australia’s competition law.

The ACCC’s July report was scathing in its assessment of the platforms’ ability to harm third-party users. Facebook and Google had both the “ability and incentive to engage in leveraging behaviour which may affect competition” in online markets, it said. Both advertisers and third-party service providers were particularly vulnerable, the ACCC found, because they had to contend with Facebook and Google’s vertical integration — something that was problematic under competition law.

Under the subheading “Increasing Risk,” the ACCC concluded that the “broad range of markets that each of Google and Facebook operates in provides many opportunities for self-preferencing to occur.” As for whether the platforms actually enjoyed substantial market power, ACCC chairman Rod Sims said it was an open-and-shut case. “I think the argument over whether they have market power is really a strange one,” he said when launching the final report. “I think we’ve got to move on from that argument, because it’s patently obvious that they have market power.”

The ACCC’s draft and final reports were manna from heaven as the lawyers representing Sked in the Federal Court looked for ways to bolster their claim against the social media colossus. The company’s court filings embraced the notion that Facebook had violated Australian competition law, with its amended statement of claim this month arguing that Facebook’s approach to Sked had clearly led to a substantial lessening of competition — again, the trigger words under Australia’s 2010 Competition and Consumer Act. The documents suggest that Facebook’s actions were designed to entrench the platform’s substantial market power and protect its online display advertising revenues by “aggressively deterring [Sked] and other participants from seeking to develop innovative products focused on the planning and publishing of organic content” while also shielding Instagram’s rival content-publishing software from competition.

All this suggests that the capacity of the third parties to take on the tech giants — and the willingness of both the courts and lawmakers to support them — is becoming a significant battlefield in a global war between platforms and the agencies charged with regulating them. The eyes of the world will again be on Australia. •

What Ada Lovelace can teach us about digital technology https://insidestory.org.au/what-ada-lovelace-can-teach-us-about-digital-technology/ Sun, 08 Sep 2019 15:40:56 +0000 http://staging.insidestory.org.au/?p=56832

Extract | How collaborative work can be liberating and effective

From the moment she was born, Augusta Ada Gordon was discouraged from writing poetry. It was a struggle against her genetic predisposition. Her father had led by example in the worst possible way, cavorting around the Mediterranean, leaving whispered tales of deviant eroticism and madness wherever he went. He penned epic stanzas full of thundering drama and licentiousness. Lord Byron understood the dangers of poetry. “Above all, I hope she is not poetical,” he declared upon his daughter’s birth; “the price paid for such advantages, if advantages they be, is such as to make me pray that my child may escape them.” Ada, as she was known, failed to make this escape and barely enjoyed the advantages. The poetry she went on to write was beyond even her father’s imaginings.

Likewise hoping that her daughter might avoid the fate of a father who was “mad, bad and dangerous to know,” Ada’s mother, Lady Anne Isabella Milbanke, ensured that she was schooled with precision and discipline in mathematics from her earliest days, and closely watched her for any signs of the troubles that had plagued her father. Lord Byron, abandoning them weeks after Ada’s birth, died when she was eight; his legacy cast a ghostly shadow over her life.

Ada’s schooling marched ever forward, toward an understanding of the world based on numbers. “We desire certainty not uncertainty, science not art,” she was insistently told by one of her tutors, William Frend. Another tutor was the mathematician and logician Augustus De Morgan, who cautioned Ada’s mother on the perils of teaching mathematics to women: “All women who have published mathematics hitherto have shown knowledge, and the power of getting, but… [none] has wrestled with difficulties and shown a man’s strength in getting over them,” he wrote. “The reason is obvious: the very great tension of mind which they require is beyond the strength of a woman’s physical power of application.”

But Ada was never going to be denied the opportunity to learn about mathematics. Lady Anne was a talent herself, dubbed “Princess of Parallelograms” by Lord Byron. Having managed to outlive him, the desire to expunge in her daughter the slightest genetic tendency for mad genius and kinky sex took precedence over any concerns about Ada’s feminine delicacy.

Married at nineteen years of age, Ada, now Countess Lovelace, demonstrated curiosity and agility of mind that would prove to be of great service to the world. Two years before her marriage, in 1833, she had met Charles Babbage, a notable mathematician with a crankish disposition (he could not stand music, apparently, and started a campaign against street musicians). Together they worked on plans for the Analytical Engine, the world’s first mechanical computer. It was designed to be a mechanical calculator, with punch cards for inputting data and a printer for transcribing solutions to a range of mathematical functions. Babbage was a grand intellect, with a penchant for snobbery and indifference to many of the practicalities of getting things done. Lovelace was his intellectual equal but arguably better adapted to social life.

Like the proverbial genius, Babbage struggled with deadlines and formalities. When one of his speeches was transcribed for publication in Italian and neglected by Babbage, Lovelace picked it up and translated it. She redrafted parts of it to provide explanations to the reader. Her work ended up accounting for about two-thirds of the total text. This became her significant contribution to the advancement of computing: turning the transcription into the first-ever paper on computer science. It became a treatise on the work she and Babbage did together.

There remains some controversy about the extent of Lovelace’s participation in this project, but ample historical evidence exists to dismiss the detractors, not least the direct praise bestowed on her work and intellect by Babbage. Lovelace applied her mathematical imagination to the plans for the Analytical Engine and Babbage’s vision of its potential. She sketched out the possibility of using the machine to perform all sorts of tasks beyond number crunching. In her inspired graphic history of Babbage and Lovelace, Sydney Padua describes Lovelace’s original contribution as one that is foundational to the field of computer science: “By manipulating symbols according to rules, any kind of information, not only numbers, can be operated on by automatic processes.” Lovelace had made the leap from calculation to computation.

Padua describes the relationship between Babbage and Lovelace as complementary in computational terms. “The stubborn, rigid Babbage and mercurial, airy Lovelace embody the division between hardware and software.” Babbage built the mechanics and tinkered endlessly with the physical design; Lovelace was more interested in manipulating the machine’s basic functions using algorithmic formulas. They were, in essence, the first computer geeks.

The kind of thinking needed to build computers is precisely this combination of artistry and engineering, of practical mechanics and abstract mathematics, coupled with an endless curiosity and desire for improvement. The pioneering pair’s work blurred the division between science and art and navigated the spectrum between certainty and uncertainty. Without Babbage, none of it would have happened. But with Lovelace’s predilection for imaginative thinking and education in mathematics, a perfect alignment of intellect allowed for the creation of computer science. Lovelace and Babbage’s achievements were impressive because they challenged what was possible while at the same time remaining grounded in human knowledge.

And beyond all this, Lovelace was a woman. (A woman!) In direct contradiction to her tutors’ warnings decades earlier, Babbage wrote, Lovelace was an “enchantress who has thrown her magical spell around the most abstract of Sciences and has grasped it with a force which few masculine intellects (in our country at least) could have exerted over it.” Lovelace showed it was possible to transcend not only the bounds of orthodox mathematics but also her socially prescribed gender role.

No doubt all this caused Lovelace’s mother considerable worry. The madness seemed to be catching up, much to her consternation. In the years after her visionary publication, Lovelace poignantly beseeched Lady Anne: “You will not concede me philosophical poetry. Invert the order! Will you give me poetical philosophy, poetical science?”

For Babbage, the perfect was the enemy of good, and he never did manage to build a full model of his designs. In 1843, knowing that he struggled with such matters, Lovelace offered, in a lengthy and thoughtful letter, to take over management of the practical and public aspects of his work. He rejected her overtures outright yet seemed incapable of doing himself what was required to bring his ideas to fruition.

Lovelace’s work in dispelling myths and transforming philosophy was cut short when she died of cancer aged just thirty-six. Babbage died, a bitter and disappointed old man, just shy of eighty. The first computers were not built until a century later.


Technological advances are a product of social context as much as of an individual inventor. The extent to which innovations are possible will depend on a number of factors external to the individuals who make them, including the education available to them, the resources they have to explore their ideas, and the cultural tolerance for the kind of experimentation necessary to develop those ideas.

Melvin Kranzberg, the great historian of technology, observed that technology is a “very human activity — and so is the history of technology.” Humans are responsible for technological development but do not labour in conditions of their own choosing. Had Babbage been a bit more of a practical person, in social as well as technological matters, the world may not have needed to wait an extra century for his ideas to catch on. Had Lovelace lived in a time where women’s involvement in science and technology was encouraged, she might have advanced the field of computer science to a considerably greater degree.

So too, then, technological developments more generally can only really be understood by looking at the historical context in which they occur. The industrial revolution saw great advances in production, for example, allowing an economic output that would scarcely be thought possible in the agrarian society that had prevailed a few generations earlier. These breakthroughs in technology, from the loom to the steam engine, seemed to herald a new age of humanity in which dominance over nature was within reach. The reliance on mysticism and the idea that spiritual devotion would be rewarded with human advancement were losing relevance. The development of technology transformed humanity’s relationship with the natural world, a process that escalated dramatically in the nineteenth century. Humans created a world where we could increasingly determine our own destiny.

But such advances were also a method by which workers were robbed of their agency and relegated to meaningless, repetitive labour without craftsmanship. As machines were built to do work traditionally done by humans, humans themselves started to feel more like machines. It is not difficult to empathise with the Luddites in the early nineteenth century, smashing the machines that had reduced their labour to automated work. In resisting technological progress, workers were resisting the separation of their work from themselves. This separation stripped them of what they understood to be their human essence.

Whatever the horrors of feudalism, it had allowed those who laboured to see what they themselves produced, to understand their value in terms of output directly. Such work was defined, at least to a certain extent, by the human creativity and commitment around it. With industrialisation and the atomisation of craftsmanship, all this began to evaporate, absorbed into steam and fused into steel. Human bodies became a vehicle for energy transfer, a mere input into the machinery of production. It gave poetic significance to the term Karl Marx coined for capital: dead labour.

Though the Luddites are often only glibly referenced in modern debates, the truth is that they were directly concerned with conditions of labour rather than mindless machine-breaking or some reactionary desire to turn back time. They sought to redefine their relationship with technology in a way that resisted dehumanisation.

“Luddites opposed the use of machines whose purpose was to reduce production costs,” writes historian Kevin Binfield, “whether the cost reductions were achieved by decreasing wages or the number of hours worked.” They objected to machinery that made poor-quality products, and they wanted workers to be properly trained and paid. Their chosen tactic was industrial sabotage, and when their frame-breaking became the focus of proposed criminal law reform, it was, of all people, Lord Byron who leaped to their defence in his maiden speech to the House of Lords. Byron pleaded that these instances of violence “have arisen from circumstances of the most unparalleled distress.” “Nothing but absolute want,” he fulminated, “could have driven a large and once honest and industrious body of the people into the commission of excesses so hazardous to themselves, their families, and the community.”

The historical effect of this strategy has been to associate Luddites forever with nostalgia and a doomed wish to unwind the advances of humanity. But to see them as backward-looking would be an interpretive mistake. In their writings, the Luddites appear more like a nineteenth-century equivalent of Anonymous: “The Remedy for you is Shor Destruction Without Detection,” the Luddites wrote in a letter to the home secretary in 1812. “Prepaire for thy Departure and Recommend the same to thy friends.”

There is something very modern about the Luddites. They serve as a reminder of how many of our current dilemmas about technology raise themes that have consistently cropped up throughout history. Another one of Kranzberg’s six laws of technology is that technology is neither inherently good nor bad, nor is it neutral. How technology is developed and in whose interests it is deployed is a function of politics.

The call to arms of the Luddites can be heard a full two centuries later, demanding that we think carefully about the relationship between technology and labour. Is it possible to resist technological advancement without becoming regressive? How can the advances of technology be directed to the service of humanity? Is work an expression of our human essence or a measure of our productivity — and can it be both?

Central to understanding these conundrums is the idea of alienation. Humans, through their labour, materially transform the surrounding world. The capacity to labour beyond the bare necessities for survival gives work a distinct and profound meaning for human beings. “Man produces himself not only intellectually, in his consciousness, but actively and actually,” Marx wrote, “and he can therefore contemplate himself in a world he himself has created.” Our impact on the world can be seen in the product of our labour, a deeply personal experience. How this is organised in society has consequences for our understanding of our own humanity.

What happens to this excess of production — or surplus value — is one of the ultimate political and moral questions facing humanity. Marx’s critique of capitalism was in essence that this surplus value unfairly flows to the owners of capital or bourgeoisie, not to the workers who actually produce it. The owning class deserve no such privilege; their rapacious, insatiable quest for profit has turned them into monstrous rulers. Production becomes entirely oriented to their need for power and luxury, rather than the needs of human society.

Unsurprisingly, Marx reserved some of his sharpest polemical passages for the bourgeoisie. In his view, the bourgeoisie “resolved personal worth into exchange value, and in place of the numberless indefeasible chartered freedoms, has set up that single, unconscionable freedom — Free Trade. In one word, for exploitation, veiled by religious and political illusions, it has substituted naked, shameless, direct, brutal exploitation.”

This experience of exploitation gives rise to a separation or distancing of the worker from the product of her labour. Labour power becomes something to be sold in the market for sustenance, confined to dull and repetitive tasks, distant from an authentic sense of self. It renders a human being little more than an input, a cog, a calculable resource in the machinery of production.

For those observing the development of the industrial revolution, this sense of alienation is often bound up with Marx’s analysis of technology. The development of technology facilitated the separation between human essence in the form of productive labour and the outputs of that labour. Instead workers received a wage, a crass substitute for their blood, sweat and tears, a cheap exchange for craftsmanship and care. Wages represented the commodification of time — they were payment for the ingenuity put into work. The transactional nature of this relationship had consequences. “In tearing away from man the object of his production,” Marx wrote, “estranged labour tears from him his species-life, his real objectivity as a member of the species, and transforms his advantage over animals into the disadvantage that his inorganic body, nature, is taken from him.”

As Amy Wendling notes, it is unsurprising that Marx studied science. He sought to understand the world as it is, rather than pursue enlightenment in the form of spirituality or philosophy alone. He understood capitalism as unleashing misery on the working class in a way that was reprehensible but also, as Wendling put it, “a step, if treacherous, towards liberation.” There was no going back to an agrarian society that valued artisan labour. Nor should there be; in some specific ways, the industrial revolution represented a form of productive progress.

But how things were then was not how they could or should be forever. Marx’s thinking was a product of a desire to learn about the world in material terms while maintaining a vision of how this experience could be transcended. Navigating how to go forward in a way that valued fairness and dignity became a pressing concern of many working people and political radicals in his time, a tradition that continues today. •

This is an edited extract from Future Histories: What Ada Lovelace, Tom Paine, and the Paris Commune Can Teach Us about Digital Technology, by Lizzie O’Shea, published last month by Verso.

The tech god that failed https://insidestory.org.au/the-tech-god-that-failed/ Fri, 07 Jun 2019 02:42:20 +0000 http://staging.insidestory.org.au/?p=55567

Books | Something’s amiss, but has communications strategist Peter Lewis nailed it?

It seems tragically appropriate that the most incisive description of the malign influence of the internet in these unpleasant times came from Leonard Pozner, father of a six-year-old victim of the Sandy Hook Elementary School massacre in Connecticut in 2012. Pozner has been hounded by online conspiracy theorists ever since. “History books will refer to this period as a time of mass delusion,” he told the Guardian in 2017. “We weren’t prepared for the internet. We thought the internet would bring all these wonderful things, such as research, medicine, science, an accelerated society of good. But all we did was hold up a mirror to society and we saw how angry, sick and hateful humans can be.”

High enthusiasm followed by disillusionment with technological innovation is a familiar story, and is the starting point of Peter Lewis’s Webtopia. Lewis, a former journalist and Labor political adviser, is the director of “progressive research and communications company” Essential Media and a columnist with Guardian Australia.

This biographical outline wouldn’t be necessary if it weren’t for the fact that Lewis has made his own experiences, and those of his fellow generation Xers, central to the story he tells. Alas, this book is yet another example of memoir-as-journalism, neatly described by journalist Elle Hardy as “an explorative essay shaped by the author’s views and experiences rather than comprehensive investigation of concepts.” In Australia and beyond it has become the dominant mode of contemporary nonfiction writing.

Using the personal and professional stories of friends, colleagues and acquaintances (many of whom, like the author, are members of the progressive political and media class), Lewis structures his narrative around the contention that his generation have lived their lives in four acts: “the way we were before the web, the way we thought the world would be when the web came along and blew our minds; the way the world is now (which is not quite the utopia we expected) and what the world could become if only we could get our act together.”

There is some logic in this structure, but at times it feels forced, and it yields some of the book’s most banal moments. It is particularly problematic during Lewis’s regular lapses into gen X nostalgia — recalling the “magical” sound of a landline telephone, say, or buying a Smiths LP from a record store. (Has he not heard about the worldwide vinyl revival?) Things really go off track when Lewis explains the internet as if it’s still 1995 and his readers have never been online: “The genius of the web is that you don’t need to house all the information on your own computer in order to see it.”

Meanwhile, he finds the kids of today to be inevitably diminished by their addiction to smartphones. Eavesdropping on a group of teenagers laughing and posing for photos on a night out, he suggests that technology has transformed them into “fundamentally different creatures” from his own generation. “We both chased the normal things that teenagers do: acceptance, finding a place, identity,” he writes. “But where my generation sought it in joining a group, it seems to me these kids are seeking something different, an identity defined by how they stand out from the crowd rather than how they fit in.” One could more persuasively conclude that the group remains just as important to these young people, despite their ubiquitous phones.

Perhaps Lewis is overcorrecting here. After all, he confesses to having been exceptionally naive upon the arrival of the internet, up to and beyond the point of parody: “From the moment of its conception the web was an object of beauty, built on a set of simple choices that would transform the world.” Later, contemplating what the web might have achieved in economics and international relations, he gets more specific:

As national economies collapsed into regional and global free trade blocs, countries would better share scarce resources across national boundaries, especially things like ideas and IP that could be expressed as bits not atoms. A rejuvenated United Nations would lead collective responses to global challenges like climate change, population growth and human rights. The rising middle classes of Asia and India would embrace these global values and adapt them to their own specific cultures. Africa and the Middle East would follow suit, tapping technology to bypass the need for an industrial economy, transforming directly into vibrant, information-based societies. As the alliances of a unified Europe inspired other blocs of cooperation, the very idea of a nation-state would become outmoded. Humanity would connect as one, wrapped in its warm and loving web of togetherness.

Putting to one side the Western cultural supremacism inherent in his thought, did Lewis really believe in such nonsense? Webtopia indeed.

This naivety is also evident in the book’s case studies, in which Lewis too often accepts at face value the self-congratulatory puff of the people he should be challenging. Nowhere is this more apparent than in his discussion with Guardian Australia editor Lenore Taylor about that outlet’s entry into the Australian media market. “Inside the Sydney office, a converted inner-city warehouse floor,” Lewis enthuses, “it feels like the sort of buzzy mothership that existed before the internet threw down its existential challenge to the media.”

“The model is the opposite to clickbait,” says Taylor, a debatable contention that Lewis fails to interrogate. Headlines like “I Was Filled with Self-Loathing after Losing My Novel on My Laptop — Russell Crowe Came to My Rescue” and “‘You Stole My Cheese!’: The Seven Best Post-it Note Wars” may be aimed at a younger, more progressive audience, but they’re still clickbait. Lewis doesn’t probe whether Guardian Australia behaves any better or worse than its peers in an industry that underpays staff reporters and often pays external contributors barely anything at all.

We shouldn’t run down the Guardian, which has produced countless instances of fine journalism in its six years in Australia. But if all Lewis is prepared to do is reinforce its boilerplate PR spiel about being a different type of news organisation, he runs the risk of repeating the same errors that made him a wide-eyed webtopian.

Quite apart from the conceptual problems, this book is also littered with some truly awful prose. Take: “The challenge is to strip back all that white noise and filter out the morsels that have meaning.” Or: “Politics was big and loud, painted on a broad canvas where the next chapter of human history was up for grabs.” It’s difficult to know where to begin with metaphors so mixed.

Then there are the platitudes, some of which could easily find themselves repeated on chalkboards outside struggling cafes: “While we may be asked to click, we are never asked to really share; to bring a plate or mow the lawns and build something that becomes so real and such an integral part of who we are that it ends up being part of our life story.”

Such attempts at profundity become more and more prevalent as Lewis tries and fails to come up with any meaningful proposals that will “make the net work.” On the one hand he wants governments to step in and regulate the companies that control the internet. On the other he wants individuals to take responsibility for using the web safely and ethically, despite their lack of agency:

If we want the web to be what it could still be — empowering and transformative — we need to work harder at tapping the one truly natural resource that powers it: people. Surely that’s the most valuable element that lies inside the web, not another predictive algorithm but the actual agents of human interaction: us.

Webtopia is a missed opportunity. Laments about the tech god that failed are becoming overly familiar, but this by no means lessens the urgency of the questions being asked. Unfortunately, there are no answers here. As someone who spends too much time online might say, this ain’t it, chief. •

Computer says no https://insidestory.org.au/computer-says-no/ Sun, 28 Apr 2019 22:33:49 +0000

The hazards of being a woman in technology

Preparing the publicity plan for Made by Humans, my recent book about data, artificial intelligence and ethics, I made one request of my publisher: no “women in technology” panels.

I have never liked drawing attention to the fact that I’m a woman in technology. I don’t want the most prominent fact about me to be my gender rather than my expertise or my experience, or the impact of my work. In male-dominated settings, the last thing I want to feel is that I’m only there because of my gender, or that it’s the first thing people notice about me. I don’t like that being described as a “woman in tech” flattens my identity, makes gender the defining wrapper around my experience, irrespective of race, class, education, family history, political beliefs and all the other hows and whys that make life more complicated. These are some of the intellectual reasons.

A more immediate reason I haven’t wanted to be known for talking about gender is that I’m still young and I want to have a career in this industry. I like what I do. I work on a range of issues related to data sharing and use. At the moment I lead a technical team designing data standards and software to support consumers sharing data on their own terms with organisations they trust. Data underpins so much of the current interest in AI, so it’s a good time to be working on projects trying to make it useful and learning to understand its limitations.

It’s also true that most of the time my clients, employers, team members, fellow panellists and advocates are male. Most of the time they’re excellent people. I don’t like making them feel uncomfortable. People generally don’t like feeling that they’re not being fair, or that, in some structural ways, the world isn’t fair. Avoiding making people uncomfortable — particularly those who decide whether you’ll be invited to speak at a conference, or hired, or promoted, or put forward for a new exciting opportunity — is still a sensible career move.

So I have long assumed that the best way to talk about gender and be a strong advocate for women is from a distance, at the pinnacle of my career, when I’m the one in the position of power. What a strange pact to make with myself: to effect change in the technology industry, to become a female leader, I just need to stay silent on issues affecting women in the industry. In thirty years’ time ask me what it was like and, boy, will I have some stories to tell — and some really good suggestions!

But I don’t think I can wait any longer. My sense of how to navigate the world as a woman and still get ahead was shaken in 2018. In the media, I watched as women who tried to keep their heads down and avoid making a scene still found themselves branded attention-seekers, deviants, villains — even though it was men behaving badly who were on trial. I published a book in a technology field and tried diligently to avoid discussing gender, only to be dismayed by the influence gender had on how it was received, who read it, who saw value in it.

At events, it was almost always women who approached me to say they enjoyed the panel and to ask follow-up questions. The younger the men, the more likely they were to want to argue with and dismiss me. At the book signings that followed, while my queues mainly comprised women, their requests for dedications were almost always to sons and nephews, brothers and husbands. I smiled politely through comments about being on panels specifically to add “a woman’s perspective.” I looked past the male panellists who interrupted me, repeated me, who reached out to touch me while they made their point.

I was also pregnant. By the time you read this I will have given birth to a baby girl. It is hard to describe how much this has recast what I thought was the right and wrong way to make it as a woman. She is the daughter of two intelligent parents who are passionate about technology. She is currently a ferocious and unapologetic wriggler who takes up every inch of available space and demands our attention, oblivious to the outside world. I don’t know her yet but I already admire her for that.

I do not want her to grow to believe that in order to successfully navigate the world she must make herself small, put up with poor treatment, apologise for taking up space that is rightfully hers. I don’t know how I would look her in the eye when she realises that this way of being does not help her.


While it can seem like we’re only now talking about gender issues in the tech sector, the discussion has been going on for decades. What makes it wearying is how many of the problems raised years ago remain the same.

In 1983 — before I was born — female graduate students and research staff from the computer science and artificial intelligence laboratories at MIT published a vivid account of how an environment hostile to women in their labs impeded academic equality. They described bullying, sexual harassment and negative comments explicitly and implicitly based on gender. They described being overlooked for their technical expertise, and ignored and interrupted when they tried to offer that expertise. The authors made five pages of recommendations aimed at addressing conscious and unconscious differences in attitudes towards women in the industry. “Responsibility for change rests with the entire community,” they wrote, “not just the women.”

It’s clear that for there to be serious improvements in the numbers of girls and women in technology, cultural attitudes towards what girls are interested in and capable of will have to change. We have to want them to change. But I’m not sure that we want that as a society. In Australia, declining participation rates among girls studying advanced maths and science subjects in high school continue to be a cause for concern. The reasons for the decline remain broadly the same as they were twenty years ago, when Jane Margolis and Allan Fisher interviewed more than a hundred female and male computer science students at Carnegie Mellon University, home to one of the top computer science departments in the United States, as part of their seminal study of gender barriers facing women entering the profession.

In Unlocking the Clubhouse: Women in Computing, Margolis and Fisher charted how computing was claimed as male territory and made hostile for girls and young women. Throughout primary and high school, the curriculum, teachers’ expectations and parental attitudes were shaped around pathways that assumed computers were for boys. Even where women did persist with an interest in computers into college, Margolis and Fisher observed that by the time they graduated “most… faced a technical culture whose values don’t match their own, and ha[d] encountered a variety of discouraging experiences with teachers, peers and curriculum.”

Back then, as now, barriers persisted in the workforce. In the mid-2000s, The Athena Factor report on female scientists and technologists working in forty-three global companies in seven countries concluded that while 41 per cent of employees in technical roles in those companies were women, over time 52 per cent of them would quit their jobs. The key reasons for quitting: exclusionary, predatory workplace cultures; isolation, often as the sole woman on a technical team; and stalled career pathways that saw women moved sideways into support or executor roles. “Discrimination,” Meg Urry, astrophysicist and former chair of the Yale physics department, wrote in the Washington Post in 2005, “isn’t a thunderbolt, it isn’t an abrupt slap in the face. It’s the slow drumbeat of being underappreciated, feeling uncomfortable and encountering roadblocks along the path to success.”

These themes emerge in thousands of books, white papers, op-eds and articles: women leave the tech industry because they’re isolated, because they’re ignored, because they’re treated unfairly, underpaid and unable to advance. These problems persist, and every year there are new headlines concerning gender discrimination at every level of the industry. In 2018, female employees working for Google in California filed a class-action lawsuit alleging the multinational tech company systematically paid women less for doing similar work to men, while “segregating” technically qualified women into lower-paying, non-technical career tracks. The same year, 20,000 employees and contractors walked out of offices around the world to protest the company’s handling of sexual harassment after news broke that Andy Rubin, the “father of Android,” had been paid US$90 million to leave Google quietly amid credible accusations of sexual misconduct.

In every tech organisation I have worked in, these kinds of dynamics remain uncomfortably familiar. There are more women than men in administrative roles, in project coordination, and in front-end development and design roles, although some of these women began in the industry with technical degrees and entry-level technical roles. Gender-related salary gaps persist, even within the same technical roles and leadership positions. HR processes, designed to create a level playing field, still inadvertently reward those who complain the loudest, who demand more money, who tend more often than not to be men. It remains difficult for women to pursue bullying and harassment complaints, particularly against powerful harassers, without career consequences.


Writing about gender in technology, it’s easy to fall into what computer engineer Erica Joy Baker describes as “colourless diversity.” A 2018 study by the Pew Research Center noted that while 74 per cent of women in computer jobs in the United States reported experiencing workplace discrimination, 62 per cent of African-American employees also reported racially motivated discrimination. Women of colour in tech find themselves doubly affected.

I am reflected in the statistics around gender: I am affected by them, perpetuate them, benefit from them. I have never successfully negotiated a meaningful salary increase or bonus. I have watched as male successors in my own former roles are paid more than I was to do the same job. I have managed teams where my male employees earned more than I did. These kinds of things happen both because I don’t speak up and because organisations let it happen.

I know this because as a manager, I let it happen too. I get requests for higher pay and promotions more frequently from men. Even if they’re rude or unreasonable or completely delusional, I take their sense of being undervalued seriously. I want to make them happy. The women I have managed rarely express their outrage or frustration so openly. They’re slower to escalate concerns and less likely to threaten to leave. I also know that as a white woman in tech, despite these experiences, I’m still statistically likely to be paid more and receive more opportunities than any of my non-white colleagues, male or female or non-binary.

Years of focus on bringing more women into technology roles have created a sense that women are eagerly sought out and highly valued. But workplace dynamics and cultural attitudes persist in making women of all ages, ethnicities and sexualities feel undervalued and ignored once they’re in the sector. It’s a strange paradox to live within. On the basis of our gender we’re highly visible (more visible if we’re white and heterosexual). As experts, as contributors to those technologies, it can still feel like we’re hidden in plain sight.

In August 2018, WIRED magazine asked: “AI is the future: but where are the women?” Working with start-up Element AI, it estimated that only around 12 per cent of leading machine-learning researchers in AI were women. Concerns about a lack of diversity in computer science have taken on new urgency, partly driven by growing awareness of how human designers influence the systems making decisions about our lives. In its article, WIRED outlined a growing number of programs and scholarships aimed at increasing gender representation in grad schools and industry, at conferences and workshops, while concluding that “few people in AI expect the proportion of women or ethnic minorities to grow swiftly.”

For women, the line between “inside” and “outside” the tech sector — and therefore what contributions are perceived as valuable — keeps moving. While WIRED focused on a narrow set of skills in applied AI — those of machine-learning researchers — it nonetheless recast those skills as the only contributions worth counting towards AI’s impact. The “people whose work underpins the vision of AI,” the “group charting society’s future” in areas like facial recognition, driverless cars and crime prediction, didn’t include the product owners, user-experience designers, ethnographers, front-end developers, the sociologists and anthropologists, subject-matter experts or the customer-relationship managers who work alongside machine-learning researchers on the same applied projects.

Many of these roles evolved explicitly to create the connection between a system and society, between its intended use and unintended consequences. They are roles that typically encourage critical thinking, empathy, seeking out of diverse perspectives — all skills that leaders in the tech industry have identified as critical to the success of technology projects. The proportions of women in these kinds of roles tends to be higher, both by choice and perhaps, as the female employees at Google alleged in their 2018 lawsuit, by design. Yet, even as we wrestle in public debates with the impact the design of a technical system has on humans and society, our gaze keeps sliding over these people — already in the industry — who are explicitly tasked with addressing these problems, including many women. If in public commentary we don’t see or count their contributions as part of the development of AI, then their contributions don’t get valued as part of teams developing AI.

I want more diverse women to become machine-learning researchers. I also want the contributions women already make in a range of roles to be properly recognised. What matters isn’t so much getting more women into a narrowly defined set of technical roles, the boundaries of which are defined by the existing occupants (who are overwhelmingly male). This is still a miserably myopic approach to contributions that “count” in tech. What matters is that the industry evolves to define itself according to the wide set of perspectives, the rich range of skills and expertise, that go into making technology work for humans.

What I really don’t want to see, as the relationship between technical systems and humans takes on greater status in the industry, is women being pushed out of these important roles. It would be the continuation of a historical pattern in the tech industry. As technology historians including Marie Hicks have shown, despite this persistent sense that women “just don’t like computing,” women were once the largest trained workforce in the computing industry. They calculated ballistics and space-travel trajectories; they programmed the large, expensive electromechanical computers crunching data for government departments and commercial companies while being paid about the same as secretaries and typists. But as the value of computing grew, women were squeezed out, sidelined, overtaken by male colleagues. Computer programming was once considered a “soft” profession: women’s work, not “technical” as defined by the men who occupied other technical roles. Eventually it became highly paid, prized… and male-dominated.

I worry that we’re about to do it again, this time in ethical AI. Ethical AI is in the process of transitioning from being a “soft” topic for people more concerned with humans than computers, and treated by the technology industry primarily as a side project, to being a mainstream focus. Experts in ethical AI are a hot commodity. KPMG recently declared “AI ethicists” one of the most in-demand hires for technology companies of 2019. There’s a reason a book about ethical AI like Made by Humans got picked up by a mainstream publisher.

A significant proportion of critical research fuelling interest in the impacts of AI on humans and society has been driven by women: as computer scientists, mathematicians, journalists, anthropologists, sociologists, lawyers, non-profit leaders and policymakers. MIT researcher Joy Buolamwini’s work on bias in facial recognition systems broke open the public debate about whether facial recognition technology is yet fit for purpose. Julia Angwin’s team at ProPublica investigated bias in computer programs being used as aids in criminal sentencing decisions, and exposed competing, incompatible definitions of algorithmic fairness. Data scientist Cathy O’Neil’s book Weapons of Math Destruction was one of the first big mainstream books to question whether probabilistic systems were as flawless as they appeared.

There are many prominent women in AI ethics: Kate Crawford and Meredith Whittaker, co-directors of the AI Now Institute in New York and long-term scholars of issues of bias and human practice with data; Margaret Mitchell, a senior machine-learning researcher at Google, well known in the industry for her work in natural-language processing, who has drawn attention to issues of bias in large corpuses of text used to train systems in speech and human interaction; Shira Mitchell, quantifying fairness models in machine learning; danah boyd; Timnit Gebru; Virginia Eubanks; Laura Montoya; Safiya Umoja Noble; Rachel Coldicutt; Emily Bender; Natalie Schluter. In early 2019, when TOPBOTS, the US-based strategy and research firm influential among companies investing in applied AI, summarised its top ten “breakthrough” research papers in AI ethics, more than half of the authors were women.

Which is why it’s been disconcerting to see, as interest in and funding for AI ethics grows, the gender distribution on panels discussing ethics, and in organisations set up explicitly to consider ethical AI, start to skew male-dominated again. It’s not that more men are taking an interest in ethical AI; that interest reflects the field’s importance and is something to be celebrated. What troubles me is that what “ethical AI” encompasses often ends up being redefined in these conversations as a narrow set of technical approaches that can be applied to existing, male-dominated professions.

Even as the women in these professions — and many of the influential women I just cited are computer scientists and machine-learning researchers — are doing pathbreaking work on the limitations (as well as the strengths) of technical methods quantifying bias and articulating notions of “fairness,” these technical interpretations of ethics become the sole lens through which “ethical AI” is commoditised.

Ethical AI is thus recast as a “new,” previously unconsidered technical problem to be solved, and solved by men. I have been consistently unnerved to find myself talking to academics and institutes planning research investments in ethical AI who don’t know who Joy Buolamwini is, or Kate Crawford, or Shira Mitchell. And I worry that user researchers, designers, anthropologists, theorists — many of them women — whose work in the industry has for years involved marrying the choices made by engineers in designing systems with the humans affected by them, will end up being pushed out as contributors towards “ethical AI.” I’m afraid we’ll just keep finding new ways to render women in the industry invisible. I worry that my own contributions will be invisible.


Every woman in technology can tell you a story about invisibility. At a workshop in 2018, I watched a senior, well-respected female colleague, who was supposed to be leading the discussion, get repeatedly interrupted and ignored. She finally broke into the conversation to say, mystified, as though she couldn’t quite trust what was in front of her own eyes and ears, “Didn’t I just say that? Did anyone hear me say that?” What stayed with me was the way she asked the group the question: she wasn’t angry, just… puzzled. As though perhaps the problem wasn’t that people weren’t listening to her, but that there was some issue with the sound in the room itself, or with her. As if perhaps the problem was that she was a ghost who couldn’t be heard.

I’ve listened to men repurpose my proposals as their own — not intentionally or maliciously, just not realising that they had heard me say the same thing seconds earlier. I’ve watched as men on projects I’ve led attribute our success to their own contributions. It is unsettling and strange to be both visible as a woman in tech, and yet invisible as a contributor to tech. For women of colour, invisibility is doubly felt. Even at conferences and on panels dedicated explicitly to the experiences of women in tech, most panellists will be white women in the industry.

Perhaps nobody has captured the stark influence of gender on visibility more vividly than the late US neurobiologist Ben Barres. In his essay “Does Gender Matter?” Barres recounts his own experiences as a student at MIT, entering the school as a woman before transitioning to a man. At the time, Barres was responding to comments made by male academics, including Harvard president Lawrence Summers and psychologist Steven Pinker, asserting biological differences to explain the low numbers of women in maths and science. In the essay, Barres is careful not to attribute undue significance to his own experiences. But sometimes anecdotes reveal as much as statistics do.

Barres described how, as a woman and as the only person to solve a difficult maths problem in a large class mainly made up of men, she was told by the professor that “my boyfriend must have solved it for me.” After she changed sex, a faculty member was heard to say that “Ben Barres gave a great seminar today, but then his work is much better than his sister’s.” Confronting innate sex differences head-on, Barres described the intensive cognitive testing he underwent before and after transitioning and the differences he observed: increased spatial abilities as a man; the ability to cry more easily as a woman. But by far, he wrote, the main difference he noticed on transitioning to being a man was “that people who don’t know I am transgendered treat me with much more respect. I can even complete a whole sentence without being interrupted by a man.”

This is the first time I’ve written publicly about my own experiences as a woman in technology. Up till now I’ve played by the rules. I have spent years being polite in the face of interruptions, snubs, harassment. I know instinctively how to communicate my opinion in a way that won’t upset anyone and have tended to approach talking about sexual harassment and discrimination on the basis that what matters most is not upsetting anyone. I have focused on finding the “right words,” the “right time” to talk about gender issues.

But I have observed the ways in which gender (and race, and sexuality) continues to shape who is in power and whose contributions get counted in the tech industry, in ethical AI. Even when it comes to the “right” way to talk about gender issues, I can’t help but notice how different “right” looks for men as compared with women, and for white women as compared with women of colour. Increasingly, men are publicly identifying as champions of change for women: they sign panel pledges, join initiatives pursuing gender equality, demand equal representation, and it’s seen as a career booster. Women demand change too forcefully and are labelled bullies, drama queens, reprimanded for their over-inflated sense of self-importance. Women of colour are vilified and hounded. And women who stay silent in the face of all this, as I have, implicitly endorse the status quo, often finding themselves swallowed up by it anyway. If my daughter grows up to be interested in tech, these are not the experiences I want her to have. I want her to be unafraid to speak up, to demand our attention. I want her to be seen, and I want her to speak up for others.

If there is no “right” way as a woman to speak about gender issues — if there is no “right” way for a woman to take up space, to take credit — then silence won’t serve me or save me either. The only way forward from here is to start speaking. •

This essay is republished from Griffith Review 64: The New Disruptors, edited by Ashley Hay.

A spectre is haunting the workplace https://insidestory.org.au/a-spectre-is-haunting-the-workplace/ Thu, 11 Apr 2019 00:02:43 +0000

Books | Employers are exercising an extraordinary level of control — overt and covert — over their workers

Most people spend most of their time in slavery. They live in a world of subordination, obeisance and arbitrary decrees; they must endure loyalty oaths, surveillance and the soul-destroying vagaries of dictatorship; they suffer under the burden of potential exile; they are vassals, shunted from fiefdom to suzerain and back again.

Do you recognise this world? You do? Maybe you’re a North Korean dissident, or a refugee who fled Stalin’s Russia. Or maybe you’ve just got home from work.

“Most workers,” argues the American philosopher Elizabeth Anderson in her provocative new book, “are governed by communist dictatorships in their working lives.” When we enter our workplaces we enter a system of private government. And it’s not a pretty sight: the private governments of the past were run by leaders who took power by force or by birth; the private governments of today are run by CEOs.

Taking her examples from American sources, Anderson tells of poultry processors forced to wear adult nappies because they are denied toilet breaks; sweatshop conditions in Californian garment factories; astounding levels of sexual harassment in restaurants across the nation; out-and-out wage theft in many industries; and Amazon warehouse workers suffering under heatwave conditions because management wouldn’t install air conditioning. (To be fair, they did organise for ambulances to ferry the workers who collapsed from heat stroke to hospital.)

Of course, these are the extreme examples. But the system of control, Anderson says, is near universal. Big Brother is boss, and if you don’t like it, there’s the door.

Western governments might be democratic in nature, but when we pass through those ubiquitous security gates into the place where we spend a third of our lives, our corporate lanyards round our necks, we surrender to a system of government that Henry Tudor would recognise and approve of.

Philosophers love their thought experiments, and Anderson’s is a doozy. She describes something familiar in a new way and the scales fall from your hitherto unseeing eyes. After reading her book your workplace will never look quite the same again. Men and women might be born free, but everywhere they are chained to their cubicles.


Journalist and screenwriter Dan Lyons would agree. He’s spent the last couple of years journeying through the work gulags of modern capitalism, talking to dissidents on the shop floor and hubristic managers in the corner offices, and he’s come back a modern-day Solzhenitsyn.

Lyons is no philosopher, or sociologist for that matter, but he is funny — and he’s pissed off. Once a successful journalist, he was disrupted out of a job he loved at Newsweek, ended up working at a tech start-up staffed by enthusiastic millennials, wrote a jaundiced book about his experiences (naturally enough called Disrupted), and ended up contributing to the hit HBO comedy Silicon Valley.

You’d think he’d be content now, but Lyons is having none of that. He’s convinced that modern work practices are making people seriously unhappy, and he thinks it’s all the fault of his traditional enemies: those smug oligarchs who run Big Tech. Lab Rats is the result of this personal crusade.

Because Big Tech companies are powerful and successful, how they do things is copied shamelessly by many other companies hoping to emulate their share prices. As a long-time critic of Silicon Valley’s culture, Lyons is convinced that the industry’s influence on the happiness of modern workers will reach far beyond a pocket of California.

Big Tech loves cheap workers. Facebook, Amazon, Netflix and Google — the so-called FANGs — are all big companies with relatively small workforces, mostly made up of contractors. For Lyons, the growing use of contract workers, pioneered by tech companies like Lyft and Uber, is making job security a thing of the past.

And Big Tech also loves change, at least on its own terms. Unfortunately, that’s not so good for workers. As Lyons points out, “being exposed to persistent, low-grade change leads to depression and anxiety. The suffering is akin to what we experience after the death of a loved one or spending time in combat.”

Move fast and break things, as the Facebook motto had it. Even lives.


Uncovering the dark side of Silicon Valley is becoming a journalistic industry. In Brotopia, the American TV reporter Emily Chang, presenter and executive producer of Bloomberg Technology, examines how the tech industry treats women. Her tales show job discrimination, investor prejudice, and a culture of sexual politics that would look more at home in parts of the National Rugby League.

The chapter of Chang’s book that got a lot of publicity (including an extract in Vanity Fair) when it was first published in the United States dealt with the anonymous sources who told her about the sex parties held by the upper echelons of the tech business. The sense of entitlement among powerful men is, of course, an old and dismaying story. But the CEOs, founders, venture capitalists and paper billionaires of the Valley have given it a new twist. Perhaps because they were virginal nerds in high school and university, they somehow feel they have earnt the right to run a tech version of the Playboy mansion circa 1972, but with more money, better drugs and even less duty of care.

It’s the same kind of self-justifying ideology used to disrupt legacy businesses or claim that the gig economy is an improvement on that old-fashioned notion, workers’ rights.

Chang’s book also reports on the structural issues that bedevil the tech sector. Despite playing an important role in the creation of the industry, women are woefully underrepresented in its workforce. According to Chang, not only are there too few female coders, developers, CEOs and venture capitalists in tech, but there is also no real commitment to overcoming the deficiency.

In an otherwise interesting and well-researched book, Chang does seem to miss one of the most important issues arising from the Valley’s woman problem. As Safiya Umoja Noble shows in another recent book, Algorithms of Oppression, the programming choices that lie at the heart of the whole enterprise are often hopelessly biased against women and minorities, mainly because they are written by a very narrow cohort of white, Ivy League–educated men.

So, what kind of shop floor of the future are these men creating?


One day, a software programmer in Los Angeles called Ibrahim Diallo turned up at the office to find that his security pass didn’t work. Luckily the security guard recognised him and let him in. Then his computer wouldn’t let him log on. His supervisor sent an email to HR to sort things out. She got back a computer-generated reply saying that he was no longer a “valid employee.” Diallo had been fired by a computer.

Because of a software malfunction, Diallo spent three frustrating weeks without a job and without pay. A minor Kafkaesque moment in the history of industrial relations, you might say. But Dan Lyons says it’s a parable of what’s already happening. “We are meat puppets, tethered to an algorithm,” he writes.

We already have software programs that screen resumés, and irritating workplace surveys run by AI, as well as the widespread use of “continual performance improvement algorithms” that literally monitor a worker’s every move. By creating a new hybrid, part worker, part machine, we run the risk of dehumanising the workplace, says Lyons — with far-reaching psychological effects on workers.

“In my quest to understand the epidemic of worker unhappiness,” writes Lyons, “I’ve come across stressors like dwindling pay cheques, job insecurity and constant, unrelenting change. But [the] fourth and final factor of unhappiness in the workplace — dehumanisation — might be the most dangerous of all.”

An optimist at heart, Lyons says he’s also found signs of a counterrevolution, at least in some companies. There’s the Chicago-based software company Basecamp, which mandates that its staff work only forty hours a week — except in summer, when they take Fridays off. Basecamp’s owners aren’t pursuing world domination, just interesting and fulfilling lives. And, anyway, they’re not short of cash: one of them can still afford to collect racing cars and compete at Le Mans.

Then there’s Managed by Q, a contract-cleaning business, where “everybody cleans,” even the firm’s founder and CEO, who still does shifts scrubbing toilets. Unlike many newly minted companies, Managed by Q doesn’t rely on the gig economy to shave wages and conditions; everyone who cleans also gets health insurance, a pension scheme and stock options.

Lyons argues that companies like these have seen what’s happening in Silicon Valley and then done the opposite. They don’t make Google-sized returns, but their staff turnover is lower, their productivity is higher, and people have fun at work.

The reform that Lyons seems to miss, however, is the most obvious one: collective action. Last November, for example, around 20,000 Google workers staged a walkout protesting at the company’s use of forced arbitration to settle sexual harassment or assault cases instead of allowing its workers to go to court. By February Google had bowed to the pressure and agreed that it would no longer force employees to settle disputes in this way.

As the group that led the campaign, Google Walkout for Real Change, tweeted at the time, “This victory would never have happened if workers hadn’t banded together, supported one another and walked out. Collective action works. Worker power works. This is still just the beginning.” •

Radio revolutionary https://insidestory.org.au/radio-revolutionary/ Sun, 13 Jan 2019 23:27:50 +0000

Books | “Visionary” Sydney-born engineer Cyril Elwell played a pioneering role in what became Silicon Valley

Cyril Elwell has not left many traces. Stanford University’s archives hold multiple drafts of an autobiography no one would publish. The Palo Alto Times carried a one-column obituary when he died in 1963. His name is on a plaque outside a house in Silicon Valley where, according to the inscription, other people did something that “led to modern radio communication, television and the electronics age.”

It’s true that Elwell gets much of a long chapter in Hugh Aitken’s fine history of early twentieth-century American wireless because of his role in a “revolution in the art of radio.” But it has taken Ian Sanders and Graeme Bartram to do what Elwell himself couldn’t manage through all those autobiographical fragments in the Stanford archives — piece together and publish a comprehensive account of the work and life of this important Australian link to the early days of Silicon Valley.

About a decade after Leland Stanford Junior University opened its doors in the 1890s, the Australian-born Elwell began a BA in electrical engineering. Stanford, as Sanders and Bartram write, was set up to teach “the traditional liberal arts and the technology and engineering that were… changing America.” Its four-year electrical engineering course and related work program quickly acquired a reputation that reached young Elwell in Sydney via an American working at the Ultimo Powerhouse. On a lecture tour of Australia the year Elwell graduated, Stanford’s founding president, David Starr Jordan, mentioned this “most brilliant Australian student” to the Brisbane Telegraph.

“It cannot have been unusual in Australia in 1902 for a young man with a technical bent to want to study electrical engineering,” writes Aitken. “What was distinctive about Elwell was his bullheaded determination that, with no financial support from his family, with the slimmest of cash resources of his own, and with no assurance whatever that his previous education would gain him admission, he was going to study electrical engineering in America, and not just any university, but at Stanford in particular. And it must have been this determination, this confidence that the thing could and would be done, that gave him the aid and encouragement of people on whom he had no claim except friendship.”

Elwell was born in the Melbourne suburb of Richmond in 1884. There is no official record of the death of his birth father, an American from Rochester, New York. Cyril took the surname of his mother Clothilda’s second husband, an English journalist who became the Sydney Morning Herald’s principal state political reporter. After Thomas Elwell died of kidney disease on Cyril’s eleventh birthday, Clothilda married the owner of Sydney’s Grosvenor Hotel. Aitken suggests that Elwell “became accustomed to abrupt change, and perhaps learned not to commit himself too completely to any given state of affairs.”

Living at the hotel with his family, he learned about electricity from a German engineer who maintained the electrical systems: at the time, being “lighted throughout with Electric Light and Gas” was luxury. In 1900, after finishing school at nearby Fort Street High, he travelled in Europe with his family for six months. They visited the Universal Exhibition in Paris, where one of the hit attractions was Valdemar Poulsen’s “telegraphone,” a recently patented magnetic wire recorder the Danish inventor had used to record the voice of the Austrian emperor.

Back in Australia, Elwell did a course in physics at Sydney Tech then started an apprenticeship with the Electrical Section of the NSW Railways, where he worked upgrading the Ultimo Powerhouse to supply electricity to Sydney’s trams. Deciding he wanted to study electrical engineering at the university he had heard about from the visiting American, Elwell worked his passage to San Francisco then studied further to pass the entrance exam.

He remained a “big current” man through his studies at Stanford, undertaking a final-year project on the design of high-current transformers for electrical smelting furnaces and then landing a job with a Californian steel company. He was not especially interested in the “small currents” of telephone and telegraph engineering when one of his Stanford professors recommended him to the Oakland bankers backing a local wireless telephone start-up.

Elwell responded as he would many more times in his career: he took a big risk, made something remarkable happen, got an even better idea, fell out with people he really needed to get on with, and moved on to something else.


Early “spark” wireless transmitters generated intermittent electrical emissions that worked well enough for the dots and dashes of telegraph signalling. Elwell was one of the pioneers of a new method, “continuous waves.” Working for the start-up in San Francisco, he made enough progress with spark apparatus to persuade the Oakland Tribune that the wireless telephone was “assured of success.” But he convinced himself of the opposite: that intelligible sounds could not be transmitted over long distances without continuous electromagnetic waves.

Different inventors developed three ways to generate these waves. One, patented by the Dane Valdemar Poulsen, was the “arc.” Poulsen used this technology to communicate by telephone across a distance of 270 kilometres in 1907. Elwell bought US rights to the “Poulsen arc,” demonstrated the equipment in San Francisco, and set up a company with himself as president and chief engineer and David Starr Jordan, the Stanford president, as a founding investor. Stations opened in Stockton and Sacramento in 1910. A third, at San Francisco’s Ocean Beach, became a giant landmark for local shipping. Elwell developed the equipment, including a better receiver. Other stations followed in cities like Seattle and Portland on the west coast, Kansas City and Chicago in the east, and, in 1912, Honolulu.

Despite its impressive technical achievements, the company struggled to make money. It was recapitalised and renamed, twice. In the process, Elwell lost control of it. From 1911, the person with that revered Silicon Valley moniker, the founder, was an ordinary director and only a small shareholder in what was now the Federal Telegraph Company.

 

Technological translator: Cyril Elwell (fifth from the left) at a Federal Telegraph Company reunion in Palo Alto in 1956. Lee de Forest is third from the left. History San José

Wireless telephony was the goal, but continuous waves also had important benefits for wireless telegraphy. They could be tuned to particular frequencies and they didn’t dissipate electrical power to the same extent as spark transmissions. When Elwell travelled to Washington in 1912 to try to sell Federal Telegraph’s technology, the US navy already had equipment that could feed into an antenna twice the power that Elwell’s could. But when a test was set up to communicate across America with San Francisco, observers were stunned to find Federal Telegraph’s Honolulu station listening in as well. This “unheard of” distance provided big possibilities for US military command.

Federal got a contract for a station in the Panama Canal Zone in 1913. It went into service two years later. Others followed in the Philippines, Pearl Harbor, San Diego, Puerto Rico, Guam, Samoa, New York, Annapolis and Lafayette, France. By the end of the first world war, the US navy’s oceanic communication system was the world’s best and Washington, DC could communicate with ships anywhere in the Caribbean, the Pacific and the Atlantic.

By then, though, Cyril Elwell was long gone from the company that supplied all the transmitters. Sanders and Bartram say he disagreed with the level of expenditure needed to extend the company’s network to Japan and China. Other directors were given an ultimatum: they could have the engineer and founder, or they could have the financier who now controlled the company. They picked the financier. One of Elwell’s own hires, Leonard Fuller, took over the engineering and went on to design what Aitken calls “the third and greatest generation of arc transmitters.”


Based in London from May 1913 and Paris from 1916 to 1920, Elwell worked as “a kind of freelance engineer” for anyone who paid, among them the Royal Navy (which used the Poulsen arc system on all its vessels by the end of the war), the British Post Office, the governments of France and Italy, and the company holding the Poulsen patents for the British Empire. A major coup was the selection of Elwell–Poulsen’s system ahead of Marconi’s and other competitors’ for two large stations near Oxford and Cairo in the early 1920s, the first steps in Britain’s long-delayed Imperial Wireless Chain. Marconi’s new shortwave “beam” system dominated later British stations, and valves overtook arcs as the technology of choice for generating continuous waves.

Building a big, Californian-style home in Surrey in the 1920s, Elwell became something of a “local celebrity,” write Sanders and Bartram. He was an early investor in the Mullard Radio Valve Company (eventually bought by Philips) and claimed to have made a lot of money from it, though Sanders and Bartram say “the scope of [his] involvement following its formation in 1920 is unclear.” He established a company, C.F. Elwell Limited, hoping to profit from the boom in radio broadcasting in the 1920s, but the enterprise was not a success, and he “all but erased the episode from his later recollections of history.” The second part of this handsome, expanded edition is filled with pictures and descriptions of the “Aristophone,” “Statophone” and other receivers that C.F. Elwell Limited manufactured in Britain.

Elwell also set up a company to manufacture and market “talking pictures” equipment, licensing American Lee de Forest’s Phonofilms system for Britain and its overseas territories in 1923. The company was liquidated in 1929 as better technologies developed by bigger corporations came to dominate the retooling of cinemas for “talkies.”

De Forest and Elwell had history. Elwell gave de Forest a job in 1911 at Federal Telegraph. There, in the laboratory and factory on the corner of Emerson Street and Channing Avenue in Palo Alto, as the commemorative plaque now reads, “with two assistants, Lee de Forest, inventor of the three-element radio vacuum tube, devised in 1911–13 the first vacuum tube [valve] amplifier and oscillator.” Amplification of tiny signals was a crucial advance that made long-distance telephony, broadcasting, talking pictures and much else possible, hence the further tribute that “worldwide developments based on research conducted here led to modern radio communication, television and the electronics age.” Elwell’s role was to have founded the company where de Forest and his assistants did this work. He was never happy with the credit de Forest got, especially his later self-styling as the “father of radio.”


Elwell married twice. Two of the four children he had with his first wife, Ethel, died very young, one at six weeks, the other just before his fourth birthday. Ethel died in 1927, aged thirty-seven. Two years later, Elwell married Helen Hubbard, the resident piano player at British Talking Pictures, where he was working as an adviser. They had a daughter in 1932.

By the 1930s, Sanders and Bartram say, Elwell’s technical skills had “lost their edge.” He was commissioned to design and construct transmitters for the BBC, and some stations for the early-warning radar system constructed along the English coast before and during the war. In 1940, he took his family to live in the United States. From 1947, he consulted to the young Hewlett-Packard, another Silicon Valley start-up founded by Stanford graduates, where he was remembered as “extravagantly garrulous,” a ready source of “tales of de Forest’s perfidy.”

For contemporary entrepreneurs dreaming of founding disruptive enterprises, “Cy” Elwell’s story is cautionary. These biographers conclude he was “visionary” — about continuous wireless waves, talking pictures and television — but “not a particularly deep-thinking theorist.” He was a “highly competent, practical implementer of engineering concepts” with “little tolerance for those who questioned his technical judgement.” Efficient, cold even, in dealing with engineering challenges, he could be emotional in personal interactions and boardrooms.

Success in what came to be called Silicon Valley always needed more than big ideas, certainty and self-confidence. Elwell was “torn between the need to act alone — where he had the best chance to receive credit for what he saw as his exceptional technical foresight — and the need for funding which could only come from large, established enterprises.”

Hugh Aitken calls Elwell a “technological translator,” someone who “worked at the interface between the laboratory and the marketplace.” With continuous waves, he “engineered a shift from the world of purely technical criteria… to a world where market considerations played a major role,” but paid heavily for it. “As control shifted from the individual innovator to the corporate institution, as technical development became increasingly a function of market performance, stresses appeared that in the end made joint action impossible.” •

Will a robot take your job? https://insidestory.org.au/will-a-robot-take-your-job/ Thu, 27 Sep 2018 07:32:26 +0000

Review essay | Three new books challenge lazy thinking about job-stealing robots and infallible algorithms

Thinking about the implications of artificial intelligence, or AI, can be disorienting. On the one hand, we are surrounded by technological marvels: robot vacuum cleaners, watches that call the nearest hospital when we have a heart attack, machines that can outplay humans at just about any game of skill. On the other hand, many parts of life seem to be going backwards. Things we once took for granted, from the ABC to the weekend, have become “luxuries we can no longer afford.”

Seeming contradictions like these are not new. Technological change has always been uneven, making manufactured products cheaper, for instance, yet leaving many service activities largely unaffected. Increased productivity in the economy as a whole has pushed wages up, making labour-intensive services more expensive.

This divergence is much more marked with AI. Compared to earlier rounds of technological change, we are seeing a combination of incredibly rapid change and near stagnation. The acceleration of computing power has been so fast that a Series 1 Apple Watch (itself a museum piece three years after its introduction) can perform calculations as fast as the Cray X-MP, the most powerful supercomputer in the world back in 1982. The amount of digital information generated every hour of every day exceeds all the digital data that was created up to and including the year 2000.

By contrast, many areas of daily life have changed little over the course of a generation. The most technologically advanced item in the average kitchen is the microwave oven, first marketed to households in the 1970s. Air travel reached its peak speed with the Concorde, which entered service in 1976 and was withdrawn in 2003.

Every now and then, some new advance revolutionises a previously stagnant activity. The typical passenger car today is only marginally different from the models of twenty or even fifty years ago. It has smarter electronics and improved safety systems, but the experience of driving and the basic technology of the internal combustion engine are the same. Over the past decade, though, we have seen the arrival of electric cars and then of autonomous vehicles. While the future remains unclear, it seems certain that road transport will change radically over the next twenty years, and even more so over the next fifty.

Not all the new arrivals are beneficent. In 2062: The World that AI Made, Toby Walsh points to the alarming possibilities raised by autonomous weapons, of which armed drones like the Predator represent the first wave. The drone itself contains nothing fundamentally new — it’s a pilotless aircraft, equipped with cameras and missiles, that can fly for hours. The big developments are in the telecommunications systems that allow controllers on the other side of the planet to view the camera output in real time and order the firing of the missiles at any target that they choose.

At present these controllers are human, error-prone but capable of making moral choices in real time. But the development of pattern-recognition technology is such that it is already feasible to replace the human controllers with an automated control system programmed to fire when preset criteria are identified. The point at which moral choices are made, explicitly or otherwise, is in the setting of the criteria and the programming of the control system.

Further off, but by no means inconceivable, are systems whose criteria for targeting (for example, “fire on vehicles containing armed men”) are replaced by higher-level objectives. Such an objective might be “fire when the result will be a net saving of lives” or, more probably, “fire when the result will be a net saving of lives on our side.” In this case, in effect, the machines are being given moral principles and ordered to follow them.

These possibilities are alarming enough that Walsh, a professor of artificial intelligence at the University of New South Wales, and some of his colleagues organised an open letter calling on the United Nations to ban offensive autonomous weapons. The letter rapidly attracted 2000 signatures and started a process that may ultimately lead to a new international convention. As the history of disarmament proposals has shown, though, the resistance to any restriction on lethal technology is always formidable and usually successful.

The theme of human choice is developed further in Ellen Broad’s Made by Humans, an excellent analysis of the way the magical character of AI hides built-in human biases. Among Broad’s central observations is the fact that the word “algorithm” is being used in a different way, something I hadn’t noticed until she pointed it out.

For the last thousand years or so, an algorithm (the word derives from the name of the ninth-century mathematician al-Khwarizmi) has had a pretty clear meaning — namely, a well-defined formal procedure for deriving a verifiable solution to a mathematical problem. The standard example, Euclid’s algorithm for finding the greatest common divisor of two numbers, goes back to 300 BCE. There are algorithms for sorting lists, for maximising the value of a function, and so on.
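
Euclid’s procedure is short enough to state in full. Here is a minimal sketch in Python (the function name and test values are mine, for illustration, not drawn from the books under review):

    def gcd(a: int, b: int) -> int:
        # Euclid's algorithm: repeatedly replace (a, b) with
        # (b, a mod b); when the remainder reaches zero, a is
        # the greatest common divisor.
        while b:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))  # prints 21

The point is how little is left to judgement: given the same inputs, anyone following these steps, human or machine, arrives at the same verifiable answer.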

As their long history indicates, algorithms can be applied by humans. But humans can only handle algorithmic processes up to a certain scale. The invention of computers made those limits irrelevant; indeed, the mechanical nature of the work made executing algorithms an ideal task for computers. On the other hand, the hope of many early AI researchers that computers would be able to develop and improve their own algorithms has so far proved almost entirely illusory.

Why, then, are we suddenly hearing so much about “AI algorithms”? The answer is that the meaning of the term “algorithm” has changed. A typical example, says Broad, is the use of an “algorithm” to predict the chance that someone convicted of a crime will reoffend, drawing on data about their characteristics and those of the previous crime. The “algorithm” turns out to over-predict reoffending by blacks relative to whites.

Social scientists have been working on problems like these for decades, with varying degrees of success. Until very recently, though, predictive systems of this kind would have been called “models.” The archetypal examples — the first econometric models used in Keynesian macroeconomics in the 1960s, and “global systems” models like that of the Club of Rome in the 1970s — illustrate many of the pitfalls.

A vast body of statistical work has developed around models like these, probing the validity or otherwise of the predictions they yield, and a great many sources of error have been found. Model estimation can go wrong because causal relationships are misspecified (as every budding statistician learns, correlation does not imply causation), because crucial variables are omitted, or because models are “over-fitted” to a limited set of data.
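
The last of those pitfalls is easy to demonstrate numerically. A minimal sketch with invented data (the polynomial degrees and figures are mine, chosen only to show the effect):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + rng.normal(0.0, 0.1, 10)  # the true relationship is linear, plus noise

# A ninth-degree polynomial passes through all ten noisy points...
overfit = np.polynomial.Polynomial.fit(x, y, deg=9)
# ...while a straight line captures the underlying relationship.
honest = np.polynomial.Polynomial.fit(x, y, deg=1)

print(np.abs(overfit(x) - y).max())  # effectively zero: a "perfect" fit to the sample
print(overfit(1.2), honest(1.2))     # outside the sample, the two diverge sharply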

Broad’s book suggests that the developers of AI “algorithms” have made all of these errors anew. Asthmatic patients are classified as being at low risk for pneumonia when in fact their good outcomes on that measure are due to more intensive treatment. Models that are supposed to predict sexual orientation from a photograph work by finding non-causative correlations, such as the angle from which the shot is taken. Designers fail to consider elementary distinctions, such as those between “false positives” and “false negatives.” As with autonomous weapons, moral choices are made in the design and use of computer models. The more these choices are hidden behind a veneer of objectivity, the more likely they are to reinforce existing social structures and inequalities.
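
To make the last of those elementary distinctions concrete, here is a small sketch with invented labels and predictions (nothing in it comes from Broad’s book):

# labels: 1 means the person actually reoffended, 0 means they did not.
# preds:  1 means the model predicted reoffending, 0 means it did not.
labels = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
preds  = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]

# A false positive flags someone who would not have reoffended;
# a false negative clears someone who would have.
false_positives = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
false_negatives = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)

# Which error matters more is a moral choice, not a technical one.
print(false_positives, false_negatives)  # 3 1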

The superstitious reverence with which computer “models” were regarded when they first appeared has been replaced by (sometimes excessive) scepticism. Practitioners now understand that models provide a useful way of clarifying our assumptions and deriving their implications, but not a guaranteed path to truth. These lessons will need to be relearned as we deal with AI.

Broad makes a compelling case that AI techniques can obscure human agency but not replace it. Decisions nominally made by AI algorithms inevitably reflect the choices made by their designers. Whether those choices are the result of careful reflection, or of unthinking prejudice, is up to us.


Beyond specific applications of AI, the technological progress it generates will have effects throughout the economy. Unfortunately — as happened during earlier rounds of concern about technology — the discussion has for the most part been reduced to the question, “Will a robot take my job?” Walsh and Broad both point to the simplistic nature of this reasoning.

A more comprehensive assessment of the economic and political implications of AI comes in Tim Dunlop’s The Future of Everything. (Disclosure: I’ve long admired Dunlop’s work, and I wrote an endorsement of this book.) Rather than focusing on AI, Dunlop is reacting to the intertwined effects of technological change and the dominant economic policies of the past few decades, commonly referred to as neoliberalism or, in Australia, economic rationalism.

The key problem is not that jobs will be automated out of existence. In a system dominated by the interests of capital, the real risk is that technological change will further concentrate wealth and power in the hands of the dominant elite often referred to as the 1 per cent. As Dunlop says, radical responses are needed.

The most obvious is a reduction in working hours. This has been one of the central demands of the working class since the nineteenth-century campaign for an eight-hour working day. After a century of steady progress, the trend towards shorter working hours halted, and even to some extent reversed, in the 1970s. The four decades of technological progress since then have produced no significant movement.

This is a striking illustration of the fallacy of technological determinism. Under different political and economic conditions, information and communications technology could already be providing us with the leisured life envisioned by futurists of the 1950s and 1960s. Instead, it has become a tool for keeping us tethered to the office on a 24/7/365 basis.

Closely related is the question of flexible working hours. As Dunlop observes, “flexibility” is an ambiguous term. Advocates of workplace reform praise flexibility, but what they mean is top-down flexibility, the ability of managers to control the lives of workers with as few constraints as possible. Bottom-up flexibility, the ability of workers to control their own lives, is directly opposed to this. To put it in the language of game theory, flexibility is (most of the time) a zero-sum commodity.

More radical ideas include treating data as labour and moving to collective ownership of technology. Some of the most valuable companies in the world today, including Facebook and Alphabet (owner of Google), rely almost entirely on data generated by users of the internet. “We are all working for these tech companies for free by providing our data to them in a way that allows them to hide our contribution while benefiting immensely from it,” writes Dunlop. “It is way past time that we were paid for this hidden labour, potentially using that income to offset reductions in our formal working hours.”

Dunlop suggests that taxes on the profits of tech companies could be used to finance a universal basic income, which would provide everyone with an income sufficient to live on, whether or not they were engaged in paid work.

The collective ownership of technology sounds radical, but it is, in many respects, an extension of that same argument. Increasingly, technology is embodied not in large pieces of equipment, like blast furnaces or car factories, but in information: computer code, data sets and the protocols that integrate the two. As Stewart Brand observed back in 1984, information wants to be free. In the absence of legal restrictions or secrecy, that is, a piece of information can be replicated indefinitely, without interfering with the access of those who already have it. As the cost of communications and storage drops, so does the cost of replicating and transmitting information.

Of course, there are many reasons, such as privacy, why we might want to restrict access to information. But concerns about privacy have been largely disregarded under neoliberal policies. On the other hand, strenuous efforts have been made to protect and extend “intellectual property,” the right to own information and prevent others from using it without permission. These rights, supposedly given as a reward to inventors and creators, almost invariably end up in the hands of corporations.

From this perspective, longstanding demands for workplace democracy and worker control are merging with the critique of intellectual property largely driven by technical professionals. For these workers, the realities of the information age are incompatible with the thinking behind intellectual property. As Dunlop says, worker ownership is “another way of changing how we think about technology… not just a means to a fairer society, but a demand that fundamentally changes how we understand the creation and distribution of work and wealth.”

There’s a lot more in these books, and particularly Dunlop’s, than can be covered in a brief review. Each provides useful correctives to the lazy thinking about job-stealing robots and infallible algorithms that dominates much of our public discussion. And all centre on the same basic point: while technology has its own logic, the way technology is used is a matter of choice.

The key question is: who gets to make those choices? Under current conditions, they will be made by and for a wealthy few. The only way to democratise choice about technology is to make society as a whole more democratic and equal. •

Lost in translation – or should that be transcription? https://insidestory.org.au/lost-in-translation-or-should-that-be-transcription/ Mon, 20 Feb 2017 22:31:00 +0000 http://staging.insidestory.org.au/lost-in-translation-or-should-that-be-transcription/

Books | This account of the latest research on genes and society poses some of the right questions

The meaner a book review, the more fun it is to read. In the introduction to The Genome Factor, Dalton Conley explains that he and Jason Fletcher came together after he dismantled a paper Fletcher had presented at a conference. Dismantling the book that has resulted – indeed, dismantling anything – is only possible if the thing hangs together in the first place. This book doesn’t. It is more like a trash and treasure market. There are gems here, but also – to the mind of a confirmed reductionist molecular biologist like me – vast stalls offering items that hold little value.

But I must stress that one person’s trash can be another’s treasure. If you enjoy lines like “We show how genotype acts as a prism, refracting the white light of average effects into a rainbow of clearly observable differential effects and outcomes,” then you are going to find a lot to enjoy in this work. For the sake of sport, let’s start with what I considered to be the trash and then come to the treasure.

For a molecular biologist, attention to detail is important. A single error in a gene, like an error in computer code, can kill everything. Everyone makes mistakes, and pedants (like me) take an embarrassing degree of pleasure in pointing them out. To be fair, there aren’t too many mistakes in The Genome Factor, but the ones on display are beauties. On page 36, Huntington’s disease is given as an example of a recessive genetic disorder. In reality, Huntington’s is the classic example of a dominant disorder, as is taught in most undergraduate classes. This might just be a slip of the pen but it is akin to messing up the central dogma of modern molecular genetics by getting transcription and translation wrong. That occurs too, on page 197, where nonsense mutations are said to block transcription when they actually stop translation.

The book’s embedded confusion about natural selection, evolution and genetics is more worrying. On page 6, the old argument about dilution of genetic advantage in each generation is brought up, this time by analogy to a card game called Pass the Trash. Even if you receive a royal flush in this game, you lose some of those cards in the next round – just as you can only pass on half your genes to your children, diluting their effect and giving your kids “no particular genetic advantage.” This criticism of the idea of evolution was levelled at Darwin himself, but it was resolved after Gregor Mendel showed that specific genes could encode features, essentially autonomously, across generations, which means that blending and dilution don’t undercut evolutionary progress. Even if you only have half a royal flush, you still hold important cards. Of course, the authors are aware of this, and elsewhere they explain that “if every gene depended on ten others, evolution would be pretty constrained.” Including these conflicting explanations makes the text very confusing.

But there’s more. The most puzzling chapter comes with the provocative title “Is Race Genetic? A New Take on the Most Fraught, Distracting, and Nonsensical Question in the World.” This certainly is a fraught question, and innumerable ill deeds have been perpetrated in response. But I was surprised to see it described as a “nonsensical” question. Perhaps we should avoid it, but I didn’t expect it to be eliminated before the chapter had begun. By page 94, the verdict is in: “Race does not stand up scientifically, period.” (Back on page 7, it was described, with emphasis, as “just plain wrong in genetic terms.”) Part of me was relieved, but then a few pages on I am invited to “compare Pygmies and Bantus… Inuit and Swedes.” My mind is swimming as I struggle to compare things that don’t exist scientifically and to cleanse myself of my own recalled observations of the rich diversity of this imperfect world.

But terminology is there to help me. Although it appears that race is scientifically invalid, we discover on page 101 that “self-identified” and “self-reported” race, “continental ancestry,” “race groups” and “continental clines” are useful classifications. There is a valid point being made here, and the term “continental ancestry” is certainly better than “race.” Then, on page 111, everything becomes clear. The authors explain that it is not race itself but the “common government definitions of race” that are “flimsy” and “the notion of continental ancestry having distinct genetic signatures is a biological reality.” From the perspective of researchers working in the United States – where residents and visitors are frequently asked to tick boxes declaring “Caucasian,” “Hispanic,” “African American” and so on – I guess this chapter may make some sense, but I do worry that many readers will be as confused as I was by the linguistic somersaults performed to avoid the fraught topics raised in this chapter.

We’re nearly at the end of the “trash” section. Another aspect that I didn’t like but others might enjoy was the authors’ festival of “straw man” burning. I am not sure if it was regarded as a straw man at the time, but the controversial 1994 book by Richard J. Herrnstein and Charles Murray, The Bell Curve, certainly preoccupies the authors. As far as I can discern, this is a nonfiction equivalent of Aldous Huxley’s Brave New World. I believe it made a major impact on publication, but I am not sure if it is still driving agendas in the social sciences. Nevertheless, refuting it seems to be a major motivation behind The Genome Factor.

But that’s not the only straw man. I jotted the words “straw man” in the margin each time I came across a case. On page 35, we hear that “it was silly of scientists to think they would find the gene… for sexuality or for IQ.” I am aware of no scientist, alive or dead, who has been that silly. The idea has always been to see if one could identify genes that had an impact on those characteristics. Given the extensive achievements in the genetics of mental retardation, for instance, you would have to conclude that many such genes have been identified.

Next Conley and Fletcher refer to the fact that a “small group of social scientists” are proposing a set of “explosive ideas” about how genetics and biology may partly explain the economic success of different countries. We are told that the work is “yet to coalesce around version 1.0.” Personally, I don’t think scholars dealing with economics and politics are likely to be put out of a job by population geneticists any time soon. But there’s more, even including a discussion of whether different genetic groups go to war more frequently. On this point, the authors find that, in fact, related groups fight more often – perhaps because they are neighbours.

The final criticism I would make is that Conley and Fletcher’s enthusiastic language sometimes sounds good but doesn’t make much sense. When thinking about behavioural genetics, we are told, it is not “genes or environment” but “genes times environment.” Later, in a key section on genome modification, we receive the cheerful challenge: “Unhappy with your APOE4 [Alzheimer’s susceptibility gene] variant… Change it?” A molecular biologist like me wonders how, even with the remarkable new tools such as CRISPR, we would go about changing a gene in every cell of a living human body. This is crazy talk.


But we can forgive much of this because in every market there are treasures. I really liked the chapter on genotocracy. You have to think about Andrew Niccol’s film Gattaca here – about the benefits and the dangers of genetic pre-judgements (and prejudice). The belief that many people – perhaps everyone, at birth – may one day have their genomes sequenced and that hackers may release this information is probably realistic. Like the authors, I have concerns in this area. They don’t say this specifically, but I don’t think there will be much good news in genomes. Current genomics tends to focus on bad news – identifying genetic disease genes or risk of disease – which means that leaked genetic information is unlikely to reflect well on the individuals concerned. I do think that assigning genetic scores to things like “educational promise” and even “athletic promise” could be self-defeating, and the authors are right to highlight the issue. The section on genome modification with CRISPR is a bit odd in some places – changing a single disease-causing letter in the genetic code hardly makes “genetic history irrelevant in a tangible way” – but the new technology will certainly have profound effects.

To me, the epilogue – Genotocracy Rising, 2117 – is even better. Here, the prospect of parents having embryos sequenced before implantation is raised. I think this is possible and it could have major effects on society. The authors fixate a little on parents choosing the embryo with the highest IQ. The “cult of IQ worship” is certainly a problem: IQ is not more important than other qualities like empathy, sociability or even honesty, but it does already seem to be influencing people’s career prospects disproportionately. I don’t think it will be possible for parents to optimise their children’s IQs or beauty or personality any time soon – these things are just way too complicated and parents will only ever be able to choose between a handful of embryos – but I do think pre-implantation diagnosis will be used to reduce serious single-gene genetic disease.

Finally, the impact on society depends on scale and penetration. Already, the ultra-rich have all the advantages of the super-powerful – if you want to be president or a Hollywood star, it helps if your parents or partner was one of those things. But since most people are not in this super-elite category, the stratification – while real – doesn’t penetrate at every level. I think it will be the same for genotocracies – there will be some stratification but it won’t permeate all of society. Consequently, I am more relaxed about the future than either the authors of this book or those of The Bell Curve appear to be.

In other words, if you enjoy reading about the future, then you’ll find this book easily digestible and fun to read. But I wouldn’t lose much sleep over it. •

Wrestling with Sir Ken https://insidestory.org.au/wrestling-with-sir-ken/ Wed, 24 Jun 2015 00:27:00 +0000 http://staging.insidestory.org.au/wrestling-with-sir-ken/

Dean Ashenden takes on the sixties, GERM, and the world’s best-known educational revolutionary

Ken Robinson is perhaps the most celebrated of schooling’s growing band of global gurus, presenter of the most-watched talk in the history of TED, commander of seven-figure speaking fees, profiled in Vanity Fair, and knight of the British realm. He is a prominent advocate of a “revolution” to “transform” schooling, a critic of the present “industrial” system of teaching, and an opponent of what he calls GERM (the global education reform movement) and its goal of “improving” a fundamentally outdated and dysfunctional educational form. He is by no means alone in holding these views. His new book, Creative Schools: Revolutionizing Education from the Ground Up, is a frontal assault in a gathering battle over what schooling is for and what it should look like.

Robinson makes two great contributions to this struggle. He grasps that something big is going on in and around schools, and he insists that the received way of conducting schooling is, at last, vulnerable. His account of what a “transformed” school does and can look like is incomplete but nonetheless inspiriting. There are, however, serious shortcomings in his understanding of present realities and future possibilities, and in his “theory of change.” It is possible to share his sense of urgency and possibility without subscribing to his understanding of how history works or his confidence that “time and tide are on the side of transformation.”

To begin with what Robinson is against. He is against what he calls the “industrial” approach to schooling, and he is against a “reform” agenda that derives from and reinforces that approach. “Industrial” schooling (he says) was installed to meet the social and economic needs of the nineteenth century, and is “wholly unsuited” to the twenty-first century. GERM – the rather tortured pun is intended – pushes schooling in exactly the wrong direction with “catastrophic consequences” for students and teachers, and compounds an “ever-widening skills gap between what schools are teaching and what the economy needs.” Standardised and standardising education crushes creativity and innovation, “the very qualities on which today’s economies depend.”

Moreover, and despite its reliance on a mass of research into “what works,” GERM itself doesn’t. Driven by “political and economic interests” including the OECD and its test-based league tables, national governments (remember Julia Gillard’s “top 5 by ’25”?), and giant testing corporations, the GERM prescription has delivered only modest, patchy and sometimes transient gains. The big problems of schooling – inequality, low student engagement and high attrition, teacher dissatisfaction – are as pervasive as ever.

And what is Robinson for? He is for a transformed system. That, he argues, is what really works. “The challenge is not to fix this system but to change it; not to reform it but to transform it.” That is required by both the emerging social and economic reality and by the very idea of “education.” The continuing cultural, social and development tasks of schools are central to his thinking, but so is the view that schools must prepare young people for a “profoundly” changed workplace by developing “twenty-first-century skills,” including flexibility, adaptability, initiative and self-direction, critical thinking and problem solving, and financial, economic, business and entrepreneurial literacy.

All this is consistent, in Robinson’s view, with the incontestable fact that “children have a powerful, innate ability to learn.” The school’s job is not to push them through a one-size-fits-all program but to build on this “learning power.” Within a familiar structure (arts, humanities, maths, science and so on) curriculum should be enacted in a quite new way. Entrenched distinctions between the academic and the vocational, between the formal and the informal curriculum, and between disciplined learning and the development of “creativity,” must be overcome. Doing and making should be accorded as much time and respect as study. Learning must be “personalised” to match the learner’s age, stage, interests and capacities. Schools must give all students a red-hot chance to find out what they are good at and passionate about. Students must learn with and from each other, and take full advantage of the resources of the home, the community and, of course, digital technologies.

To these ends, assessment should focus on developing learning and the learner, generating feedback and guidance rather than mere comparison and grades. It must be as concerned with each individual’s growth in understanding, insight and capacity as with the acquisition of propositional knowledge, as much a form of learning as a support to it.

Most important of all is the teacher. “Great teachers are the heart of great schools.” The teacher’s core and irreplaceable responsibility is to create the conditions in which learning can be generated while accepting that he or she is “not always in control of these conditions.” Teachers must ignore the false distinction between “traditional” and “progressive” pedagogies to draw on a range of fit-for-purpose techniques and approaches. What matters is getting the right approach for the purpose and the learner. Contra GERM and the accountability agenda, teachers must be trusted, respected and rewarded as professionals.

Those acquainted with Robinson’s earlier work will be familiar with this “critique of the way things are” and his “vision of how they should be.” But to these Robinson now adds a “theory of change.” It is a bold undertaking.

At the heart of this theory is a switch in what he refers to as a metaphor but others might think of as a “paradigm.” “If you think of education as a mechanical process that’s just not working as well as it used to,” he argues, “it’s easy to make false assumptions about how it can be fixed; that if it can just be tweaked and standardised in the right way it will work efficiently in perpetuity.” But it won’t, “because it’s not that sort of process at all.” Schooling is an organic process.

“Education is really improved only when we understand that it… is a living system and that people thrive in certain conditions and not in others.” Schools are “complex adaptive systems” that by their nature offer far more scope for innovation than is generally realised – and, what’s more, they can only be changed in and through the daily activity of those who live in them. The culture of any given school comprises habits and systems that people act out every day.

“Many of these habits are voluntary rather than mandated,” he says, “teaching by age groups, for example, or making every period the same length, using bells to signal the beginning and end of periods, having all of the students facing the same direction with the teacher in the front of them, teaching math only in math class and history only in history class, and so on.”

Robinson really homes in on – indeed his argument depends on – change “within the system as it is.” In his theory, “revolutions don’t wait for legislation… they emerge from what people do at ground level.” Like most revolutions, “this one has been brewing for a long time, and in many places it is already well under way. It is not coming from the top down; it is coming, as it must do, from the ground up.”

Yes, the revolutionary will encounter system-level obstacles including “the inherent conservatism of institutions [and] schools themselves,” conflicting views about the sorts of changes that are needed, differences in “culture and ideology,” and “political self-interest,” and must therefore “press for radical changes” in system-level policies. But history is with the activist and the innovator. “[T]ime and tide,” Robinson declares, “are on the side of transformation.”


Robinson’s book often reads like a self-help manual. PowerPoint lists, twenty-five of them by my count, range from the three elements of academic work, the four purposes of schooling and the eight core competencies to ten tips on how to make your school more inviting. Superlatives (“great” schools, “wonderful” teachers, “inspiring” leaders, “extraordinary” innovations, and so on) are in ready supply. But Robinson also covers a great deal of complicated ground in an enviably accessible fashion. Anecdotes, examples and eyewitness accounts abound, some featuring the usual suspects (High Tech High, for example), many not. Few of those who work in and around schools, including older students as well as parents and teachers, will fail to find Robinson engaging, illuminating and perhaps even inspiring.

There are several points at which Robinson’s case is obviously vulnerable. When he claims that the GERM agenda doesn’t work, and that we do know what actually works, his adversaries will compare his anecdotes and generalisations with their own stockpile of closely researched evidence, including the evidence that the improvement agenda can work, and is little by little lifting its own game as well as that of the schools.

The proposition that what the economy now needs corresponds neatly with what school reformers have long wanted is convenient, to say the least. His picture of the labour market and the workplace of the future is as romantic as it is hazy. The apparent assumption that “profound” and ever-accelerating change is uniquely characteristic of our times is questionable. Indeed it could be argued that the kind of change to which Robinson alludes is occurring within a frame of stability and burgeoning wealth peculiar to the West over the past two or three generations.

There is also a quarrel to be picked with Robinson’s insistence that schools are organic and are not mechanical. It makes much more sense to see them as both, and other things as well. My own view is that schools are best seen as sites of production; they have much in common with other workplaces and work processes but also quite distinctive characteristics and purposes as producers of learners and learning. One among a number of advantages of a “production perspective” is the realisation that schooling is not just a preparation for work. It is work – around a fifth of most people’s working lives, in fact. That provides a better starting point for thinking about what needs changing in schools than focusing on “preparation” for the workplaces presumed to await them in some distant future. Another advantage, to which we return, is that a production perspective provides a better basis for understanding how technology will change teaching and learning.

But the really crucial question for Robinson’s argument against GERM and “industrial” schooling, and for “creative schools” and “transformation,” is this: is genuinely transformative change in schooling possible?

This is where Robinson’s high sense of the purposes and possibilities of schools, and his admirable support for genuine grassroots movements over GERM’s carefully crafted enlistment, get him into trouble. They carry him from an absolutely correct intuitive judgement to a “theory” so misleading as to verge – given his prominence and influence – on the irresponsible.

Robinson is correct in sensing, contra GERM, that schooling’s future will not be continuous with its past, and in proclaiming that a sea change laden with great possibilities is now under way. But his theory of change does not see what “transformation” is up against, or what is driving change at this particular moment, or what will be required if change is to be shaped in a way that he and many others (including me) would like to see.

Robinson’s theory can’t see what transformation is up against.

In their seminal essay “The ‘Grammar’ of Schooling: Why Has It Been So Hard to Change?” American historians David Tyack and William Tobin draw heavily on the work of their colleague Larry Cuban to argue that schools, like languages, possess a grammar. Just as the grammar of language organises meaning, so does the grammar of schooling organise “the work of instruction.”

“Here we have in mind, for example, standardised organisational practices in dividing time and space, classifying students and allocating them to classrooms, and splintering knowledge into ‘subjects,’” Tyack and Tobin say, and go on to suggest that over time the internal coherence of this grammar acquires external support. “Neither the grammar of schooling nor the grammar of speech needs to be consciously understood to operate smoothly,” they note. “Indeed, much of the grammar of schooling has become so well established that it is typically taken for granted as just the way schools are. It is the departure from customary practice… that attracts attention.”

All of this is correct, in my view, but nonetheless understates the reality of what “transformation” of the Robinson kind must contend with. The taken-for-granted image of the “real” school is just one of the struts and stays that have grown up around the grammar of schooling, particularly during its massive postwar expansion. It includes: a credentialling system that transmits the demands of universities directly into the schools’ curriculum, and connects schooling to a society-wide competition for advancement (or to avoid relegation); a physical infrastructure devoted to the classroom; a workforce dominated by a single category of worker, “the teacher,” industrially organised, and tenured; industrially backed regulation of the terms and conditions of teachers’ work in ways derived from the grammar (class sizes, contact hours, and so on), and which also frame students’ work; budgets largely absorbed by the salaries of a tenured, closely defined and highly regulated workforce, with little capacity to link resources with “policy,” and a consequent cumulative incrementalism that fuels a tendency for costs and problems to pile up faster than solutions; and a range of interest groups, none of which has the capacity to drive an agenda for the whole, but many of which have the power to single-handedly frustrate such an agenda (vide Gonski).

This means that any theory of transformative change has something rather more on its plate than “the inherent conservatism of institutions [and] schools themselves.” It must cope with a grammar of schooling, and the industry in which that grammar is embedded. Yes, “the system” is complex and adaptive, a culture enacted by individuals in their daily work, and shaped by their outlook and decisions. But it is also a heavily reinforced structure, a form and instrument of power. It is just this combination of flexibility and structure that gives “the system” its capacity to resist, deflect and absorb efforts at “transformation,” as Tyack, Tobin and Cuban are at pains to emphasise.

Thus Cuban has documented the emergence of “hybrid” pedagogies which reflect both teachers’ attachment to progressivist ideas and the hard facts of their work within the frame of class, classroom, subject and lesson. Tyack and Tobin point to the ebb and flow of experimentation, innovation and “alternatives,” which are often driven by charismatic leaders within the overall dominance of a stable grammar. They see the system as a whole operating so that “changes in the basic structure and rules” of the grammar of schooling, like the grammar of language, “are so gradual that they do not jar.” It might even be said that these familiar exceptions to the rule belong to the system’s fundamental logic, functioning as its safety valve, repair shop, and legitimation device – until now.

Robinson’s theory doesn’t see what is driving change or what is distinctive in the present moment in schooling.

In Robinson’s theory, “transformation” will come from grassroots innovation required by a shifting social, economic and technological context, and fuelled by idealism and hot gospelling. Well, yes, and no. Not really grasped in this account is the ever-expanding force of technology, and not around schooling so much as right in the heart of it, in quite unprecedented combinations of hardware and software that will increasingly embody and orchestrate teaching and learning.

It’s not that Robinson is unaware of that fact. The spread of the digital technologies, he writes, is “already transforming teaching and learning in many schools.” He includes “new technologies that make it possible to personalise education in wholly new ways” among the three distinctive features of the present moment in schooling, and Sugata Mitra, the Khan Academy and the “flipped classroom” all make guest appearances.

But being aware of these developments is not the same as really understanding their weight and impact. The nomination of technology as one of the three “different this time” factors arrives in the book’s penultimate paragraph. The formal discussion of the new technologies is allocated just over a page, where it is treated as just one among “an abundance of emergent features” of schooling. Teachers, assessment, leaders and home influences, meanwhile, get whole chapters to themselves. The discussion of technology is, in short, a retrofit, glued onto an argument which took its essential shape decades ago.

Although Robinson refers over and again to the pervasiveness of technological change, and although he senses that the ground is moving under our feet, his working view of technology within schools is not all that different from that adopted by the industry: learning comes from teaching and teaching comes from the teacher, whose work will be supported and perhaps even empowered by the new technologies but isn’t replaced or even seriously disrupted. Technology does indeed seem to be supplementary if we look at it within the history of schooling. But what if we see both schooling and technology in the larger history of production? From that standpoint it appears that schooling is just now arriving at a point previously reached by one industry after another since the beginnings of the industrial revolution, the point at which technology becomes capable of not just supplementing human labour but substituting for some forms of that labour, and demanding the reorganisation of the rest.

Specifically, technology increasingly offers a distinct source and form of teaching labour. And that implies a quite different way of organising the work of teaching and learning, as can be seen in a preliminary way in “blended” schools, “virtual” educational programs such as the Khan Academy, and indeed entirely “virtual” secondary schools. “Teaching” no longer comes from just “the teacher,” and therein lies the real threat to the received grammar.

That Robinson’s theory sees neither what transformation is up against, nor what is driving the big change, betrays a cast of mind that comes almost completely intact from the 1960s. The sixties are, if I may say so, an excellent place to start thinking about schooling, but as a place to finish, not so much. It would be unfair to suggest that Robinson, like the Bourbons, has learned nothing and forgotten nothing, but it is fair to say that he has remembered more of the world in which his outlook was formed than he has learned about technology and its inexorable movement from the margins to the centre of schooling.

Creative Schools is dedicated to Bretton Hall College, where, as a young working-class trainee teacher, Robinson was exhilarated by the ideas of Alec Clegg and other luminaries of British progressivism. His picture of a “transformed” school, although given a contemporary gloss and rationale, belongs essentially to that era. It is to sixties progressivism that Robinson also owes his habitual dichotomies – creative versus industrial schooling; perfidious systems versus idealism at the grassroots; bottom-up change versus top-down; transformation versus mere improvement; the selfless workers in the vineyard versus the self-interested interlopers of “business” and “politics.”

The binary most central to Robinson’s case is his assumption that transformation/revolution means out with the old and in with the new. In fact, it’s more a case of the old colliding with the new – the immovable object of schooling versus the irresistible force of technology – with who knows what upshot.

We can be sure that getting change won’t be the problem, but that getting desirable change will be. We can be confident that schools will not be obliterated in the way of newspapers, for example – they perform irreducible functions, including childcare and bringing children and young people together to grow up. We can also be sure that change, of whatever kind, will not obliterate the incumbent grammar and install another. Rather than talking about a “transformation,” therefore, we should talk about a transition, probably from a single dominant grammar to several competing grammars, including both the one Robinson doesn’t like and the one he does. In that transition GERM’s theory and practice of “school improvement” may have as much to offer as the ideal of “transformation.”

In this scenario, what will be up for grabs is mix and balance, and that will vary over time and place. Within schools, and secondary schools particularly, the trick will be in mastering a kind of meta-grammar, finding optimal combinations of several educational forms, with various attempts at “blending” being obvious examples. Within systems the problem will be to make that possible.

Robinson’s theory doesn’t see how change can be shaped.

If Robinson’s or other transformed grammars are to survive and flourish it will only be by combining top-down strategy with bottom-up movement.

Getting the right relationship between systemic and local action has proved elusive in most Western school systems most of the time. That is one reason why GERM, with its over-reliance on top-down engineering, has failed more often than not. The same will be true of the transformation idea if it can’t solve what is essentially the same problem. When they are pushed from the top, as the Gillard “revolution” found, the grammar and industry of schooling lock together and seize up like compressed cornflour. But as was so clearly demonstrated in the decades following the 1960s, grassroots, advocacy-driven efforts can thrive all over the place for a while, burning up huge quantities of energy, hope and idealism, and then dwindle.

If there is a way out of this conundrum in tepid political times such as these, it may be in making politics with the industry’s interest groups, the most powerful of which are not the “outsiders” that so worry Robinson but the industry’s employers and employees.

As things now stand, their power is contained by the industrial and regulatory regime they constructed and within which they conduct their relations. Is it possible that they might abandon this adversarial stasis to collaborate in pursuit of their joint and several long-term objectives?

These insiders confront together the irresistible force of technology-enabled and technology-magnified change. The clear lesson of history is that those affected by such disruptions will do a lot better for themselves (and, in this case, for their ideals and sense of professionalism) by using disruption rather than resisting it. Employers and employees could set out on a long march through the grammar’s legacy orgware, and particularly its regulation of teachers’ (and therefore students’) work and workplaces, the currently lopsided composition of the workforce, and the inflexible disposition of budgets and associated habits of thinking in terms of “effectiveness” rather than cost-effectiveness and opportunity cost. The industry might change itself in ways that permit and encourage new grammars to emerge.

This or something like it may offer a way of shaping the irresistible. There are precedents in Australia’s recent industrial history. There may be no other coherent way of shaping change. For Robinson’s efforts to set out an alternative to GERM’s Gradgrind theory of change we should be grateful, but a more successful attempt will reflect a much more developed sense of structure and power, of politics and history, and of technology and production, and be made of much tougher stuff. •

A contrarian takes on the internet, again https://insidestory.org.au/a-contrarian-takes-on-the-internet-again/ Sat, 21 Mar 2015 01:22:00 +0000 http://staging.insidestory.org.au/a-contrarian-takes-on-the-internet-again/

Books | Internet critic Andrew Keen might be the man for the times, but his new book fails to convince Ramon Lobato

The American historian Melvin Kranzberg once wrote that “technology is neither good nor bad; nor is it neutral.” It’s a lovely observation that reminds us of the fundamentally social nature of technological change.

Little of what an invention might do to us, and for us, is predetermined; instead, its possibilities and dangers typically arise from how it is adopted, used and commercialised by humans – how, in Kranzberg’s words, it “interacts in different ways with different values and institutions.”

This lesson about the perils of determinism, something that all students of media history learn in their earliest lectures, is one that writer and Silicon Valley insider Andrew Keen seems hell-bent on challenging in his latest book, The Internet Is Not the Answer, an engaging but infuriating manifesto about digital culture.

Keen, a San Francisco–based writer and erstwhile dotcom entrepreneur, is the author of two other books about the internet, The Cult of the Amateur and Digital Vertigo. He has forged a reputation as a Silicon Valley insider-critic, a self-styled contrarian raging against the excesses of the West Coast elite. Well connected among industry figures, he hosts his own TechCrunch web TV series, Keen On, in which he chews the fat of internet culture with guests like Tim O’Reilly, Stewart Brand and Jaron Lanier.

The Internet Is Not the Answer is the latest instalment in Keen’s franchise, at the heart of which is a simple but shaky argument about the internet’s effects on society and culture:

Rather than creating transparency and openness, the internet is creating a panopticon of information-gathering and surveillance services in which we, the users of big data networks like Facebook, have been packaged as their all-too-transparent product. Rather than creating more democracy, it is empowering the rule of the mob. Rather than encouraging tolerance, it has unleashed such a distasteful war on women that many no longer feel welcome on the network. Rather than fostering a renaissance, it has created a selfie-centred culture of voyeurism and narcissism. Rather than establishing more diversity, it is massively enriching a tiny group of young white men in black limousines. Rather than making us happy, it’s compounding our rage.

The book begins with a compressed history of the development of the internet couched in a narrative of moral decline. Keen tells a story of the internet’s commercialisation, from its origins as a publicly funded piece of communications infrastructure run by a small coterie of geeks to today’s “social web,” in which value is created through commodification of our everyday communication.

As evidence, he offers some vivid snapshots of West Coast speculators including Sequoia Capital chairman Michael Moritz, an early investor in Google; Netscape founder Marc Andreessen; and the libertarian billionaire Tom Perkins, partner at the Kleiner Perkins Caufield & Byers venture capital firm and author of a recent, explosive Wall Street Journal opinion piece railing against “the progressive war on the American one per cent” in the age of Occupy.

The general narrative here is about capture of a utopian technology by venture capital, against a backdrop of Uber helicopters, private jets and exclusive Bay Area members’ clubs. “As Wall Street moved west,” writes Keen, “the internet lost a sense of common purpose, a general decency, perhaps even its soul.”

Keen then turns his attention to our “privatised network economy,” in which a small group of monopoly platforms dominate. He rehearses familiar claims about how users are exploited on social platforms, arguing that “it’s our labour on these little devices – our incessant tweeting, posting, searching, updating, reviewing, commenting and snapping – that is creating all the value in the networked economy.” He also reminds us of the ostensible erosion of “middle-class jobs” in a context of free user labour.

Later chapters explore the precarious situation of artists and writers whose livelihoods have been negatively affected by the internet, interspersed with angry critiques of selfie culture, cyberbullying, drones, 3D printing, and various other topics.

As Keen’s title and tone suggest, this is a populist manifesto rather than a sustained argument. Some of the claims are convincing, but Keen’s reliance on inflammatory rhetoric undercuts his credibility. Reminiscent of the work of net critic Evgeny Morozov, but far less satisfying, The Internet Is Not the Answer delights in taking the contrarian position on every conceivable aspect of internet culture.

A more serious problem with the book is its derivative nature. Keen’s opinions on a range of topics – from free digital labour to the “1 per cent economy” – come straight from the mind-hive of tech blogs and Silicon Valley watchers. The reference list is liberally peppered with articles from the Atlantic and the New Yorker. There is little in Keen’s book to interest readers who already follow those discussions.

For these reasons, I struggled to take the book seriously. There’s a certain charm in Keen’s crusading style, but it quickly becomes tiring.

But Keen’s work does seem to strike a chord with many readers. Newspaper reviews of The Internet Is Not the Answer have been surprisingly tolerant of his excesses. So perhaps the value of the book has more to do with how it captures popular anxieties than whether it advances new ideas.

Indeed, a charitable view might see the book as a compendium of current moral panics about internet culture – the kind of source that historians will one day turn to as a representative index of digital fears and phobias, circa 2015. Like a Luddite caricature from the nineteenth century, it distils social anxieties into memorable stories.

In this sense, Keen may be a man of his time. But readers searching for genuine insight into our wired world would be advised to look elsewhere. •

The American dream, in 3D https://insidestory.org.au/the-american-dream-in-3d/ Thu, 14 Aug 2014 00:16:00 +0000 http://staging.insidestory.org.au/the-american-dream-in-3d/

Angela Daly reviews an award-winning documentary about a technology that could fundamentally change manufacturing


In a fast-paced, high-tech world, new technologies and the people (usually men) behind them seem to captivate film-makers, with The Social Network and Jobs among the best-known examples. Now it’s 3D printing’s turn to be immortalised on the big screen in the form of Luis Lopez and J. Clay Tweel’s documentary Print the Legend, which has been playing at the Melbourne International Film Festival.

Print the Legend provides the viewer with the recent history of 3D printing – a technology, also known as additive manufacturing, that can seem to come straight from the realms of science fiction. 3D printing uses a digital file to produce three-dimensional objects of almost any shape by printing successive layers of material on top of each other.

While the technology has been around for a while – since the 1980s, in fact – it only broke into the mainstream once the patents on the initial inventions expired and consumer-oriented printers were developed that sell for the same price as a new computer. While consumer-oriented 3D printing may still be lacking the “killer app,” the technology has been used to print everything from houses to chocolate to biological material, along with more conventional items usually made from metal and plastics.

Print the Legend traces the growth and success of MakerBot, a consumer 3D printing company that began life as a tiny start-up created by a group of geeky friends yet was sold a few years later for US$600 million to Stratasys. (Along with 3D Systems, Stratasys is one of the big two 3D printing behemoths.) With its own Steve Jobs–like figurehead, Bre Pettis, MakerBot emerges from the documentary as a post-Fordist 3D-printed example of the American Dream.

The film also tracks the progress of another start-up, Formlabs, which grew out of another friendship group, this time of MIT graduates. Formlabs’s road to success has been rockier: while the team seemed to have no problems raising capital, the delivery of properly functioning machines to investors came only after a considerable delay, at which point the company was accused of patent infringement by 3D Systems itself, embroiling Formlabs in a lopsided struggle with a hint of David and Goliath about it. The dispute appears to be continuing even now, though there are rumours that the two companies may have been negotiating some kind of settlement in which Formlabs would be taken over by 3D Systems.

The “dark side” of the 3D printing story is told via Cody Wilson and his infamous Defense Distributed 3D printed gun project. The film details Wilson’s success in creating 3D printable designs for a viable weapon, and recounts the media furore and the response of law-enforcement agencies. While the film’s other subjects are keen for commercial success, Wilson seems content with being at the centre of a Second Amendment–fuelled political storm.

Print the Legend raises familiar themes for those already acquainted with trajectories of new technology and processes of innovation. First, the audience sees implicitly how policies regulating, and attitudes towards, intellectual property change as a company matures, with MakerBot as the case in point. Initially its printers were released on an open source/open hardware basis: the design files were publicly available and anyone could make changes that fed back into the development of the models. Once MakerBot grew larger and more attractive to investors, though, its intellectual property policy shifted to a more proprietary, or closed, model – a change that one of MakerBot’s founders termed a betrayal.

At Formlabs, meanwhile, we see a “patent war” in action through an attempt to stymie competition entering the 3D printing market. Such intellectual property battles are not news to those familiar with other areas of technology, with the protracted litigation between Apple, Google and Samsung over smartphones and tablets in various jurisdictions being a prominent example. As a maturing industry, 3D printing inevitably displays similar tendencies.

Cody Wilson’s story illustrates a different theme altogether: the successes and failures of state law-enforcement agencies in dealing with potentially dangerous objects produced by 3D printers. As a result of the media uproar regarding Wilson’s gun, its designs were removed from MakerBot’s popular design repository Thingiverse and the 3D printer that Wilson had been renting in order to create gun prototypes was repossessed. But the decentralised nature of the internet means that it is difficult, if not impossible, to prevent gun designs from being available online.

While off-the-shelf 3D printers could be fitted out with digital locks that block certain kinds of designs, this is not a foolproof way of preventing weapons from being manufactured. Adrian Bowyer’s RepRap project at the University of Bath, for instance, developed a 3D printer that could print most of its own components, which means that people can make their own 3D printers and get around any restrictions hardwired into commercial machinery. In other words, it’s unlikely that the intellectual-property or law-enforcement elements of 3D printing can be regulated with 100 per cent effectiveness.

While the United States is a principal locus of 3D printing, there have been important developments elsewhere in the world (and in some unlikely places), which means that Print the Legend’s tight geographic focus leaves out quite a lot. The RepRap project, which represents a radically different, public-spirited direction in 3D printing at odds with the idealistic capitalism inherent in for-profit start-up culture, is sadly missing. MakerBot’s initial printer offerings were influenced by RepRap’s designs, and the film’s failure to mention this leaves the MakerBot story incomplete. Nor does the film look at start-ups coming out of less likely places than MIT/Brooklyn hackerspaces – start-ups like Mcor, a paper 3D printing company based in a town with around 3000 inhabitants in Ireland. China’s burgeoning 3D printing industry is not mentioned at all.

Nevertheless, Print the Legend is a good introduction to the world of 3D printing and some of its personalities for those new to the subject; and for those with some prior knowledge it charts some of the successes and obstacles facing 3D startups and more established companies. What we’ll have to wait to see is whether all the hype signals a real change in the way the economy produces things, or whether it’s at least partly a speculative bubble. •

The post The American dream, in 3D appeared first on Inside Story.

]]>
The illusionist’s trick https://insidestory.org.au/the-illusionists-trick/ Fri, 25 Jul 2014 03:37:00 +0000 http://staging.insidestory.org.au/the-illusionists-trick/

Skype has shaped a professional and personal life across two continents, reports Virginia Lloyd

The post The illusionist’s trick appeared first on Inside Story.

]]>

Visiting Sydney from New York before Christmas, I dropped by the office of a client and former colleague. Her employer, a large law firm, had recently moved to swanky new premises and she was keen to take me on a tour. As we strolled the eerily quiet corridors, the towering windows, antiseptic surfaces and noiseless elevator doors put me in mind of the inside of a spaceship. At any moment I half-expected the two of us to defy gravity and lift off from the gleaming polished floor.

The cost of maintaining the illusion of worker freedom through extravagant fit-outs seems to grow with every decade. The office’s split-level mezzanine and cafeteria exaggerated the sense of a space–time continuum. Designed as a hub for meetings of all kinds, the mezzanine encourages flexibility of human movement within the larger workplace, which remains tethered to that relic of twentieth-century work practices, the billable hour. Looking around, I felt a retrospective pang for the lifestyle extras a corporate job used to afford me. But having “consciously uncoupled” myself from full-time corporate employment eight years earlier, I felt as if I were viewing Earth from deep space.

These days I write grist for my client’s marketing mill from my desk in Brooklyn. But as a freelance writer and editor over here, I’m about as rare as the common cold. Numerous cafes in my Crown Heights neighbourhood have become the home office away from home for many independent workers in today’s “knowledge economy.” With workers hunched at communal benches, wearing oversized headphones and staring into their laptops, these cafe-offices could be mistaken for call centres. I work from my bedroom, like I did as a student.

Because the majority of my freelancing is for Australian companies and authors, my working life orbits around Skype. Started in 2003 and named for the awkward progeny of “sky” and “peer,” Skype facilitates free calls between computers over the internet and provides additional “freemium” services. By opening a Skype credit account, for example, I could dial landlines from my laptop for two Australian cents per minute. For a consumer accustomed to the Rosetta Stone of her monthly Telstra bill, my Skype usage was not only a bargain but also straightforward to track.

Super-sizing my Skype account, I acquired a “Skype-in” telephone number for $60 that begins with the Sydney area code and diverts to my laptop for a local call cost to the dialler. I refer to this as my “magic number.” Clients enjoy the trick, though neither end of the line – or is it the optic fibre? – has a clue as to how it’s done.

Product consumers are accustomed to the fact that the things they buy are often manufactured at a great geographical distance, but in the service economy this is a recent and transformative change. In a recent blog post for the New Yorker, George Packer described the invisibility of the worker in today’s digital economy. Companies such as Amazon, Google and Facebook are “ubiquitous in our lives but with no physical presence or human face,” he wrote. “With work increasingly invisible, it’s much harder to grasp the human effects, the social contours, of the internet economy.” As one of the lesser stars of that universe, Skype’s workings as a corporation remain a mystery to me as its happy customer, in the same way that the logistics of my virtual office must baffle some of my clients.

For seven years now I have commuted around the world from the comfort of my bedroom-slash-office. (I try not to dwell on the fact that in Sydney I worked out of a dedicated study in my home; living in a New York apartment is all about sacrificing space.) Using Skype I have coached Australian authors living in Kenya, Melbourne, Los Angeles, the Gold Coast, Sydney and on a remote Queensland farm through their respective manuscripts, all of which have been, or soon will be, published.

When I first moved to New York in 2006, I learned that my professional experience beyond the borders of the United States counted for little. Though I found it relatively easy to find a junior-level job, it was immediately obvious that I’d need to earn more to survive. I got in touch with a few friends working for book publishers and large corporations back home and work trickled in. By 2009, with the floor of the global economy having collapsed, I had become dependent on Skype to stay afloat. Happily my physical location in New York proved no impediment to clients based in other parts of the world. A few have stuck with me since the early versions of Skype, when my voice sounded like it was at the end of a tin-can telephone.


Today Skype shapes the “social contours” of my professional and personal life. My typical working day splits into three shifts across two time zones. Mornings are for my own writing projects, afternoons for deadline-driven client jobs or errands. The third shift is the trickiest but the most crucial. By now it’s evening in New York, but Australia is only just flipping open its smartphone, arriving at work, checking email. Between three and five times each week I have a Skype meeting, which makes local dinner time a moveable feast.

My parents like to say of people they find incompetent, “He wouldn’t know what day it is.” Competence aside, no one can accuse me of that. My Skype working life demands I stay aware not only of the day, but the time of day in two places at once. I straddle the International Date Line like a time-travelling desk-jockey.

Perhaps all this is evidence of Dutch theorist Erik Veldhoen’s claim that the digital era makes work more independent of time and place. But having cultivated the illusion of access and availability, in another sense I feel chained to my desk, wherever I may roam, a satellite in virtual space.

Veldhoen predicts the end of the physical office environment for the vast majority of “knowledge economy” workers around the world. On his website, where he sets out his vision of what he dubs the New Way of Working, he writes, “The one-on-one relationship between the organisational structure and the building will be abandoned on all fronts.” Like all good futurists, Veldhoen is relentlessly positive about technology, as evidenced by the title of his 2013 book, You-Topia: The Impact of the Digital Revolution on Our Work, Our Life and Our Environment. You-Topia is a tantalising prospect until you start considering the implications for workers’ rights. Or living a version of it yourself.

Back on Earth, where increasing numbers of workers compete for jobs in online content mills and freelance farms, the future of independent work looks less promising. As Nikil Saval writes in Cubed, his new history of the workplace, “The more radical prediction for the future of the office – that it will disappear altogether – might similarly offer either more freedom or only the illusion of it.”


In the United States, freelancers now constitute anywhere between 20 and 30 per cent of the workforce, a fast-growing but vulnerable group sometimes referred to as the “precariat.” “Some of these workers have chosen to leave the permanent workforce; most have been pushed out,” Saval writes. “In many cases they lack health insurance and are at constant risk of insolvency.” The Government Accountability Office estimated the number of freelancers at forty-two million in a 2006 study. Since then global economic conditions have added millions to this number, though there seems little political motivation to count them again. Freelancers are a powerless group, as well as a precarious one.

The absence of health insurance became urgent early this year when I experienced a sudden pain in my hip pocket: with the introduction of the Obamacare legislation I would face tax penalties if I did not take out insurance by April.

I decided to join the Freelancers Union. Established in 1995 to deliver benefits to independent workers, this self-described “Federation of the Unaffiliated” now boasts almost 250,000 members. While membership cost me nothing, I was dismayed to learn that its health insurance plan for individuals began at US$471 per month. Until the Freelancers Union attracts millions of members, it will have neither real political clout nor affordable insurance to offer. Reluctantly, I found a cheaper plan elsewhere.

Admittedly I’m a lot more fortunate than most freelancers. My time is largely my own to organise and I have a variety of interesting and occasionally well-paid jobs. Skype has liberated me from the commute and the pointless meetings and the nine-to-five. But there have been unexpected consequences too. While I’ve developed a wide social network in New York, I can’t say the same about my professional one. My working week always begins on Sunday, and not because of church. I am often using Skype well into the evenings. It’s convenient but exhausting. I’m always “on.” Paradoxically, just like my former employer’s work environment, my home office is often a mirage of freedom from employment – with a less ergonomic chair.


Skype is an illusionist’s tool and a mixed blessing. It offers the chimera of proximity and the promise of flexibility, without delivering either. Sometimes the sight of its cheerful blue icon on my desktop makes me want to scream. And like any illusion, my working life depends on a sleight of hand. The trick lies in the physical world, in the “analog” network established over years of living and working in Australia. Another paradox.

I depend on Skype, not only financially, but also emotionally. It’s my lifeline to steady income and to the lifelong relationships that confirm me as an Australian despite my status as a dual citizen. Every expatriate daughter learns that part of being the one who goes away is the responsibility for staying in touch. Even if I don’t feel distant emotionally from the people I love back home, Skype can exacerbate the geographical distance I feel. It’s the opposite effect of looking in a side mirror: friends and family are further away than they appear on screen. So Skype does not make me feel as if I never left; it helps me sustain the illusion that, online at least, it is possible to exist in two places at the same time. •

The post The illusionist’s trick appeared first on Inside Story.

]]>
Coming, ready or not https://insidestory.org.au/coming-ready-or-not/ Tue, 19 Nov 2013 07:51:00 +0000 http://staging.insidestory.org.au/coming-ready-or-not/

Technology is going to drive the first revolution in schooling since the invention of the printing press, says Dean Ashenden. But it’s not just a matter of the machinery

The post Coming, ready or not appeared first on Inside Story.

]]>

IN THE unlikely setting of Perth in the early 1990s three colleagues and I set ourselves up as software developers. None of us had any significant experience or expertise in computing or business, but we did have a hot idea. School systems in Australia and elsewhere had at long last decided to introduce an outcomes-based curriculum, designed to allow each student to move at his or her own speed from the “mastery” of one outcome to the next. Our software would make the new curriculum work.

The problem in teaching to outcomes lay in keeping track of where each student was up to in each subject, and then finding “stage-appropriate” work for each of them to do. That’s where our software would come in. We called it KIDMAP to evoke the goal of giving the teacher a detailed record of each student’s latitude and longitude in every area of learning, and in case anyone missed the point we called our startup Mercator.

With the wisdom of hindsight I wish I had paid more careful attention to an American historian by the name of Larry Cuban. Cuban was the most prominent of a small group of scholars who had documented and explained what he called “constancy and change in the classroom.” From a Cuban perspective, outcomes and computers were merely the most recent in a long series of educational and technological fixes for the troubles of the classroom. Each had changed things somewhat, without really changing the way teachers (and therefore students) actually did their work. The brutal fact is that twenty or twenty-five students constitute a crowd, so teachers have to control and teach to the crowd. Teacher-centred instruction, Cuban argued, “is a hardy adaptation to the organisational facts of life.”

But that’s hindsight. At the time, we were on a roll. Within two or three years we had sold KIDMAP to the two biggest education departments in the country, a fact suggesting that their leadership hadn’t been reading Cuban either. On the strength of that improbable triumph – nearly half the schools in the country! – KIDMAP crossed the Pacific and landed in two “pilot” American school districts, one on the west coast, one in the east. We made enough of a ripple to find ourselves in Cupertino presenting our product to a significant fraction of Apple’s upper echelons (Apple was a niche outfit in those days). Should we bring in Steve? they wondered.

No, it soon emerged, we should not. Several of those gathered around the boardroom table gently informed us that we weren’t the first or only ones to have this bright idea, and that our version had all the limitations of its competitors. The content wasn’t there, teachers didn’t know how to do it, getting “outcomes-based” assessments into the software took too much time and effort, and school systems, for all their talk about “mastery learning” and “standards-referenced curriculum,” had little comprehension of what they wished for. Sure, there were problems of a software and hardware kind, but the real stopper was the orgware. This was the geeks’ version of the Cuban thesis.

It was one of those moments when the heart sinks. Our psychological strategy, naturally enough, was to talk about “teething problems,” including teachers who didn’t know how to open Word, classrooms with no computer or a machine that couldn’t run KIDMAP and Adobe Acrobat at the same time, and the odd bug in the software. (“Not a bug, madam, that’s a feature,” as our gallows humour had it.)

But the real problem was that when we asked system authorities to send us “outcomes-based” curriculum to load into KIDMAP they sent “outcome statements” so broad as to be meaningless, or so detailed as to be incomprehensible, and at either extreme cast in Educanto at its most opaque. When we asked for resources to link to each outcome statement so that teachers would have “stage-appropriate” stuff to give each student as he or she moved from one outcome to the next, we got a few PDFs, if anything at all. Every teacher-training workshop veered off into questions of educational philosophy and classroom management before we even got to morning coffee.

It was not just us developers of software for teachers who were in trouble. Software for students wasn’t doing so well either, a fact in which Cuban took fiendish delight. “Computers Meet Classroom: Classroom Wins,” he wrote in 1993, following it up with “Computers Make Kids Smarter – Right?” (1998); his book Oversold and Underused: Computers in the Classroom (2001); “Techno-Promoter Dreams, Student Realities” (2002); and “Laptops Transforming Classrooms: Yeah, Sure” (2006).

Cuban’s thesis is supported by the findings of a recent meta-study of forty-five investigations into the extent to which digital technologies have made any difference to the “effectiveness” of schools and classrooms. In The Impact of Digital Technology on Learning, Steven Higgins and his colleagues survey the many forms of digital instruction and the difficulties of pinning down cause and effect in the ecology of schooling. They report that these technologies may bring an increase in effectiveness in some cases, but that increase may also be explained by the energy of the innovators rather than the innovation itself, or by the fact that the more effective schools are the first and best users of technology. For these and other reasons, they conclude, technology “enthusiasts” confront a “growing critical voice from the sceptics.”

Growing scepticism from the inside contrasts sharply with growing enthusiasm on the outside. In June of this year the Economist magazine made a bold and much-reported prediction: “New technology,” it declared, “is poised to disrupt America’s schools, and then the world’s.” The Economist would pack a punch even if it stood alone, but it doesn’t. Similar propositions have been advanced in influential US publications including the New York Times, Forbes magazine, the Wall Street Journal and the Huffington Post.

Once bitten I should be twice shy, but nonetheless it is my view that the Economist is much more likely to be right than the sceptics, not in consequence of “new technology” alone, but when those technologies are combined with educational ideas and techniques, financial imperatives, and political pressures. Indeed, a long, slow shift from one mode of educational production to another has already begun. Technology is going to drive the first revolution in schooling since the invention of the printing press nearly 600 years ago.


THE enabling factor is the machinery itself, different in three important ways from what KIDMAP depended on. First to arrive was the internet, a means by which any individual or group can reach any other as well as roam at will in the contemporary library of Alexandria. Second is a fusion of speed, portability, cheapness and ease of use exemplified by the touchscreen tablet. And third is the cloud, making all things digital more affordable and usable, particularly for organisations like schools.

The software is not as capable as the hardware, and its development is necessarily slower and more erratic. If we leave to one side applications that support administrative operations, software for schools has developed in two streams, “instructional” and “management,” the former designed for student use in the hope that more can learn more quickly, the latter directed towards much the same objectives, but via the teacher.

Both kinds of software have been transformed. On the instructional side, the old drill-and-practice routines of “computer-assisted instruction” and language labs have been joined by tutorials and mini-lessons of the kind popularised by the free, non-profit Khan Academy; by full-scale virtual courses of study that integrate video lessons, film clips, reading and exercises with assessment and feedback; and most recently by packages that deliver and manage extended sequences of complex learning.

The last of these combine “edware” – the educationists’ “developmental continua” – with “gamification,” the quasi-science of getting kids hooked and keeping them in “the zone of proximal development” as they advance from basic to competent to mastery. At its most sophisticated, gamification combines a carefully planned escalation of tasks and activity, guided and motivated by assessment, feedback and reward, with the capacity to switch students from one learning track to another depending on how well and how quickly they learn. It is “adaptive.” It is also social, again taking from the gaming industry its techniques of organising “players” into groups and teams to collaborate and compete.

The two streams of development, instructional and management, are now merging into “next generation learning platforms” or “learning ecologies,” to be deployed by a teacher operating, as one much-used analogy has it, less like a pilot than an air-traffic controller. The idea is that powered-up teachers will have “the curriculum” at their fingertips in digital form, together with a detailed profile of each student’s progress. The curriculum sets out the work to be done, standards to be reached, ground to be covered, or tasks to be completed, all linked to a wealth of “resources” for the student (everything from books to be read to semester-length courses of study) and for the teacher (lesson plans, teaching hints, assessment tools, guidance and the like).

Student profiles will be compiled not by the teacher after school but with data gathered from the students as they work, their every step forward, their every mistake and their every detour recorded effortlessly. (Coming soon: gaze tracking and pupil-dilation measurement to indicate attention and comprehension.) These millions of pieces of information can be turned into insight with the help of the new sub-discipline of learning analytics, and made intelligible by 3D graphic displays. The idea is not so different from KIDMAP’s. The execution is light years away.

The traffic-controller image implies a clear division of labour between the controller and the pilots, but in practice the student will be powered-up too. So farsighted were we that KIDMAP allowed students to view their own record – with the teacher’s permission, of course. Soon students will be equipped by “personalised learning environments” to “manage their own learning,” as teachers have long wanted them to do. The lines between teaching and learning, between teacher and taught, will blur. To a degree not previously possible, students will be able to teach themselves, and each other. Learning can be crowdsourced.

Techno-sceptics sometimes forget that these are still very early days in the development of both software and content. Major educational publishers including Pearson, McGraw-Hill and Houghton Mifflin Harcourt have only recently swung their full attention to the digital future. They have been joined by industry giants such as Apple and outsiders like News Corp to take integration to its logical conclusion, tablets bundled with instructional and management software and proprietary content. Investment in educational technology almost disappeared after the global financial crisis but is now growing so rapidly that there is talk of a bubble. The Economist reports venture capital prowling around record numbers of startups (often based in Cupertino) with dinky names like Mathalicious, Chegg (homework help), Sharemylesson and Edmodo (share sites for teachers and others), Badgeville (gamification), Quizlet, Curriki (portal for free courseware) and DimensionU (interactive maths and science games). Apex predators including the big publishers and News Corp have swallowed specialists like Schoolnet (personalised learning), Wireless Generation (ditto), ALEKS (adaptive learning), and Bookette (online performance measurement). School-sector spending on ed tech in the United States is high ($17 billion per annum, equivalent to more than a third of Australia’s schooling budget) and rising. The inevitable hype and snake oil are finding their inevitable victims. Things will go on going wrong, and the current bubble may burst, but the surge is unstoppable.


MOST of this frenetic technological development is happening in the United States, and so is the most intense effort by schools and school systems to figure out what to do with it.

At one end of the spectrum is doing the same old thing in a brand new way: “projects” on PowerPoint instead of cardboard, googling instead of reaching for an encyclopaedia, using a keyboard instead of a pen, or an electronic whiteboard to do what could be done a century ago on a blackboard. Here the new technologies are not the least bit disruptive. They replace little and change less, except costs, which increase.

At the other end of the spectrum are “virtual schools,” which deliver a digital curriculum to students wherever they happen to be, sometimes supplemented by online tutors. Since most of the work of teaching is done in one time and place, the work of learning in another, a given amount of teaching effort can be made available to very large numbers of students (most spectacularly in the example of “massive open online courses,” or MOOCs, enrolling as many as 160,000 students at a time in university courses). In consequence, virtual schools spend relatively less on staff and buildings and more on technology and content; and less of the staffing budget goes to paying teachers, and more goes to online tutors and technical and administrative support. In the upshot, the per-student costs of virtual schools are typically much lower than those of conventional schooling. The catch is that virtual schools – or at least many in the largely unregulated US environment – seem to be less effective as well as less expensive, and are really suited only to upper secondary students or to home-schoolers.

At various points in between these two extremes are myriad approaches, the most prominent of which are the “flipped classroom,” “personalisation,” and “blending.” The flipped classroom gives students “virtual” material for homework so that class time can be used for higher-order review, discussion and extension. Personalisation uses digital technology to provide each student with stage-appropriate work, something only the most exceptionally capable teachers could hitherto do.

In both approaches Larry Cuban’s resilient class has once again found a way to combine constancy with change. They retain the familiar infrastructure (the classroom), the usual personnel (one adult, twenty-five or so students), the standard routines (the lesson), and the established regulatory regime (numbers of students per teacher and numbers of lessons per day). They are an important step forward in addressing the other side of the coin of teacher-centred instruction, the problem of the baffled student, and to the extent that they succeed they will lift “effectiveness.” The trouble is that new costs are added to old. Digital technologies may be a lot cheaper per unit but in the aggregate they’re not. Even after offsets from BYOD (bring your own device) and savings from cloud computing, digital technology is expensive – expensive to maintain and update as well as to buy.

At this early stage, blended schools seem to get the best of both worlds. “Blending” can refer to anything from using online tutorials or courses within a largely conventional curriculum to systematically planned combinations of virtual and conventional instruction. One version of “rotational” blending sees students spend some of their working day in conventional groups in classrooms, and the rest in learning labs where much larger groups of students work on personalised programs under the supervision of a relatively smaller number of staff, perhaps including lab monitors or tutors working to a “leading teacher.” Another variation on the theme has students go through two or three rotations per day, each comprising a period of virtual instruction followed by class time for consolidation.

Early evidence suggests that at least some blended schools may be improving “effectiveness,” particularly for disadvantaged students, while keeping costs lower. As in virtual schools, both staffing and budgets are differently arranged, with more money spent on digital technology and content, less on staffing, and greater differentiation in responsibilities and terms and conditions for staff. One much-reported case is Rocketship, a group of publicly funded charter schools. Blending in a 450-student Rocketship school saves around half a million dollars a year, the savings “repurposed” in ways including professional development, and 20 or 30 per cent higher pay for leading teachers.

Rocketship and some other blended schools are extending rotational blending into “flex.” The classroom and the lab are traded in on something more like a workshop or studio (or a Qantas Club lounge), a linked series of spaces allowing easy movement, and equipped for work by individuals and groups of students and adults formed according to task, need and capacity. A quite different mode of educational production is beginning to take visible form. We might borrow from Cuban to say: classroom versus computer, computer wins.


OF COURSE, it’s not really the computer that wins. A combine harvester will not make medieval strip-field agriculture more productive, yet an assumption of just that kind can be found in many ways of using (and researching) technology in schooling. When computers are added to classrooms and nothing changes the conclusion is that technology doesn’t work. In fact, it is schooling’s strip-field system that is not working.

Learning can usefully be thought of as a form of production through the work of young people and adults. The digital technologies are now capable of doing in schooling what technology has been doing elsewhere for centuries: they can reallocate, amplify and, above all, substitute for labour. Machines can now do some of the work that once required a teacher, and they can allocate other aspects of that work to students. They cannot substitute for the labour of learning, but they can change how that work is done, and they can help improve its organisation so that more of it is done in an optimal way at an optimal time.

That will happen only if and to the extent that labour is actually reallocated, reorganised and replaced. That is what blended and, more dramatically, virtual schools are doing. These schools are exploring ways of combining time, space, effort and tools both different from and disruptive of the class and the classroom.

It is significant that most of these explorations are being made in schools and groups of schools started from scratch. Another effort of the imagination is needed to change what already is into what can now be. That will include dismantling what Cuban calls the “organisational facts of life,” a dense lacework of struts and stays, many installed during the long boom of schooling, which holds the class and the classroom in place: ways of framing and sequencing work (“the curriculum”); the habits of mind and expectations of parents, students and teachers; physical infrastructure; budgets committed to paying a largely undifferentiated and tenured workforce; and the close regulation of the daily work of teachers and students via industrial negotiations and agreements.

There is little evidence to suggest that those responsible for steering Australian schooling have yet grasped the scale and interconnectedness of policy needed to exploit rather than merely “adopt” the digital technologies. A recent investigation into investment in learning technologies in one state found that considerable sums had been wasted because the government, lacking a “clear plan or framework,” had left departmental staff and school leaders with “little guidance on how future learning technologies initiatives can be appropriately planned and integrated.” The recently departed federal government sprayed $2.1 billion on the naive idea that the “digital revolution” could be prosecuted by putting more computers into schools. The incoming federal government has eschewed any talk of “revolution,” digital or otherwise, and has reasserted the traditional role and authority of the teacher in the classroom. Many of those actually responsible for running schools know that there’s more to it than buying computers or depending on the good old teacher, but tend to think of “technology” as just another item in a long to-do list, mainly a question of infrastructure and digital content.

Techno-enthusiasts make equal and opposite mistakes, illustrated by Beyond the Classroom, a report commissioned by Peter Garrett when he was federal education minister. The report is valuable in its sense that something very big is at hand, but troubling in its enthusiasm for any and all things digital and in its inability to be clear about the purposes or limits of the new technology, or about the priorities and sequence of its implementation.

A prerequisite to effective policy is getting clear about what the digital technologies are for. They are to some extent for themselves; like cars, they are something young people need to learn to drive. They are a boon to school administration and a school’s interaction with its community. And since the digital technologies are the ocean in which our fingerlings swim, they are of value in making schools seem less out-of-touch. But these are second- or third-order educational considerations.

The “twenty-first-century skills” case is more compelling, but easily overstated. The argument put in Beyond the Classroom is that skills or capabilities such as “creativity and imagination, critical thinking, problem solving and decision making, ICT literacy, and personal and social responsibility,” are central to the “twenty-first-century workplace,” and schools must therefore “harness the transformative potential of digital technology.”

With the partial exception of ICT literacy, however, the skills listed are cognitive and social, not technical. The digital technologies are an important new means of acquiring these skills and a new context of their use, but the skills or capabilities are not new in and of themselves, and they are certainly not new to schools. For at least fifty years teachers have tried to teach what are variously called “cross-curricular,” “generic” and “meta-cognitive” skills, most of them very like what are now referred to as twenty-first-century skills. In any event, skills or general capabilities can’t be learned in the abstract, and they are by no means the only things that schools are there to teach. “Skills” can only be acquired in and through learning “content” of intrinsic value. In the digital as in the pre-digital world, students must wrestle with, acquire and think about facts, events, formulae, theories, people, stories, poems, equations, and realities of many kinds.

Contrary to much digital advocacy, the digital technologies are tools to be used rather than instruments to be played. The main point of getting them into schools is not to prepare students for the twenty-first-century workplace but rather to exploit their potential as new and more productive means to the old educational end of getting young people, irrespective of postcode or genetic inheritance, to emerge after twelve years of schooling well on the way to being paid-up members of a rich intellectual, artistic and material culture.

And, contrary to much digital scepticism, these are seriously new means. Digital technology has no precedent in schooling except, perhaps, the invention of the printing press and the development of writing millennia before that.

The sheer novelty of technology-enabled change in schooling leaves the movement around it poorly equipped to work out what to do. Its language can win most arguments about ends, but it is practically clueless about the new means. It simply doesn’t notice the necessary things, or looks in the wrong direction altogether.

The currently dominant idea of “effectiveness,” for example, pays no attention to costs or to the relationship between cost and effectiveness, and its “what works” doctrine assumes that what has worked in the past will work into the indefinite future. In a similar way schooling’s focus on lifting “teacher quality” assumes that “the class” is here to stay, and that the only road to improvement is through the skills of just one of its twenty-six members rather than re-engineering the work of the other twenty-five.

Schooling could usefully borrow at least two ideas developed over centuries of experience of technological change in other areas of human activity. The first is the idea of workplace reform. That reform should start not with the work of teachers, as is so often assumed, but with the work of the real producers, the students, who comprise well over 90 per cent of schooling’s workforce. “Workplace reform” is an embracing concept, and a strategic one. Beginning from a view of how students can best be enabled to produce learning of the most valued kinds, it takes in everything from the content and organisation of the curriculum to workplace architecture to staffing structures and industrial relations to budgets. It makes possible thinking about an orderly, coordinated and sequenced process of change – big plans, small steps.

That process should be guided by a second conceptual borrowing, the idea of “productivity.” Often used as a euphemism for cuts or for working harder, productivity should be understood in educational as well as budgetary and industrial terms. It can require technology to earn its educational keep. “Productivity” insists that there is no intrinsic virtue in technology. It presses systems and schools to ask the question: which of the combinations of time, space, effort and tools available to us at this particular point in time is most likely to do the best educational job? Often, particularly in the near-term, the answer will be the relatively low-tech option of “blending,” using online tutorials, lessons and courses to provide students with more doable work and to free up teachers.


WORKPLACE reform directed at exploiting digital technology is likely to be both more and less disruptive in schooling than in other industries. More, because the classroom is so heavily entrenched and extensively defended, and because technology-enabled change is foreign to almost all involved. And less than in, say, agriculture, or higher education, because schooling is necessarily custodial, and social. Kids need to be looked after, and they need to be with other kids and with adults to grow up.

Technology-enabled workplace change will be resisted by at least some of the interests and institutions that prospered in the long boom of schooling as well as by schooling’s structures and culture. But sooner or later, well or badly, in ways that address need or reflect advantage, it will happen. It will be driven by governments looking to get off the treadmill of spending more and more in order to stay in much the same place; by the discrediting of the class-size reduction strategy and, in due course, the teacher-quality agenda; by big business; by competition between schools, systems and nations for “performance”; by the mysterious infection of every sphere of life with the digital virus; by the educational ideals of policy-makers and teachers; and by teachers’ long-thwarted professional ambitions. What is open for determination is the extent to which “policy” can use these complex vectors to do what my Perth colleagues and I, and many, many others have tried to do so that schooling is less inclined to purchase the success of some learners with the failure of others. •

I would like to thank Bill Hannan, Mal Lee and Sandra Milligan for their help in the preparation of this article. Needless to say, responsibility for it is mine alone. Thanks also to my KIDMAP colleagues, Russell Docking, Sandra Milligan and Paul Williams.

The post Coming, ready or not appeared first on Inside Story.

]]>
Mobile phone nation https://insidestory.org.au/mobile-phone-nation/ Thu, 14 Feb 2013 02:37:00 +0000 http://staging.insidestory.org.au/mobile-phone-nation/

With subscriber numbers heading for a billion, the disruptive impact of mobile phones in India could be enormous. In this extract from their new book, Robin Jeffrey and Assa Doron look at how the technology is unsettling domesticity, sexuality and morality

The post Mobile phone nation appeared first on Inside Story.

]]>

“IT IS the girls who have gone astray,” a village elder told a journalist after the rape of a girl near New Delhi in early 2012. “The girls... are so scantily clad that it’s shameful... Mobile phones have given a lot of freedom to these girls and that’s why they are behaving in a wild manner.” It is a common theme. The autonomy provided by the phone leads young people, especially girls, to elude the authority of those who would have controlled and disciplined them in the past. In this, as in many other ways, the mobile phone symbolises the disruption of Indian life by much wider economic, cultural and technological forces.

Before the mobile phone, landlines existed in India, but they were the preserve of the privileged (and even they had to wait years for a connection). The mobile phone, by contrast, is said to have reached a stunning 900 million subscribers since its full-blooded arrival in India just over a decade ago. Cheap mobile phones mean that Indians of every status are able to speak with each other as never before.

For governments and great corporations, and for entrepreneurs who would like to be great, the mobile phone represented an immense challenge and opportunity. Between 1993, when the technology began to be deployed in India, and 2012 the country had ten communication ministers. One of them was convicted of corruption and sent to prison; a second was also charged with corruption; a third faced probes that would take years to unravel; a fourth was murdered (though in circumstances not directly related to telecommunications); a fifth was undermined, overruled and rancorously removed. For governments, bureaucrats, regulators and politicians, telecommunications offered a bed of thorny roses, and it is these contests over decision-making and power that we try to understand in the first part of our book – the Controllers.

The mobile phone expanded faster than the automobile. It was cheaper, of course, but many more people were involved in the chain that connected manufacturers to customers. There was nothing natural about wanting to have a mobile phone: the technology was alien and calls were expensive. The process to build infrastructure and create demand involved trial, error and millions of dollars invested in what was still an unknown future. As the technology spread in the first decade of the twenty-first century, a vast enterprise bubbled up alongside it, with a cascade of occupations and jobs.

These were the Connectors, people ranging from the fast-living advertising women and men of Mumbai to small shopkeepers persuaded by their suppliers to stock recharge coupons for prepaid mobile services. In between were the technicians who installed transmission equipment; the office workers who found sites and prepared the contracts to install transmission towers (400,000 in 2010); the construction workers and technicians who built and maintained the towers; and the shop owners, repairers and secondhand dealers whose premises varied from slick shopfronts to roadside stalls only slightly more elaborate than those of the repair-walas who once fixed bicycles on the pavement. The Connectors ensured that even those with limited purchasing power could participate in India’s booming economy.

Once the mobile phone reached “the masses,” the masses became the third group in the chain, the Consumers. Mobile phones were used for business and politics, in households and families, and to commit crime and organise terror. But the phone was only a tool. Its effects depended on the knowledge and resources of the people using it, and “middle men” usually started with advantages that “lesser” men and women did not share. In politics, the mobile phone was a device that allowed organisations already bound together by convictions to exert influence in a manner that had hitherto been impossible. Fancy technologies alone don’t win elections, but cheap, easy-to-use technology gives people with common interests a powerful new weapon with the potential to mobilise and to disrupt existing political and social structures.


AS THE technology entered people’s lives, they had to deal with its varied effects: on household economies, parenting practices, intimate relationships, youth culture and much else. Values and meanings – how people regarded “public” and “private,” or the proper roles of men and women in controlling technology – were reshaped in the process. In India, the cheap mobile phone enabled young couples to talk to each other unknown to disapproving elders, and daughters-in-law to talk to fathers-in-law as they had been unable to do in the past. Transactions like these occurred in tens of millions of families almost daily from the early years of the twenty-first century. As they accumulated, like grains of sand on a windswept beach, the dunes of social practice began to shift.

Beyond India’s cities, and among conservative people in the cities themselves, the mobile phone became a metaphor for changing values and practices related to domesticity, sexuality and morality. In a time of rapid change and disarray, certainties were challenged by ballooning consumerism, relentless migration and unprecedented access to information. The mobile phone embodied the ills of an anxious modernity.

In the cities, it became common to see middle-class women, dressed in Western-style business suits or jeans, using their mobile phones wherever they went. Advertising campaigns were quick to tap into these changes, using images of alluring women to promote mobile phones; makers of music videos incorporated the apparent liberation bestowed by the mobile phone into songs and dances.

For new, “liberated” women, the phone was portrayed as a perfect vehicle for gossip (gupshup), romance or the promotion of exciting social relations. Many songs and videos featured women – popularly known as mobile walis – speaking on their mobile phones to their lovers. Though available in CD/VCD shops and later on YouTube, they were most popular on mobile phones.

Music clips featured seductively clad women using mobile phones, dancing in come-hither style and singing lyrics peppered with double meanings. Well before it entered the mainstream music market, popular Bhojpuri music had been characterised by “clever phrasing, double entendres, subtle innuendos and suggestive imagery that enabled it to convey taboo sexual acts and desires.” For at least one critic, though, the “raunchy flavour” of Bhojpuri music in VCD/DVD formats and on mobile phones was indistinguishable from soft pornography. Yet the music also retained its capacity to satirise the “modern condition” and laugh at the antics of both women and men as they coped with new times and customs.

One video clip begins with Tiwari, the well-known singer, daydreaming of a woman he met in a bar. It cuts to a scene where a glamorous young woman in a halter-neck top, tight jeans and loose hair dances seductively while drinking alcohol and talking on her mobile phone. This mobile wali is depicted as a daring, sexy tease: a woman who defies the norms that usually bind Indian women. She dances, smiles, drinks, smokes and wears skimpy clothes – all with a mobile phone in her hand. This is her style, as the chorus says:

Mobile in [her] hand, she has a smile on her lips.
She radiates style whenever she moves sideways, forwards, up or down.
Everyone, including neighbours, are dying [from excitement]
[Because] the babe, having drunk beer... Oh baby, having drunk beer…
The baby (babe) dances chhamak-chhamak-chham.

The following scenes revolve around the woman who makes men drool as she struts around with a mobile glued to her ear. She is both objectified as a femme fatale and empowered as someone who can choose from those around her or from others at the end of her phone. The song continues:

Forever ready to explode with anger [and] swear words on your lips,
You move the way life moves out of one’s body [when one dies].
The cap worn back to front, dark sunglasses, the cigarette is Gold Flake [a famous Indian brand],
I’m working at trying [to seduce you], there is still some time to go
before we get married.

The young woman remains remarkably composed, comfortably entering male-only arenas and adopting male-dominated practices, such as drinking alcohol in a bar and smoking in public spaces, all this while talking on her mobile phone. Only among urban sophisticates could such conduct be imagined. The singer and his rustic male companions go to pieces under her spell. The main male character warns his friends: “She shoots Cupid’s arrows with her eyes.” True to the Bhojpuri genre of satire, the clip ridicules the lewd, drunken men at the same time as it reminds viewers of the challenges that new attitudes and technologies present to old values.

The clip vividly illustrates the confrontations with tradition that cheap mobile phones provoked. A panicking priest reminds viewers of the precariousness of religious structures and the frailty of people in authority: in the final scene he succumbs to temptation and joins the men in a dance around the woman, who still holds her magic wand – her mobile phone. Portrayed as a loose, urban woman, the mobile wali breaks long-established rules of conduct, partly empowered by her mobile phone. It could lead a village elder to apoplexy.

Another video clip, Mobile Wali Dhobinaya, betrays a larger anxiety: that of the “village” divested of its men, who have increasingly moved to the cities in search of work. The sari-clad wife roams alone in the fields, with only a cell phone to communicate with her absent husband. The theme recurs in many video clips where the bemused Bihari migrant labourer arrives in the city. He finds a forbidding place, filled with voluptuous mobile walis, riding on scooters and confidently chatting on their mobile phones in public. This time, however, it is the Bihari bhaiya (village guy), a shadow of his former male self, who is depicted as helpless and confused at the sight of these city women with phones clapped to their ears.

We found more than a dozen popular songs from this period that highlighted how young men and women could connect through the mobile phone. The mobile wali was anything but the demure maiden presented to a select group of future in-laws prior to an arranged marriage. Rather, she was flirtatious, uninhibited and confident, challenging established social conduct and “traditional” values. None of this, of course, was “pornographic” or contrary to the law. Yet for guardians of old values, the unconstrained freedom enjoyed by the mobile wali led morality towards dark, wayward ways.

The mobile wali–style clips are relatively innocent. But some Indian manufacturers of handsets, eager to eat into Nokia’s dominance, have used racier material to advertise their phones. The Lava brand marketed its Lava 10 phone in 2010 with a television commercial in which a supermarket cashier gives customers their change in the form of teabags, a common solution to a shortage of small coins. Then a handsome young man, and his even more handsome Lava 10 mobile and its “sharp gun-metal edges,” come to the checkout. The winsome cashier abandons teabags as change and gives him a packet of condoms. Lava, the tag-line declared, “separates the men from the boys.” In 2012, Chaze Mobile, manufacturers of ultra-cheap cell phones, hired Sunny Leone, a Canadian citizen of Indian origin and a leading actor in pornographic videos, as their “brand ambassador” for a new range of multi-featured yet very cheap phones. Gambling on Leone’s notoriety, the company aimed “to position its product in an extremely cluttered low-end handsets market.”


IN INDIA, the mobile phone was not the old landline that had slipped into daily life in Western countries as unnoticed, in the words of sociologist Claude S. Fischer, as “food canning, refrigeration and sewage treatment” and become “mundane.” The mobile phone, as Clay Shirky argues in Here Comes Everybody: How Change Happens when People Come Together, now means that “the old habit of treating communications tools like the phone differently from broadcast tools like television no longer makes sense.” The potential to record and to broadcast, at one time limited to those who controlled presses and transmitters, was now available to the majority of people, even the poor.

Alongside music and screen savers featuring gods, WWF wrestlers and Bollywood stars, mobile phones have also brought cheap, full-colour, small-screen pornography to the masses. Pornography could be made available everywhere – from kaccha houses to penthouses. But though police and morality crusaders aimed their campaigns mostly at the poor, the powerful too were vulnerable to the seductive properties of the cell phone. In an incident in the Karnataka state legislature that came to be dubbed “Porngate,” two MPs were caught viewing what were said to be pornographic clips on a mobile phone while a debate was going on. The legislators belonged to the Hindu right-wing Bharatiya Janata Party, constant advocates of censorship in the name of preserving morality and Hindu values.

Mobile phones also facilitated crime and terrorism. Indeed, they created new crimes – harassment through text-messaging, for instance, and “faceless frauds” in which money disappeared without a victim ever seeing the criminal. And, as the Mumbai attacks in 2008 demonstrated, mobile phones enabled gullible young terrorists to be directed like human drones by remote “controllers.”

India experimented with a host of initiatives to establish mobile phone laws and cyber-security frameworks, but provisions were scattered through legislation, guidelines and rules. In 2012, proposals were made to establish a “telecommunications security testing laboratory” to certify that all telecom equipment conformed to government regulations and did not harbour illegal tapping or disruptive devices. Such an organisation, however, was many months or years away from functioning. State police forces established modest mobile cyber-crime labs that attended crime scenes and collected evidence effectively.

Indian governments, however, faced a problem that wealthy states such as Japan and those of western Europe and North America had not solved: how to mitigate the evils that mobile phones could generate while preserving their capacity to improve even a poor citizen’s ability to take advantage of the rights of democratic citizenship.

But mobile phones can both empower and disempower, and it can be a distraction to focus on questions of good or bad. The technology exists; immensely powerful economic forces, augmented by widespread social acceptance, have disseminated it widely; and it will only go away if a major cataclysm befalls humanity. We live with mobile telephony, and most of us relish the benefits. India in this sense is no different from other places. But its disabling inequalities and its diversity mean that the disruptive potential of the mobile phone is more profound than elsewhere and the possibilities for change more fundamental. •

The post Mobile phone nation appeared first on Inside Story.

]]>
The Apple farmer https://insidestory.org.au/the-apple-farmer/ Mon, 10 Oct 2011 03:05:00 +0000 http://staging.insidestory.org.au/the-apple-farmer/

Graeme Orr looks at responses to the death of the man who stood between consumers and the complexities of science, innovation and corporate strategy

The post The Apple farmer appeared first on Inside Story.

]]>

WE CRADLE them in our palms, to listen to music. We cup them to our ears, to converse with people far away. We nestle with them in bed, for entertainment. You may even have one in your lap to read this article. These gadgets seem ubiquitous, and many were either made by Apple or borrowed from its designs.

The reaction to the death of Steve Jobs, the co-founder of Apple, was like a flood: swift and overwhelming. Within hours, gigabytes of commentary and tributes were flashing around the world. The ABC even set up an online obituary – a webmento mori with ten sub-sites. It was not an obituary in the traditional sense of a sober reflection on Jobs’s personal and work history. Rather, in keeping with the tenor of the times, it was an interactive multimedia site with “infographics,” “inspirational” video and a tributes page.

Much of the hagiography has been boilerplate. Prime Minister Gillard intoned that “all of us would be touched by products that he was the creative genius behind, so this is very sad news and my condolences go to his family and friends.” A little has been raving. Retailer Gerry Harvey compared Jobs, in a single sentence, to Einstein, Attila the Hun, Alexander the Great and Moses. Some has come from rivals and colleagues searching for a personal tone, but unable to avoid boosterism for their industry. Bill Gates said that “the world rarely sees someone who has the profound impact Steve has had, the effects of which will be felt for many generations.”

But the bulk of the tributes have spilt from the mouths and fingers of everyday folk. Like the fan – one of the many who spontaneously gathered outside Apple stores – who lamented that “it feels like the end of the innovators.” Jobs’s death has instantly transformed him from a real person, who spent his adult life helping design and sell gadgets, into the emblematic hero du jour: a “genius entrepreneur.” What sense, if any, can we make of this? What does it tell us about the centrality of electronic devices to an age richer in communication than reflection? And what does it reveal about the human need to mythologise, even in a time of supposedly rational markets, technologised science and information exchange?

There are two easy explanations, though each is too easy to withstand scrutiny. One is that people, especially those of us living comfortably, are easily arrested by premature death. Material success and fame are the allures of a secular market economy, to the point that we forget they cannot guarantee a long and peaceful life rather than one ended painfully in its prime. Yet Jobs’s demise was hardly unexpected. He had endured pancreatic cancer for some years, and recently relinquished his position as chief executive in recognition of the irresistibility of the disease.

Another facile explanation is that the adulation of Jobs is an essentially American phenomenon, albeit one rubbing off on the wider West. The United States, as its politics reveals, is in thrall to individualism, and the image of the single, almost Olympian, entrepreneur or inventor is one manifestation. But Jobs was not a mere celebrity, celebrated for being celebrated. He was truly at the forefront of an industry that has altered, irrevocably, the way many people communicate, and transformed our idea of the media.

Jobs was at the forefront of this development because the influence of Apple was disproportionate to the numbers of devices it sold. Leaving aside its original raison d’être, the Mac computer (which remains a distant second fiddle to the standard PC), Apple has sold only around 30 million iPads and 130 million iPhones, although its 300 million iPods have formed a quasi-monopoly. These are big numbers for devices that are not bottom of the range. But they have not changed the world; rather, they have enhanced the lives of some people (mostly the under fifties) within one class (the middle class) in some regions of the world (the industrialised rather than developing nations).

Those numbers do go some way to explaining another big number. Apple’s shares recently exceeded US$350 each, enough for it to rival Exxon as the most highly valued corporation in the world. Its real value, however, lies in its brand, resurrected in the 2000s and now standing at the intersection of good design and cool. Apple is mainstream hip. Jobs deserves the lion’s share of the credit for that marketing transformation.

We don’t normally lionise the people who are the best marketers. We could adapt the old saying about propaganda: “Fool me once, shame on you; fool me twice, shame on me.” Sell me a gadget once, that’s the market; sell me that gadget twice and I’m hooked and you’re rich. The digital world is a disposable world, with some owners upgrading their mobile devices annually. The iPhone, Apple’s signature product, has been through four “generations” in barely as many years. Many of those 300 million iPods were sold to folk like “SamCostello,” who went through seven of them.

Despite its hip badge and wholesome, fruity logo, Apple is one of the more controlling corporations in the field, a fact that increasingly riles not just smaller innovators, but also former acolytes who fret that the promise of the internet as a communal environment is being lost. Mega-businesses such as Apple and Facebook seek to monopolise content and access, to compartmentalise and monetise what began as an open and free space.

Less well acknowledged is the exploitation – not just by Apple but by the wider industry – of Asian, especially Chinese, labour. This problem is not complex and it is well understood. But it confronts us with the uncomfortable realisation that the toys facilitating our entertainment and productivity, and Apple’s inflated profits, are the product of working conditions that we would never endure, as well as designs that we cherish.

Jobs created nothing genuinely transformative. He did not invent the internet: that was done by relatively unhailed researchers in US universities and the US defence establishment. Nor did he lay out the essential fabric of the web, as Tim Berners-Lee and Robert Cailliau did at CERN, near Geneva, in 1990. Now knighted, Berners-Lee has also been elected to the American Academy of Arts and Sciences, but he is no household name compared to Silicon Valley moguls such as Gates, Zuckerberg and Jobs.

What Jobs did do was build a vast company around principles of good design, both in hardware and appearance and, more importantly, in software and usability. The term “entrepreneur” is often misused. Recall George W. Bush’s alleged claim that “the French have no word for entrepreneur.” Besides the etymological irony, Bush seems to have wrongly imagined entrepreneurship to mean “risk-taking.” This common mistake is a conceit of individualistic capitalism. As Malcolm Gladwell wrote recently in the New Yorker, the successful entrepreneur is less a grand risk-taker than a methodical organiser and leader. She is someone who adroitly refines existing inventions and processes, packaging them cleverly in ways that fit or create a market. Apple and Jobs were more creative and less predatory than the examples Gladwell gives, but the description still fits. As lawsuits reveal, others seek to piggyback similarly on Apple.

Ultimately, Jobs was the human face of the often-dehumanised world of computer science and consumer electronics. People realise that gadgets like smartphones have transformed aspects of their lives, for both better and worse. Their insistent buzz intrudes on relationships and the experiences of the moment. They appeal to our banal desire to broadcast ourselves as much as they bring us together. But most who enjoy them forgive their addictiveness and cannot remember a life without them. Others find intimacy in their gadgets: in the simple physical presence of devices that engender a sense of ever-connectedness. It is these people who have led the rush to immortalise Jobs, because they want to thank someone for the gift of technology. Label this a “cult of personality,” as Mark Cohen did in The Drum, and see how personally offence is taken.

Though they may be distracting us to the point of decentring us, these gadgets are not transforming humanity in any fundamental sense. We still live by rituals and stories. The act of plugging into or checking these devices is so ingrained into the routines of millions that it has become an unconscious ritual. And chief among the stories we live by are myths, notably the myth of the great man, whose individual drive and genius changes the world. It is easier to pay homage to and heroicise Steve Jobs than to make sense of, let alone give thanks for, something as complex as the world of patents, geeks and corporate mergers that make up the computing revolution. •

Text, text, text https://insidestory.org.au/text-text-text/ Thu, 23 Oct 2008 13:16:00 +0000 http://staging.insidestory.org.au/text-text-text/

Is the energy, liveliness and to-the-pointness of text-messaging already history, asks Richard Johnstone

The short list for this year’s Global Mobile Messaging Awards provides a window to the future. One of the finalists in the Innovation in Messaging category is SpinVox Messenger, an application that “automatically converts a voice message into text and delivers it directly to the recipient as an SMS, ensuring call completion (and) stimulating call continuity”. The spoken word segues into text of its own accord, without the need for human intervention. “Messaging,” comments one of the judges not quite illuminatingly, “is helping us to re-examine the importance of voice.”

The eventual winner in the Innovation category also does what it does automatically; in this case by “automatically saving every text and photo message on line.” The evanescence that was so much part of the essential character of mobile text messages is a thing of the past; “your taxi is on approach” can now live forever on the hard drive. In another example of the unlimited potential of text messages, we learn from pangolinsms.com that “Pangolin’s Interactive Messaging Unlimited software is perfect for concerts,” where it is used to display SMS text messages on giant TV screens onstage, a practice that has been growing in popularity over the last few years. Again something that seemed so innately characteristic of the text message, a private communication transmitted to a screen only just big enough to read from, is transformed into a public display. How did it all happen, and so quickly?

Text messaging, or SMS, began almost by accident. It was borne along in the early nineties in the slipstream of the mobile phone, as a by-the-way function – handy but to the phone companies not self-evidently essential – that rapidly and unexpectedly became a phenomenon in its own right. For many people, particularly younger people, texting became the preferred form of electronic or indeed any other communication. Against the discursiveness of email the text message, from within its miniaturised frame, got the essential point across with as little fuss as possible.

In a study of the mobile habits of psychology students at the University of Padova which appears in Mobile Phone Cultures, a volume of essays edited by Gerard Goggin of the University of New South Wales, Alberta Contarello and others note that, of all the functions and qualities the participants associated with mobile communication, SMS came first (ahead of, for example, convenience, reachability or fashion). The authors observe in passing that SMS is a “user design variant,” which is true to the extent that the explosion of interest in texting in the mid to late nineties was not anticipated. The market, in effect, voted with its thumbs.

As Alex S. Taylor and Jane Vincent point out in “An SMS History” (in Lynne Hamill and Amparo Lasen’s Mobile World: Past, Present and Future), “early SMS campaigns to promote the delivery as well as receipt of messages… positioned the service as a second-rate add-on to voice transmissions.” To them, “what now seems striking and somewhat peculiar is the idea that messages were not considered to be something that people would compose themselves on their mobile phones.”

In retrospect, the key factor in the take-up of texting seems to be what was thought of as its Achilles heel, the limitation to 160 characters per message. “Paradoxically,” Taylor and Vincent write, “the limit of 160 characters and the cumbersome and time-consuming multi-tap method for entering text… struck a chord with users, particularly younger ones.” Other forms of electronic communication – emailing, blogging – impose no meaningful constraints. You can go on forever if you want to. But texting was different, though it is changing now. The constraints were severe and in the way of constraints they also posed a challenge. If it couldn’t be said in 160 characters (fewer in some other languages), then perhaps it wasn’t worth saying.
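Where, incidentally, does the figure of 160 come from? A single text message payload is 140 bytes. Characters drawn from the standard GSM alphabet are packed seven bits apiece, which yields 160 of them; scripts that alphabet cannot express fall back to sixteen-bit UCS-2 encoding and get just 70 – hence “fewer in some other languages.” The arithmetic can be sketched in a few lines of Python; the GSM7 table below is a deliberately abridged stand-in for the real GSM 03.38 alphabet, and single_sms_limit is an illustration of the principle, not any carrier’s actual implementation.

    # Toy illustration of the SMS length limit, not production code.
    # One SMS payload is 140 bytes. Text drawn from the GSM 03.38 alphabet
    # is packed seven bits per character (140 * 8 / 7 = 160 characters);
    # anything else falls back to 16-bit UCS-2 (140 / 2 = 70 characters).

    # Abridged stand-in for the GSM 03.38 basic alphabet. The real table
    # adds accented letters, Greek capitals and an extension table whose
    # characters (the euro sign, for instance) count double.
    GSM7 = set(
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789"
        " @£$¥!\"#%&'()*+,-./:;<=>?_"
    )

    def single_sms_limit(message: str) -> int:
        """Return the character limit of a single SMS carrying this message."""
        if all(ch in GSM7 for ch in message):
            return 140 * 8 // 7  # 7-bit packing: 160 characters
        return 140 // 2          # UCS-2: 70 characters

    print(single_sms_limit("c u l8r"))  # 160
    print(single_sms_limit("再见"))      # 70: fewer in some other languages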


THE GOLDEN YEARS of the text message already seem to be behind us, back in the late nineties and the first few years of the new century. This was the time when the text message was in its pure state, before MMS and before the capacity to send multiple messages and to include fancy accoutrements like attachments and embedded links; when the message just said what it had to say, unencumbered by qualifying clauses. A text message could be read rather than navigated or scanned, because there wasn’t enough there to navigate. In its abbreviated simplicity it offered a welcome alternative to the endlessness of everything else. The vexed question of when to stop writing – an email, a blog entry – just didn’t arise. You had to stop, because you ran out of space.

Some see this enforced brevity as the heart of the problem. For them, the text message and its cousin instant messaging embody everything that has gone wrong with language, as we forsake complexity and nuance and the time-consuming courtesies in favour of, linguistically speaking, the short sharp shock. For others, the text message holds the key to the revitalisation of language, as it encourages us to find all sorts of new and inventive ways to say more with less. It stops us rambling on. It makes us think. David Crystal, in his recent book on the subject, Txtng: the gr8 db8, inclines towards the latter view, but only in the face of what he calls the “extraordinary antipathy” to this new – or maybe not so new – way of writing.

In a comment on the Oxford University Press blog posted shortly after publication, Crystal laments the fact that the antipathy he identifies in his book continues unchecked by any evidence to the contrary. He quotes the remark made by a spokesman for the Head Teachers’ Association of Scotland who seemed uninfluenced by Crystal’s anti-alarmist findings: “Because of the rate in which text-speak is taking hold,” says the spokesman, “I shudder to think what letters will look like in 10 years’ time.” There is no need to panic, says Crystal. He makes the case for the energy and inventiveness of texting, adding reassuringly if not entirely consistently that the sorts of things that texting relies on – abbreviations, acronyms, vowel-less sentences, and all manner of language-play – have been around forever.

Another scholar of texting and mobile telephony – it’s a fast-growing field – comes to very similar conclusions. In Always On: Language in an Online and Mobile World Naomi S. Baron adopts a fairly relaxed tone on the where’s-it-all-heading question, reassuring the doom-sayers that “in reality, there are relatively few linguistic novelties specific to electronically mediated language that seem to have staying power.” In other words, it’s inventive, but not so inventive that it risks changing language out of all recognition. Indeed Baron sees some virtue in the way in which text messaging demands concision. By contrast, in a reference to electronic writing generally, she asks whether it could be that “the more we write online, the worse writers we become?”

Could it be, Baron wonders, that the “sheer amount of (electronic) text” being produced is “diminishing our sense of written craftsmanship”? In some ways this is counter-intuitive, particularly if we subscribe to the adage that practice makes perfect. If writing makes you a better writer, then surely even more writing makes you an even better writer. But only, perhaps, if you edit as you write. “Murder your darlings,” Sir Arthur Quiller-Couch famously said in his advice to authors, and the Cut function on Word came along to make it easier. The problem is that the darlings won’t stay dead. Those excised phrases and paragraphs often lurk underground, waiting to be resuscitated, and very often they are, not least because there is room for them. If the system allows it, why not say more? Guidelines for web writing typically begin with a version of the injunction “make it short,” but for all that the content of the average website just grows and grows. Only with the text message are you forced to keep it short, whether you want to or not.

Keeping it short means leaving out the non-essential bits. The more non-essential bits we leave out, the more concision can elide into compression, and clarity into confusion. Which is why, Baron notes, “scores of journalists are apparently in agreement that our linguistic prospects are bleak,” a reference to the sub-genre of opinion- and think-pieces that regularly declare that the linguistic end is nigh and that very soon we will each of us have no idea what anyone else is saying. And it is not only journalists who feel this way. The New York Times records James H. Billington, the Librarian of Congress, drawing laughter at the launch earlier this year of a Pew Research Center report into student writing when he expressed concern about “what he called ‘the slow destruction of the basic unit of human thought, the sentence,’ because young Americans are doing most of their writing in disjointed prose composed in Internet chat rooms or in cellphone text messages.”

David Crystal singles out his candidate for the most apocalyptic of all such cries of pain. It comes from the distinguished journalist and broadcaster John Humphrys, writing in the Daily Mail of 24 September 2007, who described texters as “vandals who are doing to our language what Genghis Khan did to his neighbours eight hundred years ago.” Even allowing for hyperbole, this seems extreme. But Humphrys’s rant is not quite as unfettered as this extract might suggest. What prompts his outburst is the news that the Shorter Oxford English Dictionary has dropped the hyphen from 16,000 compound words on the grounds that we are all too busy for hyphens. We must change the way we write, reports Humphrys, because “we no longer have time to reach for the hyphen key.” The objection, as it so often is, is not so much to texting – or fast food or fast anything – as it is to the pressure we are under and the way in which that pressure seems to speed up time. It’s not so much about the abbreviation itself, but about the need to abbreviate in the first place. Why shorten everything? What’s the rush?

Humphrys is particularly venomous when it comes to the abbreviations used in texting and email, describing them as “grotesque.” In the early days of text messaging it was all harmless enough: “tks” for “thanks”; “u” for “you”; 4 for “for.” But now it is much more complicated, as texters “have sought out increasingly obscure ways of expressing themselves.” Lists abound on the internet of acronyms, alphabetisms, and other combinations of letters and numbers that far exceed the lexicon that even the most committed texter would necessarily take the trouble to master. Abbreviation becomes an end in itself, rather than a means to an end. A kind of verbal nanotechnology. Webopedia’s “Text Messaging Abbreviations: A Guide to Understanding Online Chat Acronyms & Smiley Faces” was up to 970 entries at last count, but is looking for more. “If you know of a text message abbreviation that is not included in our list, please let us know.” This suggests an exercise that has gone beyond an anthropological record of practice to become a call for creativity and invention. PXT. Please explain that.

Abbreviated and otherwise modified language may look streamlined, but it can often take longer to put together than the conventional kind, and longer to pull apart. Ill-literacy, a San Francisco-based “collective of poets, emcees, and all-around fresh individuals,” zero in on this paradox when two of their number – Ruby Veridiano-Ching and Nico Cary – perform a stand-up routine on the subject of text-messaging that elicits cries of recognition from their mostly college audiences. “The great thing about text messaging,” says Cary in a performance that can be found in several iterations on YouTube, “is that you don’t have to respond right away. You can carefully craft each message so that you sound witty.”

Against the commonly stated view that texting is not really writing at all, but rather a hybrid of the written and the spoken, Cary emphasises its writerliness, the ability it gives to present a version of yourself in haiku form, rather than the version that comes across in unmediated speech. Some of this careful crafting that he talks about involves deciding whether or not to break the rules, or whether in fact breaking the rule has itself become the rule. “Should I spell ‘what’ ‘w-h-a-t’, ‘w-a-t’ or ‘w-u-t’?” he asks plaintively. And when, having finally sent the carefully crafted message, you don’t get a response for forty-five minutes, it can “drive you fuckin’ crazy”. So powerful is the illusion that texting is spontaneous and natural that even though it has taken three quarters of an hour to compose your message, once it’s been sent you expect an immediate reply.


In a study undertaken at the University of Plymouth, researchers divided their student participants into two groups, texters and talkers, according to the predominant use they made of their mobile devices. As the researchers put it: “The fact some people prefer texting to talking suggests that they get something out of texting that they cannot get from talking… They committed more time and effort to the process of message composition, writing longer messages and editing them more carefully, expressing things in their messages they may not have felt comfortable saying face-to-face.” The finding conveys the sense that texting is more deliberative than it appears, that it allows you to present a version of yourself other than the one people see in front of them: an alternative to speech rather than a version of it. And it is best not to mix them up. As the netiquette advice provided on txtmania.com has it, “as much as possible, avoid texting while in a conversation with real people.”

Texting takes time, even when it doesn’t seem to. In the Pew Research Center report that prompted the remark from the Librarian of Congress about the decline of the sentence, a ten-year-old is quoted as saying that “I put in 20 hours [per week] plus [texting]. I can’t even count because I mean it’s not like you’re spending a continuous hour writing/texting. It’s just like text, text, text while you’re doing other stuff.” The young respondent manages to sound both casual and pressured, valuing the instantaneousness and spontaneity of the medium while clearly spending a lot of time on it. It’s just one text message after another. “Text, text, text.”

A report prepared by ANU and the Australian Mobile Telecommunications Association concluded from questionnaire data that the jury was out on whether mobile phone use alleviated people’s sense of time pressure. “Nine per cent answered ‘Yes, a lot less’; 25% answered ‘Yes, a little less’; 15% answered ‘No, not much less’; 25% ‘No, not at all’ and 26% were unsure,” figures which, taken together, tend to leave the question open. Meanwhile, reports on the ease and utility of text messaging – its capacity to reduce the pressure – have tended to give way in recent times to reports of its dangers, with texting while driving coming in for particular censure.

There are many signs that text messaging as a personal communication tool, for all its continuing popularity, is being overtaken, or in some cases taken over. From personal to personalised. The Obama campaign has helped pioneer the political use of high-volume text messaging to provide the illusion of two-way traffic – “sign up to the right to receive text messages on your phone,” says barackobama.com – while in effect being simply a mobile mail-out. And, the campaign site advises, “for high volume text users” (a phrase which seems explicitly to acknowledge the addictive element in texting) there is Twitter. The Google tagline for Twitter succinctly indicates the speed with which texting is being supplanted by newer and more sophisticated verbal nouns: “Social networking and microblogging service utilising instant messaging, SMS or a web interface.” Obama Mobile won its category in the 2008 Global Mobile Messaging Awards. According to 160characters.org, the website for the SMS and Mobile Messaging Association, “Obama Mobile has set the gold standard for harnessing the power of mobile technology to engage supporters and to drive a political movement.”

But for some commentators, texting is already history. According to a post of 28 August 2008 by Ed Hardy, editor of brighthand.com (a site providing news, reviews and discussion on “handhelds and smartphones of all kinds”), “texting was created back in the 1990s because people wanted email on their phones, but the technology wasn’t available yet. That’s why engineers came up with a crippled version of email that phones could handle. The technology for full email access is available now, so SMS has really outlived its usefulness.” If he’s right, and he probably is, then instead of lamenting the role of text messages in the decline of language we will find ourselves mourning the energy and liveliness and to-the-pointness of texting. The plain vanilla text message, sent from one person to another, will rapidly come to seem like the telegram of the digital age. •
