“we advocate for research that centers the people who stand to be adversely affected by the resulting technology, with a broad view on the possible ways that technology can affect people”
It doesn’t sound like a particularly radical statement, certainly not radical enough to warrant being sacked. It says no more than the standard ethical question: just because we can, should we? Yet this seems a radical question to some, and indeed two of the authors of the paper from which this quote comes have been sacked by their employer from top roles in ethics, seemingly for reasons closely connected to the paper (though the exact circumstances and reasoning remain a matter of dispute).
Those former employees are Timnit Gebru and Margaret Mitchell (not really masked by being listed as Shmargaret Shmitchell among the paper’s authors), and their former employer was Google, known to investors as Alphabet. There are also three further anonymous authors of the paper, prevented by their employer (specifically stated to be a single employer) from being named; most assume that this employer is also Google/Alphabet.

The article at the heart of the dispute is On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (the title also includes a parrot emoji). Stochastic Parrots is a short article, with a substantial catalogue of references, which contains few surprises. The Data Ethics Club in which I participate could see nothing at all that was truly controversial, but then we are perhaps a self-selecting crowd. Stochastic Parrots was prepared for FAccT – the Association for Computing Machinery’s conference on Fairness, Accountability and Transparency, due to start this coming week (the paper is to be discussed on March 10th, the last day of the conference) – and its lead author is Emily Bender, a University of Washington academic. The biggest surprise for most is that the paper is controversial enough to have sparked these departures.
The article points out that training AI systems can use significant amounts of energy, and that in a world of CO2 constraints, technology firms should not regard the use of endless amounts of data to train new algorithms as entirely cost-free. It also notes that the current trend of training supposedly ‘natural’ language programs on the entire Internet encodes the existing biases and unfairnesses built into current web activity, and further extends the domination of heavy users of tech while excluding those who are not part of the Internet community, those we might call the ‘unwebbed’. It is as if all the world were expected to adopt a West Coast US accent and verbal mannerisms – as if!
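To make the energy point concrete, here is a rough back-of-envelope sketch of the kind of sum involved. Every figure in it is an illustrative assumption of mine, not a number taken from Stochastic Parrots or from any particular system.

```python
# Rough, illustrative estimate of the carbon cost of a single training run.
# Every figure below is an assumption chosen for the example, not data
# from Stochastic Parrots or from any real model.

gpus = 512                   # assumed number of accelerators in the cluster
power_per_gpu_kw = 0.3       # assumed average draw per accelerator, in kW
training_days = 30           # assumed length of the training run
pue = 1.5                    # assumed data-centre overhead (power usage effectiveness)
grid_kg_co2_per_kwh = 0.4    # assumed grid carbon intensity, kg CO2e per kWh

energy_kwh = gpus * power_per_gpu_kw * 24 * training_days * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Energy used: {energy_kwh:,.0f} kWh")
print(f"Emissions:   {emissions_tonnes:,.1f} tonnes CO2e")
```

Even with these deliberately modest assumptions, a single run lands in the tens of tonnes of CO2e; the point is not the precise number but that the cost is real, measurable and repeated every time a bigger model is trained.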
Both of these concerns (the carbon cost and the encoded bias) are clearly true, and while they may be a little embarrassing for Big Tech, they cannot be truly controversial.
The article goes on to criticise the large language programs themselves. These, it argues, do not generate meaningful output but merely stitch together repetitions of language adopted from elsewhere, joined not by meaning but by statistical habit. They are, in short, the stochastic parrots of the title. Now this is a well-articulated criticism of a large area of activity for Google and its Big Tech peers, and the description (and emoji) makes it memorable, but it is hard to see it as a sackable offence for those charged with considering ethics to note that a work in progress has distinct weaknesses and doesn’t yet live up to the promises made for it.
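To see what ‘joined by statistical habit’ means, consider the toy sketch below: a tiny bigram Markov chain that strings words together purely because they have followed one another in its training text. It is emphatically not how a modern large language model is built (those are vastly more sophisticated), but it captures the form-without-meaning behaviour the paper is naming.

```python
import random
from collections import defaultdict

# A toy 'stochastic parrot': a bigram Markov chain that chooses each next
# word purely on the statistics of which word followed which in its
# training text. Nothing here understands, intends or verifies anything.

training_text = (
    "the parrot repeats the words it hears and the words it hears "
    "carry no meaning for the parrot at all"
)

# Record which words have followed which.
followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

def parrot(start: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling a statistically plausible next word."""
    output = [start]
    for _ in range(length):
        choices = followers.get(output[-1])
        if not choices:
            break
        output.append(random.choice(choices))
    return " ".join(output)

print(parrot("the"))
```

The output can look surprisingly fluent, yet it means nothing to the machine that produced it; that, at much greater scale and polish, is the force of the ‘parrot’ label.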
So I suspect that it is the last aspect of Stochastic Parrots that is the genuinely controversial part: the raising of the fundamental ethical question, just because we can, should we? The fact that it was perceived as controversial reveals much about Google and Big Tech.
As well as the quote that heads this blog, Stochastic Parrots goes a little further: “we call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms. Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
So, it seems that Big Tech – or at least Google/Alphabet – is unwilling to countenance the very idea that some of its activities might have consequences that amount to extreme harms to people. It also seems unwilling to accept that it should take the cautious approach of assessing what those harms might be before proceeding with its activities. The irony is clear: these comments are very close to an articulation of the ‘don’t be evil’ mantra that framed Google at its creation, a mantra that Alphabet now seems not to apply.
Such a denial of the possibility of risk and of the need to be cautious in the face of it seems a very dangerous position for such a powerful industry to take. And it flies in the face of the readily apparent harms from AI: perhaps it is no wonder that we are seeing such harms given the state of denial in the industry.
Ethics in AI remains a new field, worryingly. The overwhelming mindset seems to be not to ask whether we should do something just because we can, but simply to do it because we can. A recent article in the New Yorker, Who should stop unethical AI?, sets out the growing pains of a field only just beginning to grapple with ethics. One quote, from an individual who was then an academic researcher but now works at Google, seems particularly telling: “Obviously, researchers are incentivized to pretend that there are no human subjects involved, because, otherwise, things like informed consent become issues, and that’s the last thing you want to do when you’re processing data with millions of data points.”
And there is a risk that ethics in AI may never get beyond this nascent phase. As Bender points out in an interview with MIT Technology Review, the experience over this paper may hinder further discussions of ethics in AI – it will have a “chilling effect”, she is reported as saying. Having AI ethics specialists working for the Big Tech firms “has been beneficial in many ways. But we end up with an ecosystem that maybe has incentives that are not the very best ones for the progress of science for the world.”
Yet we all know that we need ethics in AI. We know that Big Tech has facilitated, indeed fuelled, a polarisation in politics and society. The algorithm is the echo chamber. Big Tech has eroded privacy and increasingly exploits its knowledge of us to direct our consumer choices. As Stochastic Parrots worries, Big Tech risks reinforcing, not unwinding, existing unfairness in our society.
So we need ethics in AI. And part of that has to be more transparency. There is a huge irony in the fact that Big Tech is among the most secretive businesses in the world. It is ironic because Big Tech largely insists that no one else should have secrecy, and thrives on undermining other intellectual property while keeping its own entirely hidden. The industry has at best a lax attitude to others’ copyright, whether those others are individuals or traditional media outlets (as has been seen most recently in the Facebook and Google spat with Australia). Big Tech has an avaricious attitude to the personal data of every individual in the world, repurposing it as the basis for a uniquely targeted advertising model.
Big Tech has even played a major role, according to Rana Foroohar in her searing Don’t be Evil: The Case Against Big Tech, in undermining the protection offered by the patent regime in the US. Since the passage of the America Invents Act in 2011, she reports, the US regime has swung from being one of the easiest in which to gain and protect a patent to one in which such protection is much harder to secure.
Yet Big Tech also has an obsession with maintaining the secrecy of its own organisations and with hiding the workings of its core business model, the algorithm, from the outside world.
Academics are used to testing their work in the outside world: in a sense, it does not exist unless it has been peer reviewed and published openly. Similarly, patents give their owners exclusive benefit from their creativity for a period of years, on the basis that they are made publicly available so that the knowledge becomes freely available in due course and can be built upon by others. Open source software, which is publicly available and so can be tested and improved by developers around the world, is generally regarded as much more robust than secretive proprietary software: it is not by chance, for example, that NASA chose Linux and other open source software to be the brains of Ingenuity, the drone currently flying itself in Mars’s scanty atmosphere.
Patent law developed because many inventions are easily replicable. New products can be reverse engineered and therefore copied, and society agreed that, in order to foster creativity, inventors need to be able to prevent such copying for long enough to generate a return from their invention. But the price of such exclusivity is publicity: the patent is made public and thus becomes freely available after the exclusive period ends. It can also be subject to testing and enquiry, and can be built on for future inventive leaps. A patent attorney friend notes that one of his first questions to a client or potential client is always whether they actually want a patent at all. If reverse engineering isn’t possible, so that a secret cannot be readily discovered by the outside world, would it not be better to keep it as a secret and never release it to the public through the patent process? Coca-Cola has proved over more than a century the value of a commercial secret that is kept as a secret. It is this model that the IT giants – ironically, given their obsession with freely revealing all the world’s information – are using.
I have previously lauded the idea of Hetan Shah, chief executive of the British Academy, that there should be a data commons: that our data should not be owned by any business but rather be available for common usage, perhaps after a period of commercial benefit (like the time-span of a patent) to enable the garnering of the data to be funded. But perhaps we should be brave enough to go further than this: all algorithms that affect more than a minimal number of people could be made subject to independent oversight by some properly funded regulator and accessible for academic research. This would enable others to test and challenge their quality, and to identify glitches and biases – in particular, to spot potential negative impacts of algorithms on society and individuals. It could prevent harms being hidden behind the veil of commercial interest.
Shah proposes that proprietary rights to data that is gathered should expire after 5 years, after which the data should be released to a charitable corporation and made available to all for research and study, “subject to scrutiny that ensure the data are used for the common good”. This argument is particularly strong, as Shah points out, for health data and other broad data sources that governments should never lazily allow to be the everlasting property of a single corporation.
Sunlight is the best disinfectant, we know. But little sunlight is currently allowed into the world of AI. Its algorithms are black boxes from which light and attention are excluded.
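To give a flavour of what testing and challenging an algorithm might involve once some sunlight is let in, here is a minimal sketch of one common kind of fairness check: comparing an algorithm’s approval rates across groups. The records, the group labels and the four-fifths threshold are all illustrative assumptions, not a description of any real system or legal standard.

```python
from collections import defaultdict

# Minimal sketch of an external bias audit: compare how often a scoring
# algorithm approves applicants from different groups. The decisions,
# group labels and 0.8 ('four-fifths') threshold are illustrative only.

# Hypothetical audit log of (group, approved?) pairs obtained under
# independent oversight.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    if approved:
        approvals[group] += 1

rates = {group: approvals[group] / totals[group] for group in totals}
best = max(rates.values())

for group, rate in rates.items():
    flag = "possible disparity" if rate < 0.8 * best else "ok"
    print(f"{group}: approval rate {rate:.0%} ({flag})")
```

None of this is possible from outside while the algorithm and its decisions stay behind the commercial veil, which is precisely why independent access matters.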
Transparency as a protective model has been attempted, to some extent at least, in Europe. At the heart of the protections offered to individuals by the EU’s General Data Protection Regulation (universally known as GDPR) are the underlying principles of lawfulness, fairness and transparency. Under transparency, data controllers are expected to be open about themselves and about how they will use individuals’ data – this is often expressed as the individual’s right to be informed.
Of particular relevance is Article 22 of GDPR, which protects individuals from abuse through solely automated decision-making. Such processing is only permitted in limited circumstances, including where explicit consent has been obtained. Individuals are also given rights to challenge the decision and to insist on human intervention in the decision-making, and the data controller is obliged to carry out regular checks to make sure that the automated decision-making is working as intended. All very sensible and fair.
This Article formed a key element of a recent British Institute of International and Comparative Law and King’s College London (KCL) Dickson Poon School of Law event on Contesting AI Explanations. Essentially, the key message of this fascinating session was that GDPR only goes so far, and that this is not far enough. The fact that Article 22 allows challenge only of solely automated decisions limits matters, as few decisions are as yet solely automated. More significantly, though, a challenge can only be made where there is enough visibility of the AI activity to allow insight and scrutiny. It also happens that GDPR does not require disclosure of the detail of code, but of ‘meaningful’ information, a concept which is largely untested.
The comments of Robin Allen QC, of Cloisters chambers and co-founder of the AI Law Hub, were typical. He noted that transparency is helpful, but that often the use of AI is simply not visible, so it cannot be challenged. Take his example of a CV uploaded to the Internet: this may lead to interviews and job offers, but the individual will be unaware of all the occasions when an algorithm has screened them out for what might be arbitrary or wholly unacceptable reasons. Unless it is visible that this has happened, all current protections count for nothing.
The conclusion of most participants was therefore that GDPR does not in practice provide the protections that we might hope for. Rather, standard public law challenges and remedies, such as judicial review, are more often sought. Some argue that this is beginning to amount to a presumption of disclosure for government and public bodies. However, of course, such challenges are only possible where the AI is employed by public bodies; no equivalent exists in relation to the private sector.
We are left not just with holes in the application of the regulation, but with certain key conversations not even begun. The comments of Perry Keller, a reader in media and information law at KCL, were particularly striking. “We have not had enough of a debate about these harms,” he said. He noted that as well as the individual harm there is a collective, societal harm, though GDPR considers only the individual aspects. “The excessive focus on the individual is not allowing us to focus on the societal one.” As Stochastic Parrots invites us to, we need to consider foreseeable harms, and explore whether the new technology is worthwhile in spite of them.
We seem to have come a long way from ‘don’t be evil’. I can find no better ending than the first paragraph of Don’t be Evil: The Case Against Big Tech:
“’Don’t be evil’ is the famous first line of Google’s original Code of Conduct, what seems today like a quaint relic of the company’s early days, when the crayon colors of the Google logo still conveyed the cheerful, idealistic spirit of the enterprise. How long ago that feels. Of course, it would be unfair to accuse Google of being actively evil. But evil is as evil does, and some of the things that Google and other Big Tech firms have done in recent years have not been very nice.”
See also: The failures of algorithmic fairness
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, Emily Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell, Proceedings of FAccT 2021
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021
Who should stop unethical AI?, Matthew Hutson, New Yorker, February 15 2021
We read the paper that forced Timnit Gebru out of Google. Here’s what it says, Karen Hao, MIT Technology Review, December 4 2020
Don’t be Evil: The Case Against Big Tech, Rana Foroohar, Random House, 2019
Use our personal data for the common good, Hetan Shah, Nature 556 7, 2018
Contesting AI explanations in the UK, BIICL and King’s College London, 24 February 2021
Government Models and the Presumption of Disclosure, Joe Tomlinson, Jack Maxwell, Public Law Project, July 2020