What’s a fair use?

The media has reported that Meta (the Facebook, Instagram and WhatsApp company) has won a legal case on the use of copyrighted materials in training its AI models, with the court finding that use to be a ‘fair use’. As often with the law, it’s a bit more complicated than that.

The case in question was Kadrey v Meta, and summary judgement was released last week (the judge, Vince Chhabria, deciding that the case did not need to go to a jury trial because the plaintiffs had not made a convincing case, enabling Meta to succeed in its motion for summary judgement). The legal question at issue was whether the acknowledged use of copyrighted works in training AI amounts to a ‘fair use’. As well as considering fairness, the case opens a wider window on AI.

Before delving in, I will note that I’ll continue to use the term AI, because it’s used in the case and the term is in general use for these emerging new technologies. But as both recent books The AI Con and AI Snake Oil (the two latest additions to my bookshelf) start off by making clear, there is no such single thing as AI. It is a catch-all term for a range of technologies – some of only very dubious effectiveness – and is really just a brand that is being deployed to raise (enormous amounts of) funding (two headlines from the Financial Times over this weekend cast light on the scale of this financing: Meta seeks $29 billion from private credit giants to fund AI data centres, and Nvidia insiders cash out $1 billion worth of shares). The best known, and most used, of these new AI technologies are called large language models (LLMs), accurately described as stochastic parrots: models that simply put one word after another according to statistical models developed through their training.
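
To make the stochastic parrot idea concrete, here is a toy sketch – entirely my own illustration, not anything resembling a production system: a bigram model that picks each next word purely from the statistics of its training text. Real LLMs use vast neural networks over subword tokens rather than word-count tables, but the underlying principle of generating one token after another from patterns observed in training data is the same.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": a bigram model that picks each next word
# purely from the statistics of its training text. (Illustration only --
# real LLMs use large neural networks over subword tokens, but the
# principle of generating one token after another from training-data
# statistics is the same.)
training_text = "the cat sat on the mat and the dog sat on the rug"

follows = defaultdict(list)          # word -> words observed to follow it
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def parrot(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling an observed continuation."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:           # no observed continuation: stop
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(parrot("the"))                 # e.g. "the cat sat on the rug"
```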

Many legal systems favour the idea of fairness, and ‘fair use’ is a well-established concept in US law. The country’s Copyright Act (in 17 USC §107) frames fair use as usage “for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research”. It sets out four factors that should be considered in determining whether a given use is in fact fair:

1. the purpose and character of the use, including whether such use is of a commercial nature or is for non-profit educational purposes;
2. the nature of the copyrighted work;
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
4. the effect of the use upon the potential market for or value of the copyrighted work.

Deciding what uses are fair is a matter both of law and of the specific facts, meaning that there are multiple cases that have considered these factors. The list of four factors is not exhaustive; rather, the factors are aids in reaching the overall conclusion. The fourth factor, whether the use risks substituting for the copyrighted materials in the marketplace, is generally seen as the most important. Courts need to apply judgment and consideration in deciding on fair use; as ever, assessing fairness requires thought.

As Judge Chhabria explains in his summary judgement:

“What copyright law cares about, above all else, is preserving the incentive for human beings to create artistic and scientific works. Therefore, it is generally illegal to copy protected works without permission. And the doctrine of “fair use,” which provides a defense to certain claims of copyright infringement, typically doesn’t apply to copying that will significantly diminish the ability of copyright holders to make money from their works.”

He is as rude as a judge ever gets about a fellow judge who reached a recent decision on a fair use case in relation to Anthropic, another AI firm (Order on Fair Use at 28, Bartz v Anthropic PBC, No. 24-cv-5417 (N.D. Cal. June 23, 2025), Dkt. No. 231). That judge was convinced by the argument that training AI was no different from – and had no more impact on the market for copyright products – than training schoolchildren to write. Chhabria says: “when it comes to market effects, using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take. This inapt analogy is not a basis for blowing off the most important factor in the fair use analysis.”

And surprisingly given his overall ruling, Chhabria is very clear that AI companies are breaching copyright law and are damaging the commercial market for copyrighted works. He seems very sure that AI companies fail at the fourth factor in assessing fair use: “by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way”.

Chhabria also notes a simple flaw in one of the AI companies’ arguments: that applying copyright law will stifle the development of this technology. He notes that any finding that this use of copyrighted materials isn’t fair use does not bar that use, it just requires that AI companies reach a commercial agreement with copyright holders to compensate them for the – unfair – use of their materials. As he points out, these businesses project that they will make billions, indeed trillions, of dollars from AI services, so should be able readily to afford such licensing. Indeed, the court saw evidence that Meta initially sought to license book materials for training purposes, and considered spending up to $100 million on doing so. This never happened because book publishers do not hold rights to this use of book materials – like other novel uses, the rights rest with the authors – so there is no central point or points for such a negotiation. The fact that AI companies are seeking direct commercial benefit from their use of copyright materials makes their burden in demonstrating fair use much heavier.

Despite Chhabria’s conclusions that seem strongly to favour the copyright holders who brought the case, he nonetheless found against them. The copyright holders are 13 authors who argued that their works had been used in training Facebook’s Llama LLM models. In essence, they failed in their claim because their lawyers focused their efforts and arguments in the wrong place. They made their arguments predominantly under the first three of the four factors in §107 of the Copyright Act, and failed on those. While the fourth factor – the effect of the use on the potential market for the copyrighted work – is generally seen as the most important, they did not press it strongly. They simply did not argue (or were at best “half-hearted” in arguing) that their works had been used as the basis for a tool which might flood the market with similar works, undermining the value of their copyright, nor did they provide evidence to support such an argument. This was “the potentially winning argument” according to Chhabria; the (weaker) points actually deployed in argument before the court did not succeed.

Chhabria was clear:

“this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.”

It does seem ludicrous that the most valuable companies in the world should argue that it is fair for them to take stolen copies of books subject to copyright protection (the training materials were taken from so-called ‘shadow libraries’ of illegally scanned books) and make what they predict will be huge commercial profits as a direct result, while providing the copyright holders with no compensation. The fact that Meta explored licensing but found it too difficult and slow supports the case that licensing would be the right thing to do.

The Kadrey case reports one other specific element of the training of Llama models – that they are taught not to produce more than 50 words together that are repeated from any one source (even if provided with highly directive prompts to do so). The fact that this is a deliberate part of the training shows just how prone these technologies are to leaning on what they have read. In a recent Financial Times interview, Professor Emily Bender, coiner of the term stochastic parrots and co-author of both the academic article that brought the term to prominence and of The AI Con, is quoted as calling LLMs “plagiarism machines”.
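
The judgement does not describe how that limit is enforced in training, so the following is only a sketch of the kind of check involved – the function name and the sliding-window approach are my own assumptions, not Meta’s documented method: collect every run of 51 consecutive words from a source and flag any such run that reappears verbatim in the model’s output.

```python
def has_long_verbatim_run(output: str, source: str, limit: int = 50) -> bool:
    """Return True if `output` contains more than `limit` consecutive
    words copied verbatim from `source`.

    Sketch only: a sliding window collects every (limit + 1)-word run
    from the source and checks whether any run of the same length in
    the output matches one of them.
    """
    window = limit + 1  # "more than 50 words" means a run of 51 or more
    src_words, out_words = source.split(), output.split()
    # Every 51-word run that appears in the source...
    source_runs = {
        tuple(src_words[i : i + window])
        for i in range(len(src_words) - window + 1)
    }
    # ...checked against every 51-word run in the output.
    return any(
        tuple(out_words[i : i + window]) in source_runs
        for i in range(len(out_words) - window + 1)
    )
```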

I have to admit, as may be apparent from my recent reading habits, that I am an AI sceptic. I suspect that we will look back on this period with puzzlement, and wonder why we threw colossal amounts of computing power – and colossal levels of energy in our carbon-constrained world – at jobs that human brains are better at. AI is neither artificial nor intelligent: it isn’t artificial because it depends on human creativity in the training, and it also depends on significant, horrible labour (typically cheap precarious labour in emerging economies) in cleansing the models of the filth that they produce because they have been trained on, among other things, the global sewer that is the Internet. It isn’t intelligent, it’s just reproducing others’ language patterns based on statistics, “haphazardly stitching together sequences of linguistic forms it has observed in its vast training data…without any reference to meaning” as the stochastic parrots paper put it. As Bender told the FT, we are “imagining a mind behind the text…the understanding is all on our end”. There will no doubt be jobs that AI technologies are useful for, but like any human tool they are tailored to their task, and not a general purpose vehicle for all activity. Currently we have a hammer and are making the mistake of seeing everything as a nail.

As a result, I suspect that much of the billions being deployed in AI currently will turn out to have been wasted. I should admit also that my view may be coloured by the fact that I entered the investment world exactly at the time of the dotcom bubble. While I avoided losing money in the dotcom bust, I also missed out on investment gains as that bubble inflated.

But this is a blog on fairness, not AI cynicism. The Kadrey decision did not conclude that Meta’s actions were fair, only that the copyright holders had failed to deploy the arguments that might have shown how unfair the use of their materials was. This will clearly not be the last such case, and while the AI businesses will continue to deploy some of their investors’ millions into their defence, Judge Chhabria’s legal conclusions suggest they will have a challenging time winning cases argued on the right basis.

Rather than finding that Meta’s use was fair, the Kadrey decision is highly suggestive that AI is not fair in its use and abuse of copyright materials. That feels right: fairness should always tend to rebalance power away from those with billions towards those of whom they take uncompensated advantage.

See also: Learning from the stochastic parrots
Amazon resurrects the worst of the industrial revolution
A just AI transition?

I am happy to confirm as ever that the Sense of Fairness blog is a purely personal endeavour.

Kadrey v Meta, Case No. 23-cv-03417-VC, Summary Judgement 25 June 2025 (Docket Nos 482, 501)

The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, Emily Bender, Alex Hanna, Bodley Head, 2025

AI Snake Oil: What Artificial Intelligence Can Do, What it Can’t, and How to Tell the Difference, Arvind Narayanan, Sayash Kapoor, Princeton University Press, 2024

Meta seeks $29 billion from private credit giants to fund AI data centres, Eric Platt, Oliver Barnes, Hannah Murphy, Financial Times, 27 June 2025

Nvidia insiders cash out $1 billion worth of shares, Michael Acton, Patrick Temple-West, Financial Times, 29 June 2025

The Copyright Act, 17 USC

AI sceptic Emily Bender: ‘The emperor has no clothes’, George Hammond, Financial Times, 20 June 2025

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, Emily Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell, Proceedings of FAccT 2021

Finding fairness in health insurance

‘Beyond the regulatory boundary, “fairness” can be seen as an opportunity to generate value to both the enterprise and its wider community. Fairness frameworks can be aligned with corporate and brand values as part of the broader enterprise strategic and risk management framework.’

While the starting point for a recent report on fairness in health insurance is the regulatory consumer duty recently put in place by the UK’s Financial Conduct Authority (FCA), it’s clear that the paper is more interested in business risk and opportunity than in mere regulation. As this blog has long found, a remarkable number of the challenges that modern business faces can best be viewed through the lens of fairness – and many opportunities can be uncovered by deploying that lens.

From those smart people at Milliman, the report, Fairness in UK health insurance: Developing a framework and best practices in health insurance, covers the full range of relevant issues. These include: fairness in the delivery of consumer relations, especially the treatment of vulnerable customers; fairness in the application of new technologies, including AI and algorithms; and considerations of structural biases in healthcare.

While the report is narrowly focused on health insurance, it naturally considers many of the wider fairness issues that apply in the field of health. Foremost amongst these is the sterling work of the wonderful Sir Michael Marmot (perhaps most accessibly in The Health Gap). Marmot’s consideration of the social determinants of health is reflected in this brief segment of the report:

‘With greater focus on protected characteristics, there has been a move to greater use of “lifestyle” factors as personal “choices.”
‘Certainly, smoking is widely accepted as a choice. Possibly exercise; but are postcode, income, occupation, education, children? Whilst at some point they may have been “choices”, at later points in life they may be fixed rather than changeable (or controllable).’

Marmot shows that many of these issues are not choices at all, but are the outcomes of inequalities in wider society. In brief, poor housing, work, education and income are all among the clearest determinants of poor future health outcomes. Few of these are chosen, particularly in economies where meritocracy is failing. While people in poverty are unlikely to be accessing health insurance, Marmot’s analysis raises questions about how fair it might be to consider some of these lifestyle issues in the pricing of such insurance.

The use of technology in relation to healthcare is also a significant feature of the Milliman work. In particular, the paper argues that: “Greater personalised medicine and healthcare are likely to mean that trust, privacy and fairness are increasingly necessary parts of ensuring good customer outcomes.” Further, it is not enough that fairness be done, it must be demonstrable that it has been delivered in practice: “Risk-based pricing and underwriting approaches should be auditable with clear accountability and robust governance of decision-making processes.” It is certainly not enough to assume that AI or algorithms will generate fairness – as this blog has found previously, these technologies can encode and replicate pre-existing unfairnesses.

But it is in dealing with the FCA’s consumer duty requirements that the paper breaks the most new ground. Given that these requirements are still new, we have not yet seen best practices developed. The paper is therefore one of the earliest detailed expositions that I have seen of how the duty might bite in practice, and what it may require of business. The Milliman summary of the application of the duty to health insurance seems clear and wholly appropriate to me:

“For UK health insurers this combines providing fair customer treatment and clear information with products that genuinely meet their needs. Particular attention is required to product design and suitability, customer communication, customer support and providing “fair value” as well as systems to track performance and make adjustments. There is also a need to give special attention to vulnerable customers.”

The paper later considers some of the detailed best practices that seem likely to be required under the duty, including “Being transparent with customers, in particular with terms and conditions around the inclusion or exclusion of pre-existing medical conditions, any moratorium and exclusion of any specific types of treatments (often highly complex, severe conditions)” and “Being clear and signposting these policies using laymen terms in the policy documentation so there is no ambiguity.”

And there is a particular need for fairness in the area of health insurance, and so a particular need for care regarding the consumer duty. As the paper points out: “Almost all customers making health claims are vulnerable to some extent, with particular efforts required to meet the needs of the most vulnerable.”

Reinsurance firm Pacific Life Re was also recently inspired by the advent of the FCA consumer duty regime to consider the uneven distribution of insurance products across different communities. But its response is a rather more bluntly commercial one: “Understanding where communities are underserved, both within your own business, and that of the industry, offers an opportunity.” This has echoes of the ‘fortune at the bottom of the pyramid’ thinking that seems somewhat to have slipped from business thought. But as the work of Citizens Advice continues to show, accessibility and pricing of various insurance products are not evenly distributed across consumer groups.

The Milliman paper also draws a fascinating analogy to the independent governance committees that are in place in the pensions provided by insurance companies. These IGCs have responsibility for overseeing the customer experience and defending consumer interests, with particular attention to value for money. The paper suggests that there is something in this model that might be worth extending to health insurance: “We consider that these efforts can be a best practice governance framework for the health insurance sector and help tap into consumer groups’ understanding of “fairness” and how their expectations evolve over time.” IGCs are consumer champions in pensions – do we need something similar for other products?

For some, the very availability of commercial health insurance in the UK is itself a symbol of unfairness. Where there is (at least in theory) universal provision through our National Health Service, health insurance can often be seen as jumping the queues which seem the main form of rationing of that provision (and which can make its universality seem theoretical). But we have to recognise the reality of the use of health insurance – not least as a standard employee benefit from companies keen that their staff stay healthy. It is certainly better that it is deployed fairly than not. The Milliman paper provides a pathfinder for that, as well as useful insights into the consumer duty more broadly.

See also: Some thoughts on Rethink: build back fairer
Meritocracy’s unfair
FCA unpacks fairness: the Consumer Duty
The Consumer Duty II – the FCA further unpacks fairness
The failures of algorithmic fairness

I am happy to confirm as ever that the Sense of Fairness blog is a purely personal endeavour.

Fairness in UK health insurance: Developing a framework and best practices in health insurance, Milliman, February 2025

The Health Gap: the Challenge of an Unequal World, Sir Michael Marmot, Bloomsbury, 2015

Build Back Fairer: The Covid-19 Marmot Review, Michael Marmot, Jessica Allen, Peter Goldblatt, Eleanor Herd, Joana Morrison. The Health Foundation, December 2020

Beneath the Surface: Serving the Underserved, Pacific Life Re, 2024

Fortune at the Bottom of the Pyramid: Eradicating Poverty through Profits, CK Prahalad, Wharton School Publishing, 2004

Discriminatory pricing: Exploring the ‘ethnicity penalty’ in the insurance market, Citizens Advice, 2022

A just AI transition?

Even given my cynicism regarding the current hype about artificial intelligence (AI)*, I have to admit that it’s very clear this new technology will transform the world of work. The societal excitement about ChatGPT and other large language models (LLMs) has been matched by corporate excitement. Companies across the world are experimenting broadly, many of them keen to deploy this as a cost-saving tool.

Of course, as with every technology shift, cost-saving comes in the form of replacing people with machinery. Efficiency means being able to do things more quickly with less human intervention. If the current experiments with AI deliver, perhaps companies will redeploy those humans to other work. More likely they will remove those people, and their costs, from their business.

That’s how business, and economies as a whole, operate: moving to more efficient ways of delivering what customers and society want, to enable higher profits or simply to enable companies to compete with rivals which are also trying to reduce their costs. On the whole, this is good for economies too, as more efficiency allows national resources to be deployed to where they deliver most value.

But that redeployment takes time, and technology transitions are painful processes, for individuals and for society. Discussions of efficiency, cost savings or redeploying resources divorce us from the very real human and emotional impacts of these changes, which are of individuals losing their jobs and livelihoods, and subsequently struggling for money and self-esteem. Even where a technological shift does create new opportunities (which has been the case with every such transition previously and so seems likely once again), that takes time – time in which individuals feel unanchored, unvalued, and perhaps reach an age where further employment is unavailable to them. That can serve to destabilise society further. We shouldn’t let the economics blind us to the personal and emotional.

There is much talk in sustainable investment circles of the need for a just transition (sometimes a fair and just transition) to a decarbonised world, ensuring that care is taken to protect and support those individuals whose jobs are impacted by the dramatic shift in economic activity that must come as the world finally faces up to the realities of climate change. There is also likely to need to be a fair and just transition to an AI-enabled world.

Recent work from the International Monetary Fund (IMF) begins to open a window on this challenge, building on the sentiment of managing director Kristalina Georgieva in a blog from a year ago called AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity. The organisation is developing approaches to consider which jobs – and which economies overall – will be impacted by the advent of AI.

Most recently, IMF staff considered impacts in Asia. A blog this month, How Artificial Intelligence Will Affect Asia’s Economies, tries to map this out in more detail, based on analysis of the breakdown of jobs in each economy. The blog reflects a deeper discussion within the analytical note attached to the Fund’s most recent Asia and Pacific Regional Economic Outlook. This analysis suggests a greater exposure to AI impacts in the region in what IMF jargon terms advanced economies, while emerging economies are likely to face lower impacts. However, they also seek to assess whether those impacts will be positive or negative for jobs: around half the impacts in advanced economies are where AI is complementary to the job, potentially driving economic benefits; meanwhile in emerging economies the majority of impacts are where there is much more likelihood of workers finding their jobs replaced. The economists hedge this analysis with language such as ‘low complementarity’ and ‘displacement’ of work, but the thinking is clear.
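
To illustrate the shape of that analysis – and only the shape, since the occupations, employment shares and scores below are invented for illustration, whereas the IMF builds its exposure and complementarity indices from detailed occupational data – one can classify each occupation by its AI exposure and its complementarity, then total the employment share in each bucket:

```python
# Toy illustration of exposure x complementarity analysis. All numbers
# below are invented for illustration, not IMF data.
jobs = [
    # (occupation, employment share, AI exposure 0-1, complementarity 0-1)
    ("clerical support", 0.25, 0.9, 0.2),
    ("sales",            0.20, 0.8, 0.3),
    ("managerial",       0.15, 0.7, 0.8),
    ("professional",     0.15, 0.8, 0.9),
    ("agricultural",     0.25, 0.2, 0.4),
]

def bucket(exposure: float, complementarity: float) -> str:
    """Assign an occupation to one of three broad impact buckets."""
    if exposure < 0.5:
        return "low exposure (little direct AI impact)"
    if complementarity >= 0.5:
        return "high exposure, high complementarity (likely to benefit)"
    return "high exposure, low complementarity (at risk of displacement)"

shares = {}
for _name, share, exposure, comp in jobs:
    key = bucket(exposure, comp)
    shares[key] = shares.get(key, 0.0) + share

for key, share in sorted(shares.items()):
    print(f"{key}: {share:.0%}")
```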

The language was more blunt in some earlier less detailed work, suggesting AI “could endanger 33 percent of jobs in advanced economies, 24 percent in emerging economies, and 18 percent in low-income countries”. Those conclusions look more worrying than the most recent analysis, but even the lower levels of estimated disruption are very significant.

According to the most recent analysis, there is also a gendered split in the potential impacts, again potentially exacerbating existing inequalities.

The blog reads:

“The concentration of such [complementary] jobs in Asia’s advanced economies could worsen inequality between countries over time. While about 40 percent of jobs in Singapore are rated as highly complementary to AI, the share is just 3 percent in Laos.
“AI could also increase inequality within countries. Most workers at risk of displacement in the Asia-Pacific region work in service, sales, and clerical support roles. Meanwhile, workers who are more likely to benefit from AI typically work in managerial, professional, and technician roles that already tend to be among the better paid professions.”

Georgieva was clear about the risks: “In most scenarios, AI will likely worsen overall inequality, a troubling trend that policymakers must proactively address to prevent the technology from further stoking social tensions.”

The IMF economists are increasingly clear about what needs to be done about these inequality risks. According to them, a just transition will require:

  • Effective social safety nets
  • Reskilling programmes for affected workers
  • Education and training to enable effective application of the AI opportunity, particularly for those economies where AI is currently seen to have low impacts – so that the positive benefits can be enjoyed
  • Regulation to promote ethical AI use and data protection

The IMF, in its AI Preparedness Index, suggests that there is a broad spread in the readiness of global economies for this coming wave of technological disruption.

Again, as things stand it seems that the greatest likelihood is for AI to exacerbate existing inequalities. Preparing for this major economic shift will demand fresh policies and investment. These are significant challenges for world economies, for companies as they embed AI into their workflows, and for global investors to rise to.

See also: Learning from the stochastic parrots
Amazon resurrects worst of the industrial revolution
Just transitions and gilets jaunes

*I have my doubts about each of the A and the I in artificial intelligence: calling an activity ‘artificial’ when it depends on the horrific grinding work of many people to scrub its results seems inaccurate; and calling it ‘intelligent’ seems equally wrong when it is simply a statistical exercise in the likelihood of putting one word after another – the stochastic parrots described in that prescient article (I particularly like the analogy of Emily Bender, one of the authors of that article and a professor at the University of Washington, that AI is reproducing text in the way people might if they had unrestricted access to the National Library of Thailand but without pictures or dictionaries to enable them actually to understand or translate the language). A more recent article touching on these matters is the excellent Ask me Anything! How ChatGPT got Hyped into Being, which among other things states this fundamental truth: “LLMs are not designed to represent the world. There is no understanding by the artificial agent (chatbot) of the meaning of the output it creates. It is us humans who create that meaning.” More directly, the word soups that I have been presented with by colleagues show very clearly the limits of the technology in doing anything without clear instruction and precise pre-existing materials to work with.

See also: What’s a fair use?

I am happy to confirm as ever that the Sense of Fairness blog is a purely personal endeavour.

AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity, Kristalina Georgieva, IMF, 14 January 2024

How Artificial Intelligence Will Affect Asia’s Economies, Tristan Hennig, Shujaat Khan, IMF, 5 January 2025

Asia and Pacific Regional Economic Outlook, IMF, November 2024

Thought experiment in the National Library of Thailand, Emily Bender, Medium, 25 May 2023

Ask me Anything! How ChatGPT got Hyped into Being, Jascha Bareis, 2024

Amazon resurrects worst of the industrial revolution

In a small (a very small) way, I collect 18th and 19th century company tokens. These symbolise for me the worst of the financial exploitation of the industrial revolution. It was not enough for the industrialists to ruthlessly exploit the excess availability of labour and to pay the new industrial workforces of their dark satanic mills* badly. In many cases they also paid not in money that could be spent anywhere, but in scrip – tokens that could only be spent at the company’s own store. Prices there reflected the guaranteed market, so workers were exploited all over again. These practices were progressively abolished in England from 1831 onwards by the oddly-named (at least to modern ears) Truck Acts. There is similar legislation to bar such abuses elsewhere in the world – though not everywhere.

As we know, this financial exploitation sat alongside brutal working conditions where injuries and even death were common, accepted outcomes of the industrial process. That was a lack of health and safety gone mad.

We’ve known for a while that the AI revolution depends on a similar exploitation of the health and safety of workers. The stories of the employees of Sama in Kenya, who helped train ChatGPT, are disturbing. The human training of these supposedly ‘artificial’ intelligence systems (ChatGPT is no worse in this regard than its rivals) involves individual people being exposed to the worst things that the draft forms of AI machines produce. As the machines’ training materials are the entire internet, this replicates the biases of the present and prejudices of the past, and includes all the filth that humankind has produced in recent years. The human job is to tell the AI not to produce further paedophilia, repeat racist incitement, and so on – but in order to do that, people need to read and look at truly horrific material.

Sadly, the people who did this work are not treated well. Their mental health disorders are the equivalents of the lost fingers on the floors of cotton mills. These seem not to trouble those who are making epoch-making amounts of money, and little of it enters the public discourse, so it has minimal impact on consumer use of these products.

But it turns out that such physical exploitation of people’s health isn’t all that’s going on in the current technological revolution. Amazon has revived the company scrip model. It pays some of its MTurk workforce in Amazon gift cards, and severely constrains how those gift cards can be spent so that the workers are unable to get full value from them. MTurk – Mechanical Turk in full – is the name for the distributed self-employed workers who perform tasks that help test and train much modern IT and so ensure its smooth working. The name aptly reflects the 18th century supposedly mechanical chess player, called the Turk, that toured Europe playing matches. Instead of being an automaton, the Turk only worked because in place of a machine there was a skilled human chess player crammed uncomfortably into the space under the board.

In the same way, the human work that is necessary to help train current supposedly ‘artificial’ intelligence technologies suggests there is some artifice in calling them artificial.

The DAIR Institute (Distributed AI Research Institute in full) – the grouping formed by the authors of the Stochastic Parrots paper – has launched a Data Workers’ Inquiry trying to bring forward the stories of the people who are directly involved in facilitating the current technology revolution, and who all too often are its unhappy victims. Consistent with the DAIR philosophy, this includes putting the voices of the individual workers themselves at the heart of the work, and facilitating them in telling their stories in the forms they find most comfortable and appropriate.

One of the stories discussed on the launch webinar, and on which the Inquiry has published a short paper, highlights this issue. Though its author, Alexis Chávez, is from Venezuela, the use of gift cards as payment isn’t restricted to countries where currency or sanctions issues might limit payments in real money: Chávez shows that the practice applies in (at least) Brazil, Colombia, India, Kenya, Mexico, Pakistan, and the Philippines. The paper details the convoluted processes needed for these individuals to gain value from their gift card payments, which mean that they are in effect forced to take discounts of 20-30% in order to extract value. It’s like the mark-up in the company store.
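
To put a number on that mark-up, a quick worked example (the face value here is hypothetical; the 20-30% discount range is the paper’s):

```python
# Illustrative arithmetic only: the effect of a 20-30% conversion
# discount on a gift-card payment (the $100 face value is hypothetical).
face_value = 100.00
for discount in (0.20, 0.30):
    usable = face_value * (1 - discount)
    print(f"${face_value:.0f} card at a {discount:.0%} discount "
          f"leaves ${usable:.0f} of spendable value")
```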

And it’s hard to argue with Chávez: “Even though Amazon does not see them as employees but as independent contractors, it’s our right to be paid fairly and in a useful manner.” We fondly thought the worst of the financial practices of the industrial revolution were far behind us – they should be – but unfairness clearly persists in the very human side of the supposedly ‘artificial’ intelligence business.

The second event in the Data Workers Inquiry happens this week, and Chávez himself is due to speak on August 26th.

* This phrase is from William Blake’s preface to his lengthy 1804 poem in praise of John Milton, words that are now known to us as Jerusalem. Please consider supporting the campaign to save Blake’s cottage in the West Sussex village of Felpham.

I am happy to confirm as ever that the Sense of Fairness blog remains a wholly personal endeavour.

See also: Learning from the Stochastic Parrots

Mental Health and Drug Dependency in Content Moderation, Fasica Berhane Gebrekidan, the Data Workers Inquiry, June 2024

Click Captives: The Unseen Struggle of Data Workers, Wilington Shitawa, the Data Workers Inquiry, June 2024

The African Women of Content Moderation, Botlhokwa Ranta, the Data Workers Inquiry, June 2024

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic, Billy Perrigo, Time, 18 January 2023

Data Workers Inquiry

The Distributed AI Research Institute (DAIR Institute)

The Impact of Gift Card Payments on MTurk Workers, Alexis Chávez, the Data Workers Inquiry, June 2024