“Early Thoughts on Generative AI” Prepared Remarks of Commissioner Alvaro M. Bedoya, Federal Trade Commission Before the International Association of Privacy Professionals
Last, let’s turn to the idea that the creators of this technology are “a little bit scared” of it – to quote the CEO of the company behind ChatGPT. Personally, and I say this with respect – I do not see the existential threats to our society that others do. Yet when you combine these statements with the unpredictability and inexplicability of these models, the sum total is something that we as consumer protection authorities have never reckoned with.
Let me put it this way. When the iPhone was first released, it was many things: a phone, a camera, a web browser, an email client, a calendar, and more.
Imagine launching the iPhone – having 100 million people using it – but not knowing what it can do or why it can do those things, all while claiming to be frightened of it.
That is what we’re facing today. So we need to think quickly about how these new dynamics map onto consumer protection law.
The Adult (Human) in the Room
And so, I’ll offer four observations that double as notes of caution.
- First, generative AI is regulated.
- Second, much of that law is focused on impacts to regular people. Not experts, regular people.
- Third, some of that law demands explanations. “Unpredictability” is rarely a defense.
- And fourth, looking ahead, regulators and society at large will need companies to do much more to be transparent and accountable.
Let’s start with that first point. There is a powerful myth out there that “AI is unregulated.” You see it pop up in New York Times op-ed columns, in civil society advocacy, and in scholarship. It has a powerful intuitive appeal — it just sounds right. How could these mysterious new technologies be regulated under our dusty old laws?
If you’ve heard this, or even said it, please take a step back and ask: Who does this idea help? It doesn’t help consumers, who feel increasingly helpless and lost. It doesn’t help most companies. It certainly doesn’t help privacy professionals like you, who now have to deal with investors and staff who think they’re operating in a law-free zone.
I think that this idea that “AI is unregulated” helps that small subset of companies who are uninterested in compliance. And we’ve heard similar lines before. “We’re not a taxi company, we’re a tech company.” “We’re not a hotel company, we’re a tech company.” These statements were usually followed by claims that state or local regulations could not apply to said companies.
The reality is, AI is regulated. Just a few examples:
- Unfair and deceptive trade practices laws apply to AI. At the FTC our core Section 5 jurisdiction extends to companies making, selling, or using AI. If a company makes a deceptive claim using (or about) AI, that company can be held accountable. If a company injures consumers in a way that satisfies our test for unfairness when using or releasing AI, that company can be held accountable.
- Civil rights laws apply to AI. If you’re a creditor, look to the Equal Credit Opportunity Act. If you’re an employer, look to Title VII of the Civil Rights Act. If you’re a housing provider, look to the Fair Housing Act.
- Tort and product liability laws apply to AI. There is no AI carve-out to product liability statutes, nor is there an AI carve-out to common law causes of action.
AI is regulated. Do I support stronger statutory protections? Absolutely. But AI does not, today, exist in a law-free environment.
Here’s the second thing. There’s a back-and-forth that’s playing out in the popular press. There will be a wave of breathless coverage – and then there will be a very dry response from technical experts, stressing that no, these machines are not sentient, they’re just mimicking stories and patterns they’ve been trained on. No, they are not emoting, they are just echoing the vast quantities of human speech that they have analyzed.
I worry that this debate may obscure the point. Because the law doesn’t turn on how a trained expert reacts to a technology – it turns on how regular people understand it.
At the FTC, for example, when we evaluate whether a statement is deceptive, we ask what a reasonable person would think of it. When analyzing unfairness, we ask whether a reasonable person could avoid the harms in question. In tort law, we have the “eggshell” plaintiff doctrine: If your victim is particularly susceptible to an injury you caused, that is on you.
The American Academy of Pediatrics has declared a national emergency in child and adolescent mental health. The Surgeon General says that we are going through an epidemic of loneliness.
I urge companies to think twice before they deploy a product that is designed in a way that may lead people to feel they have a trusted relationship with it or think that it is a real person. I urge companies to think hard about how their technology will affect people’s mental health – particularly kids and teenagers.
Third, I want to note that the law sometimes demands explanation – and that the inexplicability or unpredictability of a product is rarely a legally cognizable defense.
What do I mean by that?
Looking solely at laws that the FTC enforces, both the Fair Credit Reporting Act and the Equal Credit Opportunity Act require explanations for certain kinds of adverse decisions.
Under our Section 5 authority, we have frequently brought actions against companies for failing to take reasonable measures to prevent reasonably foreseeable risks. And the Commission has historically not responded well to the idea that a company is not responsible for its product because that product is a “black box” that is unintelligible or difficult to test.
I urge companies creating or using AI products for important eligibility decisions to consider closely that the ability to explain your product, and to predict the risks it will generate, may be critical to your ability to comply with the law.
Fourth and last, I want to end on a call for maximum transparency and accountability.
I recently saw that the technical report accompanying GPT-4 rejected the need to be transparent about the building blocks of the technology. It says:
“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”
This is a mistake. External researchers, civil society, and government need to be involved in analyzing and stress testing these models; it is difficult to see how that can be done with this kind of opacity.
Focusing on Threats Today
I keep thinking about the now-infamous survey conducted by Oxford University last year that found that the median expert gave a 5% chance that the long-run effect of advanced AI on humanity would be, and I quote, “extremely bad (e.g. human extinction).” I also keep thinking about the GPT-4 white paper’s focus on “safety.”
I’m worried that inchoate ideas of existential threats will make us – at least in the short and medium term – much less “safe.” I’m worried that these ideas are being used as a reason to provide less and less transparency. And I worry that they might distract us from all the ways that AI is already being used in our society today.
Automated systems new and old are routinely used today to decide who to parole, who to fire, who to hire, who deserves housing, who deserves a loan, who to treat in a hospital – and who to send home. These are the decisions that concern me the most. And I think we should focus on them.