It may seem like AI adoption has taken off rapidly, but there are some notable holdouts.

Emily Bender, a UW linguistics professor, and Alex Hanna, research director of the Distributed AI Research Institute and a former Google AI ethicist, want readers to take one thing away from their new book, “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want”: AI isn’t what it’s marketed to be.

Longtime collaborators, cohosts of the Mystery AI Hype Theater 3000 podcast, and vocal AI critics, Bender and Hanna want to take the hyperbole out of the conversation around AI and caution that, frankly, intelligence isn’t artificial.

Their at-times funny and irreverent stripping-down of AI to “mathy maths,” “text-extruding machines,” or, classically, “stochastic parrots” aims to get us to see automation technologies for what they are and to separate them from the hype.

This Q&A has been edited for clarity and length.

For many, the concept of AI harm can be hard to imagine. What’s a good way to ground people’s understanding that there are also negative consequences associated with AI?

Bender: I think it’s always helpful to keep the people in the frame. The narrative that this [automation] is artificial intelligence is designed to hide the people.

The people involved are everything from the programmers who made algorithmic decisions, to the people whose creative work was appropriated, even stolen, as the basis for these systems, to the people who did the data work, the content moderation, so that the outputs users see don’t contain horrific stuff.

Hanna: This term AI is not a singular thing. It’s a gloss on many different types of automation, and the thought that it’s just a tool for writing emails obscures how broadly the term is being leveraged. These systems are used in everything from incarceration to hiring decisions to outputting synthetic media.

Just like fast fashion or chocolate production, a whole host of people are involved in maintaining this supply chain.

For that AI-generated email or text, for this difficult thing I don’t want to write, know that there’s a whole ecosystem around it that’s affecting people: labor-wise, environmentally, and in other ways.

The book highlights countless ways that AI is extractive and can make human life worse. Why do you think so many are singing the gospel of AI and embracing such tools?

Bender: It’s interesting that you use the phrase singing the gospel.

There are a lot of people who have drawn connections between talk of artificial general intelligence, especially, and Christian eschatology, which is the idea that there is something we could build that could save us.

That could save us from everything from the dread of menial tasks to major problems we’re facing, like the climate crisis, to just the experience of not having answers available. Of course, none of that actually plays out. We do not live in a world where every question has an answer.

The idea is that if we just throw enough compute and data at it, and that’s where the extractivism comes in, we’d be relieved of all that and find ourselves in a situation where there is an answer to every question at our fingertips.

Hanna: There’s a desire for computing to step in and really wow us, and now we have AI for everything from social services to healthcare to making art. Part of it is a desire to have a more “objective” type of computational being.

Lately, a lot has been made of ‘the crisis of social capital,’ ‘the crisis of masculinity,’ the crisis of insert-your-favorite-social-phenomenon-here.

This goes back to Robert Putnam’s book “Bowling Alone” and a few weird results in the 2006 General Social Survey, which said people have fewer close friends than they used to.

There’s this general thesis that people are lonelier, and that may be true, but AI is presented as a panacea for those social ills.

There are a lot of harder things we need to focus on, like rebuilding social infrastructure, rebuilding third spaces, fortifying our schools, and rebuilding urban infrastructure. But if we have a technology that seems to do all of those things, then people get really excited about it.

Language is also a large focus of the book, and you codified the doomer and booster camps. Can you say more about these groups? What about readers who won’t recognize themselves in either of them?

Bender: The booster versus doomer thing is really constricting.

This discourse is set up as a one-dimensional scale, where on one end you have the doomers saying, ‘AI is a thing and it’s going to kill us all!’ and on the other end the AI boosters saying, ‘AI is a thing and it’s going to solve all of our problems!’ The way they speak often makes it sound like that is the full range of options.

So you’re at one end or the other, or somewhere in the middle, and the point we make is that actually, no, that’s a really small space of possibilities. It’s two sides of the same coin, both predicated on ‘AI is a thing and it is super powerful,’ and that is ungrounded nonsense.

Most of the space of possibilities, including the space that we inhabit, is outside that.

Hanna: We hope the book also gives people on that booster-doomer scale a way out of that thinking.

This can be a mechanism to help people change their minds and consider a perspective that they might not have considered. Because we’re in a situation where the AI hype is so — this is a term I learned from Emily — “thick on the ground”, that it’s hard to really see things for what they are.

You offer many steps that people can take to resist the pervasive use of AI. What can people do when their workplace, or the online services they use, has baked AI functionality into everyday processes?

Bender: In all cases when you’re talking about refusal, both individual and collective, it’s helpful to go back to values and why we’re doing what we’re doing.

People can ask a series of questions about any technology. It is important to remember that you have the agency to ask those questions.

The inevitability narrative is basically an attempt to steal that agency and say, “It is all powerful, or it will be soon, so just go along with it, and you’re not in a position to understand anyway.” In fact, we are all in a position to understand what it is and what values are involved in it.

Then you can say, ‘Okay, you’re proposing to use some automation here. How does that fit with our purposes and our values, and how do we know how well it fits? Where is the evaluation?’ Too much of this is ‘Oh, just believe us.’

There are instances where people with a deep technical understanding of AI, motives notwithstanding, still overstate and misunderstand what AI is and what it can do. How should laypeople with a more casual understanding think and talk about AI?

Bender: The first step is always disaggregating AI; it’s not one thing.

So what specifically is being automated? Then, be very skeptical of any claims, because the people who are selling this are wholeheartedly embracing the magical sound of artificial intelligence and are very often extremely cagey, at best, about what the system actually does, what the training data was, and how it works.

Hanna: There’s a tendency, partially economic, partially just because some people are so deep in the sauce, to not really see the forest for the trees.

AI researchers are already primed to see these things in a certain light. They’re thinking primarily about engineering breakthroughs, more efficient ways to learn parameters, or ways to do XYZ task within their field, but they’re not the people focused on specialized fields like nursing, for instance.

People should take pride in their expertise and be able to use it to combat AI hype. One great example of this is National Nurses United, which wrote explainers that disaggregated the AI label into biometric surveillance, passive listening, and sensors in the clinicians’ office, and laid out what each was doing to nursing practice. So, not buying into hype and leaning into one’s own expertise is a really powerful method here.

In your respective circles, what has been the reaction to the book thus far?

Bender: People are excited. Where I sit in linguistics is really an important angle for understanding why the synthetic text-extruding machines in particular are so compelling, and the linguists I speak to are excited to see our field playing this role at this moment.

Hanna: I’ve had great reactions. A lot of my friends are software developers or in related fields, since I did my undergrad in computer science, and a lot of my friends growing up were tech nerds. Almost to a T, all of them are anti-AI.

They say, ‘I don’t want Copilot,’ ‘I don’t want this stuff writing my code,’ ‘I’m really sick of the hype around this,’ and I thought that was the most surprising and maybe the most exciting part of this.

People who do technical jobs, where they’re promised the most speed or productivity improvements, are some of the people who are most opposed to the introduction of these tools in their work.
