Last week, Google announced that it is creating a new external ethics board to guide its “responsible development of AI.” On the face of it, this seemed like an admirable move, but the company was hit with immediate criticism.
Researchers from Google, Microsoft, Facebook, and top universities objected to the board’s inclusion of Kay Coles James, the president of right-wing think tank The Heritage Foundation. They pointed out that James and her organization campaign against anti-discrimination laws for LGBTQ groups and sponsor climate change denial, making her unfit to offer ethical advice to the world’s most powerful AI company. An open petition demanding James’ removal was launched (it currently has more than 1,700 signatures), and as part of the backlash, one member of the newly formed board resigned.
Google has yet to say anything about all of this (it didn’t respond to multiple requests for comment from The Verge), but to many in the AI community, it’s a clear example of Big Tech’s inability to deal honestly and openly with the ethics of its work.
ETHICS BOARDS AND CHARTERS AREN’T CHANGING HOW COMPANIES OPERATE
This might come as a surprise if you’ve followed recent debates over AI ethics. In the past few years, tech companies certainly seem to have embraced ethical self-scrutiny: establishing ethics boards, writing ethics charters, and sponsoring research in topics like algorithmic bias. But are these boards and charters doing anything? Are they changing how these companies work or holding them accountable in any meaningful way?
Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just “ethics washing,” a strategy to avoid government regulation. When researchers uncover new ways for technology to harm marginalized groups or infringe on civil liberties, tech companies can point to their boards and charters and say, “Look, we’re doing something.” It deflects criticism, and because the boards lack any power, it means the companies don’t change.
“Most of the ethics principles developed now lack any institutional framework,” Wagner tells The Verge. “They’re non-binding. This makes it very easy for companies to look [at ethical issues] and go, ‘That’s important,’ but continue with whatever it is they were doing beforehand.”
Think of it like Twitter CEO Jack Dorsey’s repeated assurances that he’s thinking hard about the platform’s problems with abuse, harassment, and neo-Nazis. He keeps thinking, and things on the site stay pretty much the same. At a certain point, all of this contemplation looks like a substitute for actual policy change.