December 5, 2020
Google AI researcher's exit sparks ethics, bias concerns
Prominent artificial intelligence scholar Timnit Gebru helped improve Google's public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.
But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments—until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.
Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident "unprecedented research censorship" and faulting the company for racism and defensiveness.
The furor over Gebru's abrupt departure is the latest incident raising questions about whether Google has strayed so far from its original "Don't Be Evil" motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.
And it's exposed concerns beyond Google about whether showy efforts at ethical AI—ranging from a White House executive order this week to ethics review teams set up throughout the tech industry—are of little use when their conclusions might threaten profits or national interests.
Gebru has been a star in the AI ethics world who spent her early tech career working on Apple products and got her doctorate studying computer vision at the Stanford Artificial Intelligence Laboratory.
She's co-founder of the group Black in AI, which promotes Black employment and leadership in the field. She's known for a landmark 2018 study that found racial and gender bias in facial recognition software.
Gebru had recently been working on a paper examining the risks of developing computer systems that analyze huge databases of human language and use that to create their own human-like text. The paper, a copy of which was shown to The Associated Press, mentions Google's own new technology, used in its search business, as well as those developed by others.
Besides flagging the potential dangers of bias, the paper also cited the environmental cost of chugging so much energy to run the models—an important issue at a company that brags about its commitment to being carbon neutral since 2007 as it strives to become even greener.
Google managers had concerns about omissions in the work and its timing, and wanted the names of Google employees taken off the study, but Gebru objected, according to an exchange of emails shared with the AP.
"She deserves more than Google knew how to give, and now she is an all-star free agent who will continue to transform the tech industry," Joy Buolamwini, who co-authored the 2018 facial recognition study with Gebru, said in an email Friday.
How Google will handle its AI ethics initiative and the internal dissent sparked by Gebru's exit is one of a number of problems facing the company heading into the new year.
At the same time Gebru was on her way out, the National Labor Relations Board on Wednesday cast another spotlight on Google's workplace. In a complaint, the NLRB accused the company of spying on employees during a 2019 effort to organize a union before the company fired two activist workers for engaging in activities allowed under U.S. law. Google has denied the allegations in the case, which is scheduled for an April hearing.
Google has also been cast as a profit-mongering bully by the U.S. Justice Department in an antitrust lawsuit alleging the company has been illegally abusing the power of its dominant search engine and other popular digital services to stifle competition. The company also denies any wrongdoing in that legal battle, which may drag on for years.
© 2020 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.