The situation imperils both the rights of AI workers at those companies and the quality of research that is shared with the public, said Gebru, speaking at the recent EmTech MIT conference hosted by MIT Technology Review.

"It's all the incentive structures that are not in place for you to challenge the status quo," she said.

Gebru was forced out at Google last December (Gebru said she was fired, while Google said she resigned) after co-writing a paper about the risks of large AI language models, such as environmental impacts and the difficulty of finding embedded biases. Google's search engine runs on such a large language model.

Citing concerns, Google told Gebru to retract the paper from a conference or remove her name and the names of other Google researchers, according to The New York Times. Gebru refused to do so without a fuller explanation from Google, which led to Google announcing her departure.

During her recent talk, Gebru highlighted what she views as the labor rights concerns of AI workers, how to protect them, and why academia isn't always a better route for researchers. Ultimately, she said, the goal is better and more equitable artificial intelligence.

"The moment you push a little hard, you're out"

Gebru's research centers on unintended negative impacts of artificial intelligence. A paper she co-authored with MIT Media Lab researcher Joy Buolamwini explored bias in facial recognition algorithms.

After joining Google in 2018, "I had issues from the very beginning," Gebru said. She said some people had doubts that she would be able to change a company as large as Google.

"I was thinking, 'Okay, maybe I can carve out a small piece … that is safe for people in marginalized groups,'" she said. "What I learned is that it's impossible, because the moment you push a little hard, you're out. So if you survive, it's because maybe you're not poking … at a thing that they find super important."

It's important to hold tech companies accountable from the outside, Gebru said. "We can't have the current dynamic that we have and expect any sort of nonpropaganda tech to come out of tech companies," she said. "Because when you start censoring research, then that's what happens, right? The papers that come out end up being more like propaganda."

Problems extend outside of Big Tech

Since leaving Google, Gebru has been working to develop an independent research institute. While many AI researchers work in academia, Gebru said that in her experience, that avenue poses its own concerns related to gatekeeping, harassment, and an incentive structure that doesn't reward long-term research.

There are also concerns about tech companies funding AI research at academic institutions. Gebru cited The Grey Hoodie Project, a research paper by Mohamed Abdalla of the University of Toronto and Moustafa Abdalla of Harvard Medical School. The researchers compared the way Big Tech (large technology companies like Google, Amazon, and Facebook) is funding and leading AI research with how big tobacco companies funded research in an effort to dispel concerns about the health effects of smoking.