The G7 has just agreed on a (voluntary) code of conduct for AI companies to abide by, as governments seek to minimize the harms and risks created by AI systems. And later this week, the UK will be full of AI movers and shakers attending the government’s AI Safety Summit, an effort to come up with global rules on AI safety.

Together, these events suggest that the narrative pushed by Silicon Valley about the “existential risk” posed by AI is becoming increasingly dominant in public discourse.

This is concerning, because focusing on fixing hypothetical harms that may emerge in the future takes attention away from the very real harms AI is causing today. “Existing AI systems that cause demonstrated harms are more dangerous than hypothetical ‘sentient’ AI systems because they are real,” writes Joy Buolamwini, a renowned AI researcher and activist, in her new memoir Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Read more of her thoughts in an excerpt from her book, out tomorrow.

I had the pleasure of talking with Buolamwini about her life story and what concerns her in AI today. Buolamwini is an influential voice in the field. Her research on bias in facial recognition systems made companies such as IBM, Google, and Microsoft change their systems and back away from selling their technology to law enforcement. 

Now, Buolamwini has a new target in sight. She is calling for a radical rethink of how AI systems are built, starting with more ethical, consensual data collection practices. “What concerns me is we’re giving so many companies a free pass, or we’re applauding the innovation while turning our head [away from the harms],” Buolamwini told me. Read my interview with her.

While Buolamwini’s story is in many ways an inspirational tale, it is also a warning. Buolamwini has been calling out AI harms for the better part of a decade, and she has done some impressive things to bring the topic into public consciousness. What really struck me was the toll speaking up has taken on her. In the book, she describes having to check herself into the emergency room for severe exhaustion after trying to do too many things at once—pursuing advocacy, founding her nonprofit organization, the Algorithmic Justice League, attending congressional hearings, and writing her PhD dissertation at MIT.

She is not alone. Buolamwini’s experience tracks with a piece I wrote almost exactly a year ago about how responsible AI has a burnout problem.  

Partly thanks to researchers like Buolamwini, tech companies face more public scrutiny over their AI systems. Companies have realized they need responsible AI teams to ensure that their products are developed in ways that mitigate potential harm. These teams evaluate how our lives, societies, and political systems are affected by the way these systems are designed, developed, and deployed.
