How they did it: The team asked language models where they stand on various topics, such as feminism and democracy, and used the answers to plot the models on a political compass. They then tested whether retraining the models on even more politically biased training data changed their behavior and their ability to detect hate speech and misinformation (it did).

Why it matters: As AI language models are rolled out into products and services used by millions of people, understanding their underlying political assumptions could not be more important, because they have the potential to cause real harm. A chatbot offering health-care advice might refuse to answer questions about abortion or contraception, for example. Read the full story

—Melissa Heikkilä

Read next: AI language models have recently become mixed up in the US culture wars, with some calling for developers to create unbiased, purely fact-based AI chatbots. In her weekly newsletter all about AI, The Algorithm, Melissa delves into why it’s a nice concept—but technically impossible to build. Read it to find out more, and if you don’t already, sign up to receive it in your inbox every Monday. 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A woman was wrongfully arrested after a false face recognition match
It’s notable that every person this is known to have happened to has been Black. (NYT $)
+ The movement to limit face recognition tech might finally get a win. (MIT Technology Review)
 
2 AI startups are fighting dirty 😈
We’re talking fake names, competitors posing as customers, and even bombing Zoom calls. (NYT $)
+ It’s all starting to look a lot like a bubble. (WP $) 
 
3 A vote in San Francisco could change the future of driverless cars 
All eyes are on whether the state board will approve a huge expansion of autonomous taxis on Thursday. (NBC)
+ Big tech companies are struggling to win over local residents and public officials. (WSJ $)
 
4 Is Texas’ electricity grid going to be able to handle electric vehicles?
There are reasons to be optimistic, not just for that state but for the US as a whole. (The Atlantic $)
 
5 Criminals are enthusiastic early adopters of AI tools
On the dark web, they claim to have created two large language models that can assist with illegal activities. (Wired $)
+ Criminals are also using AI-generated books to scam people. (NYT $)
+ We are hurtling toward a glitchy, spammy, scammy, AI-powered internet. (MIT Technology Review)
 
6 The era of plentiful cheap stuff may be coming to an end 
Maybe that’s not a wholly bad thing, frankly, for the sake of the planet. (WSJ $)
 
7 People are keen to recreate Black Twitter elsewhere
There’s been a giant exodus from the site. But where should folks go? (WP $) 
 
8 Big cities need to change
To thrive, they need to reinvent themselves to be more than just places where people work. (Vox)
+ What cities need now. (MIT Technology Review)
 
9 WhatsApp is working on 32-person voice chats
Sounds like pure chaos! (The Verge)
 
10 Even Zoom is making employees go back into the office
Ironic, perhaps. But not that surprising. (Quartz $)

Quote of the day
