In recent years, intelligent autonomous weapons—weapons that can select and fire upon targets without any human input—have become a matter of serious concern. Giving an AI system the power to decide matters of life and death would fundamentally change warfare.

But intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. Even the “autonomous” drones and ships fielded by the US and other powers are used under close human supervision. 

However, these systems have become sophisticated enough to raise novel questions—ones that are trickier to answer than the well-covered wrangles over killer robots. What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill? Read the full story.

—Arthur Holland Michel

If you’re interested in reading more about AI’s role in making life-or-death decisions, check out my colleague Will Douglas Heaven’s piece exploring the relationship between mortality and automation.

What’s changed in the US since the breakthrough climate bill passed a year ago?

When President Biden signed the Inflation Reduction Act into law one year ago this week, he set aside an estimated $369 billion to fight climate change. Ever since, researchers and others working on climate have been speculating about what that vast sum of money will actually mean.
