The AI Act was conceived as a landmark bill that would mitigate harm in areas where using AI poses the biggest risk to fundamental rights, such as health care, education, border surveillance, and public services, as well as ban uses that pose an “unacceptable risk.” 

“High risk” AI systems will have to adhere to strict rules that require, for example, risk-mitigation systems, high-quality data sets, better documentation, and human oversight. The vast majority of AI uses, such as recommender systems and spam filters, will get a free pass. 

The AI Act is a big deal: it will introduce important rules and enforcement mechanisms to a hugely influential sector that is currently a Wild West. 

Here are MIT Technology Review’s key takeaways: 

1. The AI Act ushers in important, binding rules on transparency and ethics

Tech companies love to talk about how committed they are to AI ethics. But when it comes to concrete measures, the conversation dries up. And anyway, actions speak louder than words. Responsible AI teams are often the first to see cuts during layoffs, and in truth, tech companies can change their AI ethics policies at any time. OpenAI, for example, started off as an “open” AI research lab before closing off public access to its research to protect its competitive advantage, just like every other AI startup. 

The AI Act will change that. The regulation imposes legally binding rules requiring tech companies to notify people when they are interacting with a chatbot or with biometric categorization or emotion recognition systems. It’ll also require them to label deepfakes and AI-generated content, and to design systems in such a way that AI-generated media can be detected. This is a step beyond the voluntary commitments that leading AI companies made to the White House to simply develop AI provenance tools, such as watermarking.
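
To make the detection requirement concrete, here is a minimal, hypothetical sketch of one provenance technique from the research literature: a statistical text watermark, in which the generator softly favors a pseudorandom “green list” of tokens, so a detector can later check whether green tokens appear more often than chance. Every detail below (the vocabulary size, the hashing scheme, the function names) is an illustrative assumption, not a method the Act or any particular company prescribes.

```python
import hashlib
import random

# Hypothetical constants, for illustration only.
VOCAB_SIZE = 50_000      # assumed size of the model's token vocabulary
GREEN_FRACTION = 0.5     # fraction of the vocabulary marked "green" at each step

def green_list(prev_token: int) -> set:
    """Pseudorandomly partition the vocabulary, seeded by the previous token.

    A watermarking generator would softly prefer tokens from this set;
    the detector below recomputes the same set to score a text.
    """
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def green_ratio(tokens: list) -> float:
    """Fraction of tokens that fall in their predecessor's green list.

    Unwatermarked text should score near GREEN_FRACTION; text from a
    generator that favored green tokens scores measurably higher.
    """
    if len(tokens) < 2:
        return 0.0
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

Real schemes work at the logit level during generation and use a proper statistical test for detection; the point of the sketch is simply that detectability has to be designed in, which is what this provision of the Act targets.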

The bill will also require all organizations that offer essential services, such as insurance and banking, to conduct an impact assessment on how using AI systems will affect people’s fundamental rights. 

2. AI companies still have a lot of wiggle room

When the AI Act was first introduced, in 2021, people were still talking about the metaverse. (Can you imagine!) 

Fast-forward to now, and in a post-ChatGPT world, lawmakers felt they had to take so-called foundation models—powerful AI models that can be used for many different purposes—into account in the regulation. This sparked intense debate over what sorts of models should be regulated, and whether regulation would kill innovation. 
