But much of Oasis’s plan remains, at best, idealistic. One example is a proposal to use machine learning to detect harassment and hate speech. As my colleague Karen Hao reported last year, AI models either give hate speech too much chance to spread or overstep by flagging speech that isn’t actually harmful. Still, Wang defends Oasis’s promotion of AI as a moderating tool. “AI is as good as the data gets,” she says. “Platforms share different moderation practices, but all work toward better accuracies, faster reaction, and safety by design prevention.”

The document itself is seven pages long and outlines future goals for the consortium. Much of it reads like a mission statement, and Wang says that the first several months’ work have centered on creating advisory groups to help shape those goals.

Other elements of the plan, such as its content moderation strategy, are vague. Wang says she would like companies to hire a diverse set of content moderators, so that moderation teams can understand and combat the harassment of people of color and those who identify as non-male. But the plan offers no further steps toward achieving this goal.

The consortium will also expect member companies to share data on which users are being abusive, which is important in identifying repeat offenders. Participating tech companies will partner with nonprofits, government agencies, and law enforcement to help create safety policies, Wang says. She also plans for Oasis to have a law enforcement response team, whose job it will be to notify police about harassment and abuse. But it remains unclear how the team’s work with law enforcement will differ from the status quo.

Balancing privacy and safety

Despite the lack of concrete details, experts I spoke to think that the consortium’s standards document is a good first step, at least. “It’s a good thing that Oasis is looking at self-regulation, starting with the people who know the systems and their limitations,” says Brittan Heller, a lawyer specializing in technology and human rights. 

It’s not the first time tech companies have worked together in this way. In 2017, some agreed to exchange information freely through the Global Internet Forum to Counter Terrorism (GIFCT). Today, GIFCT remains independent, and the companies that sign on to it self-regulate.

Lucy Sparrow, a researcher at the School of Computing and Information Systems at the University of Melbourne, says that what Oasis has going for it is that it offers companies something to work with, rather than waiting for them to come up with the language themselves or for a third party to do that work.

Sparrow adds that baking ethics into design from the start, as Oasis pushes for, is admirable and that her research in multiplayer game systems shows it makes a difference. “Ethics tends to get pushed to the sidelines, but here, they [Oasis] are encouraging thinking about ethics from the beginning,” she says.

But Heller says that ethical design might not be enough. She suggests that tech companies retool their terms of service, which have been heavily criticized for taking advantage of consumers who lack legal expertise.

Sparrow agrees, saying she’s hesitant to believe that a group of tech companies will act in consumers’ best interest. “It really raises two questions,” she says. “One, how much do we trust capital-driven corporations to control safety? And two, how much control do we want tech companies to have over our virtual lives?” 

It’s a sticky situation, especially because users have a right to both safety and privacy, but those needs can be in tension.
