Metaverse virtual worlds lack adequate security precautions, critics say
Internet security experts are sounding the alarm about harassment and security in the metaverse as companies invest heavily in and promote the benefits of virtual spaces.
The metaverse, generally thought of as a set of virtual worlds in which people can interact, has become one of the hottest topics in tech in recent months, spurred by Facebook co-founder Mark Zuckerberg’s vision of the metaverse as a place where people can do “almost anything you can imagine.” In October, Facebook rebranded itself as Meta (although Facebook is still the name of the company’s primary social network).
A variety of virtual spaces already exist, and it’s in these worlds that experts are already seeing signs of trouble.
Researchers from the Center for Countering Digital Hate (CCDH), a nonprofit organization that analyzes and seeks to disrupt online hate and misinformation, spent nearly 12 hours recording activity on VRChat, a virtual world platform accessible on Meta’s Oculus headsets. The group recorded an average of one offense every seven minutes, including instances of sexual content, racism, abuse, hate, homophobia and misogyny, often in the presence of minors.
The organization shared its data logs and some of the recordings with NBC News, describing more than 100 incidents in total.
Meta and many other companies seek to capitalize on these new worlds, particularly around creativity, community, and commerce, using immersive technology (often a headset worn to simulate a first-person field of view). But CCDH and other critics worry that, as in the company’s past, Meta is prioritizing growth over the safety of its users.
“I’m afraid it’s incredibly dangerous to have children in this environment,” said Imran Ahmed, CEO of CCDH. “Honestly speaking, I would be very nervous as a parent about Mark Zuckerberg’s algorithms babysitting my kids.”
Ahmed specifically pointed to reporting issues, noting that CCDH was able to report only about half of its recorded incidents to Meta, and criticizing the company for its lack of traceability, its inability to identify a user in order to report them, and the absence of consequences for wrongdoing.
Meta did not respond to questions about these reporting issues.
Meta’s current safety features include the ability to mute and block people, or to move to a safe zone that gives a user a break from their surroundings. Reports submitted to Meta include a recording of the last few minutes of the user’s experience as evidence. CCDH researchers said the process made it cumbersome to file a report quickly.
These features also seem to overlook the possibility that a user may not be able to activate safety precautions quickly and easily while experiencing abuse.
Such was the case for Nina Jane Patel, who described being virtually groped and harassed recently in Meta’s Horizon Venues space.
“Within 30 seconds, I was suddenly surrounded by three avatars with male voices, who were saying sexual innuendo to me,” Patel said. “Before I knew it, they were groping my avatar. They were touching the top and middle part of my avatar, and then a fourth male avatar was taking selfies of what was going on.”
Patel knows these virtual spaces well. She’s VP of Metaverse Research for Kabuni, a UK-based immersive technology company, and even noted that she wants to celebrate the positive potential of virtual worlds.
Despite her familiarity with these spaces, she was shocked and rattled by the behavior. She said she wasn’t able to activate the blocking features quickly enough, fumbling with her controller and eventually quitting Venues.
“They said things like, ‘Don’t pretend you don’t like it. That’s why you came here,’” Patel said. “It was quite shocking that people were using this space to sexually and verbally harass people and act out their violent tendencies.” She noted that when she has been in these spaces before, she has heard children’s voices.
Patel wrote a blog post about her experience, and it went viral, inspiring other women to describe similar interactions they had faced in the metaverse.
As a result, Meta introduced a new default “personal boundary” setting to its spaces, requiring avatars to stay nearly 4 feet apart. Nkechi Nneji, a spokesperson for Meta, told NBC News that the change “makes it easier to avoid unwanted interactions like this, and we’re sorry this happened.”
“Horizon Venues should be safe,” the spokesperson said in an email, “and we are committed to building it that way. We will continue to make improvements as we learn more about how people interact in these spaces, especially when it comes to helping people report things easily and reliably.”
Ahmed and Patel acknowledged that platforms cannot be expected to be perfect as soon as they launch, but they also noted the wider implications for the metaverse if safety precautions are not prioritized.
“As virtual reality becomes more and more realistic,” Ahmed said, “it becomes more damaging, as the psychological impacts of this environment seem more real and are harder to distinguish from our offline reality. And as such, they can cause enormous harm to young people.”
Patel said it was important not to repeat previous mistakes made by tech companies.
“We need to have a zero tolerance approach to this or we will repeat our mistakes of the internet where anonymity was prioritized, and accountability and security were abandoned,” Patel added.