Gorilla Tag finally fixed its voice moderation problem

An official Gorilla Tag screenshot edited so that a "monke" appears to be wearing a pink support headset
(Image credit: Original screenshot: Another Axiom, added art: Nicholas Sutrich)

A new partnership between GGWP and Another Axiom — the developers of Gorilla Tag — means that those annoying kids in Gorilla Tag can no longer use the "hard R" word against you and get away with it. GGWP announced in a March 17 press release that Gorilla Tag's Vivox-powered voice chat would now be "proactively monitored" by the company's advanced context-aware AI engine.

Proactive moderation means that offensive words are not only reported as such but that a "360-degree view of the user's positive and negative behavior" is also included in that report. At first glance, it seems a bit like Big Brother or Minority Report. Still, it's a huge step forward in ensuring that the millions of Gorilla Tag players who frequent the game every month feel like the developer has their backs when dealing with offensive players.

This should be a big upgrade over the previous system, which mainly relied on user reports and was sometimes poisoned by "group think" and peer pressure. If you've ever played any online game for more than a few minutes, you'll know that online voice chat has a huge moderation problem. This issue is compounded in games like Gorilla Tag, where voice chat is the only real way to communicate with other players.

The barrier to entry for a free-to-play game like Gorilla Tag is incredibly low. All you need to do to play the game is install it. The account system is already built into headsets like the Meta Quest 3S, making the game even easier to jump into than free mobile games that often require external accounts. It's this ease of play that's created a huge shift in players joining free-to-play games more often on the Quest, causing new problems for companies trying to moderate the space.

Actions speak louder than words (usually)

An official screenshot of four different players in Gorilla Tag showcasing different outfits and headgear

(Image credit: Another Axiom)

It's alarming enough to join a game and hear screaming kids all around you, but it's even more alarming to hear those same children call you a racial slur or swear like a sailor as if such behavior were completely normal. Both of these situations can be off-putting enough to make someone quit a game forever, which is just part of why moderation like GGWP's is necessary.

But Another Axiom says it isn't just relying on AI to moderate. We've seen AI moderation fail in the past, so the company is also partnering with Arise to add a human component to the equation. This paid crew of moderators is available 24/7 to address issues and provide support, both before and after someone gets banned.

Of course, it's not just the kids who say this stuff who are the problem. Sometimes, kids will be pressured into saying something they don't understand, getting them banned. It's these situations where GGWP and Arise can really help pick apart the nuance, giving light bans to the person who said something offensive but also getting to the root of the matter by identifying the players who did the pressuring in the first place.

Proactive AI moderation, in conjunction with paid human moderation, may finally solve the problem of offensive players.

Another Axiom already allowed a wave of previously banned players back into the fold a few weeks back, noting that players who are banned in the future can seek appeals via the official support page or the game's official Discord server. This should be a good test of the system, ensuring that players who previously caused offense have learned their lesson (or not).

As a player who just started their virtual career in the developer's newest game, Orion Drift, it's music to my ears to see this kind of moderation finally coming on board. It feels like this should have happened years ago, but better late than never, I'd say. I was told this tech is also coming to Orion Drift, so players in that game can rest easy knowing there will be swift repercussions for bad behavior.

One of the biggest reasons I haven't typically allowed my son to play online games with voice chat is because people can be so unhinged online. Without the threat of getting punched in the face — something that would happen in real life if you talked like some of these kids do online — people can get incredibly rude and downright nasty. We see it on social media all the time, and it's worse in an unmoderated voice chat.

Gorilla Tag Official Trailer - YouTube

I'm really glad to see Another Axiom implementing policies that not only protect our kids but also any player who bears the brunt of this kind of language. You'll even find a new "Forest Guide" player in the game's Discord, ensuring that players who are overtly friendly and helpful get recognized for their actions, while offensive jerks just get banned for life.

I would love to see Meta partner with developers to extend this kind of technology to the rest of the best Meta Quest games. While Gorilla Tag's massive community makes this kind of behavior more common than in smaller games (a certain percentage of any player base is downright offensive, so more players simply means more bad actors), the issue is still present elsewhere.

This, of course, isn't just a VR problem, but as voice chat is the primary means of communication in virtual worlds, it's vitally important to implement technology like this to ensure future players stay around for the long haul instead of leaving over a problem that could have been avoided.

Nicholas Sutrich
Senior Content Producer — Smartphones & VR
Nick started with DOS and NES and uses those fond memories of floppy disks and cartridges to fuel his opinions on modern tech. Whether it's VR, smart home gadgets, or something else that beeps and boops, he's been writing about it since 2011. Reach him on Twitter or Instagram @Gwanatu