Riot Games, developer of Valorant (a free first-person shooter), will begin monitoring voice communications from players on July 13th (via PCGamer).
According to the company, the recordings will be used to train language models that Riot eventually plans to use for evaluating player reports across all of its games.
In addition, Riot says it will review recordings when players report someone for making abusive or offensive comments, which should allow it to determine whether the player violated its policies and take appropriate action.
Riot won't start assessing player reports using these recordings yet. Instead, it's currently using the information to build the beta version of the system that it plans to release later in the year.
For now, Riot will evaluate only conversations between English-speaking Valorant players in North America. You can opt out by disabling voice chat or using another communication tool, such as Discord.
In its announcement, Riot acknowledges that the system will need to prove itself before the company can even consider expanding the tool, and adds that "if mistakes do happen, we have systems to correct any false positives or negatives."
Riot states that the system will not "actively monitor live game communications"; instead, it will "potentially listen and review voice logs" when you are reported for disruptive behaviour.
Riot also states that it will delete the information once it has resolved the issue, similar to how it handles reports over its text-based chat system.
Even so, some players are likely to have privacy concerns, much as they did with the Vanguard anti-cheat system, which monitors your activities both inside and outside Valorant.
Voice moderation isn't the only way Riot is trying to curb abuse in Valorant. The company also began allowing players to add certain words or phrases to a "muted word list," which is meant to block abusive content from appearing in chat.