Originally Posted by Lyte
When negative behavior occurs, we know that the faster a player receives feedback, the better their chances of reforming. With that in mind, we’re building and iterating on a new instant feedback system that delivers actionable feedback and appropriate punishment to players who need it most. In the future, we expect instant feedback to take on rewards as well, but in step one we’re focused on reform-oriented feedback and punishment.
The system’s initial tests kick off today in NA and take aim at verbal harassment. The system delivers reform cards (notifications that link evidence of negative behavior with the appropriate punishment) that help players address their negative behavior. Your reports help the instant feedback system understand and punish the kind of verbal harassment the community actively rejects: homophobia, racism, sexism, death threats, and other forms of excessive abuse. These harmful communications will be punished with two-week or permanent bans within fifteen minutes of the game’s end. Here’s how it works:
- Teammates or opponents of the offending player send reports and the system validates them to make sure they aren’t false
- The system examines the case and determines whether the behavior deserves punishment based on community-driven standards of behavior
- The system sends a reform card via email, sharing the offending player’s chat log (we scrub other players’ names and chat logs) along with the punishment for the behavior
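The three steps above can be sketched as a simple pipeline. Everything below is an illustrative assumption on my part, not Riot's actual implementation: the function names, the `Report` shape, the placeholder flagged-phrase list, and the single punishment tier (the live system distinguishes two-week from permanent bans and learns its standards from player reports and honors).

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative placeholder; the real system learns community-driven
# standards rather than using a fixed phrase list.
FLAGGED_PHRASES = {"example banned phrase"}

@dataclass
class Report:
    offender_id: str
    offender_chat: List[str]  # other players' names and lines already scrubbed

def validate_report(report: Report) -> bool:
    # Step 1: check the report isn't false (placeholder heuristic).
    return bool(report.offender_chat)

def assess_punishment(report: Report) -> Optional[str]:
    # Step 2: judge the chat against community-driven standards.
    text = " ".join(report.offender_chat).lower()
    if any(phrase in text for phrase in FLAGGED_PHRASES):
        return "two-week ban"
    return None

def build_reform_card(report: Report, punishment: str) -> str:
    # Step 3: pair the evidence (chat log) with the punishment for delivery.
    return f"Punishment: {punishment}\nEvidence:\n" + "\n".join(report.offender_chat)

def instant_feedback(report: Report) -> Optional[str]:
    # Run the full pipeline; return a reform card or nothing.
    if not validate_report(report):
        return None
    punishment = assess_punishment(report)
    return build_reform_card(report, punishment) if punishment else None
```

For example, a validated report whose chat log contains a flagged phrase would yield a reform card, while a clean chat log yields no action.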
The player behavior team will be on deck, hand-reviewing the first few thousand cases the instant feedback system sorts through. If the test goes smoothly, we expect instant feedback to roll out to all regions shortly. Like we mentioned up top, this test marks the beginning for instant feedback. Upgrades will allow it to shoulder more reform and punishment responsibilities and even reward positive play. Here’s a peek at where we’re headed:
- In-client reform cards
- Follow-up notifications for players whose reports led to a punishment
- Upgrades for chat and ranked restrictions
- Upgrades to recognize negative gameplay behaviors like intentional feeding
- Recognition of honors and rewards for positive behaviors and communication
We’ll hang around the comments to answer questions and hear your feedback. We’ll come back soon to share more about the vision for player behavior, including plans for the Tribunal.
I really think this is going down the wrong path. Having it weigh the words said differently isn't a positive feature. Vocabulary doesn't reliably correlate with negativity, and often the most innocuous words are the most negative. "Gg easy" is far more infuriating and negative than getting called a fag ever was. I hope efforts are put forth to actually curb negative behavior, not just punish people for saying naughty words. We're not children here. Remove the afks, the feeders, and the incessant negative players. Don't try and police language.
The system does not just look at individual words; it also tries to learn phrases. In fact, "gg easy" is already considered a very negative phrase by the system today.
Secondly, we do have testing in place for the system to identify and punish intentional feeders and other forms of gameplay toxicity.
Lyte, how do you plan on handling false positives? Especially when it's done automatically, within 15 minutes of a game ending, there are sure to be issues. Are there specific criteria players need to meet to get automatically banned/perma-banned?
The system analyzes in-game data like chat logs, and then assesses reports and tries to "validate" them. In addition, every report and honor in the game teaches the system about behaviors and what looks OK or not OK, so the system continuously learns over time. If a player shows excessive hate speech (homophobia, sexism, racism, death threats, and so on), the system might hand out a permanent ban after just one game. But this is pretty rare!
In terms of false positives, we recently flew in Player Support and Player Behavior team members from all around the world to hand-review thousands of chat logs, and we saw false positive rates in the 1 in 6000 range. So, we know the system isn't perfect, but we think the accuracy is good enough to launch.
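To put the 1-in-6000 figure in perspective, here is a back-of-the-envelope calculation. The rate comes from the post above; the case volume is a made-up illustrative number, not Riot data.

```python
# Quoted in the thread: roughly 1 false positive per 6000 hand-reviewed cases.
FALSE_POSITIVE_RATE = 1 / 6000

def expected_false_positives(cases: int) -> float:
    # Expected number of mistaken punishments across a given case volume.
    return cases * FALSE_POSITIVE_RATE

# Hypothetical volume: if 60,000 punishments were issued in a test window,
# about 10 of them would be expected to be false positives.
print(round(expected_false_positives(60_000)))  # -> 10
```

This is why "isn't perfect, but good enough to launch" is a judgment about scale: the absolute number of mistakes grows linearly with the number of punishments issued.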