Twitter wants to help you alleviate the stress of negative comments with these new tools
ETX Studio
September 28, 2021 16:48 MYT
AS many of us are all too well aware, hateful comments on Twitter have become a frequent issue. That's why the social network has developed a slew of features designed to banish them. The latest ones involve filtering options in order to reduce negative tweets. It looks like a step in the right direction, but there could be some kinks to work out. We take stock.
On Twitter, online hate is commonplace. Between trolls and hate raids, nobody is truly safe from receiving a negative comment at some point.
And the phenomenon has become so widespread that the platform has developed a multitude of features intended to combat it.
The social network has just announced the development of "Filter" and "Limit," two options that will let users better control the replies to their tweets and, hopefully, no longer have to confront trolls.
Paula Barcante, designer at Twitter, unveiled these two new moderation features in a thread on September 24: "We're exploring new controls called 'Filter' and 'Limit' that could help you keep potentially harmful content and people who might create that content away from your replies. These are early ideas, so we'd love your feedback."
With "Filter", users will be able to block comments containing aggressive language and accounts with which they generally have no interaction.
The added bonus of this feature is that it will finally hide these negative comments from view for everyone, not just the user concerned.
These tweets will be visible only to their authors. That's a first for the social network, as until now users could only limit the visibility of negative tweets for themselves.
"Let's say someone tries to send a potentially harmful or offensive reply to your Tweet. When you have Filter on, no one would be able to see their reply (except for them)," Paula Barcante explained.
With "Limit," users will be able to target accounts that have most often violated Twitter's community rules by using harmful or repetitive language.
However, the platform recognizes the limitations of such features, which could make mistakes. Being automated, these options may not catch absolutely everything.
That's why Twitter is also looking into the possibility of letting users review blocked tweets and validate them manually.
"These would be automated controls so they may not be accurate 100% of the time. That's why we're also exploring the option to let you review any replies we've filtered and accounts we've limited to undo those actions."
For Twitter, these features will help users regain control over discussions and avoid being exposed to accounts known for their negative comments.
While this system could be beneficial for users, it remains to be seen how it would work for politicians' accounts and tweets about politics.