Twitter is testing new labels to let users know when they are interacting with a bot account. The company announced the development in a tweet, saying the aim is to make it “easier to identify GoodBots and their automated Tweets with new labels.” Twitter bots are accounts controlled by bot software and can sometimes appear to be legitimate profiles. Their primary purpose is to tweet and retweet content at scale for specific goals. Bots can be helpful for services such as broadcasting weather emergencies in real time, sharing informative content en masse, and generating automatic replies via direct messaging. They can also be used for trolling or for political propaganda intended to influence others.
In a blog post, Twitter explains that the new labels will help users tell “good bots from spammy ones” as it aims to bring “transparency.” The labels are currently being tested on select accounts, with a broader rollout to follow. Twitter has previously used similar labels to identify state-affiliated accounts. Those political labels indicated the country an account was affiliated with and whether it was operated by a government representative or a state-affiliated media entity. However, the company faced criticism for failing to label accounts from certain countries, allegedly due to political pressure. The new feature is unlikely to curb trolling or hate speech unless the company also intervenes and takes action against accounts that violate its terms. It could, however, improve transparency, as it is not always easy to spot an account that is actually a bot.
What’s a bot and what’s not? We’re making it easier to identify #GoodBots and their automated Tweets with new labels. Starting today, we’re testing these labels to give you more context about who you’re interacting with on Twitter. pic.twitter.com/gnN5jVU3pp
— Twitter Support (@TwitterSupport) September 9, 2021
Meanwhile, Twitter is testing another feature that automatically blocks hateful messages, as the platform comes under increasing pressure to protect its users from online abuse. Users who activate the new ‘Safety Mode’ will have their mentions filtered for seven days so that they don’t see messages flagged as likely to contain hate speech or insults. The feature will initially be tested with a small number of English-speaking users, Twitter said, with priority given to “marginalized communities and female journalists,” who often find themselves targets of abuse.