Twitter is to launch an anti-cyberbullying policy to act against violent threats as part of renewed efforts to tackle abuse.
The firm will still require a complaint to be made before it blocks an account, but it said it was also attempting to automatically make a wider range of abusive tweets less prominent.
The problem is not limited to Twitter – in March, a study of 1,000 UK-based 13-to-17-year-olds by broadband provider Europasat indicated that nearly half of those surveyed had been sent abusive messages over the internet.
In February, Twitter’s chief executive Dick Costolo highlighted the issue when he sent a memo to staff telling them that “we suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years”.
Twitter’s rules now state that it may act after being alerted to tweets that contain “threats of violence against others or promote violence against others”.
By broadening its criteria, the platform can now intervene if, for example, someone says that a victim ought to be beaten up.
It had previously required the aggressor to have provided specific details, such as the fact they planned to commit the act using a baseball bat at the victim’s place of work, before it would respond.
“Our previous policy was unduly narrow, and limited our ability to act on certain kinds of threatening behaviour,” wrote Shreyas Doshi, Twitter’s director of product management, on the firm’s blog.
“The updated language better describes the range of prohibited content and our intention to act when users step over the line into abuse.”
In addition, Twitter will begin freezing some abusers’ accounts for set amounts of time, allowing those affected to see the remaining duration via its app. Abusers may also be required to verify their phone number and delete all their previous offending tweets in order to get their account unlocked.
The firm said it could use this facility to calm situations in which a person or organisation came under attack from several people at once, where it might not be appropriate to enforce permanent bans on all involved.
While such decisions would be taken by Twitter’s staff, the company said it had also started using software to identify tweets that might be abusive, based on “a wide range of signals and context”.
Such posts can be hidden from users' timelines without ever being reviewed by a human. However, they will still show up in searches and remain subject to the existing complaints procedure.
A side-effect of this could be that some abusive tweets become harder to detect.
The UK Safer Internet Centre, which represents a number of campaign bodies, welcomed the move.
“These are really good steps,” said Laura Higgins, the organisation’s online safety operations manager.
“Regrettably some people might fall foul of bad behaviour before Twitter can put some of these safeguards in place, but at least it is always looking for new solutions.”
“In cases when there are massive amounts of abuse and it’s all of a similar theme, I think the new system will be good at picking it up, and that’s great. But it would be good to hear what will happen to that data once Twitter has it.”
The announcements build on other recent changes made by Twitter, including hiring more workers to handle abuse reports and letting third parties flag abuse.