Far-right groups target critics with Twitter's new media policy

Far-right groups are using Twitter's new private media policy to target anti-extremism accounts, activists say.

The social media giant last week said images or video of private individuals shared without permission would be removed on request.

Activists report that members of the far-right are using the policy to have accounts identifying them suspended.

Twitter said a number of accounts were mistakenly suspended following a flood of "coordinated and malicious reports".

New policy
The new rule, introduced last week, extends Twitter's existing policies against "doxxing" - the publishing of private information, such as home addresses, without consent.

The company said it would consider the context in which images were posted.

For example, exceptions would be made if the tweets were about public figures, or if the media in question was captured during public gatherings such as protests or sporting events.

However, in the days following the introduction of the new policy, a group of far-right activists reportedly began urging their followers on services like Telegram and Gab to file reports against anti-extremism accounts.

These accounts included those used to identify neo-Nazis and white supremacists, monitor extremists and document the attendees of hate rallies.

The aim appeared to be to get these accounts suspended and to have images identifying far-right activists removed.

'Malicious' reports
The Washington Post also reported that one user on the alternative social network Gab had claimed to have filed more than 50 reports, telling others: "It's time to stay on the offensive."

And Atlanta Antifascists, one of the more prominent anti-fascist accounts on Twitter, tweeted last week it had been reported for exposing the identity of a "White Student Union" organiser.

The group added that Twitter had locked it out of its account until the post was deleted.

"Already, neo-Nazis are using the new policy to attempt to shut down their critics," its statement read.

"Twitter's policy is an attempt to shield white power and far-right organizers from public scrutiny. It is unacceptable, but unsurprising."

A spokesperson for the company told the BBC it had mistakenly suspended accounts under the new policy following a flood of "coordinated and malicious reports".

They added that the company's "enforcement teams made several errors" in the aftermath.

Twitter declined to confirm how many reports had been filed to date, but said that "a dozen erroneous suspensions" had occurred.

Twitter said the errors had been corrected, and an internal review had been launched to ensure its new rule was "used as intended."

'Longstanding tactic'
Carl Miller, research director of the Centre for the Analysis of Social Media at think tank Demos, said it was common for researchers and journalists to be targeted in this way.

He said: "The abuse of online flagging and reporting mechanisms is a longstanding tactic, probably originating in online gaming before finding more political uses.

"Especially when platform enforcement is automated, it is vulnerable to being spoofed."

Mr Miller said Twitter would probably move away from automation and manually assess reports to ensure that the actual enforcement of the policy was in line with its spirit.

He added: "Longer-term, I suspect they will try to use machine learning and other forms of analytics to learn what coordinated flagging looks like and begin to work that into the process."

Social media consultant Matt Navarra added that this incident underlined how complex and challenging online content moderation is for social media platforms.

He said: "Whilst Twitter's efforts here are well-intended, the repercussions and unintended consequences of the policy are clear.

"Twitter has armed those users it hoped to weaken, and now they are using that new policy as a weapon to counterattack."