Exclusive: Facebook to target harmful coordination by real accounts using playbook against fake networks

© Reuters. FILE PHOTO: A Facebook logo is seen by attendees at the F8 developers conference at Facebook Inc. in San Jose, California, April 30. REUTERS/Stephen Lam/File Photo

By Elizabeth Culliford and Fanny Potkin

(Reuters) – Facebook (NASDAQ:FB) is taking a more aggressive approach to shut down coordinated groups of real-user accounts engaging in certain harmful activities on its platform, using the same strategy its security teams take against campaigns that use fake accounts, the company told Reuters.

The new approach, reported here for the first time, uses the tactics Facebook's security teams normally employ to shut down networks engaged in manipulating public debate, such as Russian troll farms.

The change could affect how Facebook handles coordinated political movements that break its rules, and it comes as the company's handling of abuses on its platforms faces intense scrutiny from global lawmakers and civil society organizations.

Facebook said it will now apply the same network-level approach to groups of real, coordinated accounts that violate its rules, whether through mass reporting, where many users falsely report a target's content or account to get it shut down, or through brigading, a form of online harassment in which users coordinate to pile onto a target through comments and posts.

The expansion, which a Facebook spokeswoman said is still in its early stages, means the company's security teams can now identify the core networks driving such behavior and take more sweeping action than removing individual posts or accounts, as they might otherwise do.

In April, BuzzFeed News published a leaked internal Facebook report about the company's role in the Jan. 6 riot at the U.S. Capitol and the challenges it faced in curbing the fast-growing "Stop the Steal" movement. One of the report's conclusions was that Facebook had "little policy regarding coordinated authentic harm."

Facebook’s security experts, who are separate from the company’s content moderators and handle threats from adversaries trying to evade its rules, started cracking down on influence operations using fake accounts in 2017, following the 2016 U.S. election in which U.S. intelligence officials concluded Russia had used social media platforms as part of a cyber-influence campaign – a claim Moscow has denied.

Facebook dubbed the banned activity involving fake accounts "coordinated inauthentic behavior," and its security teams began announcing the sweeping takedowns in monthly reports. The teams also deal with specific threats, such as fraud or cyber-espionage networks and overt influence operations.

According to sources, those security teams had been discussing how the company should respond at the network level when large numbers of real user accounts break its rules.

In July, Reuters reported on the activities of the Vietnamese army's online information warfare unit, whose members mass-reported accounts to Facebook while often using their real names.

Facebook faces mounting pressure from employees, global regulators and lawmakers to curb wide-ranging abuses on its services. Others have criticized the company for censorship, bias against conservatives and inconsistent enforcement.

Expanding Facebook's network-disruption model to cover authentic accounts raises questions about how the change might affect public discourse, online movements and campaign tactics.

There were several high-profile examples of coordinated activity in the United States last year, including teens and K-pop fans using TikTok to disrupt a rally for Donald Trump in Tulsa, Oklahoma, and political campaigns paying online meme-makers.
