Facebook knew about, failed to police, abusive content globally

© Reuters. FILE PHOTO: A 3D-printed Facebook logo is placed on a keyboard in this illustration taken March 25, 2020. REUTERS/Dado Ruvic/Illustration/File Photo

By Brad Heath and Elizabeth Culliford

(Reuters) – Facebook (NASDAQ:FB) employees have warned for years that, as the company raced to become a global platform, it was failing to police abusive content in the countries where such speech was likely to cause the most harm, according to interviews with five former employees and internal company documents viewed by Reuters.

For more than a decade, Facebook has pushed to become the world’s most popular online platform. It currently operates in more than 200 countries and has more than 2.5 billion users, who post content in over 160 languages. But its efforts to stop its products from becoming conduits for hate speech, inflammatory rhetoric and misinformation have failed to keep pace with its global expansion.

Internal company documents viewed by Reuters show that Facebook has known it had not hired enough people with the language skills and local knowledge needed to identify objectionable content from users in a number of developing countries. The documents also showed that the artificial intelligence systems Facebook uses to remove such content often aren’t up to the task, and that the company has not made it easy for its global users to flag posts that violate its rules.

Those shortcomings, employees warned, could limit the company’s ability to make good on its promises to block hate speech and other rule violations in countries ranging from Afghanistan to Yemen.

In a review posted to Facebook’s internal message board last year about how the company identifies abuses on its site, one employee reported “significant gaps in some countries” at high risk of real-world violence.

The documents are among a cache of disclosures made to Congress and the U.S. Securities and Exchange Commission by Facebook whistleblower Frances Haugen, a former Facebook product manager who left the company in May. Reuters was among a group of news organizations able to view the documents, which include presentations, reports and posts shared on the company’s internal message board. Their existence was first reported by The Wall Street Journal.

Facebook spokesperson Mavis Jones said in a statement that the company reviews content in more than 70 languages and has experts in humanitarian and human rights issues. These teams, she said, work to stop abuse on Facebook’s platform in places where there is a heightened risk of conflict and violence.

“We know these challenges are real and we are proud of the work we’ve done to date,” Jones said.

But the cache contains detailed documents showing how, over the years, Facebook employees raised concerns about the company’s tools and technology aimed at rooting out or blocking speech that violated its standards. The material expands upon Reuters’ previous reporting https://www.reuters.com/investigates/special-report/myanmar-facebook-hate on Myanmar and other countries https://www.reuters.com/article/us-facebook-india-content/facebook-a-megaphone-for-hate-against-indian-minorities-idUSKBN1X929F where the world’s largest social network has failed repeatedly to protect users from harm on its own platform and has struggled to monitor content across languages. https://www.reuters.com/article/us-facebook-languages-insight-idUSKCN1RZ0DW

Among the weaknesses cited was a lack of screening algorithms for languages used in some of the countries Facebook has identified as most vulnerable to real-world violence and harm stemming from abuses on its site.

The company designates countries as “at-risk” based on variables such as unrest, ethnic violence and the number of users, two former employees told Reuters. The system seeks to steer resources to the places where abuses of its site could have the most severe impact, the people said.

Facebook reviews and prioritizes these countries every six months, in line with United Nations guidelines aimed at helping companies prevent and remedy human rights abuses in their business operations, Jones said.

United Nations experts investigating a campaign of massacres and expulsions targeting Myanmar’s Rohingya Muslim minority said Facebook was widely used to spread hate speech toward them. That prompted the company to increase its staffing in vulnerable countries, a former employee told Reuters. Facebook has said it should have done more to prevent the platform being used to incite offline violence in the country.

Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa, who left the company in 2017, said the company’s approach to global growth has been “colonial,” focused on monetization without safety measures.

More than 90% of Facebook’s users are outside the United States and Canada.

LANGUAGE ISSUES

Facebook has long touted the importance of its artificial-intelligence (AI) systems, in combination with human review, as a way of tackling objectionable and dangerous content on its platforms. Machine-learning systems can detect such content with varying degrees of accuracy.

But languages spoken outside the United States, Canada and Europe have been a stumbling block for Facebook’s automated content moderation, the documents provided to the government by Haugen show. The company lacks AI systems capable of detecting abusive posts in a number of languages used on its platform. In 2020, for example, it did not have screening algorithms, known as “classifiers,” to find misinformation in Burmese or hate speech in the Ethiopian languages of Oromo or Amharic, a document showed.

These gaps can allow abusive posts to proliferate in the countries where Facebook itself has judged the risk of real-world harm to be high.

This month, Reuters found posts in Amharic referring to Ethiopian ethnic groups and issuing death threats. A nearly year-long conflict between the Ethiopian government and rebel forces from the Tigray region has killed thousands of people and displaced more than 2 million.

Jones said Facebook now has proactive detection technology to identify hate speech in Oromo and Amharic and has hired more people with “language and country expertise,” including people who have worked in Myanmar and Ethiopia.

An undated document, which a person familiar with the disclosures said was from 2021, circulated among Facebook employees and included examples of fear-mongering, anti-Muslim narratives spread on the site in India, including calls for the expulsion of the country’s large Muslim minority population. “Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned,” the document said. Employees also noted this year the absence of classifiers in Urdu and Pashto to screen problematic content posted by users in Afghanistan, Pakistan and Iran.

Jones said Facebook added hate speech classifiers for Hindi in 2018 and for Bengali in 2020, along with classifiers for violence and incitement in Hindi and Bengali this year. She said Facebook also now has hate speech classifiers in Urdu but not Pashto.

The documents show that Facebook’s human review of posts, which is crucial for nuanced problems like hate speech, also has gaps across key languages. An undated document described the difficulty the company had recruiting reviewers for Arabic-language dialects from several “at-risk” countries, leaving it constantly “playing catch-up.” Even among its Arabic-speaking reviewers, “Yemeni, Libyan, Saudi Arabian (really all Gulf nations) are either missing or have very low representation,” the document said.

Jones acknowledged that moderating Arabic-language content presents “a huge set of challenges.” She said Facebook has invested significantly in staff over the past two years but added that “we still have a lot to do.”

Three former Facebook employees who worked in the company’s Asia Pacific and Middle East and North Africa offices in the past five years told Reuters they believed content moderation in their regions had not been a priority for Facebook management. Leadership, they said, did not understand the issues and failed to devote enough staff and resources.

Facebook’s Jones said the California-based company cracks down on abuse by users outside the United States with the same intensity it applies domestically.

The company said it uses AI proactively to identify hate speech in more than 50 languages, and that it bases decisions about where to deploy AI on the size of a market and an assessment of a country’s risks. It declined to say in how many countries it lacked functioning hate speech classifiers.

Facebook also says it has 15,000 content moderators reviewing material from its global users. “Adding more language competence has been a major focus for us,” Jones said.

Over the past two years, it has hired people who can review content in Somali, Oromo and Tigrinya. This year it added moderators in 12 new languages, including Haitian Creole.

Facebook declined to say whether it requires a minimum number of content moderators for any language offered on the platform.

LOST IN TRANSLATION

Facebook’s users themselves are a resource for identifying content that violates the company’s standards. The company has built tools for them to flag such posts, but has acknowledged that the process can be costly and time-consuming for users in countries with poor internet access. The reporting tool has also long had bugs, design flaws and accessibility issues for some languages, according to the documents and digital rights activists who spoke with Reuters.

Next Billion Network, a collective of tech civil society groups working mostly across Asia, Africa and the Middle East, said it had repeatedly flagged problems with Facebook’s reporting system to the company in recent years. Those included a technical flaw that kept Facebook’s content review system from seeing objectionable text accompanying videos and photos in some posts reported by users. That issue prevented serious violations, such as death threats contained in the text of those posts, from being properly assessed, the group and a former Facebook employee told Reuters. They said the issue was fixed in 2020.

Facebook said it is working to improve its reporting systems and takes such feedback seriously.

Language coverage remains a problem. A Facebook presentation from January, included in the documents, concluded that “there is a large gap in the Hate Speech reporting process in local languages” for users in Afghanistan, where the recent withdrawal of U.S. troops after two decades has ignited an internal power struggle. So-called “community standards,” the rules that govern what users can post, are also not available in Afghanistan’s main languages of Pashto and Dari, according to the presentation.

A Reuters review this month found that Facebook’s community standards are not available in about half of the more than 110 languages the company supports with features such as menus and prompts.

Facebook said it aims to make the rules available in 59 languages by the end of this year, and in another 20 languages by the end of 2022.


