Facebook removed more than three billion fake accounts between October 2018 and March 2019, and estimated that about 5 percent of its monthly active users are fake.
The company disabled 1.2 billion fake accounts in Q4 2018 and 2.19 billion in Q1 2019.
“For fake accounts, the number of accounts we took action on increased due to automated attacks by bad actors who attempt to create large volumes of accounts at one time,” Guy Rosen, Facebook’s vice president for integrity, said in a blog post on Thursday.
According to Rosen, for every 10,000 times people viewed content on Facebook, 11 to 14 views contained content that violated the platform’s adult nudity and sexual activity policy.
“We estimated for every 10,000 times people viewed content on Facebook, 25 views contained content that violated our violence and graphic content policy.
“During the second half of 2018, the volume of content restrictions based on local law increased globally by 135 percent from 15,337 to 35,972.
“This increase was primarily driven by 16,600 items we restricted in India based on a Delhi High Court order regarding claims made about PepsiCo products,” said Facebook.
In the second half of 2018, Facebook identified 53 disruptions of its services across nine countries, compared with 48 disruptions across eight countries in the first half of 2018.
“This half, India accounted for 85 percent of total new global disruptions,” said the company.
In this period, on Facebook and Instagram, the company took down 2,595,410 pieces of content based on 511,706 copyright reports; 215,877 pieces of content based on 81,243 trademark reports; and 781,875 pieces of content based on 62,829 counterfeit reports.
“In Q1 2019, we took action on about 900,000 pieces of drug sale content, of which 83.3 percent we detected proactively. In the same period, we took action on about 670,000 pieces of firearm sale content, of which 69.9 percent we detected proactively,” added Rosen.