Thursday, May 30

Facebook cracks down on fake accounts



Facebook Inc said on Thursday it suspended 30,000 accounts in France as the social network giant steps up efforts to stop the spread of fake news, misinformation and spam.

The move, which comes 10 days before the first round of a hotly contested French presidential election, is among Facebook’s most aggressive yet in proactively acting against accounts that violate its terms of service, rather than simply responding to complaints, Reuters reports.

Facebook is under intense pressure in Europe as governments across the continent threaten new laws and fines unless the company moves quickly to remove extremist propaganda or other content that violates local laws.

The pressure on social media sites including Twitter, Google’s YouTube and Facebook has intensified in the run-up to the elections in France and Germany.

Facebook already has a program in France to use outside fact-checkers to combat fake news in users’ feeds.

Also on Thursday, Facebook took out full-page ads in Germany’s best-selling newspapers to educate readers on how to spot fake news.

US intelligence agencies have determined that the Russian government interfered with the US election last year in order to help Donald Trump win the presidency. Officials say a similar campaign is under way in Europe to promote right-wing, nationalist parties and undermine the European Union.

In a blog post, Facebook said it was acting against 30,000 fake accounts in France. It said its priority was to remove suspect accounts with high volumes of posting activity and the biggest audiences.

Two people familiar with Facebook’s process said the company had strengthened its formula for detecting deceptive accounts run by automated means. For example, the new process also considers accounts with smaller circles of friends, which had previously been a lower priority.
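Neither Facebook nor Reuters describes how suspect accounts are actually ranked. Purely as an illustrative sketch of the triage the article describes, prioritising suspect accounts by posting volume and audience size might look like the following; every field name, value, and threshold here is invented for the example:

```python
from dataclasses import dataclass

@dataclass
class SuspectAccount:
    # Hypothetical signals; Facebook's real ranking inputs are not public.
    account_id: str
    posts_per_day: float
    followers: int

def triage(accounts, top_n=2):
    """Order suspect accounts so the highest-activity, widest-reach
    ones come first, matching the stated priority of removing
    accounts with high posting volumes and the biggest audiences."""
    return sorted(accounts,
                  key=lambda a: (a.posts_per_day, a.followers),
                  reverse=True)[:top_n]

accounts = [
    SuspectAccount("bulk-poster", posts_per_day=400, followers=90_000),
    SuspectAccount("small-circle", posts_per_day=5, followers=40),
    SuspectAccount("mid-spammer", posts_per_day=120, followers=8_000),
]
for a in triage(accounts):
    print(a.account_id)  # bulk-poster, then mid-spammer
```

The low-activity, small-audience account is left for later review, which is consistent with the reported emphasis on footprint rather than content.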

A key motivator was the need to get tougher on misinformation ahead of the French elections, the people said, although the move also targets accounts that generated spam for financial gain.

Shabnam Shaik, a Technical Program Manager on Facebook’s Protect and Care Team, said in the post:

People come to Facebook to make meaningful connections. From the beginning, we’ve believed that can only be possible if the interactions here are authentic and if people use the names they’re known by.

We’ve found that when people represent themselves on Facebook the same way they do in real life, they act responsibly. Fake accounts don’t follow this pattern, and are closely related to the creation and spread of spam. That’s why we’re so focused on keeping these inauthentic accounts and their activity off our platform.

Protecting authenticity is an ongoing challenge — one that requires vigilance and commitment. Staying ahead of those who try to misuse our service is a constant effort led by our security and integrity teams, and we know this work will never be done.

We build and update technical systems every day to make it easier to respond to reports of abuse, detect and remove spam, identify and eliminate fake accounts, and prevent accounts from being compromised. This work also reduces the distribution of content that violates our policies as well as other deceptive material, such as false news, hoaxes, and misinformation.

In recent years, we’ve continued to make progress in these areas. We made it significantly more difficult for people to sell fake likes on Facebook, and developed sophisticated systems to help block automated programs (or ‘bots’) from trying to create fake accounts. Overall, our security systems run in the background millions of times per second to help block suspicious activity.

By constantly improving our techniques, we also aim to reduce the financial incentives for spammers who rely on distribution to make their efforts worthwhile. But we know we have to keep getting better.

We’ve made some additional improvements recently, and want to explain them here today. These changes help us detect fake accounts on our service more effectively, including ones that are hard to spot.

We’ve made improvements to recognise these inauthentic accounts more easily by identifying patterns of activity without assessing the content itself. For example, our systems may detect repeated posting of the same content, or an increase in messages sent.
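The post gives no detail on the detection mechanics. As a hedged illustration only, one of the signals it names, repeated posting of the same content, could be flagged by hashing each post and counting duplicates per account; the function, threshold, and data below are all hypothetical:

```python
import hashlib
from collections import defaultdict

# Hypothetical threshold: flag an account once it posts identical
# content this many times (not a real Facebook parameter).
DUPLICATE_THRESHOLD = 3

def find_repeat_posters(posts, threshold=DUPLICATE_THRESHOLD):
    """posts: iterable of (account_id, post_text) pairs.
    Returns the ids of accounts that posted the exact same text
    `threshold` or more times, without otherwise assessing content."""
    counts = defaultdict(int)   # (account, content digest) -> occurrences
    flagged = set()
    for account, text in posts:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        counts[(account, digest)] += 1
        if counts[(account, digest)] >= threshold:
            flagged.add(account)
    return flagged

# Example: account "a1" spams one message four times; "a2" posts normally.
posts = [("a1", "Buy now!")] * 4 + [("a2", "Hello"), ("a2", "Lunch?")]
print(find_repeat_posters(posts))  # {'a1'}
```

Note that only a digest of the text is compared, echoing the claim that activity patterns are identified "without assessing the content itself."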

With these changes, we expect we will also reduce the spread of material generated through inauthentic activity, including spam, misinformation, or other deceptive content that is often shared by creators of fake accounts.

In France, for example, these improvements have enabled us to take action against over 30,000 fake accounts. While these most recent improvements will not result in the removal of every fake account, we are dedicated to continually improving our effectiveness. Our priority, of course, is to remove the accounts with the largest footprint: those with a high amount of activity and a broad reach.

This effort complements other initiatives we have previously announced that are designed to reduce the distribution of misinformation, spam or false news on Facebook.

We’ve found that a lot of false news is financially motivated, and as part of our work to promote an informed society, we have focused on making it very difficult for dishonest people to exploit our platform or profit financially from false news sites using Facebook.

While authenticity is a cornerstone of Facebook and protecting it is essential to building informed communities and promoting civic engagement, we must always work to do better. In that spirit, these are just some of the early results that we expect our new advances to deliver and that will serve as a stepping stone as we continue to iterate and improve.

The company is using automated pattern recognition to identify repeated posting of the same content and increases in messaging volume.

Thursday’s action follows other moves by Facebook to make it easier for users to report potential fraud and hoaxes.