Facebook is taking on another fight ahead of the upcoming midterm elections: combating false information aimed at voters. Natasha Abellard of Veuer has the story.
As voters cast their ballots in the midterm elections, social media companies have an additional concern: protecting the integrity of that process.
After reports of bogus accounts and fake news that penetrated social networks during the 2016 presidential election, companies like Facebook and Twitter have doubled down on efforts to prevent election manipulation.
At stake is not only the validity of the information on their platforms, but also the confidence of their users.
Business Insider Intelligence's Digital Trust Report 2018, released in August, found that more than 75% of respondents said Facebook was "extremely likely" or "very likely" to show them deceptive content such as "fake news, scams or clickbait." Twitter fared only somewhat better, with more than 60% of respondents agreeing that the platform shows misleading content.
In January 2017, reports emerged that foreign entities, such as Russia's Internet Research Agency, used social media platforms to spread false and divisive information throughout the 2016 campaign. By September 2017, Facebook announced that it had linked more than 3,000 political ads that ran on its platform between 2015 and 2017 to Russia. Facebook later said that over 10 million users had been exposed to the ads.
Last September, Facebook and Twitter executives testified before Congress about how foreign actors' use of their platforms could have affected the outcome of the presidential election.
Representatives for Facebook and Twitter said that since the 2016 election, the companies have stepped up efforts to detect and remove fake accounts and protect users from false information.
Yoel Roth, Twitter's head of site integrity, said the company has cracked down on "coordinated platform manipulation," or people and organizations using Twitter to mislead other users and spread false information.
More: Twitter, Lyft, Bumble and Tinder: How tech and social media companies want to change elections this year
More: Twitter sheds more users amid cleanup, but revenue and profit beat expectations
More: "Fake social" and "fake search" are the new "fake news" as Trump expands his attacks on tech beyond the media
During the 2016 campaign, misinformation appeared online in the form of bogus accounts and posts that spread extreme views, among other tactics. Heading into the midterms, experts say the techniques are similar, but the people spreading misinformation have gotten smarter.
So have the social networks.
"We have not seen a fundamental shift in what the bad actors are doing, but in 2016 it was like breaking into a house with the door wide open, and now there's at least a dog there that will bark," said Bret Schafer, social media analyst at the Alliance for Securing Democracy, a national security advocacy group.
Schafer said social networks' efforts to protect their platforms and users have created a "layer of friction" that makes it more difficult to conduct misinformation campaigns. Those efforts include removing "bad actors" who use bogus accounts to spread misinformation, and requiring political advertisers to verify their identity by providing a legitimate mailing address.
Facebook has developed a multifaceted approach to election integrity. The company has nearly doubled its security team ahead of the 2018 midterms and is taking a more active role in identifying "coordinated inauthentic behavior," according to company spokesperson Brandi Hoffine Barr.
"We now have more than 20,000 people working on safety and security, we have developed advanced systems and tools to detect and stop threats, and we have built backstops … to help us address any unforeseen threats as quickly as possible," said Hoffine Barr.
Many of the company's efforts begin with detecting and removing fake accounts. In May, Facebook said it had disabled nearly 1.3 billion fake accounts in the first half of 2018. Because these accounts are often the source of false information on the site, Facebook said removing them helps it fight the spread of fake news.
Facebook also announced in October that it had removed 559 pages and 251 accounts for breaking the platform's rules against "spam and coordinated inauthentic behavior," which includes creating large networks of accounts to mislead other users. On Facebook, this can look like people or organizations creating fake pages or bogus accounts.
More: Facebook's effort to fight election manipulation misses a big problem, critics say
More: We read every one of the 3,517 Facebook ads bought by the Russians. Here's what we found
Hoffine Barr described Facebook's work as an "ongoing effort" and noted that the company is not working alone.
"Ahead of the upcoming midterm elections, we are working closely with federal and state officials as well as other technology companies to coordinate our efforts and share information," she said.
Two weeks before the midterms, Facebook exposed a misinformation campaign originating in Iran that tried to sow discord on issues such as President Donald Trump and immigration. There is currently no evidence that the campaign was linked to the Iranian government.
Twitter has also taken action against bad actors, recently removing accounts the company had previously locked for "suspicious changes in behavior." In an October 1 blog post, Twitter executives outlined three "critical" areas of their efforts to maintain election integrity.
The first, an update to Twitter's rules, includes expanding what Twitter considers a fake account. The company now uses a number of criteria to make that determination, including whether the profile uses stolen or copied photos and whether it deliberately provides misleading profile information. The second category, described as "detection and protection," involves identifying spam accounts and improving Twitter's ability to ban users who violate its policies.
The most visible efforts fall under "product developments." From giving users control over the order of their timelines to adding an election label for candidates' accounts, this category is about keeping Twitter users informed.
Roth said the company is also sharing information about "potential state-backed operations" with researchers to learn more about attacks on election integrity.
"Our goal is to try and stay ahead of emerging challenges," said Roth. "Protecting the public conversation is our core mission."
Because these efforts by social media companies are fairly new, it can be difficult to measure their impact. Facebook points to a recent study by Stanford University and New York University in which researchers found that users' interactions with fake news sites on Facebook fell by more than half after the 2016 election.
Schafer said one of the big differences since 2016 is the decline in automated activity. He said Twitter, in particular, has gotten "far more aggressive" about shutting down bots.
However, he noted that social networks are in a "tricky situation" when it comes to regulating content. Regulate too much, and they are criticized for censorship. Regulate too little, and their platform rules go unenforced.
"If we don't want to push them into the realm of actively regulating content and deciding what is and isn't true, you have to accept that there is some 'bad activity' that is going to happen on the platform," Schafer said.
And while these companies are cracking down on purveyors of misinformation, others are focusing on tools that help users distinguish fact from fiction.
On October 2, New York tech startup Our.News launched a Google Chrome and Firefox browser extension called Newstrition, which gives users information about the publisher behind any given article, along with third-party fact checks.
The tool was developed in collaboration with the Newseum and the Freedom Forum Institute, both nonprofit institutions dedicated to defending the First Amendment.
More: Donald Trump calls for more civility as he attacks the media and Democrats at Charlotte rally
Unlike traditional fact-checking tools that label articles as true or false, Our.News CEO Richard Zack said the Newstrition tool lets people see vetted information about any given article and make that determination for themselves.
"One of the things we found while doing research and talking to people … is that the public feels they aren't part of the (news) process," Zack said. "They feel they aren't being heard in a lot of ways."
Newstrition also invites users to weigh in on news articles through a public ratings system. As on Amazon or Yelp, the individual responses are then aggregated to show a "public consensus," Zack said.
"In effect, we're saying, 'We want to hear your thoughts. We want to know what you think. You are part of the process,'" he said.
The extension has already received thousands of downloads, as well as attention from media companies. Zack said that although Our.News is not at liberty to discuss specifics, the company is in talks with several large news publishers about incorporating the tool into their websites.
"If people can't figure out what to believe, then that undermines the entire First Amendment," Zack said. "It undermines freedom of the press as an institution."
Moving forward, social networks and experts say one thing is clear: this is far from the end of misinformation campaigns. Schafer said that although the issue is especially timely during an election year, these efforts happen daily.
"It's not as if these accounts pop up just before the elections, then go back into hibernation and return in 2020," he said. "They're going to be working every day, eroding people's trust in democracy or democratic institutions, or simply inflaming partisan debates."
More: These are the liberal memes Iran used to target Americans on Facebook
More: Facebook exposes political influence campaigns from Iran, Russia ahead of US midterms
Read or share this story: https://www.usatoday.com/story/news/politics/elections/2018/11/03/facebook-twitter-elections-interference/1806308002/