Facebook's effort to stamp out "fake news" on its pages may come a little too late. Last week the company announced that it was launching a program to alert readers when they had "liked" a page that was rated as fake news. In that way, the company said, it could educate readers about how to determine fake news and the importance of understanding its potential impact.
"It is important that people understand how foreign actors tried to sow division and mistrust using Facebook before and after the 2016 US election," the company said in a blog post.
But for a company that has traded on the concept of spirited sharing of everything from favorite recipes to the latest late-night gossip, dissecting just what is fact and what is fantastic still seems to be a tall order.
And the largest problem may be the company's stance about fake news itself.
In November 2016, Facebook CEO Mark Zuckerberg said he felt it was a "pretty crazy idea" to think that people's voting habits could have been influenced by fake news on the Internet. His position has since changed, in part because Facebook found thousands of postings from questionable sources, including Russian operatives.
But there have also been more countries that have reason to think their elections are being tampered with. Italy, which is gearing up for national elections in a few months, is the latest country to question Facebook's ability to monitor paid advertisements and postings from foreign operators.
According to The New York Times, Facebook has told the Italian government that it is readying a "task force" to work on the problem, with members on hand to fact-check postings and minimize problems.
But in Facebook's most recent communiqué to The New York Times, even that tidbit of news couldn't be confirmed. Facebook declined to say whether a task force would be on hand in Italy, stating only that it couldn't comment on the matter.
And other efforts by the Internet company to explain how it will eliminate fake information from its website have drawn criticism as well. Earlier in November, the company said it would be installing software that would provide "trust indicators" for postings and advertisements. A little "i" icon would serve as a go-to place for finding out more about a posting's "trust" rating. But as Mashable writer Kerry Flynn points out, that may not be what readers want -- or need.
"For starters, it's doubtful anyone will actually bother clicking that little icon," Flynn wrote. And not every poster will be covered under that program.
And that may be because Facebook is still uncovering a trove of data about postings that were, in effect, fake news all along.
What will be interesting to see is how rooting out the legions of fake news posters will affect Facebook's profit margin. The company is already bracing for a 45-60 percent rise in operating expenses next year because of the work this process requires. The question at this point is whether Facebook can really build a product that can differentiate between intentional inaccuracy and the human love for fantastic tall tales.
Jan Lee is a former news editor and award-winning editorial writer whose non-fiction and fiction have been published in the U.S., Canada, Mexico, the U.K. and Australia. Her articles and posts can be found on TriplePundit, JustMeans, and her blog, The Multicultural Jew, as well as other publications. She currently splits her residence between the city of Vancouver, British Columbia and the rural farmlands of Idaho.