How can the problem of fake news be addressed in a post-truth world?

As Germany and France move towards elections in 2017, pressure has been mounting on tech giants such as Google and Facebook to tackle the challenge of fake news. But in an age of highly politicized news cycles, the question everyone is asking is: how can truth be separated from fiction without censoring genuine alternative news sources?

In the first part of our investigation we looked at the history of fake news and discovered that this is not a new problem. We also examined the fake news economy that has arisen as digital communication fostered a more fragmented media landscape.

The Facebook and Google approach

After a contentious 2016 US election, Facebook faced strong criticism for not better policing the torrent of fake news articles that were circulated on its platform. Chief executive Mark Zuckerberg had long insisted that Facebook was not a media publisher, but rather a technology company, and users were primarily responsible for the content that was spread over the platform. But by late 2016, the problem was too great to ignore and Facebook announced a raft of measures to tackle the spread of misinformation.

Zuckerberg was careful in stating that "we don't want to be arbiters of truth," before laying out a series of protocols that were being developed. These included disrupting the economics of fake news websites by blocking ad sales to disputed sites, and bringing in a third-party verification process to label certain articles as false or disputed.

An example of how Facebook proposed to flag disputed articles

From early 2017, Facebook began testing these new measures in the US and rolling them out more widely across Germany and France. Users can flag articles they suspect of being fake news, which forwards the story to a third-party fact-checking organization that evaluates the accuracy of the piece. If the story is identified as fake, it is prominently flagged as "Disputed by 3rd Party Fact-Checkers," and users see a warning before they choose to share the story further.
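Facebook has not published the mechanics behind this pipeline, but a minimal sketch of the flag-review-label flow described above might look something like the following. All names, thresholds, and data structures here are hypothetical illustrations, not Facebook's actual system.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Story:
    url: str
    flags: int = 0                                  # user reports of suspected fake news
    verdicts: list = field(default_factory=list)    # fact-checker rulings, e.g. "false"

    @property
    def disputed(self) -> bool:
        return "false" in self.verdicts

def flag(story: Story, review_threshold: int = 3) -> bool:
    """Record a user flag; return True once the story should be sent to fact-checkers."""
    story.flags += 1
    return story.flags >= review_threshold

def share_warning(story: Story) -> str | None:
    """Warning shown before a user shares a disputed story, None otherwise."""
    return "Disputed by 3rd Party Fact-Checkers" if story.disputed else None

# Example: three user flags trigger a review, and a "false" ruling attaches the label.
story = Story("http://example.com/article")
needs_review = False
for _ in range(3):
    needs_review = flag(story)
if needs_review:
    story.verdicts.append("false")                  # ruling returned by a fact-checking partner
print(share_warning(story))                         # -> Disputed by 3rd Party Fact-Checkers
```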

An example of how users would be able to flag potential fake news stories on Facebook

Google has also launched an initiative dubbed "CrossCheck," which will initially run in France through early 2017 in the lead-up to the country's presidential election. CrossCheck partners with a number of French news organizations to fact-check articles that users submit as suspected fakes. Much like Facebook's plan, articles that are deemed false by at least two independent news organizations will be flagged as fake news in users' feeds.
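The consensus rule described above is simple to express. As a purely illustrative sketch (the program's real criteria and tooling are not public), a story might only receive the fake-news label once at least two independent partner newsrooms have ruled it false:

```python
def crosscheck_label(rulings: dict, min_agree: int = 2) -> str:
    """rulings maps a partner newsroom's name to True if that newsroom judged the story false."""
    false_votes = sum(1 for judged_false in rulings.values() if judged_false)
    return "fake news" if false_votes >= min_agree else "unverified"

# Two independent "false" rulings are enough; a single ruling is not.
print(crosscheck_label({"Newsroom A": True, "Newsroom B": True, "Newsroom C": False}))  # -> fake news
print(crosscheck_label({"Newsroom A": True, "Newsroom B": False}))                      # -> unverified
```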

Google's CrossCheck program to battle the spread of fake news

The post-truth era

While these strategies are certainly a welcome effort from the tech sector to deal with this issue, we are faced with an increasingly existential crisis over what even constitutes fake news. When the President of the United States accuses an organization such as CNN of being fake news, it emboldens certain sections of the community to question whoever is ultimately deciding what is and isn't true.

Oxford Dictionaries declared "post-truth" to be their word of the year for 2016, defining it as an adjective, "relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief."

The latter part of 2016 showed a marked uptick in references to the word 'post-truth'

It seemed 2016 was not only a year of political destabilization, but one of fundamental philosophical destabilization too. Even objectively credible sources were undermined, as previously trustworthy scientific authorities were violently politicized. For every scientist saying one thing, there was sure to be someone who could be dredged up to counter that point, with no regard to their own qualifications, or lack thereof. Climate data, immunizations, and even crowd-size estimates were all perceived as being up for debate in some circles as scientists were accused of harboring political agendas. Information that contradicted the general consensus and went unreported was immediately claimed to have been suppressed.

We have now reached a point where it's fair to ask: how can a news story even be verified as factual when the sources of those facts are themselves in question?

Can AI help?

Many organizations are currently looking to technology to help separate truth from fiction more clearly. ClaimBuster from the University of Texas is an algorithm that analyzes sentences, in real time, for key structures or words that correspond with factual statements. The system does not rate the truth of statements, but it can identify whether a comment is check-worthy on a scale of 0 to 1.0. It's an early step towards a system that can identify when someone is making factual claims and then move to a process where those claims can be automatically verified.
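ClaimBuster itself is built on a trained classifier over many linguistic features, and its model is not reproduced here. The toy scorer below is only meant to illustrate the general idea of check-worthiness scoring: count a few surface cues that tend to appear in factual claims (numbers, statistics vocabulary, comparatives) and squash the count into a 0 to 1.0 score.

```python
import re

# Illustrative cues only; the real ClaimBuster uses a supervised model with far richer features.
FACTUAL_CUES = re.compile(
    r"\b(percent|million|billion|increased|decreased|highest|lowest|rate|average)\b", re.IGNORECASE)
NUMBERS = re.compile(r"\b\d[\d,.]*\b")

def check_worthiness(sentence: str) -> float:
    """Score how check-worthy a sentence looks, from 0 (opinion-like) to 1.0 (dense with factual cues)."""
    cues = len(FACTUAL_CUES.findall(sentence)) + len(NUMBERS.findall(sentence))
    return min(1.0, cues / 4)

print(check_worthiness("Crime increased 12 percent in 2016, the highest rate in a decade."))  # -> 1.0
print(check_worthiness("We are going to make this country great again."))                     # -> 0.0
```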

While AI systems such as these are undoubtedly useful for live fact-checking of verifiable statements, they fall short given that a great deal of news reporting involves truthful recounting of witnessed events or citation of newly compiled statistics.

Additionally, the interpretive quality of journalism cannot be easily replaced by artificial intelligence. In discussing how the news can be abstracted through the language used to recount it, Guardian reporter Martin Robbins uses the example of President Trump's recent executive order temporarily halting entry to the United States for persons from certain countries. Some news organizations have referred to the order as a "Muslim ban," and while the order itself specifically targets certain countries rather than a religion, Robbins explains, "To say that Trump's order is a ban on Muslims is technically false, but to suggest that it doesn't target Muslims is equally disingenuous."

Robbins describes our current climate as producing "a sort of quantum news," where stories can be both true and false at the same time. When agendas can be loaded into language so heavily, how can we trust any third party to verify stories for us? The danger is that political agendas can easily be slipped into articles that can still be considered truthful reporting. Could Facebook rule that a story about Trump's immigration order is fake news simply because it uses the phrase "Muslim ban"? What possible hope could an algorithm have of deciphering the intricacies of human language?

It's up to all of us

In a conversation with Twitter CEO Jack Dorsey in December 2016, controversial whistleblower Edward Snowden worried that these attempts by technology platforms to classify the truthfulness of content could lead to censorship.

"The problem of fake news isn't solved by hoping for a referee," Snowden said. "We have to exercise and spread the idea that critical thinking matters now more than ever, given the fact that lies seem to be getting very popular."

Regardless of our political leanings, we all need to approach information with a critical and engaged perspective. The issue of fake news is not necessarily a partisan one. In addition to fake news peddlers who are purely profit-driven, recent reports have signaled that fake news has begun to skew further left with President Trump in power, and that liberals, as well as conservatives, are to blame for perpetuating the spread of misinformation.

Critical thinking is an apolitical act and ultimately we are all responsible for the information we consume and share.

Fake news has existed as long as people have told stories, but with the internet dramatically democratizing the nature of information transmission, the onus is on all of us, now more than ever, to become smarter, more skeptical and more critical.

22 comments
VikashNaresh
I posted this on the 11th of November on FB after being bombarded with fake posts. I had a couple of beers.
News from social media is not always correct... People should be held accountable for false or manipulative reporting, or for sharing articles that do not come from an organisation registered with the proper authorities where it can be held accountable. News agencies can register with Facebook and their stories can be shared, and Facebook can charge these agencies an annual fee. If, for example, someone posts a fake news story and it gets reported negatively by any user, and the source is not registered with Facebook, the post will disappear. A post from a registered organisation, even if false, will remain, but that organisation can then be held accountable for the fake. This way people can share news from credible news organisations within Facebook, and Facebook can become a great source of news. A simple algorithm could be incorporated to cater for this. Of course I may have missed some details, so please comment.
Martin Winlow
This 'fake news' thing is beginning to take on an air of mass hysteria.
Say some guy you don't know very well comes up to you and says "OMG!!! Did you hear about..(blah, blah, blah)!!!" Do you just accept it and start tweeting frantically, trying to be the first person in your social group to 'have the inside gen'? Not if you aren't some sort of half-wit lemming you don't! You would check out the story and do your 'due diligence' lest it turns out to be complete twaddle and you end up looking a twit! So, why don't we all do this when we hear about some fantastic story that plops into our email in-tray, or that we read in some stupid 'lifestyle' magazine, or tweet, etc.?
Time to just all simmer down a bit and read some decent news sources - if there are any left. BBC? I'm not even sure about them anymore but at least they don't have to rely on click-bait and advertising revenue to exist - unlike possibly *any* other news source on the planet.
In short - get a grip!
Daishi
I am middle of the road politically, but I think censorship inherently favors left-leaning views. It's simply less offensive to say "I think we should be helping all people in the world in need of help" than it is to say "We should look out for our own citizens first even if that means neglecting people in other regions of the world". There are lots of people in both camps, but you rarely see celebrities and public figures come out in support of right-leaning views. The fact is that it's really hard to attack someone as a terrible person for saying "we should help all people in the world", but that isn't always possible to do and hard choices sometimes have to be made. The problem with censorship is that it tends to be enforced more heavily against more offensive views and less so against less offensive views, but who is right isn't always determined by whose views were less offensive. The idea that Mike Brown had his hands up and the officer just murdered him execution-style was the biggest fake news story in the US that I know of recently, and even after the facts were well known many on the left, including media, refused to call it false. There was evidence supporting the truth available from the beginning, but it was the truth that was censored on reddit and social media because people found it offensive. The less offensive opinion isn't necessarily the right one, and because of that censorship will ALWAYS be applied unequally.
VirtualGathis
@Martin Winlow - You are making a dangerous and incorrect assumption in your post here. You are assuming that the majority of people who use the internet, and social media in particular, exit the application and voluntarily perform a "fact check" or "due diligence". Outside of the academic world I've seen near zero people who do that. So the hysteria over fake news is not uncalled for, if for no other reason than to make the masses aware that they MUST fact check EVERYTHING they see. Then there are supposedly reputable sources completely fabricating events like "The Bowling Green Massacre" that never even happened. At that point how can an ordinary person even distinguish reality from fakers?
MBadgero
"Post-Naive Era" would be more appropriate than "Post-Trust Era". People have to learn for themselves how to judge honesty.
Bob
I think this article totally misses the real problem. This is a moral crisis. If you have definite beliefs about what is right and wrong, then you understand what real truth is. If you have no definite moral compass, then any story that benefits your cause is easily accepted as the truth and anything that offends you must be a lie and needs to be censored. Traditionally, the laws and truth were based on the "Ten Commandments" rather than man's individual interpretation of what is right and wrong. Once man forsakes the Ten Commandments, he is on a slippery slope downward towards the lowest common level. If lying is to man's advantage then he will lie. Without an absolute moral authority, there will be no solution.
TimStoltenburg
CNN and many other MSM outlets are the main propagators of "fake news". I'm a young guy, but I'm sure it's been going on a while. This past summer is when I first experienced first-hand how deep this thing goes. I was shadow-banned on Facebook for sharing r/the_donald, unreddit, and wikileaks information. I also had multiple accounts on reddit banned for sharing links to unreddit (it allows you to see what comments have been deleted). We have reddit working with an intelligence company called Stratfor, who was working for the Clinton Foundation. We had a thing called "correct the record" (CTR), a super PAC hired by Clinton (this isn't just about her: it's about people with money/power exerting influence on the news) to upvote/downvote, and promote or obscure news based on its favorability (or harm) to a particular organization/individual.
We had CNN colluding with the DNC, giving out questions to one side in a debate, and I'm sure more that hasn't been exposed. We had CNN tell us outright that "reading the wikileaks is illegal... [for citizens]," as well as planting their own crew to act as protesters.
We had most of the MSM (including NPR!! - whom I used to trust!) artfully, tactically publishing certain stories while holding back others. This summer was mind-blowing for me. We're bobbing around in an ocean of agendas... I now only trust the raw facts, and I attempt to be exposed to them from the everyday people that saw it/heard it/said it, etc., not through the filter of some agenda-(or money)-driven organization. Bunch of liars.
TracySloneckerFrazier
Having users flag messages as fake news is a terrible idea. There's nothing to keep people from flagging posts, political or otherwise, as "fake" just because they don't agree and want to try and get the article removed. Or maybe they honestly believe the articles are fake. An article recently published and shared on Facebook said the debate over GMOs is over... they are harmless and they have proof. Great... but how many "anti-GMOers" are going to read that and flag it as "fake"?
If you're going to let end users "flag" news as "fake", no designation should be noted on the post until it's determined to be such.
Doug Nutter
The solution may be to certify journalists in much the same way as other professionals are certified. As it is, readers have no way to judge the qualifications of the writer. They would have to be accountable to their peers in much the same way as lawyers and doctors. It's not perfect, but better than what we have.
Rustin Lee Haase
I'd be for the concept of fact-checkers on Facebook if I had the freedom to choose which fact-checker I subscribe to, or to even be one myself that others could subscribe to. I don't like the idea of anyone other than the user choosing what reports get out, either directly or by proxy, and certainly NOT a third party, not chosen by the user, who often has different ideas of what is true or good than the user does.