In a fight against the type of "offensive or clearly misleading" results that make up about 0.25 percent of daily search traffic, Google has outlined new efforts to stem the spread of fake news and other low-quality content, such as unexpectedly offensive material, hoaxes and baseless conspiracy theories.
The search giant's first line of defense against this uniquely 21st-century spread of misinformation is actually an old-fashioned one: human intelligence. You might assume search runs entirely on algorithms and AI these days, but Google employs human evaluators who perform searches, assess the results and send feedback to the company. Google has updated the guidelines for this team of Search Quality Raters.
Now, raters have more detailed instructions for identifying low-quality search results. The improved feedback from these human testers will then inform changes to Google's search algorithms, so false and misleading content appears lower in the results.
Google has also fine-tuned things under the hood, adjusting ranking signals specifically to demote problematic content. As is typical of the more behind-the-scenes aspects of its ranking system, Google did not detail the specifics of these changes, but it did cite the Holocaust-denial results that surfaced in December 2016 as an example of the kind of problem it hopes to resolve.
Lastly, there are new direct feedback tools through which users can flag problem content that appears in Google's Autocomplete and Featured Snippets features. These tools help you arrive at the content you're looking for more quickly, but because their content is generated automatically, it is sometimes problematic. A "Report inappropriate predictions" link now appears under the Google search box, and a "Feedback" option now appears below each Featured Snippet.
Google isn't the only major platform battling fake news – social media giant Facebook and other major tech companies are also funding solutions to curb the problem.
Source: Google
That being said, with every right comes a responsibility. The right of free speech comes with a responsibility to be truthful and to not cause harm; the torts of slander and libel and the crime of perjury are all based on this. Unfortunately, the courts are too slow and too expensive to be effective here. If we cannot let the courts decide based on the weight of evidence, then we are left with some form of censorship. Google is, after all, a company providing a search engine. It is not bound to return every instance of an item on the web, but one could argue that it owes a duty of care to everyone using its service. As a result, some censorship may even be considered responsible. As a famous writer put it, "I may not agree with you, but I will fight to protect your right to say it" (Evelyn Beatrice Hall, summarizing the teaching of Voltaire).
This doesn't bode well for Google.