Google continues its mission to scrub misinformation from its search engine, YouTube and other platforms -- yet keeping disinformation out of its news feed has become increasingly difficult, even for algorithms.
Over the weekend, Google released a white paper detailing how it fights disinformation across Google Search, News, and YouTube, as well as its advertising platform, Google Ads.
The white paper, released at the Munich Security Conference, outlines Google’s stance on misinformation. It also explains how the company goes beyond work with its products by supporting “a healthy journalistic system,” as well as partnering with civil society and researchers to stay one step ahead of risks.
“Misinformation,” “disinformation,” and “fake news” are all terms used to describe inaccurate news. But Google draws a distinction: it is one thing to unknowingly disseminate incorrect information about an issue, and another to purposefully spread inaccurate information in the hope that others believe it is true, or to create discord in society.
The efforts, according to Google, are intended to stop malicious actors from using its algorithms to spread disinformation, and to ensure that people receive quality information, along with the additional context around that information needed to make educated decisions.
Ranking algorithms do much of the work of elevating authoritative, high-quality information across Google's platforms. "For most searches that could potentially surface misleading information, there is high-quality information that our ranking algorithms can detect and elevate," per the white paper. "When we succeed in surfacing high-quality results, lower quality or outright malicious results (such as disinformation or otherwise deceptive pages) are relegated to less visible positions in Search or News, letting users begin their journey by browsing more reliable sources."
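The demotion the white paper describes can be pictured, in greatly simplified form, as sorting candidate pages by a quality score so that lower-quality sources fall to less visible positions. The sketch below is purely illustrative -- the scores, fields, and function are hypothetical, not Google's actual ranking system:

```python
# Toy illustration of quality-based ranking: higher-scored pages surface first,
# while lower-quality pages are relegated to less visible positions.
# The quality scores here are hypothetical, not Google's real signals.

def rank_results(candidates):
    """Sort candidate pages so higher-quality sources appear first."""
    return sorted(candidates, key=lambda page: page["quality"], reverse=True)

results = rank_results([
    {"url": "deceptive-site.example", "quality": 0.2},
    {"url": "reliable-news.example", "quality": 0.9},
    {"url": "mixed-quality.example", "quality": 0.5},
])
print([page["url"] for page in results])
# → ['reliable-news.example', 'mixed-quality.example', 'deceptive-site.example']
```

In practice such a score would be derived from many signals rather than a single number, but the effect is the same: deceptive pages are not necessarily removed, just pushed down the list.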
Creators of disinformation constantly explore new ways to spread their messages and bypass the defenses set by online services. One example is AI-generated, photorealistic synthetic audio or video content, known as “synthetic media” or, more colloquially, “deepfakes.” While the technology has legitimate applications -- for instance, for people who are speech- or reading-impaired -- Google says it raises concerns when used in disinformation campaigns and for other malicious purposes.
So, Google and YouTube are investing in research to understand how AI might help detect such synthetic content as it emerges.
Google, along with Microsoft, believes the key to providing users with accurate information is offering a diverse set of perspectives. Both companies began delivering query results with diverse perspectives last year, enabling searchers to form their own opinions based on the range of content that appears in results. Sponsored ads have also become more clearly visible.