Google and Microsoft, two tech giants that operate massive search engines and advertising platforms, have updated the way they deal with deepfakes.
Google is protecting Search users against deepfakes by updating its ranking system so that such content does not rank high in Google Search, and by making it easier to remove deepfakes from the results.
When there is a higher risk of fake content appearing in Google Search results, the company will use "ranking updates" to lower the chances of seeing explicit fake content. When searching for someone's name, Google Search will "surface high-quality, non-explicit content" such as news articles, when available.
"The updates we’ve made this year have reduced exposure to explicit image results on these types of queries by over 70%," Emma Higham, product manager at Google, wrote in a post that outlines the changes.
Distinguishing explicit content that is real and consensual from explicit fake content will become increasingly important.
"While differentiating between this content is a technical challenge for search engines, we're making ongoing improvements to better surface legitimate content and downrank explicit fake content," Higham wrote. "If a site has a lot of pages that we've removed from Search under our policies, that's a pretty strong signal that it's not a high-quality site, and we should factor that into how we rank other pages from that site."
Google will demote sites that have received a high volume of removals for fake explicit imagery. Tests show this approach works well for Google.
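Google has not published how this demotion signal is implemented, but the idea Higham describes can be sketched in broad strokes. The snippet below is a minimal, hypothetical illustration of a removal-based demotion heuristic, with made-up field names and thresholds; it is not Google's actual ranking code.

```python
# Hypothetical sketch of a removal-based demotion signal (not Google's code).
# Assumption: for each site we know how many pages are indexed and how many
# were removed under explicit-fake-imagery policies.

from dataclasses import dataclass

@dataclass
class SiteStats:
    domain: str
    indexed_pages: int
    policy_removals: int  # pages removed for fake explicit imagery

def demotion_factor(site: SiteStats, threshold: float = 0.05) -> float:
    """Return a multiplier applied to a site's ranking score.

    If a large share of a site's pages were removed under policy,
    its remaining pages are downranked (multiplier below 1.0).
    """
    if site.indexed_pages == 0:
        return 1.0
    removal_rate = site.policy_removals / site.indexed_pages
    if removal_rate < threshold:
        return 1.0  # sites below the threshold are unaffected
    # Scale the penalty with how far the site exceeds the threshold.
    return max(0.1, 1.0 - removal_rate)

# Example: a site with 1,000 indexed pages and 300 policy removals
# would have its remaining pages' scores multiplied by 0.7.
print(demotion_factor(SiteStats("example.com", 1_000, 300)))
```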
Google's announcements also covered several image-related search features, including expansions of About this image, Circle to Search, and Google Lens.
For many years, people have been able to request the removal of non-consensual fake explicit imagery from Search under Google's policies. A new system will make the process easier, the company said.
Now, when someone requests the removal of explicit non-consensual fake content depicting them from Search, Google's systems will filter all explicit results on similar searches about them.
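Google has not described how that filtering works internally; the sketch below only illustrates the idea of extending one approved removal request to similar queries about the same person, using hypothetical helper names and data.

```python
# Hypothetical illustration of extending an approved removal request to
# similar queries about the same person (not Google's implementation).

APPROVED_REMOVALS = {"jane doe"}  # people with an approved removal request

def normalize(query: str) -> str:
    """Lowercase and collapse whitespace so query variants compare cleanly."""
    return " ".join(query.lower().split())

def should_filter_explicit(query: str) -> bool:
    """Filter explicit results when the query targets a person who has an
    approved removal request, even if the wording differs slightly."""
    q = normalize(query)
    return any(name in q for name in APPROVED_REMOVALS)

# Explicit results are filtered for related searches about the same person.
print(should_filter_explicit("Jane  Doe photos"))    # True
print(should_filter_explicit("weather in Seattle"))  # False
```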
While Google addressed search engines specifically, Microsoft yesterday said it will protect the public from abusive AI-generated deepfake content in other ways. "As swiftly as AI technology has become a tool, it has become a weapon," Brad Smith, Microsoft vice chairman and president, wrote in a post.
Along with that post, Smith shared a 42-page document describing how Microsoft defines the challenge and laying out a comprehensive set of ideas, including endorsements of the work and policies of others.
A strong safety architecture needs to be applied at the AI platform, model, and application levels. Content filtering has been integrated within the Azure OpenAI Service.
Microsoft has been using watermarking and fingerprinting, and is investing in addressing abusive AI-generated content by building on existing frameworks, policies, and partnerships that support ongoing efforts.
The company partnered with NewsGuard to evaluate Microsoft Designer, information it shares in its 2024 Responsible AI Transparency Report, which details the steps taken to map and measure risks, and then manage or mitigate the identified risks at the platform or application level.
Microsoft and OpenAI in May announced the launch of a $2 million Societal Resilience Fund to further AI education and literacy among voters and vulnerable communities.
The next step, per Microsoft, is for Congress to pass the bipartisan Protect Elections from Deceptive AI Act, sponsored by Senators Klobuchar, Hawley, Coons and Collins. This legislation prohibits the use of AI to generate materially deceptive content falsely depicting federal candidates in political ads to influence federal elections, with important exceptions for parody, satire, and the use of AI-generated content by newsrooms.
The report "Protecting the Public from Abusive AI-Generated Content," Smith wrote, is the basis for the most important things the U.S. can do--pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.