Commentary

Meta, Where's The AI Video Tech To Identify Hateful Trigger Words In Videos?

Dear Mark Zuckerberg,

Happy New Year.

Meta and its very creative and intelligent developers this year have an opportunity to change humanity as the world -- especially the United States -- undergoes a heightened period of hatred and intolerance.

The 42-year-old terrorist who pledged his loyalty to ISIS before driving a rented pickup truck into a crowd of revelers on Bourbon Street, killing 14 people and injuring dozens more, posted five videos on Facebook in the hours leading up to the attack.

Christopher Raia, deputy assistant director of the FBI's Counterterrorism Division, explained all this during a news conference Thursday. What he didn't talk about was the sophisticated technology Meta developers could create to identify words in posts and videos that would flag and trigger takedowns of content.

Meta, in a September blog post, described tests of a Meta AI translation tool that automatically translates the audio of Reels from a variety of languages. Small tests were run on Instagram and Facebook, translating some creators' videos from Latin America and the U.S. into English and Spanish.


Google Gemini tells me AI cannot identify every word in a Facebook video before it runs, but I have faith in your development team that it can be done.

Gemini gives several reasons for this, including real-time processing limitations: analyzing entire audio and video streams for every word in real time is computationally very demanding, and even the most powerful AI has limitations.

Some of the technical challenges include speech-recognition accuracy in noisy environments or with multiple speakers, which remains an ongoing problem. And understanding the nuances of language, including slang, accents, and humor, requires sophisticated AI that is still under development.

OpenAI’s Sora can create videos from text, but it’s not clear why it cannot identify trigger words.

The advertising industry has technology that uses block lists of words to prevent ads from serving on specific publisher websites. When someone shares a video, photo, or post on Facebook, they grant Meta rights to the content, so invasion of privacy is not a problem.
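The block-list technique the ad industry uses can be sketched in a few lines. This is a minimal illustration, not how any real brand-safety product works; the word list and transcript below are invented placeholders, and production systems layer context analysis on top of this kind of plain matching:

```python
import re

def flag_terms(transcript: str, blocklist: set[str]) -> list[str]:
    """Return block-listed terms found as whole words in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return sorted(set(words) & blocklist)

# Illustrative placeholders -- not a real moderation list.
BLOCKLIST = {"attack", "pledge"}
sample = "He said he would attack the parade."
print(flag_terms(sample, BLOCKLIST))  # -> ['attack']
```

The hard part, as the comments below argue, is not the matching itself but deciding when a matched word is actually harmful in context.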

It would really help humanity if Meta could form an alliance with others in the industry focused on AI to develop some type of technology that could identify these words.

I'm sure Google DeepMind, Microsoft, OpenAI, Anthropic, and many others would work with Meta to develop a forum to improve the industry.

There are plenty of AI video tools that can generate content, but I couldn't find one that allows the platform to identify trigger words in the content of the video.

4 comments about "Meta, Where's The AI Video Tech To Identify Hateful Trigger Words In Videos?".
  1. Dan Ciccone from STACKED Entertainment, January 3, 2025 at 1:46 p.m.

    It is amazing how many outlets like MP make it sound like AI is a magical tool to right all the wrongs in the world. It's been 25 years of tech in the internet space, and if we've learned anything, it's that context matters: keywords don't mean anything without context, and the entire industry has been struggling with "context" since its inception.


    Also - you clearly do not understand technology and how it interacts with media.  Facebook users alone post more than 2.5 BILLION pieces of content every day.  Do you have any idea what kind of sheer energy/power and servers are needed to post this content, let alone scrub it all, put everything into context, and then try to decide what is good and what is bad?


    It's no different than blaming gun manufacturers for deaths - focus on the shooter and what motivates the behavior - the weapon is a tool, not the instigation or motivation of the behavior.

  2. Laurie Sullivan from lauriesullivan, January 3, 2025 at 2:10 p.m.

    Dan, you clearly don't understand the power of AI and what it can do. I used to write about semiconductors and processing, so I am fully aware of the processing power required. It can be done. It would take time and cooperation. It is different than blaming gun manufacturers for deaths, which I would never do, because I'm not blaming Meta or the industry. AI can detect sentiment in ad serving technology. 2.5 billion is nothing -- wait another year.  AI can detect sentiment in videos and audio. I’m not an engineer, but I believe it can be done with the correct technology and power. You should apologize for your rude behavior. 

  3. Dan Ciccone from STACKED Entertainment replied, January 4, 2025 at 6:56 a.m.

    Laurie, I've been involved in media and tech for two decades.  The industry has always struggled with targeting keywords and contextual content and history has also shown that many censoring applications have actually promoted falsehoods while subverting truths. 


    All of the energy required to power your "solution" comes at a great expense, and AI is still highly flawed and only as good as the information that feeds it.


    Finally, what you may deem inappropriate or offensive may be perfectly acceptable to someone else. And I find no need to apologize for having a different opinion and pointing out that you're ignoring two major problems with implementing AI at the scale required in your piece. Power and cost alone are major inhibiting factors.

  4. Laurie Sullivan from lauriesullivan, January 4, 2025 at 8:01 a.m.

    Dan, thank you for reading my piece and sharing your opinion. Cost, performance and power will all fall in line. I have faith, considering the amount of money these AI companies are investing in power plants, processing facilities, and semiconductor design and production.
