An executive order on artificial intelligence (AI) that U.S. President Joe Biden unveiled Monday requires safety assessments, equity and civil-rights guidance, and research on AI's impact on the labor market.
The advertising industry has poured billions of dollars into automating everything from ad serving to bidding and creative work, hoping to streamline operations and reshuffle job responsibilities.
Marketers have become enthusiastic about generative AI (GAI): 14% have already invested in the tools.
Another 63% plan to do so in the next 24 months, but risk-management challenges loom. Despite the enthusiasm, only slightly more than half of respondents see greater reward than risk in GAI, according to Gartner's 2023 Martech Survey Research Note.
Google stepped up its investment in AI last week, committing $2 billion to the startup Anthropic and fueling the race to align with emerging companies in search of the next big breakthrough. The initial investment of $500 million is a precursor to another $1.5 billion over time, according to one report.
Amazon has also invested in Anthropic, the developer of the Claude 2 chatbot and a rival to OpenAI's ChatGPT founded in 2021. Slack, Notion, and Quora use its technology.
Agencies such as WPP Group have created partnerships with chip makers to advance advertising through AI. Earlier this year, WPP partnered with NVIDIA to develop a "content engine that harnesses NVIDIA Omniverse."
The engine connects an ecosystem of 3D design, manufacturing and creative supply-chain tools, including those from Adobe and Getty Images, allowing WPP’s artists and designers to integrate 3D content creation with generative AI.
While technology companies and advertising agencies have been their own watchdogs, the White House has stepped in by working with major developers on a series of voluntary commitments to have systems red-teamed by third parties before release. Red-teaming is intended to surface risks that traditional testing may miss.
The executive order took months to create and reflects White House concerns that, left unchecked, the technology could pose significant risks to national security, the economy, public health and privacy.
The move also aims to protect consumer privacy, eliminate or reduce bias, and evaluate how agencies collect and use commercially available consumer information procured from data brokers. It should also ensure responsible use of AI by the U.S. government.
About 15 U.S.-based technology companies — Google, Microsoft, OpenAI and others — have agreed to implement voluntary AI safety commitments as a step toward regulation for the technology’s development.
The announcement comes days before U.S. Vice President Kamala Harris is expected to attend a global summit on AI in London.
In August, the White House challenged thousands of hackers and security researchers to outsmart top generative AI models from companies such as OpenAI, Google, Microsoft, Meta and Nvidia. The AI hackathon ran as part of DEF CON, where participants had less than an hour to trick the chatbots into doing things they are not supposed to do, such as giving potentially dangerous instructions or generating fake news.
The White House broke the executive order into eight parts, including the creation of new safety and security standards for AI.
The groundwork for these measures has been in the works for months. In January, the National Artificial Intelligence Research Resource (NAIRR) Task Force released its final report on creating a national research infrastructure that would broaden access to the resources essential to AI research and development.