Amazon Web Services, The Sleeping Generative AI Giant

Generative artificial intelligence (GAI) has changed the way content is created and consumed, in places and ways that did not seem possible just a year ago. Now the focus is turning to making that content more creative.

Jon Williams, the global head of agency business development and solutions at AWS, develops strategies that help partners and customers make use of AWS services, including its advertising and marketing technologies.

Williams has more than 20 years of experience in advertising and marketing at companies such as Oracle Data Cloud, Microsoft, LinkedIn, and Snap, which enables him to support clients.

Media Daily News (MDN) caught up with Williams to talk about AWS, artificial intelligence (AI), the mistakes marketers sometimes make, services and costs.

MDN:  What is the biggest mistake people make about using GAI for advertising and marketing?

Williams:  Data is your differentiator. Before customers embark on their generative AI journey, they need to have a strong data foundation.

Another key consideration is vector embeddings: numerical representations of data in which words that are related in context have vectors that are closer together, which helps machines understand the similarities and differences between words. This ensures that your company's data will work well when you implement generative AI.
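The "closer together" idea above is usually measured with cosine similarity. A minimal sketch, using toy 4-dimensional vectors (real embedding models produce hundreds or thousands of dimensions, and the values here are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: words used in similar contexts get similar vectors.
embeddings = {
    "car":   [0.9, 0.1, 0.0, 0.2],
    "truck": [0.8, 0.2, 0.1, 0.3],
    "tulip": [0.0, 0.9, 0.8, 0.1],
}

print(cosine_similarity(embeddings["car"], embeddings["truck"]))  # high
print(cosine_similarity(embeddings["car"], embeddings["tulip"]))  # low
```

Searching "your company's data" with embeddings then amounts to embedding the query and returning the records whose vectors score highest.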

For example, AWS released the vector engine for OpenSearch Serverless, so developers can store vectors alongside business data and text, making it possible to query embeddings, filter on metadata, and perform text search from a single API call.
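A sketch of what that single call can look like. The request body below follows OpenSearch's documented `knn` query clause; the field names (`ad_copy_vector`, `campaign`) are hypothetical examples, not part of any AWS API:

```python
def build_knn_query(vector, k=5, campaign=None):
    """Build one OpenSearch request body combining vector search with a metadata filter."""
    knn_clause = {"knn": {"ad_copy_vector": {"vector": vector, "k": k}}}
    if campaign is None:
        query = knn_clause
    else:
        # Combine the nearest-neighbor match with a metadata filter in one query.
        query = {"bool": {"must": [knn_clause],
                          "filter": [{"term": {"campaign": campaign}}]}}
    return {"size": k, "query": query}

# One body covers the embedding lookup and the metadata filter together.
body = build_knn_query([0.12, -0.4, 0.88], k=3, campaign="spring_launch")
```

The body would then be sent to the index's `_search` endpoint via an OpenSearch client.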

Some common problems we see across financial services, retail, and advertising include having to perform additional work to match and link related data records because records are often siloed across applications, channels, and data stores.

To help solve this common issue, AWS released Entity Resolution, which helps companies easily match and link related data records in minutes instead of months. It is vital to ensure that these data foundations are in place as customers embark on their generative AI journey.
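To make the matching-and-linking problem concrete, here is a deliberately crude, rule-based sketch of what resolving siloed records involves (a managed service like AWS Entity Resolution uses far more sophisticated matching; the records and the email-only match key here are illustrative assumptions):

```python
def normalize(record):
    """Crude normalization: build a match key from a cleaned-up email address."""
    return (record["email"].strip().lower(),)

def resolve_entities(records):
    """Group records from different silos that share the same normalized match key."""
    groups = {}
    for rec in records:
        groups.setdefault(normalize(rec), []).append(rec)
    return list(groups.values())

# The same person appears in two silos with slightly different formatting.
crm = {"email": "Jane.Doe@Example.com ", "source": "crm"}
ads = {"email": "jane.doe@example.com", "source": "ad_platform"}
web = {"email": "j.smith@example.com", "source": "web"}

clusters = resolve_entities([crm, ads, web])  # two clusters: Jane (2 records) and J. Smith
```

Real-world matching must also handle typos, name variants, and records with no shared key at all, which is why hand-built solutions tend to take months.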

MDN:  What is the biggest misunderstanding?

Williams:  Many see big potential in generative AI alleviating tedious, repetitive tasks without the need for human writers, creatives, designers, or program and campaign managers. The question is to what extent this technology can operate on its own, and what the right level of human input and evaluation is.

Enterprises, industry bodies, and governments will need to partner to calibrate and help soften the human impact of automation. This will likely be through the creation of new roles and capabilities, employee re-skilling and refocusing toward high-value production and operations.

Another factor will be guidelines and/or regulations to help balance human input and operation of models. Human oversight and accountability will still be necessary to fully capitalize on opportunities ahead.

MDN:  How should marketers think about creative content development when using GAI?

Williams:  Generative AI can help marketers who are looking for fresh and innovative ways to engage with their target audience or save on creative concept development time.

Since this technology uses deep-learning algorithms to create unique ideas versus relying solely on human creativity, it can generate new ideas without any external influence apart from a simple prompt.

This allows businesses to create and test different variations of content across nearly all components to see which ones perform best, resulting in high-quality, engaging, and contextually relevant imagery and copy for blog posts, social media updates, and email campaigns.

For example, a car company might use generative AI to create hundreds of different ad variations featuring different cars, backgrounds, and music. An AI algorithm would identify which ads perform the best, allowing the company to invest more in those ads and improve their overall marketing effectiveness.
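The "identify which ads perform best" step in the example above can be as simple as ranking variations by click-through rate and shifting budget toward the winners. A minimal sketch with hypothetical performance numbers:

```python
# Hypothetical performance data for three generated ad variations.
variations = [
    {"id": "suv_beach_jazz",   "impressions": 10000, "clicks": 420},
    {"id": "sedan_city_pop",   "impressions": 10000, "clicks": 150},
    {"id": "ev_mountain_rock", "impressions": 10000, "clicks": 610},
]

def rank_by_ctr(ads):
    """Rank ad variations by click-through rate, best first."""
    return sorted(ads, key=lambda a: a["clicks"] / a["impressions"], reverse=True)

best = rank_by_ctr(variations)[0]  # the variation to invest more in
```

In practice this selection is often done with multi-armed bandit algorithms that shift spend continuously rather than after a fixed test window.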

To do this, a company would need to train what we call a foundation model, or FM. For most, the time, resources, and cost of building FMs with millions -- maybe billions -- of data points is not feasible.

That’s why Omnicom is creating new foundation models with AWS that help automate activities such as developing creative briefs, media plans, ad creative, audience segmentation, and performance measurement.

MDN:  What should marketers consider when trying to enhance the customer journey using GAI?

Williams:  There are many generative AI use cases after the customer accepts an agreement, including onboarding, up-selling, and retention.

For larger enterprises that engage in deal negotiations, generative AI can provide real-time negotiation guidance and predictive insights as a deal progresses, based on a comprehensive analysis of historical transaction data, customer behavior, and competitive pricing.

When a new customer joins, generative AI can welcome them with personalized training content, highlighting relevant best practices.

The combination of generative AI with machine learning (ML) can also offer real-time next-step recommendations, marketing lead scoring, customer service routing, fraud detection, cross-selling, and continuous churn modeling based on usage trends and customer behavior. It is useful in the customer journey because it can help identify critical touchpoints and drive customer engagement.

From a business-to-consumer (B2C) perspective, engagement models are changing across industries. While customers still desire traditional face-to-face engagement in some instances, there is a growing preference for online, self-service ordering and reordering.

Omnichannel generative AI optimization is fundamental to jumpstart top-line performance, giving sales, customer experiences, and marketing teams the right analytics and customer insights to capture demand.

For businesses, generative AI can help create extremely sophisticated chatbots that answer customer questions quickly and efficiently, providing a better experience and improving overall customer satisfaction. For example, an e-commerce website might use a chatbot to answer customer questions about shipping and returns.

The chatbot would use generative AI to analyze the customer's question and provide a helpful response in real time. This delivers seamless customer support 24 hours a day -- along with detailed, actionable records of customers' most common queries and pain points.

MDN:  How can marketers best understand customer sentiment and optimize operations with GAI?

Williams:  I have two suggestions for marketers to further understand and optimize with generative AI.

Sentiment analysis: Beyond automation of customer listening, generative AI can aid in sentiment analysis by creating synthetic text data that has been labeled with different sentiments such as positive, negative, or neutral.

This synthetic data can be used to train deep-learning models to analyze real-world text data for sentiment. It can generate content that is intentionally designed to convey or reflect a specific sentiment that could help shape public opinion for marketing campaigns. This approach can address the issue of data imbalance and human bias in sentiment analysis of user opinions, in various areas like customer service.
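One common use of synthetic labeled text, as described above, is balancing a skewed training set: if real reviews are mostly positive, extra labeled negatives can be generated. A toy sketch (the templates and product name are invented for illustration; production systems would generate far more varied text with an LLM):

```python
import random

# Hypothetical sentiment templates; an LLM would produce much richer variations.
TEMPLATES = {
    "positive": ["I love the {p}", "The {p} works great", "Fantastic {p}, very happy"],
    "negative": ["I hate the {p}", "The {p} broke immediately", "Terrible {p}, very upset"],
    "neutral":  ["I received the {p}", "The {p} arrived on Tuesday"],
}

def synthesize(label, product, n, seed=0):
    """Generate n synthetic reviews, each paired with its sentiment label."""
    rng = random.Random(seed)
    return [(rng.choice(TEMPLATES[label]).format(p=product), label) for _ in range(n)]

# Balance a skewed dataset: real data has few negatives, so synthesize extras.
dataset = synthesize("positive", "headphones", 5) + synthesize("negative", "headphones", 5)
```

The resulting labeled pairs can then be fed to a conventional classifier alongside the real data.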

Automation is key. Generative AI has the potential to tackle laborious admin work -- proofreading marketing collateral, updating databases, managing ad campaigns, analyzing customer reviews, and monitoring social-media platforms and forums to gauge public opinion and sentiment about a brand, product, or service -- far more quickly and effectively.

Marketers can use these tools to read all customer reviews and questions related to a product or service, and provide a top-level summary of sentiment, to enable leaders to make decisions ranging from customer service policies to research and development.

Mundane tasks that have traditionally been done by hand not only go away; the freed-up capacity can yield strategic decisions that may have been missed before, such as pricing optimization, inventory forecasting, and effective promotional strategies.

MDN:  What are the best use cases for advertising and marketing using GAI?

Williams:  There are good use cases in creative and content generation, streamlining operations, and assisting with end-to-end customer journeys.

Another great use case is contextual advertising. It helps brands make their customers' experience more relevant and personalized, and improves the return on their marketing investment. Brands and agencies now have the opportunity to take it to the next level by leveraging generative AI capabilities to tailor their ad creative, in real time, to best fit the environment where it is served.

Marketers and creatives can combine brand, audience, and campaign-specific goal prompts with the surrounding advert context -- for example, the subtext of a page or a slice of a video -- and dynamically get nearly limitless iterations of visual, copy, and audio creative.

MDN:  What types of services does AWS offer when it comes to GAI for content, personalization, and the customer journey?

Williams:  As previously mentioned, there is the vector engine for OpenSearch Serverless. This tool helps developers store vectors alongside business data and text, making it easier to query embeddings, filter on metadata, and perform text search from a single API call.

Before starting on the generative AI journey, customers must make sure they have a strong data foundation, which is where vector embeddings come into play. These ensure that generative AI will work well with your company’s data.

I have mentioned AWS Entity Resolution, which helps set up entity resolution workflows in minutes instead of months, helping companies match, link, and analyze related records more quickly without needing to build and maintain complex custom solutions. This saves customers time and cost.

There is also Amazon Bedrock, which provides wide access to pre-trained models that can be customized with your own data. It allows data to be kept private, and leverages the power of the cloud to deliver capabilities securely and at scale.

Companies don’t have to think about model hosting, training, or monitoring and can instead focus on the outcomes they are driving toward. All data in Bedrock is encrypted, and customer data is never used to train the original base model. You can configure Virtual Private Cloud (VPC) settings to access Bedrock APIs and provide model fine-tuning data in a secure manner.

Bedrock provides access to pre-trained FMs from a variety of providers, including multilingual LLMs from AI21 Labs for text generation, and models from Stability AI that generate unique, realistic, high-quality images, art, logos, and designs from language prompts -- drafts or production-ready images that resonate with your audiences.

Users can create stunning visuals and realistic aesthetics with Stable Diffusion XL. Anthropic's Claude 2 is an LLM for thoughtful dialogue, content creation, complex reasoning, creativity, and coding. Cohere offers a text-generation model for business applications and an embedding model for search.

Another offering is Amazon Titan FMs. These are two new out-of-the-box models -- one for text generation, to create a blog post for example, and one for vector embeddings, which translates text into a numerical representation for semantic understanding of the text, useful for search.
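A sketch of calling the Titan embeddings model through Bedrock with boto3. The model ID and the JSON payload shape follow AWS's published examples at the time of writing; treat them as assumptions and check the current Bedrock documentation before relying on them:

```python
import json

MODEL_ID = "amazon.titan-embed-text-v1"  # assumed Titan embeddings model ID

def build_embedding_request(text):
    """Titan's embedding model takes a single 'inputText' field in its JSON body."""
    return json.dumps({"inputText": text})

def embed(bedrock_runtime, text):
    """Invoke the model and return the embedding vector from the response body."""
    response = bedrock_runtime.invoke_model(
        modelId=MODEL_ID,
        contentType="application/json",
        accept="application/json",
        body=build_embedding_request(text),
    )
    return json.loads(response["body"].read())["embedding"]

# Usage (requires AWS credentials and Bedrock model access):
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   vector = embed(client, "running shoes for trail marathons")
```

The returned vector can then be stored in a vector index such as OpenSearch Serverless for semantic search.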

Agents for Amazon Bedrock configure your FMs to automatically break down and orchestrate tasks without any manual code. The agent securely connects to a company’s data sources through a simple API, automatically converts the data into a machine-readable format, and augments the user’s request with relevant information to generate a more accurate response.

Agents for Bedrock take action to fulfill the user's request by executing API calls on your behalf. Customers don't have to worry about complex systems integration or infrastructure provisioning; as a fully managed capability, Agents for Bedrock takes care of all of this for them.

AWS also offers AI services such as Amazon Personalize, which uses ML to make it easier to integrate personalized recommendations into existing websites, applications, email marketing systems, and more.

MDN:  How long does it take and what are the pieces needed to put a cohesive GAI strategy in place?

Williams:  It depends on the company, its resources, and the foundation that is already in place. What I can tell you is that a number of our customers asked us for guidance.

That’s where the AWS Generative AI Innovation Center comes in. We’ve assembled a team of strategists, data scientists, engineers and solutions architects who will work alongside customers throughout the process of building generative AI systems.

AWS works with customers using Amazon's working-backwards process. First, we work with customers to identify the business opportunities and the potential generative AI use cases. Then our team helps plan and develop proofs of concept, and we help them prepare for production launch at scale.

Our goal is to help customers understand how to select the right GAI use cases to experiment with, improve accuracy in foundation and large language models, and strategize how to fine-tune and customize these models for use cases.

The AWS GAI Innovation Center will typically work with select customers at no additional cost, including ideation, demo creation, and proof-of-concept (POC) development. It also helps companies responsibly apply generative technologies by developing methods to detect biases, explain model predictions, and monitor and implement human review of the generative AI's output.

In developing our AI models, we detect and remove harmful content from the data that customers may provide for customization. We're going to reject inappropriate content in the user input, and we're going to filter outputs containing inappropriate content, which would include hate speech, profanity, violence, etc.

The other area that is particularly important for customers to consider is making sure the customization enabled by the models is secure. So, in Amazon Bedrock, none of our customers' data is used to train the underlying models.

The data is encrypted and doesn't leave the customer's VPC. It's important to make sure customers can trust that their data will remain private and confidential from anyone else, particularly their competitors.

MDN:  Can you provide any type of cost estimate for that strategy? Or provide an example?

Williams:  As you start to think about the future when FMs are deployed at scale in your applications across your organization, most costs will be associated with running the models and doing inference.

While you typically train a model periodically, a production application can be constantly generating predictions (the inferences), potentially generating millions per hour. Because we knew that most of the future ML costs would come from running inferences, we prioritized inference-optimized silicon when we started investing in new chips a few years ago. Inf2 instances powered by AWS Inferentia2 are optimized specifically for large-scale generative AI applications with models containing hundreds of billions of parameters.

Inf2 instances deliver up to 4x higher throughput and up to 10x lower latency compared to the prior-generation Inferentia-based instances. They have ultra-high-speed connectivity between accelerators to support large-scale distributed inference. These capabilities drive up to 40% better inference price performance than other comparable Amazon EC2 instances and the lowest cost for inference in the cloud.

AWS Trainium provides the highest-performance, most energy-efficient, and most cost-effective training in Amazon EC2 for large-scale deep-learning models. Trn1 instances, powered by Trainium, can deliver up to 50% savings on training costs over any other EC2 instance.

For developer productivity, AWS created CodeWhisperer -- a giant leap forward. It is a coding companion that uses FMs under the hood to improve developer productivity by generating code recommendations based on natural-language comments and prior code in the integrated development environment.

Coders using CodeWhisperer completed tasks 57% faster than those who did not, and completed tasks successfully 27% more frequently.

This frees up valuable time from “heavy lifting” coding so companies can innovate faster.
