Microsoft confirmed that its Bing search engine now runs on GPT-4 after OpenAI launched the latest version on Tuesday.
OpenAI says GPT-4 is more capable and accurate than its predecessor. The company spent the past six months making GPT-4 about 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than the previous version.
This version of the large-scale, multimodal model can accept image and text inputs and produce text outputs.
“The changes are pretty significant for marketing, which is probably more concerned with the ability to alter the ChatGPT personality and style,” said Andrew Frank, vice president and distinguished analyst at Gartner, pointing to the “steerability” feature. “Having a fixed tone and style, as opposed to representing a brand in a unique way.”
The tool’s rapid evolution can itself be a challenge, Frank said. It’s difficult for marketers to know the right time to jump in -- less because of the technology’s limitations than because of concerns about implementing something that might be obsolete in months.
There is a learning curve -- as well as other limitations and unknowns around latency and availability when building external-facing marketing applications that can scale. Those concerns should fade as the technology matures, but it is still in the very early stages of development.
Some Gartner clients have used ChatGPT to write copy for marketing and advertising materials, but that’s not really new.
“The next big leap will be organizations developing custom models on top of these tools and integrating them with internal systems such as personalized chat services, but we’re a little ahead of the market and early adoption,” Frank said. “We’re in the stage of fine-tuning existing models, but not yet the beneficiary of custom models that represent a brand’s personality and brand values.”
Eventually, he said, all of a company’s brand-describing assets -- including those in the digital asset-management system, CRM and other platforms -- could serve as resources a chatbot pulls from to make it more knowledgeable.
Since this version of the technology is multimodal -- meaning it can understand more than one type of input -- it moves past the textual limits placed on ChatGPT and GPT-3. GPT-4 can be given images and will process them to find relevant information. Someone could ask it to describe a picture of a scenario, and even go as far as asking it to explain why a joke works or doesn’t.
When Frank told ChatGPT a joke, the chatbot had an interesting and detailed reply. As of this writing, ChatGPT is based on GPT-3.5. The joke was as follows: "A CFO, CIO and CMO walk into a bar. The CFO says 'Okay boys who’s paying? It can’t be me because I have to approve the expense report.' The CIO says 'not me, I’m already over my limit. And don’t ask me to delete anything … last time I almost got caught.' The CMO says, 'No problem gents, I’ll just charge it to my media budget—as usual.' And he lays down his Titanium card dreaming of the lavish vacation he’ll take with the points he’s earning."
GPT’s response: "This joke uses a humorous anecdote to poke fun at the sometimes-lavish expenses associated with corporate positions. It sets up an absurd scenario where the CFO, CIO, and CMO are out at a bar and discussing who will pay for the drinks."
ChatGPT tries to predict the next thing in a sentence based on its training. It has been trained on so much text that it can look at each phrase in a sentence or paragraph and try to determine what comes next. It doesn’t understand the content it presents in a human sense, but it’s good at predicting what someone is likely to say and why.
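The next-word prediction described above can be illustrated, in vastly simplified form, with a toy bigram model -- counting which word tends to follow which in training text and predicting the most frequent follower. (This is only a sketch of the general idea; the training text, function names, and approach here are illustrative, not OpenAI's actual method, which uses neural networks over tokens rather than word counts.)

```python
from collections import Counter, defaultdict

# Made-up training text for illustration only.
training_text = (
    "the cfo approves the expense report "
    "the cmo charges the media budget "
    "the cio manages the data center"
)

# Count which word follows each word in the training text.
bigrams = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("expense"))  # -> report
```

A large language model does the same kind of "what comes next" guessing, but over billions of parameters and long stretches of context instead of single-word counts, which is why its continuations read as fluent rather than mechanical.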
Today, clients use ChatGPT for brainstorming, testing ideas and drafting first-pass assets, Frank said.