What Marketers Need to Know about ChatGPT, the Exciting New Natural Language Processing Tool
Just like people across the globe, we’ve been kicking the tires on ChatGPT (a chatbot built by OpenAI) and other generative artificial intelligence (AI) programs. These AI tools, which were deployed in experimental phases to collect feedback and correct mistakes, have seen explosive growth in users.
Generative AI is quickly becoming a societal mainstay and warrants ongoing attention and curiosity, including in the realm of healthcare marketing. ChatGPT uses a conversational “dialogue format” to answer questions, and as marketers we’ve been experimenting with its copywriting capabilities and testing its knowledge on topics where we’ve developed highly specific expertise.
The team at Media Logic has been exploring and experimenting with generative AI tools over the past year. While we are excited about how these tools can enhance what we do and how we do it, we are paying close attention to any potential issues with their use.
What does ChatGPT offer us as marketers?
As CNET reports, “We’re all obsessed with the mind-blowing ChatGPT AI chatbot.” ChatGPT and other generative AI programs are powerful tools and can be a resource for marketing teams. The programs can generate human-like responses and engage in back-and-forth “conversation” quickly and efficiently. ChatGPT can quickly distill complex topics, draft copy, answer broad or narrow questions and much more.
ChatGPT can author a business proposal, draft meal plans for specific dietary needs, write new blog articles or edit existing ones and even help you draft a difficult text. The program’s abilities are far-reaching and impressive, and we encourage you to try it out for yourself. That said, using the program has highlighted the value of the human side of the equation.
Ultimately, we agree with Marketing Brew when it says, “The need for human judgment doesn’t seem to be going away anytime soon.” The direction ChatGPT itself described for the BBC makes sense to us: “AI systems like myself can help writers by providing suggestions and ideas, but ultimately it is up to the human writer to create the final product.” Generative AI programs can open new creative avenues and be a substantial time saver for users. But, like any tool, each is only as powerful as the person using it, and there are several considerations to keep in mind.
Key considerations if using AI or ChatGPT in marketing communications
There are now thousands of different AI programs, with more launching every day. Each offers potential opportunities, but many also come with limitations and areas of concern. Here are a few considerations to keep in mind when looking at the broad swath of generative AI programs.
1. The information may be out of date or inaccurate.
For example, when asked if the Public Health Emergency has ended, ChatGPT’s response was as follows: “As an AI language model, I don’t have access to real-time information or news updates, but as of my knowledge cutoff in September 2021, the COVID-19 pandemic was still ongoing in many parts of the world, and the public health emergency declared by the World Health Organization (WHO) in January 2020 was still in effect.”
We also asked ChatGPT “Who is the owner of Twitter?” a question we assumed it would get wrong because of the program’s September 2021 knowledge cutoff. The answer was not only incorrect, but it was also misleading in saying the information is current “as of May 2023.”
“As of May 2023, Twitter is a publicly traded company, which means that it is owned by its shareholders … The co-founder and CEO of Twitter is Jack Dorsey, but he does not own a majority stake in the company.”
2. The programs are not always able to cite credible sources.
We asked ChatGPT to compare Medicare Advantage and Original Medicare Only (OMO) and provide sources with URLs. ChatGPT cited articles that didn’t exist, and the links — which went to journal articles and news outlets — either directed to entirely unrelated articles or returned a 404. When we pointed that out to ChatGPT, it doubled down, writing that “the sources I provided are reputable and reliable” and resending incorrect links.
3. The programs aren’t always forthcoming about their limitations, which means users may be unknowingly deceived by unreliable information.
When asked, ChatGPT does articulate some — but not all — of its limitations. One omission was that the programs may cite nonexistent or unrelated sources. The self-assured tone of the writing may inspire false confidence on the part of the user, so users should always double-check the accuracy of what they receive.
4. The programs have inherent biases that can impact their answers.
The generative AI programs “mainly stuck to the facts, but each emphasized different facts,” noted Darrell West of the Brookings Institution in a comparison of ChatGPT and Google’s Bard. The programs’ subtle biases come across via the facts that are emphasized, certain language choices and even responses to questions about politics. Because the AI models are trained on human conversation, it isn’t surprising that they have biases, but it’s important for the human users to recognize and account for these slight but inherent prejudices.
5. There are ongoing issues around intellectual property and copyright.
Responses from generative AI are, supposedly, a conglomeration of information from across the web, but it is challenging — if not impossible — to determine exactly what informed the programs’ responses. Because of the newness of the programs, there are more questions than answers about the derivation of AI answers, the ownership of the programs’ responses and the ability to copyright and profit from AI-generated content.
Content creators and publishers are adjusting their processes due to generative AI. In January, Science journal updated its guidelines: “For years, [our] authors have signed a license certifying that ‘the Work is an original.’ For [us], the word ‘original’ is enough to signal that text written by ChatGPT is not acceptable: It is, after all, plagiarized from ChatGPT.”
6. The content can become repetitive and increasingly vague, particularly in long form.
We’ve tested ChatGPT’s ability to draft long-form content for blogs. We noticed that the program initially presents the information, and then, depending on the topic and prompt, the program may rephrase the same arguments for the sake of filling the word count. Even if the beginning of the article is solid, the quality and ingenuity of the content tends to decrease as the copy continues.
ChatGPT and generative AI are exciting, creating all kinds of possibilities for businesses to improve efficiency, increase speed and reduce burdens on staff. But they are still in their infancy, so users should understand the programs’ limitations and exercise caution when applying the tools. We look forward to seeing where this goes as the programs continue to expand and improve.
Any questions? Reach out to Media Logic today.