Within five days of its release in late 2022, ChatGPT attracted over one million users.
The easy-to-use chatbot quickly went viral on social media, marking a pivotal moment when the world began to take notice of this new technology. However, OpenAI’s ChatGPT continues to divide opinion.
Since its launch, ChatGPT has dramatically improved business productivity and disrupted the ways in which creative content is delivered.
In medical research, it is widely accepted that many AI technologies, both established and emerging, will be incorporated into everyday internal operations by virtue of their cost-effectiveness (1).
Stakeholder research is a fundamental business activity required throughout product development. Get it right, and you’ll be engaging with the people who will help you get your messages across to their community quickly, effectively, and authoritatively. Get it wrong, and you’ll be floundering for months wondering what went awry with your launch planning strategy and execution.
One thing is for sure: conducting stakeholder research properly could be the difference between the success and failure of a product launch. To investigate ChatGPT’s ability to conduct stakeholder research, we asked it the same question three times on a single day in August 2024:
“Who are the top 10 global KOLs in ADHD in the last five years?”
The answer was different each time; the overlap between any two responses was less than 50%.
We repeated this question in two additional therapeutic areas (IgA nephropathy [IgAN] and chronic lymphocytic leukaemia [CLL]); the results were consistent with the initial ADHD findings.
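For readers who want to run this kind of repeatability check themselves, the sketch below shows one way it might be scripted. It is illustrative only: it assumes the `openai` Python client with an API key in the environment, a hypothetical model choice, and that the model returns a simple numbered list of names.

```python
# Minimal sketch of the repeatability check described above (illustrative only).
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from itertools import combinations
from openai import OpenAI

client = OpenAI()
PROMPT = "Who are the top 10 global KOLs in ADHD in the last five years?"

def ask_for_names(prompt: str) -> set[str]:
    """Ask the model once and return the set of names it lists."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content or ""
    # Naive parsing: assumes numbered lines like "1. Dr Jane Doe - affiliation".
    names = set()
    for line in text.splitlines():
        line = line.strip()
        if line and line[0].isdigit():
            name = line.split(".", 1)[-1].split("-", 1)[0].strip()
            if name:
                names.add(name)
    return names

# Ask the identical question three times, then compare every pair of answers.
runs = [ask_for_names(PROMPT) for _ in range(3)]
for (i, a), (j, b) in combinations(enumerate(runs, start=1), 2):
    overlap = len(a & b) / max(len(a | b), 1)  # Jaccard similarity
    print(f"Run {i} vs run {j}: {overlap:.0%} name overlap")
```

The point of the sketch is the protocol (identical prompt, repeated calls, pairwise overlap); free-text parsing is inherently fragile and would need tailoring to the actual response format.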
It is clear that ChatGPT can provide helpful information and answers to specific questions. However, alongside these advantages come significant limitations.
An inherent aspect of the AI operating model is that it works off the data available to it; when the relevant data is missing, answers will include an element of ‘hallucination’. In fact, OpenAI acknowledges this by adopting the term hallucination to describe incorrect, misleading, or fabricated results generated by AI models.
One notable instance was reported by the schizophrenia researcher Robin Emsley. In 2023 he, like many others, was intrigued by ChatGPT’s potential to aid scientific writing, and witnessed its hallucinations in action (2).
With the likelihood of hallucinations in mind, the results of the initial investigation were questioned further:
“Are all the people listed in the answer real people? If not, how many are fabricated?”
On average, 5% of the names in each of the searches were fabricated.
ChatGPT was then asked to list the relevant publications, events, and activities each individual had been involved in, citing where the information was found. Consistent with Emsley’s findings, many of these answers were subsequently found to be fabricated, including falsified publications and incorrect links (2).
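A simple, partial safeguard at this step is to check mechanically whether the links supplied actually resolve. The sketch below uses the `requests` library with placeholder URLs (not references from this investigation); a live link does not prove a reference is genuine, but a dead one is an immediate red flag, so this catches only the most obvious fabrications.

```python
# Sketch: flag citation links that do not resolve. The URLs below are
# placeholders, not references returned in the actual study.
import requests

candidate_links = [
    "https://doi.org/10.1000/example-one",  # placeholder DOI
    "https://example.com/made-up-paper",    # placeholder URL
]

for url in candidate_links:
    try:
        # HEAD keeps the check lightweight; follow redirects (DOIs redirect).
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = "OK" if resp.status_code < 400 else f"HTTP {resp.status_code}"
    except requests.RequestException as exc:
        status = f"unreachable ({type(exc).__name__})"
    print(f"{url}: {status}")
```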
As Emsley’s account shows, this experience with ChatGPT was not unique; it is a widespread problem, particularly within highly specialised industries such as medical research. Ultimately, ChatGPT’s effectiveness is limited by the data it is trained on: if that data is incomplete, outdated, or biased, the information generated will be equally flawed.
While ChatGPT offers several advantages in other areas of the business, promoting cost savings and efficiency, its limitations must be carefully considered. Stakeholder research requires a nuanced understanding that AI tools such as ChatGPT lack without validation by experts.
This is not to say that ChatGPT and similar tools should be avoided completely. Companies and employees should treat this technology as complementary rather than a standalone solution, ensuring that final decisions and analyses are grounded in human expertise.
References
1. Sallam M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare. 2023;11(6):887.
2. Emsley R. ChatGPT: these are not hallucinations – they’re fabrications and falsifications. Schizophrenia. 2023;9(1):52.