
Toward the end of 2022, ChatGPT took the internet by storm. The chatbot, powered by OpenAI's GPT-3 large language model, impressed millions with its ability to quickly generate articulate responses to many types of questions. These transformer-based language models can already help organizations create innovative new solutions, researchers at Radboud University argue in a paper published today in the Journal of Product Innovation Management.

"We have been studying various artificial intelligence solutions over the past few years, and found that they can already be implemented by organizations in a number of helpful ways," says Vera Blazevic, researcher in innovation management at Radboud University and one of the authors of the paper.

"When organizations need to innovate, they need more, and more divergent, ideas—that usually leads to better-quality ideas later on. With the right prompting, transformer-based language models such as GPT-3 can quickly generate a lot of these ideas, which can help in prototyping, for example."
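The prompting approach Blazevic describes can be sketched in a few lines. Note that everything below is an illustrative assumption, not the researchers' actual setup: the prompt wording is invented, and `sample_ideas` stands in for a call to a hosted model such as GPT-3 so the sketch runs on its own.

```python
# Hypothetical sketch: composing a brainstorming prompt that asks a language
# model for many varied ideas rather than one safe answer.

def divergent_prompt(challenge: str, n_ideas: int = 10) -> str:
    """Build a prompt that explicitly requests divergent, varied output."""
    return (
        f"Brainstorm {n_ideas} unconventional product ideas for the following "
        f"challenge: {challenge}\n"
        "Prioritize variety over feasibility. One idea per line."
    )

def sample_ideas(prompt: str, n_ideas: int = 10) -> list[str]:
    # Stand-in for a real API call to a language model; returns placeholder
    # ideas so the sketch is runnable offline.
    return [f"Idea {i + 1} for: {prompt.splitlines()[0]}" for i in range(n_ideas)]

ideas = sample_ideas(divergent_prompt("reducing food waste in supermarkets"))
print(len(ideas))  # 10 candidate ideas to seed a prototyping session
```

In a real deployment the stub would be replaced by a model call, and sampling several completions at a higher temperature would push the output toward the divergence the researchers highlight.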

Speeding up knowledge extraction

"Furthermore, GPT-3 can be used to summarize large texts, or to discern sentiment from those texts. For example, if an organization wants to analyze user reviews of their product, they can use tools such as these to find out which features customers respond most positively or negatively to. That's work that can be done by humans, but language models can help to speed up this knowledge extraction so that humans can focus on actually using the insights gained."
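The review-mining workflow described above can be sketched briefly. Everything here is an illustrative assumption rather than the team's actual tooling: the prompt wording is invented, and `ask_model` replaces the language-model call with a trivial keyword rule so the sketch runs offline.

```python
# Hypothetical sketch: prompting a language model to label the sentiment of
# product reviews, so humans can focus on interpreting the results.

def build_prompt(review: str) -> str:
    """Wrap a review in a classification instruction for the model."""
    return (
        "Classify the sentiment of this product review as positive or "
        "negative.\n\nReview: " + review
    )

def ask_model(prompt: str) -> str:
    # Stand-in for a real model call; a crude keyword heuristic keeps the
    # sketch self-contained. A deployed version would query a hosted model.
    text = prompt.lower()
    return "positive" if any(w in text for w in ("love", "great", "fast")) else "negative"

reviews = [
    "I love how quickly the battery charges.",
    "The screen scratches far too easily.",
]
labels = [ask_model(build_prompt(r)) for r in reviews]
print(labels)  # ['positive', 'negative']
```

Aggregating such labels per product feature would surface what customers respond to most positively or negatively, which is the knowledge-extraction step the researchers say models can accelerate.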

GPT-3 is a transformer-based language model created by OpenAI. In a broad sense, it is an artificial intelligence system trained on millions of texts and topics, and it uses that training to generate new text in response to user queries. In the past few years, OpenAI's GPT models have underpinned various applications that have drawn a lot of attention, such as ChatGPT, DALL-E (which generates images) and MuseNet (which generates music).

Understanding biases

Blazevic warns that language models will, at least for now, still play a limited role in the innovation process. "Once you need ideas to converge, GPT is not particularly helpful. The AI can't judge which ideas are truly feasible and make sense, and which of them fit the organization you're working for. For those situations, humans remain essential. That's why we see room for hybrid intelligence: The language model can help kickstart meetings or discussions, after which humans take over to carry these ideas to the finish line."

Organizations that choose to use these tools for idea generation should also be cognizant of the biases the tools carry, as the models are trained on large datasets of existing, often-biased texts.

Blazevic notes, "In a hybrid intelligence team, humans can check for potential bias as part of the process. This also necessitates the active management of such teams, by—for example—training employees in searching for and reflecting on biases. Acquiring those skills might then even help humans become more aware of their own biases, eventually leading humans and AI to learn from each other."

More information: Sebastian G. Bouschery et al, Augmenting Human Innovation Teams with Artificial Intelligence: Exploring Transformer‐Based Language Models, Journal of Product Innovation Management (2023). DOI: 10.1111/jpim.12656

Provided by Radboud University