Good morning.
CFOs have been slow to embrace generative A.I., and the fact that a chatbot can hallucinate doesn’t help.
Generative A.I. large language models (LLMs), the technology that fuels chatbots, are designed to understand and generate humanlike text. However, because they predict the next word in a string of text from patterns in billions of data points, they sometimes hallucinate when they don’t know the right answer to a prompt, producing a response that may sound plausible but is factually incorrect or unrelated to the context.
A group of MIT researchers released a new paper that finds that a debate between chatbots can improve the reasoning and factual accuracy of LLMs. It’s like a bot debate club, except the bot can essentially debate iterations of itself.
“The debate procedure allows a language model to critique and reflect on its opinions and opinions of other agents which allows it to sharpen its reasoning and answers,” Yilun Du, a researcher at MIT and a coauthor of the paper, tells me. The researchers documented multiple instances of language models debating with each other over multiple rounds and reaching an improved shared answer.
How does this work? “The debates can occur in a single model (or bot),” says Du, who is a former researcher at OpenAI. “A single language model is replicated multiple times to generate multiple bots. Given a question, each bot then generates a different answer (the learned model behind the bot is the same across bots). The bots can then debate each other.”
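To make that concrete, here’s a rough sketch, in Python, of how such a debate loop could be structured. This is an illustrative outline under my own assumptions, not the researchers’ code, and the `query_llm` function is a hypothetical placeholder for a call to whatever chat model is being used:

```python
# Minimal sketch of a multi-agent debate loop, as described in the paper.
# `query_llm` is a hypothetical stand-in for a real chat-model call; here it
# returns a canned string so the sketch runs on its own.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a chat model (not a real API)."""
    return f"Draft answer to: {prompt[:40]}..."

def debate(question: str, num_agents: int = 3, num_rounds: int = 2) -> list[str]:
    # Round 0: each "agent" (the same underlying model, sampled independently)
    # produces an initial answer to the question.
    answers = [query_llm(question) for _ in range(num_agents)]

    # Later rounds: each agent sees the other agents' answers and is asked to
    # critique them and revise its own response.
    for _ in range(num_rounds):
        revised = []
        for i, own in enumerate(answers):
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (
                f"Question: {question}\n"
                f"Other agents answered:\n{others}\n"
                f"Your previous answer: {own}\n"
                "Critique the other answers and give an updated answer."
            )
            revised.append(query_llm(prompt))
        answers = revised

    # After the final round the answers tend to converge; a majority vote or a
    # final aggregation prompt can then pick the consensus response.
    return answers

if __name__ == "__main__":
    print(debate("What is 17 * 24?"))
```

The same loop works whether the agents are copies of one model or different models entirely, which is the cross-model variant described next.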
However, the study also found that competing chatbots can spar with each other. “We also showed that you can have debates between different models like [OpenAI’s] ChatGPT and [Google’s] Bard to solve a task,” Du says. “But the majority of experiments use the same model.”
Michael Schrage, a research fellow at the MIT Sloan School Initiative on the Digital Economy, is not one of the authors of the paper but says he thinks the research is well done. “This kind of collective intelligence/voting approach is not uncommon,” Schrage says. “But to my knowledge, this is the first publication where I’ve seen it in an LLM context.”
Schrage has been exploring generative A.I. and LLMs with a focus on harnessing them as next-generation recommender systems. “I have already used large language models to generate business scenarios (some finance-related, others not) for both clients and classes,” he says. “I’ve found these scenarios constructive, provocative, and believable. But, again, these are LLMs, not large computational models.”
Foundational LLMs need to be fine-tuned and connected to software where calculations and computations are likely to be accurate, as well as transparent, explainable, and interpretable, he says. “That said, I think any financial analyst or auditor or accountant would be wildly irresponsible and unprofessional to rely on LLM-driven financial calculations at this time,” Schrage says.
He continues, “I strongly believe that—with guardrails and thoughtful, intentional prompts—FP&A folks and other financial modelers can get a lot of value very quickly by skillfully employing LLMs. The MIT research paper shows just how much is going on in the ‘computationally credible’ LLM space.”
Does Du think the issues with hallucinations or false information are valid concerns for finance professionals? “Yes,” he says. It’s very important to treat the responses from generative A.I. “not as ground truth, but rather just a possible source of information,” he says. Du suggests using responses as “ideas,” but then “separately verify yourself that they are correct.” He adds, “I believe my research is a step to making this source of information more accurate.”
Let the debate begin.
Sheryl Estrada
sheryl.estrada@fortune.com