What you need to know
- A group of AI researchers from Stanford and Google Research created a simulation featuring 25 characters powered by ChatGPT.
- The simulation ran for two days and illustrated that AI-powered bots can interact in a human-like way.
- The bots planned a party, coordinated the event, and attended the party within the simulation.
A group of AI researchers out of Stanford is putting the "sim" into simulation. The team placed 25 AI-powered characters, referred to as agents, into a virtual world similar to "The Sims." OpenAI's ChatGPT backed the bots, allowing the characters to interact with each other in a human-like way. The results of the study are illuminating for the future of artificial intelligence, and entertaining to boot.
The team consists of researchers from Stanford and Google Research. A summary of the paper can be found on arXiv, which is hosted by Cornell University. That page also has a download link for a PDF of the entire paper (via Reddit).
"In this paper, we introduce generative agents--computational software agents that simulate believable human behavior," reads the summary."
"Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day."
A large language model (LLM) was paired with a record of each character's experiences, allowing the bots to recall what had happened to them and communicate with each other in natural language.
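The core loop behind an agent like this can be sketched in a few lines. This is a toy illustration only, not the paper's actual architecture: the `llm()` function is a placeholder for a real ChatGPT API call, and the keyword-overlap retrieval is a stand-in for the paper's more sophisticated memory scoring.

```python
from dataclasses import dataclass, field
import time

def llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g. OpenAI's chat API)."""
    return f"(LLM response to: {prompt[:40]}...)"

@dataclass
class Memory:
    text: str
    timestamp: float

@dataclass
class Agent:
    name: str
    memories: list = field(default_factory=list)

    def observe(self, event: str) -> None:
        # Every observation is appended to the agent's memory stream.
        self.memories.append(Memory(event, time.time()))

    def retrieve(self, query: str, k: int = 3) -> list:
        # Toy relevance score: keyword overlap, with recency as a tiebreaker.
        words = set(query.lower().split())
        scored = sorted(
            self.memories,
            key=lambda m: (len(words & set(m.text.lower().split())), m.timestamp),
            reverse=True,
        )
        return [m.text for m in scored[:k]]

    def act(self, situation: str) -> str:
        # Relevant memories are folded into the prompt before asking the LLM.
        context = "; ".join(self.retrieve(situation))
        return llm(f"You are {self.name}. You remember: {context}. Now: {situation}")

agent = Agent("Isabella")
agent.observe("Isabella is planning a party at the cafe")
agent.observe("Isabella had coffee this morning")
print(agent.act("A neighbor asks about the party"))
```

Because every interaction folds retrieved memories back into the prompt, the agents can reference past events when they talk, which is what makes their behavior read as coherent over days of simulated time.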
The agents interacted in much the way you might expect a real-life social group to. When just one of the bots was prompted to host a party, other agents got involved on their own. Invitations were sent out, plans were made, and the characters coordinated to arrive at the party at the same time.
The researchers shared a replay of the simulation to accompany the paper.
The team used a control group of 25 humans who role-played the characters while being observed. Evaluators rated the human-played characters as less believable than their AI counterparts.
While the study is informative, it's important to put it into context. The AI researchers behind the study break down the limitations of their setup and the errors that occurred. Some of the mistakes were comical, such as multiple agents entering a bathroom at once despite that room being intended for one person, or the virtual town's residents leaving the cafe at lunchtime to go to a local bar instead.
Section 7.2 of the paper shares the following example:
"Some agents chose less typical locations for their actions, potentially making their behavior less believable over time. For instance, while deciding where to have lunch, many initially chose the cafe. However, as some agents learned about a nearby bar, they opted to go there instead for lunch, even though the bar was intended to be a get-together location for later in the day unless the town had spontaneously developed an afternoon drinking habit."
Windows Central take
This type of tech is what many of us dream about. Imagine a video game in which NPCs actually interact like humans, even when you weren't around and didn't prompt them in any way. Open-world games would feel more lifelike, and you could get far more hours out of a game.
While this use of ChatGPT is intriguing, it's not viable for a video game at this time. The researchers noted that simulating just two days cost thousands of dollars in ChatGPT tokens.
Of course, there are also ethical and security concerns that would need to be addressed before this type of AI made its way into a game. On the first day Google Bard was available in preview, it shared false information that was in turn picked up by Bing Chat. Imagine a similar situation happening at a large scale with dozens of characters.
For now, this is an interesting experiment that sheds light on AI and how bots interact with each other. Maybe someday we'll have RPGs with a vast world of interactive AI-powered NPCs.