Google is constantly updating Gemini, releasing new versions of its AI model family every few weeks. The latest is so good it went straight to the top of the LMArena Chatbot Arena leaderboard, drawing level with the latest version of OpenAI's GPT-4o.
Previously known as the LMSYS arena, Chatbot Arena is a platform that lets AI labs pit their best models against one another in blind head-to-head comparisons. Users vote on the responses without knowing which model produced which answer until after they've voted.
The new model from Google DeepMind goes by the catchy name Gemini-Exp-1114, and it has matched the latest version of GPT-4o while surpassing OpenAI's o1-preview reasoning model.
The top five models in the arena all come from OpenAI or Google. The highest-placed model not made by either of those companies is xAI's Grok 2.
The success of this new model comes as Google finally releases a Gemini app for iPhone, which beat the ChatGPT app in our Gemini vs. ChatGPT 7-round face-off.
How well does the new model work?
"Massive News from Chatbot Arena🔥@GoogleDeepMind's latest Gemini (Exp 1114), tested with 6K+ community votes over the past week, now ranks joint #1 overall with an impressive 40+ score leap — matching 4o-latest and surpassing o1-preview! It also claims #1 on Vision… https://t.co/AgfOk9WHNZ pic.twitter.com/HPmcWE6zzI" (November 14, 2024)
The latest Gemini model seems to perform particularly well on math and vision tasks, which makes sense, as these are areas where the Gemini family has always excelled.
Gemini-Exp-1114 isn't currently available in the Gemini app or website. You can only access it by signing up for a free Google AI Studio account, Google's platform for developers who want to experiment with its latest models.
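If you want to poke at it programmatically rather than through the AI Studio web interface, a minimal sketch along these lines should work, assuming the experimental model is also exposed through the google-generativeai Python SDK under the identifier gemini-exp-1114 (the model name and API availability are assumptions on my part; the article only confirms access through AI Studio itself, which is where free API keys are issued).

# Minimal sketch: calling the experimental model via the google-generativeai
# Python SDK. The model identifier "gemini-exp-1114" and its availability over
# the API are assumptions; the confirmed route is the AI Studio web interface.
import google.generativeai as genai

# Free API keys are generated from your Google AI Studio account.
genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")

model = genai.GenerativeModel("gemini-exp-1114")

# The arena results suggest math is a strong suit, so try a word problem.
response = model.generate_content(
    "A train leaves at 3:40 pm travelling at 80 km/h. "
    "How far has it travelled by 5:10 pm?"
)
print(response.text)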
I'm also not sure whether this is a version of Gemini 1.5 or an early insight into Gemini 2, expected next month. If it is the latter, the improvement over the previous generation might not be as extreme as some expected.
However, it is doing well in both technical and creative areas according to the benchmarks. That ties in with the idea that it's going to be useful for reasoning and for managing agents. It ranks first in math, solving hard problems, creative writing and vision.
Unlike other benchmarks, the Chatbot Arena is based on human perceptions of performance and output quality rather than rigid testing against fixed datasets.
Whether this is just a new version of Gemini 1.5 Pro or an early insight into the capabilities of Gemini 2, it's going to be an interesting few months in AI land.