Tom's Guide
Technology
Ryan Morrison

Google drops new Gemini model and it goes straight to the top of the LLM leaderboard

Google Gemini.

Google is constantly updating Gemini, releasing new versions of its AI model family every few weeks. The latest is so good it went straight to the top of the LMArena Chatbot Arena leaderboard, toppling the latest version of OpenAI's GPT-4o.

Previously known as the LMSYS arena, the platform lets AI labs pit their best models against one another in blind head-to-head matchups. Users vote on the answers but don't learn which model is which until after they've voted.
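Those pairwise votes are turned into a ranking. Chatbot Arena has historically used Elo-style ratings (it has since moved to a related Bradley-Terry fit); as a rough sketch of the idea, each blind vote shifts points from the loser to the winner, with upsets moving more points. The K-factor and starting ratings below are illustrative, not the site's actual parameters.

```python
# Toy Elo-style update, one head-to-head vote at a time.
# Illustrative only: K-factor and starting ratings are assumptions,
# not Chatbot Arena's real methodology.

def elo_update(rating_a, rating_b, a_wins, k=32):
    """Return updated (rating_a, rating_b) after one blind vote."""
    # Expected score of A given the current rating gap.
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    score_a = 1.0 if a_wins else 0.0
    # Winner gains what the loser gives up; surprises move more points.
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1 - score_a) - (1 - expected_a))
    return rating_a, rating_b

# Two models start level; model A wins one blind matchup.
a, b = elo_update(1000.0, 1000.0, a_wins=True)
print(round(a), round(b))  # 1016 984
```

With equal ratings the expected score is 0.5, so a single win moves half the K-factor from one model to the other; aggregated over thousands of votes, this is what produces a leaderboard ordering from purely human preferences.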

The new model from Google DeepMind has the catchy name Gemini-Exp-1114 and has matched the latest version of GPT-4o and exceeded the capabilities of the o1-preview reasoning model from OpenAI.

The top 5 models in the arena are all versions of OpenAI or Google models. The first model on the leaderboard not made by either of those companies is xAI's Grok 2.

The success of this new model comes as Google finally releases a Gemini app for iPhone, which beat the ChatGPT app in our Gemini vs. ChatGPT 7-round face-off.

How well does the new model work?

The latest Gemini model seems to perform particularly well on math and vision tasks, areas in which previous Gemini models have also excelled.

Gemini-Exp-1114 isn't currently available in the Gemini app or website. You can only access it by signing up for a free Google AI Studio account (the platform aimed at developers wanting to try new ideas).

I'm also not sure whether this is a version of Gemini 1.5 or an early insight into Gemini 2, expected next month. If it is the latter, the improvement over the previous generation might not be as extreme as some expected.

However, according to benchmarks it is doing well in technical and creative areas, which would tie in with the idea that it's going to be useful for reasoning and managing agents. It ranks first in math, solving hard problems, creative writing and vision.

Unlike other benchmarks, the Chatbot Arena is based on human perceptions of performance and output quality, rather than rigid testing against data.

Whether this is just a new version of Gemini 1.5 Pro or an early insight into the capabilities of Gemini 2, it's going to be an interesting few months in AI land.

