Fortune
Jeremy Kahn

Can this tiny U.K. AI chip company best Nvidia?

Photo of Fractile founder and CEO Walter Goodwin (Credit: Courtesy of Fractile)

A British startup hoping to challenge Nvidia’s dominance in chips for AI applications with a radical new hardware design has just emerged from stealth with $15 million in seed funding to pursue its idea.

The startup, Fractile, is the brainchild of Walter Goodwin, a 28-year-old Ph.D. graduate from the University of Oxford’s Robotics Institute. Like some of the other teams hoping to take on Nvidia, Goodwin is pursuing a chip design that differs markedly from the graphics processing units (GPUs) that Nvidia makes.

Fractile is the latest in a large crop of startups and big tech titans trying to offer chips that can compete with Nvidia’s GPUs in the booming market for running AI applications. Other startups pursuing the same market include Groq, Mythic, Rain AI, Cerebras, and Graphcore (which was recently acquired by SoftBank). Meanwhile, AMD, which already makes GPUs, has been ramping up its efforts to compete with Nvidia, and the big cloud providers, such as Microsoft, Google, and Amazon’s AWS, already offer their own AI-specific chips.

Fractile was founded in 2022 and has spent the past two years operating in “stealth mode” while working on its chip designs. The company secured its seed round from Kindred Capital, the NATO Innovation Fund (the venture fund of the defense alliance), and Oxford Science Enterprises, which led the funding. Also participating are Cocoa and Innovia Capital, as well as prominent angel investors and alumni of AI and semiconductor companies.

GPUs were originally designed in the late 1990s to accelerate graphics-intensive applications, such as video games and computer-aided design software. Their advantage is that they can process a lot of data in parallel, rather than running programs in a linear sequence the way standard central processing units, or CPUs, do. It just so happens that the parallel processing capabilities of GPUs make them very well suited to running the large neural networks (a kind of software very loosely based on how the human brain works) that are the bedrock of contemporary AI applications.
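
To make the contrast concrete, here is a minimal, purely illustrative Python sketch (it is not code from Nvidia, Fractile, or any chipmaker): a single neural-network layer is essentially a large matrix multiplication, and every output value can be computed independently, which is exactly the kind of work parallel hardware handles well.

    # Illustrative only: why neural networks suit parallel hardware.
    # A layer's output elements are independent, so they can be computed
    # one at a time (CPU-style) or all at once (GPU-style).
    import numpy as np

    inputs = np.random.rand(1, 4096)        # one activation vector
    weights = np.random.rand(4096, 4096)    # one layer's weights

    # Sequential view: compute each output element one after another.
    sequential_out = np.empty(4096)
    for j in range(4096):
        sequential_out[j] = inputs[0] @ weights[:, j]

    # Parallel view: the same arithmetic expressed as one bulk operation
    # that parallel hardware can split across thousands of units at once.
    parallel_out = inputs @ weights

    assert np.allclose(sequential_out, parallel_out[0])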

While GPUs are much faster at running AI software than CPUs, there are still aspects of their design that limit how fast they can run AI models. One of the biggest issues is that GPUs typically rely on memory stored elsewhere in the system, on a separate component called a DRAM (dynamic random access memory) chip. Shuttling data between this memory chip and the GPU becomes a bottleneck on how fast the GPU can run an AI model. Goodwin told Fortune that Fractile’s design stores the data needed for computations directly next to the transistors that perform the arithmetic, which allows for much faster run times for AI models.
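
A rough back-of-envelope sketch, using placeholder figures that are assumptions rather than specifications of any real chip or model, shows why that shuttling tends to dominate:

    # Illustrative only: estimate how long one inference step spends moving
    # weights from off-chip DRAM versus doing the arithmetic itself.
    model_weights_gb = 140            # assumed: ~70B parameters at 16-bit precision
    dram_bandwidth_gb_per_s = 3000    # assumed: high-end off-chip memory bandwidth
    peak_compute_ops_per_s = 1e15     # assumed: peak arithmetic throughput

    # Generating one token touches essentially every weight once.
    time_moving_weights = model_weights_gb / dram_bandwidth_gb_per_s   # seconds
    ops_per_token = 2 * 70e9                                           # ~2 operations per parameter
    time_computing = ops_per_token / peak_compute_ops_per_s            # seconds

    print(f"memory transfer: {time_moving_weights * 1e3:.1f} ms per token")
    print(f"arithmetic:      {time_computing * 1e3:.2f} ms per token")
    # Under these assumptions the memory transfer takes far longer than the
    # math, which is why putting memory next to the arithmetic helps.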

AI startup Groq, which already has its chips in production and offers them through its own cloud-based AI computing service, is following a somewhat similar approach, moving the system's memory closer to where the processing takes place. To do this, Groq uses SRAM (static random access memory) components co-located on the chip rather than off-chip DRAM. But Goodwin says Fractile is going a step further, combining memory and processing into a single component, which should make Fractile’s chips even faster.

So far, though, Fractile has only tested its designs in computer simulations and has yet to manufacture test chips. But Goodwin said these simulations have convinced Fractile that it can run a large language model, the kind of AI model that powers today’s consumer chatbots and forms the foundation of most generative AI applications, 100 times faster and 10 times cheaper than Nvidia’s GPUs.

The company also said it was targeting a huge power reduction over competing AI hardware. The energy consumption of AI chips has become a hot topic as people grow increasingly alarmed at the potential carbon footprint and energy cost of the AI boom. Both Google and Microsoft have announced that their efforts to achieve net zero CO2 emissions have been thrown off track by the global expansion of their data center infrastructure, with AI computing loads taking up an increasing share of the work performed in those data centers. Fractile said its goal is to create a chip that will offer 20 times better performance per watt than any existing AI hardware.
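
Performance per watt is simply throughput divided by power draw. The short sketch below uses placeholder numbers (they are not figures from Fractile, Nvidia, or anyone else) only to show how such a comparison is computed:

    # Illustrative only: how a performance-per-watt comparison is calculated.
    def perf_per_watt(tokens_per_second: float, power_watts: float) -> float:
        """Throughput delivered for each watt of power drawn."""
        return tokens_per_second / power_watts

    baseline = perf_per_watt(tokens_per_second=1_000, power_watts=700)     # assumed GPU-class figures
    candidate = perf_per_watt(tokens_per_second=10_000, power_watts=350)   # assumed in-memory-compute figures

    print(f"relative efficiency: {candidate / baseline:.0f}x performance per watt")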

Perhaps the key to Nvidia’s dominance of the AI market has been not just the flexibility of its GPUs but also the software programming system it offers for running these chips, called CUDA. Nvidia has invested heavily in building a large community of developers around CUDA, and that has in turn made it difficult to convince developers to try alternative hardware. In the past, some AI chip startups have invested comparatively little in developing easy-to-use software for running their chips, making it difficult for them to win developers over from Nvidia.

Goodwin says Fractile has learned that lesson and built its own software stack alongside its hardware. He says that many of the things CUDA does are necessary only because GPUs are not actually optimized to run AI workloads, and that the computations the software must run to compensate further slow AI applications and waste additional energy. Because Fractile’s chips won’t have to do these things, its software can be simpler and potentially rival CUDA, he says.

Goodwin declined to say exactly when Fractile, which currently employs just 14 people and plans to have 18 by the end of August, would have its chips in production. The latest seed funding will be used by the company to further test its design in simulation and to move toward producing its first physical test chips, he says.

Sam Harman, head of deep tech at Oxford Science Enterprises, praised Fractile’s “radically innovative approach” to building AI chips. Kindred Capital partner John Cassidy said his firm liked that Fractile’s team had a deep understanding of how AI software was evolving. The speed of that evolution is a major challenge for AI chip companies, because it can take at least two years to get a new chip design into full production, by which time the computing requirements of the AI sector may have changed. It’s this phenomenon that has hurt previous attempts to displace GPUs as the workhorses of AI computing: GPUs are general-purpose and flexible enough that they can usually be adapted to AI’s next software wave, while more specialized chips often cannot. But Cassidy said he thought Fractile’s team “has the depth of knowledge to understand how AI models are likely to evolve, and how to build hardware for the requirements of not just the next two years, but five to 10 years into the future.”
