Wayne Williams

‘We built a technology which uses light to control light’: Finchetto CEO on ditching electronics to make networks faster


In August 2025, I wrote about Finchetto, a UK photonics startup working on an optical packet switch that keeps data entirely in the optical domain rather than bouncing between light and electronics.

The firm’s breakthrough technology could make hyperscale networks dramatically faster, just as AI systems begin to strain today’s infrastructure. The approach also aims to cut power use while scaling as link speeds increase.

In a bid to find out more, I spoke to Finchetto CEO Mark Rushworth about how the technology works, why packet switching in optics matters, where the hard problems still are, and how this could fit into real hyperscale and AI networks.

What inspired Finchetto to focus on photonic packet switching, and how does it differ from traditional electronic switching?

With Finchetto, we looked at the way networks run today and saw that there was a lot of unnecessary work going on.

A server or GPU often sends data as light, then that light gets converted into electrons inside a switch so a processor can figure out where it should go. It's then turned back into light to leave the box. This back-and-forth introduces a cost in power and latency.

We then asked ourselves if we could do that without falling back into the electronic domain. To do that, we built a technology which uses light to control light, so the switching all happens in the optical domain.

Most of the photonics work you see elsewhere is still circuit switching, which pins a path between two endpoints, using things like MEMS mirrors or thermo-optic devices to steer light.

The disadvantage there is relatively slow reconfiguration, which can’t keep up with packet-by-packet decisions at 1.6 or 3.2 Tbps. It’s in packet switching in optics that you get the real flexibility and performance, and that’s the gap we set out to fill.
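To put rough numbers on that, here's a quick back-of-envelope sketch in Python. The 1,500-byte packet size is an assumption on our part; the link rates come from the answer above.

```python
# Back-of-envelope: how fast "packet-by-packet" decisions have to be.
# The 1,500-byte Ethernet frame is an assumed packet size.

PACKET_BITS = 1500 * 8  # bits in a standard 1,500-byte frame

for rate_tbps in (1.6, 3.2):
    duration_ns = PACKET_BITS / (rate_tbps * 1e12) * 1e9
    print(f"{rate_tbps} Tbps: one packet lasts ~{duration_ns:.2f} ns")

# Prints ~7.50 ns and ~3.75 ns: nanosecond decision windows, far below
# the millisecond-scale reconfiguration of MEMS mirrors.
```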

When you bring that into big networks, what advantages do you see in terms of speed, efficiency, and scalability?

I’d say speed is the most obvious advantage, but efficiency is just as important. When you keep the signal as light, rather than translating it from light to electrons and back, you don’t burn as much power or experience as much of a delay.

In terms of scalability, all-optical packet switching allows you to build very large, very flexible networks. You can make routing decisions at the packet level, so you can spread workloads much more evenly across a big fabric.

Using standard concepts like spine-and-leaf, but implemented with our photonic switches, you can push to tens of thousands of nodes without the network itself becoming a choke point.
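For a sense of that scale, here's a minimal sketch of the two-tier leaf-spine arithmetic. The 256-port switch radix is a hypothetical figure chosen for illustration, not a Finchetto specification.

```python
# Illustrative only: host count for a non-blocking two-tier leaf-spine
# fabric. The 256-port switch radix is a hypothetical figure.

def leaf_spine_hosts(radix: int) -> int:
    """Hosts attachable when each leaf splits its ports evenly between
    host-facing downlinks and spine-facing uplinks."""
    downlinks = radix // 2  # ports per leaf facing hosts
    max_leaves = radix      # each spine contributes one port per leaf
    return max_leaves * downlinks

print(leaf_spine_hosts(256))  # 32768 hosts: tens of thousands of nodes
```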

How does that translate into real-world impact for hyperscale data centers from a performance and energy perspective?

Energy is top of the agenda for any hyperscaler right now. Anything that reduces network power consumption without hurting performance is going to have a positive impact on the bottom line and therefore on competitiveness.

Our approach removes a lot of the electro-optical conversions and many of the transceivers that fail most often, so you get a network that uses less power and is more resilient at the same time.

You can add Finchetto switches in phases, so you’re improving performance and energy efficiency over time while still sweating existing assets. That’s a much easier business case than ripping and replacing.

What does this mean specifically for emerging workloads like AI and other advanced compute?

AI is a perfect example of where the network can quietly kill your performance. These training clusters want to move huge volumes of data between GPUs with very tight timing. If the fabric can’t keep up, you end up with expensive silicon sitting idle.

By doing packet switching in optics with extremely low latency, we remove a lot of those bottlenecks at the hardware level. It also opens up options that weren’t practical before. Some of the more exotic topologies - torus, dragonfly-style architectures and so on - were historically hard to justify because the latency budget just didn’t work with conventional switching.

When your switch isn’t the limiting factor anymore, network architects can revisit those ideas and pick the topology that really suits the workload, rather than the one that works around the hardware.
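As a rough illustration of why per-hop latency decides whether a topology like a torus is viable, consider the average hop count of a k-ary n-dimensional torus. The per-hop latencies below are assumed round numbers, not measured figures for any product.

```python
# Illustrative only: average hops in a k-ary n-dimensional torus,
# multiplied by an assumed per-hop switch latency. All figures here
# are assumptions for the sake of the arithmetic.

def avg_torus_hops(k: int, n: int) -> float:
    """Average shortest-path hops: roughly k/4 per dimension (even k)."""
    return n * k / 4

hops = avg_torus_hops(k=8, n=3)  # 6 hops on average in an 8x8x8 torus

for label, per_hop_ns in [("electronic switch (assumed 500 ns/hop)", 500),
                          ("all-optical switch (assumed 50 ns/hop)", 50)]:
    print(f"{label}: ~{hops * per_hop_ns:.0f} ns average fabric latency")
```

A tighter latency budget per hop leaves room for the extra hops these richer topologies cost, which is the point being made above.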

How easily can data centers plug Finchetto into what they already have?

That’s been one of our big design principles from day one. The reality is that hyperscale data centers are already operating at a level the market accepts, and a lot of capital has gone into getting them there.

No one is going to say, “Nice idea, we’ll rebuild everything around it.” We’ve spent a lot of time making sure our technology looks and feels like a good citizen in a modern network.

It interoperates with existing transceivers, NICs, GPUs and cabling, and it drops into familiar architectures rather than demanding you redesign the whole thing. That means you can start with targeted deployments - a new AI pod or a performance-critical part of the fabric - and grow from there as you see the benefits.

Photonics has moved from being an interesting research focus to being central to the roadmap for the biggest players in the industry. You can see that in the attention around co-packaged optics, and in major acquisitions of early-stage photonics companies.

When leaders like Nvidia say, “We need optics right next to the compute,” the rest of the industry listens. The hard part is building a complete system that operators trust. It must integrate cleanly with GPUs, NICs, motherboards, and tools they already use; it must be reliable over its lifetime; and it must be straightforward to manage and upgrade.

Our answer is to make the optical core as passive and line-rate agnostic as possible. If you go from 800 Gbps to 1.6 Tbps, the switch in the middle doesn’t need to change, which is a very different proposition from replacing whole tiers of electronic gear every time you move up a speed notch.

If your switch is entirely optical and doesn’t have internal buffers, how do you stop packet loss and collisions in hot spots?

In a traditional electronic or hybrid switch, you lean on memory and buffering to smooth things out. In a pure optical system, you don’t get that, so you have to think differently.

What we’ve done is build collision avoidance and a return-to-sender mechanism into the optical layer itself.

The switch can effectively tell whether a given path is free before it sends traffic down it. If it isn’t, the packet doesn’t go, so you avoid most collisions up front.

In the rare case where two packets clash, there’s a mechanism to return one of the packets to sender to retry.

All of this happens in optics, which is the clever bit, and it means you keep the benefits of an all-optical fabric while still getting full packet-switching functionality in the network.
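As a software analogy for that mechanism, the sketch below models a bufferless switch that checks whether a path is free before committing a packet, and returns the packet to the sender on a clash so it can retry. It's an illustrative Python model, not Finchetto's optical implementation.

```python
# Illustrative model of bufferless, check-before-send switching with
# return-to-sender on a clash. Plain Python, not the optical hardware.

class BufferlessSwitch:
    def __init__(self, num_paths: int):
        self.busy = [False] * num_paths  # occupancy of each output path

    def try_send(self, packet: str, path: int) -> bool:
        """True if forwarded; False means 'returned to sender', retry."""
        if self.busy[path]:
            return False        # path busy: collision avoided up front
        self.busy[path] = True  # claim the path for this packet
        return True

    def release(self, path: int):
        self.busy[path] = False  # packet has left the switch

switch = BufferlessSwitch(num_paths=4)
switch.try_send("pkt-41", path=2)  # an earlier packet holds the path

# Sender-side view: a returned packet is simply resent.
for attempt in range(1, 4):
    if switch.try_send("pkt-42", path=2):
        print(f"pkt-42 forwarded on attempt {attempt}")
        break
    switch.release(2)  # pkt-41 departs, freeing the path for the retry
```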

Zooming out to the UK specifically: as the country ramps up investment in AI and data centers, what should it be doing to make sure homegrown photonics and networking tech actually gets used?

Most of the real UK innovation in this area is coming from startups, simply because there aren’t any big domestic switch vendors.

The risk is that we spend a lot of public money building AI infrastructure that’s essentially a shop window for overseas suppliers, while the UK companies doing the hard R&D never really get a foothold.

What would really help is proper support through the scale-up phase and into deployment: funded testbeds, like you see in quantum, where new technologies can be proven in realistic environments, and procurement frameworks that make it natural rather than exceptional to include UK-developed tech.

If we’re serious about “sovereign” capability in data centers and AI, we have to move beyond just hosting other people’s hardware.

Where else do you see photonics transforming networking?

It’s easy to focus on the big data centers because that’s where AI and cloud live today, but networking is much broader than that.

Think about intersatellite links in space, free-space optical links bringing connectivity to hard-to-reach areas, or secure, high-bandwidth connections between aircraft or autonomous vehicles in defense.

Those are all fundamentally networking problems, and they’re all places where photonics can make a big impact.

Finally, how do you see Finchetto’s architecture evolving to meet future needs like quantum networking, optical compute, or photonic memory?

The way we’ve structured our IP is quite intentional. At its heart, what we’ve patented is a method and apparatus for switching data using nonlinear optics. In other words, it’s not tied to one very narrow implementation or use case.

That gives us a lot of headroom. The same underlying switching principle can be applied to different kinds of networks, whether that’s classical high-speed packet networks, future quantum-adjacent architectures, or systems where compute and memory themselves are optical.

We’re focused on solving today’s problems around AI and hyperscale networking, but we’re doing it with a technology base that can move with the industry rather than getting stranded as the next wave arrives.
