Fortune
David Meyer

European AI watches Trump's return with a mix of fear and hope

Former U.S. President Donald Trump arrives to speak during an election night event at the Palm Beach Convention Center on November 6, 2024, in West Palm Beach, Florida. (Credit: Chip Somodevilla—Getty Images)

AI researchers and European startups are watching the U.S. nervously after the re-election of former President Donald Trump.

With AI regulation ratcheting up in Europe, and with tech companies in the region generally attracting less capital than their American counterparts, there are concerns that Trump's deregulatory America First strategy could leave European AI in an even trickier position than it is now.

While it's impossible to predict exactly how Trump will tackle AI, the likelihood is that he will avoid imposing federal regulation on the sector in the U.S. Key ally Elon Musk may advise him that AI poses major future dangers if not kept under control, but Trump has already promised to scrap the tentative first steps that the U.S. took toward AI regulation under President Joe Biden, and some of his backers, like venture capitalist Marc Andreessen, oppose any such regulation.

If Trump does steer clear of regulating AI, that would create an even greater gap between the U.S. and the European Union, where privacy laws limit the data that AI companies can use to train their models. The EU also this year passed an AI Act that bans some uses of AI—like manipulating people or deducing someone's race or sexuality from their biometric data—while placing big responsibilities on companies building “high-risk” AI systems.

Many European startups are grappling with the new law’s implications as they prepare for its full enforcement by mid-2026, and they fear uncertainty could hold their scene back just as the U.S. strengthens its own position in the rapidly forming global AI sector.

“In the next two years there will be a lot of uncertainty about these regulations, and [European] startups will be [discouraged] because it’s not really clear what the legal state is,” said Wieland Brendel, a group leader at Germany’s Max Planck Institute for Intelligent Systems, a major technology research institute, and cofounder of the visual quality control startup Maddox AI.

“In general, I think the idea that the EU has around being protective of consumers and being protective of their citizens’ personal data, is the right one,” said Talha Zaman, the chief technology officer at Germany’s Meshcapade, a 3D body-modeling and avatar creation company. “It’s just a question of how it’s implemented and whether that’s essentially too much of a damper on innovation.”

But while the disparity could see U.S. AI firms power ahead without regulatory shackles, some suggest their European counterparts might benefit from developing their models according to strict rules—a potential plus in the eyes of highly regulated customers.

“It could turn out to be good for us, especially in the medical field,” said Marc Mausch, co-founder of the German AI-powered breast cancer detection firm Earlytrace.

Trump’s effect on talent

Fortune spoke to Brendel, Zaman, and Mausch at Germany's Cyber Valley AI cluster a couple of days after the U.S. election. Located in the picturesque medieval city of Tübingen, Cyber Valley has over the last eight years become Europe's largest AI research consortium—a joint initiative of the state of Baden-Württemberg, the Max Planck Institute, the universities of Stuttgart and Tübingen, and a host of companies including Amazon and German stalwarts like BMW and Bosch.

The setup is unlike what one might find in the U.S., but the consortium is trying to provide a European counterweight to Silicon Valley, with fundamental research closely allied with entrepreneurship. There are now around 100 startups in the Cyber Valley ecosystem, with the hub encouraging a very non-German willingness to take risks.

Michael Black, a Bay Area transplant who is a founding director at the Max Planck Institute for Intelligent Systems and played a key role in setting up Cyber Valley, reckons the U.S.'s deepening divisions could make it easier for European research institutions and AI companies to retain talent.

Michael Black of Meshcapade and the Max Planck Institute (L), with Cyber Valley managing director Rebecca Reisch and Florian Stegmann, head of the chancellery in the German state of Baden-Württemberg.

“U.S. politics makes it maybe appealing for people to stay here,” said Black, who is also co-founder and chief scientist at Meshcapade, which spun out of Cyber Valley five years ago.

Black said a more regulation-averse era in the U.S. could benefit AI firms there, particularly when training models. "Companies that have that data have a huge competitive advantage today, and that's independent of all the regulation out there," he said. However, Black also warned that U.S. companies wanting to sell abroad would still have to stick to the same rules as their counterparts in Europe and elsewhere.

“Clearly the places where people are allowed to use any data for training will get a jump on things, but they may find markets closed,” Black said. “If Europe has a regulation that you have to make sure that the people’s privacy has to be protected, or some other country has things that say that people’s copyright has to be protected, if you’ve trained your model in a country where those things don’t hold, that model may not be usable elsewhere.”

Open-source AI

While that may limit the growth of AI companies that design their models for a low-regulation environment, it could also have an unpleasant knock-on effect in Europe.

Rather than paying expensive access fees for proprietary AI models like OpenAI's GPT series or Google's Gemini, many startups use "open source" models that they can run for free and easily modify. The main producers of these models are the U.S.'s Meta and France's Mistral—now the only major large language model developer in Europe, since Germany's Aleph Alpha pivoted away from LLM creation a couple of months ago.

Meta has already made its consumer-facing Meta AI services unavailable in Europe, because the region's data protection laws don't allow the company to simply repurpose Facebook and Instagram user data for AI training. According to Brendel, if a similar fate met Llama, "that would be a huge problem for our ecosystem" in Europe.

There is also a chance that Trump could crack down on the spread of U.S. open AI models globally. Some of his supporters back open-source AI, but others see national security concerns and have called for open-source AI models to be added to U.S. export controls lists.

“If things head further in that direction in the sense of locking down models such that they’re only available to the big players, then that would have a negative impact on everyone else,” said Zaman. “If you start treating it like a weapon, then everyone needs to develop their own.”

“We should not lose the ability to create our own foundation models,” said Mausch.
