Tom’s Guide
Technology
Lloyd Coombes

Meta gets an F in first AI safety scorecard — and the others barely pass

Meta Llama 3.1.

As artificial intelligence evolves, the need for oversight becomes increasingly clear. Most AI labs openly support regulation and provide access to frontier models for independent evaluation before release, but they could be doing more.

The world is turning to AI to solve all manner of problems, but without proper oversight, it could just as easily create new ones.

The Future of Life Institute has developed a report card for the leading AI labs, including OpenAI, Meta, Anthropic and Elon Musk's xAI. The AI Safety Index is an independent review covering 42 indicators of "responsible conduct".

The report assigns a letter grade to each company based on these indicators, and Meta, which focuses on open-source AI models through its Llama family, gets an F.

AI Safety Index has worrying stats for the biggest AI companies

Source: Future of Life Institute. Grading uses the US GPA system for grade boundaries: A+, A, A-, B+, […], F.

The panel includes prominent figures from academia and think tanks who assess how AI companies are operating, and the initial results are alarming.

Examining Anthropic, Google DeepMind, Meta, OpenAI, xAI and Zhipu AI, the report found "significant gaps in safety measures and a serious need for improved accountability."

According to this first report card, Meta scores lowest (xAI isn't far behind), while Anthropic comes out on top, though it still only earns a C.

All flagship models were found to be "vulnerable to adversarial attacks", and to carry the risk of becoming unsafe and escaping human control.

Perhaps most damning, the report says "Reviewers consistently highlighted how companies were unable to resist profit-driven incentives to cut corners on safety in the absence of independent oversight."

"While Anthropic's current and OpenAI’s initial governance structures were highlighted as promising, experts called for third-party validation of risk assessment and safety framework compliance across all companies."

In short, this is the kind of oversight and accountability we need to see in the burgeoning AI industry before it's too late: the more powerful the models get, the more real the harms become.

