A team of Stanford researchers is warning that leading AI models are woefully non-compliant with responsible AI standards, as represented by the EU's draft Artificial Intelligence Act.
Driving the news: The House Science Committee meets Thursday to probe AI executives on how to develop AI "towards the national interest."
Why it matters: While leading AI companies have expressed openness to regulation, they don't come close to following the first democratic rules for AI foundation models, drafted by EU officials and lawmakers, the study finds.
- The research represents the first detailed look at how leading AI models stack up against what's likely to be the first thorough set of rules governing AI. The study will be featured at today's hearing.
Meanwhile: Senate Majority Leader Chuck Schumer on Wednesday unveiled a framework for comprehensive AI legislation.
- Schumer insisted that in order for innovation to "be our North Star," Americans will have to trust that AI is safe for them and the country.
The details: The U.S. researchers scored 10 leading AI models against 12 requirements laid out in the draft EU law. They found:
- Hugging Face's BLOOM scores highest: 36 out of 48.
- Market leader ChatGPT is in the middle of the pack.
- Aleph Alpha, a German AI company, is at the bottom of the pile — scoring just 5 out of 48 — with Google-backed Anthropic managing 7 points.
- Open source models scored higher on transparency, while closed or proprietary models scored higher on risk mitigation.
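For context on the scoring: the 48-point ceiling works out to the 12 requirements each being graded on a 0-to-4 scale and then summed. A minimal sketch of that tally in Python, using paraphrased requirement names and made-up scores rather than the study's actual data:

```python
# Illustrative only: a hypothetical tally on a 0-4-per-requirement rubric.
# Requirement names are paraphrased and the scores are invented,
# not the study's actual data.
requirements = [
    "data sources", "data governance", "copyrighted data", "compute",
    "energy", "capabilities & limitations", "risks & mitigations",
    "evaluations", "testing", "machine-generated content",
    "member states", "downstream documentation",
]

hypothetical_scores = {name: 2 for name in requirements}  # pretend each requirement earns 2 of 4

max_total = 4 * len(requirements)          # 12 requirements x 4 points = 48
total = sum(hypothetical_scores.values())  # 24 in this made-up example

print(f"{total} out of {max_total}")       # -> "24 out of 48"
```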
What they're saying: Kevin Klyman, a researcher at Stanford's Center for Research on Foundation Models and one of the report's authors, told Axios the most concerning finding is "providers often do not disclose the effectiveness of their risk mitigation measures, meaning that we cannot tell just how risky some foundation models are."
- "The risk landscape for foundation models is immense, spanning many forms of malicious use, unintentional harm, and structural or systemic risk," per the report authors.
- "Enacting and enforcing the EU AI Act will bring about significant positive change in the foundation model ecosystem," they added.
The bottom line: AI providers aren't prepared for legislation that would mandate disclosure of their development processes and mitigation of their models' risks.
Editor's note: This story has been corrected to remove a reference to Harvard as the research was conducted by Stanford.