A committee of MIT leaders and scholars has released a set of policy briefs outlining a framework for the governance of artificial intelligence.
Their framework would extend current regulatory and liability regimes in pursuit of a practical way to oversee AI.
The papers aim to bolster US leadership in artificial intelligence while limiting the potential harms of the new technologies and encouraging exploration of how deploying AI could benefit society.
The main policy paper, 'A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector', suggests AI tools can often be regulated by existing US government entities that already oversee the relevant domains.
The recommendations also underscore the importance of identifying the purpose of AI tools, so that regulations can be tailored to those applications.
"As a country, we're already regulating a lot of relatively high-risk things and providing governance there," said Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee.
"We're not saying that's sufficient, but let's start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach."
The project includes multiple additional policy papers and comes amid heightened interest in AI over the past year, since the release of ChatGPT thrust the technology into mainstream discourse and prompted considerable new industry investment in the field.
The European Union, meanwhile, is trying to finalise the world's first comprehensive AI law, the 'AI Act', which aims to regulate systems based on the level of risk they pose.
Negotiations on the final legal text began in June, but in recent weeks a fierce debate over how to regulate general-purpose AI such as ChatGPT and Google's Bard chatbot threatened to derail the talks at the last minute.
With this in mind, world leaders and tech specialists gathered last month as UK Prime Minister Rishi Sunak hosted the world's first 'AI Safety Summit' at Bletchley Park, Buckinghamshire.
In the build-up to the conference, Sunak announced the establishment of a 'world-first' UK AI Safety Institute.
The summit concluded with the signing of the Bletchley Declaration, in which countries including the UK, United States and China agreed on the "need for international action to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community".
Any governance effort faces the challenge of regulating both general and specific AI tools, as well as addressing an array of potential problems, including misinformation, deepfakes and surveillance.
MIT's policy paper calls for advances in the auditing of AI tools, exploring pathways such as government-initiated audits, user-driven reviews, and legal liability proceedings.
It also proposes considering a new, government-approved self-regulatory organisation (SRO), similar to the Financial Industry Regulatory Authority (FINRA), which could build domain-specific expertise and engage nimbly with a fast-changing AI industry.
MIT's involvement in AI governance stems from its recognised expertise in AI research, which positions the institution as a key contributor to addressing the challenges posed by evolving AI technologies. The release of these policy briefs signals its commitment to promoting responsible AI development and use.
"We felt it was important for MIT to get involved in this because we have expertise," said David Goldston, director of the MIT Washington Office. "MIT is one of the leaders in AI research, one of the places where AI first got started. Since we are among those creating technology that is raising these important issues, we feel an obligation to help address them."