Financial regulators in the United States have named artificial intelligence (AI) as a risk to the financial system for the first time.
In its latest annual report, the Financial Stability Oversight Council said the growing use of AI in financial services is a “vulnerability” that should be monitored.
While AI offers the promise of reducing costs, improving efficiency, identifying more complex relationships and improving performance and accuracy, it can also “introduce certain risks, including safety-and-soundness risks like cyber and model risks,” the council said in the report, released on Thursday.
The FSOC, which was established in the wake of the 2008 financial crisis to identify excessive risks in the financial system, said developments in AI should be monitored to ensure that oversight mechanisms “account for emerging risks” while facilitating “efficiency and innovation”.
Authorities must also “deepen expertise and capacity” to monitor the field, the FSOC said.
US Treasury Secretary Janet Yellen, who chairs the FSOC, said that the use of AI in financial services is likely to grow as the industry adopts emerging technologies, and that the council will play a role in monitoring “emerging risks”.
“Supporting responsible innovation in this area can allow the financial system to reap benefits like increased efficiency, but there are also existing principles and rules for risk management that should be applied,” Yellen said.
US President Joe Biden in October issued a sweeping executive order on AI that focused largely on the technology’s potential implications for national security and discrimination.
Governments and academics worldwide have expressed concerns about the breakneck speed of AI development, amid ethical questions spanning individual privacy, national security and copyright infringement.
In a recent survey carried out by Stanford University researchers, tech workers involved in AI research warned that their employers were failing to put in place ethical safeguards despite their public pledges to prioritise safety.
Last week, European Union policymakers agreed on landmark legislation that will require AI developers to disclose data used to train their systems and carry out testing of high-risk products.