US House of Representatives staffers are now prohibited from using Microsoft's Copilot generative AI assistant, with officials citing security concerns.
This congressional action, first reported by Axios, reflects the federal government's ongoing efforts to grapple with its internal use of AI and develop regulations for this rapidly growing technology.
According to the Axios report, House Chief Administrative Officer Catherine Szpindor said the Office of Cybersecurity has deemed the Microsoft Copilot application a "risk to users due to the threat of leaking House data to non-House approved cloud services."
As a result, staff members are prohibited from using Microsoft's AI-powered assistant on their government-issued devices. A memorandum from Szpindor cited concerns the Office of Cybersecurity raised regarding the potential for unauthorised data leaks to cloud services. However, congressional staffers remain permitted to use Copilot AI on personal devices.
In a statement to Reuters, a Microsoft spokesperson said, "We recognise that government users have higher data security requirements. That's why we announced a roadmap of Microsoft AI tools, like Copilot, that meet federal government security and compliance requirements that we intend to deliver later this year."
AI in government: Balancing innovation and security
The memo further noted that lawmakers and staff are only permitted to use the paid version of OpenAI's ChatGPT Plus, known for its improved privacy features. Szpindor emphasised that ChatGPT Plus can only be used for "research and evaluation" purposes, with privacy settings always enabled.
The memo restricts staff from pasting "any blocks of text that have not already been made public" into the chatbot. It also explicitly states, "No other versions of ChatGPT or other large language models AI software are currently authorised for use in the House."
Concerns about AI misuse extend beyond the House. In 2023, a bipartisan group of four US senators introduced legislation to ban AI-generated content that deceptively portrays candidates in political ads for federal elections.
This move reflects broader anxieties surrounding the potential for AI manipulation in elections. Researchers accused Copilot of providing inaccurate information when responding to US election-related queries last year.
Szpindor's office plans to assess the government version of Copilot after its release to determine whether it is appropriate for use on House devices. The Satya Nadella-led software giant previously announced plans to introduce several government-specific tools and services.
These include a secure version of Azure OpenAI for classified workloads, alongside an enhanced version of the Copilot assistant for Microsoft 365. These offerings aim to improve the security of sensitive government data.
Moreover, big tech companies such as Google, Apple, and Samsung have also implemented limitations on employee use of generative AI tools like ChatGPT. Last year, Samsung restricted employees from using generative AI tools on company-owned devices and the company's internal networks.
Samsung's move followed a slew of privacy lapses involving OpenAI, such as a ChatGPT bug that leaked user chat histories. The culprit was traced back to "a bug in an open source library," highlighting potential vulnerabilities in the infrastructure surrounding these AI services.