Hello and welcome to the September special edition of Eye on AI.
The federal government’s activity around AI has taken center stage this past month, sparked by the Senate’s first listening session to inform how it might regulate the emerging technology. So today we’re catching up on what’s been happening around AI at the state and local levels, where we’re seeing executive orders, regulation, chatbot rollouts, and, overall, a whole lot more in the way of action.
This week, Oklahoma Gov. Kevin Stitt signed an executive order establishing a new AI task force to inform how the state’s government could benefit from the technology. The task force is charged with studying, evaluating, and developing policy and administrative recommendations for AI deployment and is due to report its findings by the end of this year.
“The private sector is already finding ways to use it to increase efficiency,” Stitt said in a news release. “Potential exists for the government to use AI to root out inefficiencies and duplicate regulations, and it is an essential piece of developing a workforce that can compete on a global level.”
The move mirrors California Gov. Gavin Newsom’s executive order from earlier this month, which made a splash for urging California legislators to establish new policies around how state agencies and departments procure, use, and train employees regarding generative AI technology. He also mandated that state agencies and departments create risk assessment reports regarding how generative AI could affect their work, the state’s energy usage, and the economy, as well as a report examining “the most significant and beneficial uses of GenAI in the state.”
But these moves in Oklahoma and California are just two examples of how states are starting to seriously grapple with the use of AI within government: Maine, for instance, has already banned the use of AI tools for government work, while Washington State has issued warnings about it.
And the moves toward state-level AI regulation go far beyond government use. Overall, 10 states have already incorporated AI regulations into larger consumer privacy laws that either passed or will go into effect this year, according to the nonprofit Electronic Privacy Information Center, which recently published a report outlining every state-level AI law proposed, passed, and going into effect. These laws target a varied set of issues, including facial recognition, the use of AI in hiring, and the right to opt out of various automated decisions. States including California, New York, Massachusetts, Rhode Island, and Pennsylvania have also recently proposed bills regulating generative AI in particular.
And at the local level, generative AI is already turning up in government tools. In partnership with Dell, the City of Amarillo, Texas, this month announced it’s developing a generative AI-powered digital assistant for the city. Designed with the city’s identity, tone of voice, and knowledge, it will be available in early 2024 to answer residents’ questions about things like park facilities or the trash pick-up schedule.
Richard Gagnon, chief information officer for the city of Amarillo, told Eye on AI that the digital assistant’s ability to easily provide information across multiple languages is a big point of interest for the city, which has more refugees per capita than any other city in Texas.
“In a single middle school, sixty-two languages and dialects are spoken. Twenty-four percent of our population speaks a language other than English at home,” he said. “Generative AI presents an opportunity to connect with our entire population not only for access to city services but also in our digital literacy and workforce development efforts.”
But with issues around bias, “hallucination,” and more still swirling around generative AI, there are some obvious concerns about adopting it for government use at this time. Gagnon said the city is using ChatGPT for its initial development in the cloud but is currently testing other LLMs for a planned move to on-premises infrastructure later this year, which he says will offer more control over the knowledge base and the LLM. He also said the city’s partnership with Pryon, an AI company focused on enterprise knowledge management, is “critical” to combating these issues.
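Neither Amarillo nor Pryon has published implementation details, so purely as a hedged illustration of the general pattern Gagnon describes (answers grounded in a city-controlled knowledge base, with the underlying LLM swappable between a hosted service today and an on-premises model later), here is a minimal sketch. Every name in it, from CityAssistant to KnowledgeBase, is hypothetical and not drawn from the city’s or Pryon’s actual code.

```python
# Hypothetical sketch: a retrieval-grounded assistant with a swappable LLM backend.
# Names and structure are illustrative only, not Amarillo's or Pryon's implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Document:
    department: str   # e.g. "Parks" or "Solid Waste"
    text: str         # city-approved source content


class KnowledgeBase:
    """A city-controlled document store; answers are grounded in these sources."""

    def __init__(self, documents: List[Document]):
        self.documents = documents

    def retrieve(self, question: str, k: int = 3) -> List[Document]:
        # Toy keyword-overlap retrieval; a real system would use a proper index.
        terms = set(question.lower().split())
        scored = sorted(
            self.documents,
            key=lambda d: len(terms & set(d.text.lower().split())),
            reverse=True,
        )
        return scored[:k]


class CityAssistant:
    """Wraps any LLM callable, so a hosted model can later be swapped for an on-prem one."""

    def __init__(self, kb: KnowledgeBase, llm: Callable[[str], str]):
        self.kb = kb
        self.llm = llm  # hosted API today, on-premises model later

    def answer(self, question: str) -> str:
        context = "\n".join(d.text for d in self.kb.retrieve(question))
        prompt = (
            "Answer using only the city sources below. If the answer isn't "
            f"in the sources, say so.\n\nSources:\n{context}\n\nQuestion: {question}"
        )
        return self.llm(prompt)


if __name__ == "__main__":
    kb = KnowledgeBase([
        Document("Solid Waste", "Trash pick-up runs Monday and Thursday in most neighborhoods."),
        Document("Parks", "City pools open Memorial Day weekend and close after Labor Day."),
    ])
    # Stub LLM so the sketch runs without any external service.
    assistant = CityAssistant(kb, llm=lambda prompt: f"[model response to]\n{prompt}")
    print(assistant.answer("When is trash picked up?"))
```

The point of the wrapper is simply that the `llm` callable is the only piece that changes when the city moves from a hosted model to one running on its own hardware.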
The city is also working out strategies for managing safety and improving the AI’s performance on an ongoing basis. For example, each department will receive daily reports on what questions were asked in its domain and how they were answered. Directors will be expected to review these reports daily and update the website or source documents to continuously improve the conversation.
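The city hasn’t described its reporting tooling, so as an assumption-laden sketch only, that feedback loop might amount to something like the following: group the day’s logged question-and-answer pairs by department and write out a file for each director to review. The log fields and file names here are invented for illustration.

```python
# Hypothetical sketch of a daily per-department Q&A report; field names and the
# log format are assumptions, not Amarillo's actual reporting pipeline.
import csv
from collections import defaultdict
from datetime import date
from typing import Dict, List


def build_daily_report(log_entries: List[dict]) -> Dict[str, List[dict]]:
    """Group the day's logged question/answer pairs by the department they concern."""
    by_department: Dict[str, List[dict]] = defaultdict(list)
    for entry in log_entries:
        by_department[entry["department"]].append(entry)
    return dict(by_department)


def write_reports(report: Dict[str, List[dict]], day: date) -> None:
    """Write one CSV per department for directors to review and flag bad answers."""
    for department, entries in report.items():
        path = f"report_{department.lower().replace(' ', '_')}_{day.isoformat()}.csv"
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["question", "answer", "needs_review"])
            writer.writeheader()
            for entry in entries:
                writer.writerow({
                    "question": entry["question"],
                    "answer": entry["answer"],
                    "needs_review": "",  # director marks answers whose source docs need updating
                })


if __name__ == "__main__":
    todays_log = [
        {"department": "Parks", "question": "What time do city pools open?",
         "answer": "Pools open at 10 a.m. during the summer season."},
        {"department": "Solid Waste", "question": "When is trash picked up on Elm St?",
         "answer": "Trash on Elm St is collected Monday and Thursday."},
    ]
    write_reports(build_daily_report(todays_log), date.today())
```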
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com