In 2017, the city of Rotterdam in the Netherlands deployed an artificial intelligence (AI) system to determine how likely welfare recipients were to commit fraud. After analysing the data, the system developed biases: it flagged as “high risk” people who were young, female, parents, or had low proficiency in the Dutch language.
The Rotterdam system was suspended in 2021 after an external ethics review, but it demonstrates what can go wrong when governments adopt AI systems without proper oversight. As more local governments turn to AI in an effort to provide real-time and personalised services for residents, a “smarter” environment and better, safer systems, the risks are rising.
As part of our ongoing research, we studied 170 local governments around the world that use various AI systems.
We found AI is already touching nearly every aspect of public service delivery, yet most of these governments had no published policy governing its use.
AI in everyday governance
AI applications are affecting local governance in profound ways. Our international investigation uncovered 262 cases of AI adoption across 170 local councils, spanning a wide array of technologies and services.
We found these technologies are being deployed across five key domains.
1. Administrative services. For example, the VisitMadridGPT tourism chatbot in Madrid, Spain delivers personalised recommendations, real-time support, and cultural insights for visitors.
2. Health care and wellbeing. For example, during the height of the COVID pandemic in 2021, the Boston mayor’s office in the United States launched an AI-driven chatbot for contactless food delivery, simultaneously addressing hunger and safety concerns.
3. Transportation and urban planning. Logan City in Australia has implemented a real-time AI system that keeps drivers informed where parking is available, reducing congestion and frustration. Meanwhile, AI-driven route optimisation for public transport is being widely adopted to save time and emissions.
4. Environmental management. In Hangzhou, China, an AI system is being used to classify waste more efficiently, boosting recycling rates.
5. Public safety and law enforcement. Chicago in the US has used sensors and AI automation to shape law enforcement strategies. By pinpointing crime hotspots, the city reportedly reduced gun violence by 25% in 2018. However, this technology has also raised ethical concerns about racial profiling.
The double-edged sword of AI
Our study found that of the 170 local governments examined, only 26 had published AI policies as of May 2023 – less than 16%. Most are deploying powerful AI systems with no publicly available framework for public oversight or accountability.
This raises serious concerns about ethical violations, systemic biases and unregulated data use.
Without robust policy, local governments risk deploying powerful AI systems without critical checks or external supervision. Algorithms could unintentionally discriminate against certain populations when allocating resources such as public housing or health services. The stakes can be incredibly high, as Rotterdam’s welfare fraud risk scores showed.
Among the councils with AI policies, there was a clear emphasis on collaboration with stakeholders, raising awareness among employees and citizens, and ensuring transparency and regulation.
Among these, Barcelona City Council’s AI policy stands out. It includes principles such as being transparent about AI use and making sure AI decisions are explainable and fair, setting a benchmark for other municipalities.
Public in the dark
A recent survey our team conducted in Australia, Spain and the US shows a significant gap between public awareness and local government action on AI. More than 75% of respondents were aware of AI technologies and their growing presence in everyday life, but far fewer knew about their local government’s AI initiatives.
On average, half of the respondents were unaware their local governments are actively using AI in public services. Even more concerning, 68% said they had no idea local governments have – or could have – policies governing AI use.
This striking lack of awareness raises pressing questions about the transparency and communication of local councils. As AI becomes increasingly embedded in urban management – from traffic monitoring to public safety and environmental sustainability – better informing the public is essential.
Without public understanding and engagement, efforts to build trust, accountability, and ethical oversight for AI in governance may face significant hurdles.
The future we face
There is no doubt AI systems have great potential to improve urban governance. But without policies that prioritise transparency, accountability and ethical use, cities risk unleashing a system that could harm more than it helps.
However, it’s not too late for local governments – and citizens – to avoid this grim future. Local governments can create robust AI policies that ensure fairness, transparency, and the ethical use of data. Citizens can be educated about AI’s role in local governance.
AI applications are reshaping and transforming our world. But how we choose to guide their integration into our communities will determine whether they become a force for good or simply entrench biases and hidden agendas.
Our project is working with local governments in Australia, the US, Spain, Hong Kong and Saudi Arabia to create guiding AI principles that we aim to finalise by the end of 2025.
The authors acknowledge the contribution of Kevin Desouza, Rashid Mehmood, Anne David, Sajani Senadheera and Raveena Marasinghe to the research described in this article.
Tan Yigitcanlar receives funding from the Australian Research Council.
Karen Mossberger has received funding support from the Australian Research Council as Co-Principal Investigator.
Pauline Hope Cheong has received funding support from the Australian Research Council as Co-Principal Investigator.
Rita Yi Man Li has received funding support from the Australian Research Council as Co-Principal Investigator.
Juan Manuel Corchado Rodriguez does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
This article was originally published on The Conversation. Read the original article.