The Comptroller and Auditor General of India (CAG), Girish Chandra Murmu, who chairs the Supreme Audit Institutions (SAIs) of the G20, warned that absolute dependence on Artificial Intelligence (AI) for auditing may lead to inaccurate findings, and emphasised ethics as the cornerstone of responsible AI. The CAG conducts financial audits, compliance audits, and performance audits. The challenges of auditing AI include ensuring transparency, objectivity, and fairness, and avoiding bias.
Responsible AI must be ethical and inclusive. Only ethical AI can add credibility, trust, and scalability to the CAG audit. Data sets must be complete, gathered on time, accurate, available, and relevant. If the integrity of the data fields is not ensured, audit findings will be inaccurate. The AI auditor must be extra vigilant about the risk of inherent AI data bias when data are drawn from unauthorised sources such as social media, where data manipulation and fabrication are common.
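To make these data-quality expectations concrete, here is a minimal Python sketch of the kind of completeness, accuracy, and timeliness checks an auditor might run before trusting a data set. The column names, deadline, and values are invented for illustration and are not drawn from any CAG standard.

```python
import pandas as pd

# Hypothetical audit records; all column names are illustrative.
records = pd.DataFrame({
    "entity_id": [101, 102, None, 104],            # one missing identifier
    "expenditure": [2.5e6, -4.0e5, 1.2e6, 9.9e5],  # one negative (suspect) value
    "reported_on": pd.to_datetime(
        ["2023-04-02", "2023-04-03", "2023-06-30", "2023-04-05"]),
})

DEADLINE = pd.Timestamp("2023-04-15")  # assumed reporting deadline

checks = {
    # Completeness: every record must carry an identifier.
    "missing_ids": int(records["entity_id"].isna().sum()),
    # Accuracy: expenditure figures should be non-negative.
    "negative_amounts": int((records["expenditure"] < 0).sum()),
    # Timeliness: data must be gathered before the deadline.
    "late_records": int((records["reported_on"] > DEADLINE).sum()),
}

# Any non-zero count undermines data-field integrity and should be
# resolved before the data set feeds an AI-assisted audit.
print(checks)  # {'missing_ids': 1, 'negative_amounts': 1, 'late_records': 1}
```

Checks of this sort catch only mechanical defects; judging whether the data come from an authorised source remains the auditor's call.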
India needs AI regulation
In June, the European Parliament approved the EU AI Act, the first of its kind in the world. The Act places generative AI tools such as ChatGPT under greater restrictions and scrutiny: developers will have to submit their systems for review and approval before releasing them commercially. Parliament also prohibited real-time biometric surveillance in all public settings and banned “social scoring” systems.
Ensuring the accuracy of vast Internet data mines is a challenge. Content generated by AI systems may lead to copyright infringement, violating intellectual property rights, and addressing the legal implications of content ownership is a formidable task. AI bias is an inherent risk originating from the human bias embedded in the data sets used for machine learning. Elon Musk wants to address these concerns by developing ‘TruthGPT’, a “maximum truth-seeking AI”. His vision of a harmonious fusion of technological progress and ethical considerations poses significant challenges, and a multifaceted approach may be required to mitigate bias and ensure the safety and accuracy of AI models. U.K. Prime Minister Rishi Sunak has said that he wants to make the U.K. the “geographical home” of AI safety regulation. It is time for India to take a cue from the EU and enact appropriate legislation on the use of AI systems.
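As one concrete illustration of what bias in the data sets can mean in practice, the sketch below computes a simple demographic-parity gap: the difference in approval rates a model gives two groups. The groups, decisions, and figures are fabricated for the example; a real audit would use the system's actual outputs and an agreed fairness criterion.

```python
from collections import defaultdict

# Fabricated (group, model_decision) pairs; 1 = approved, 0 = rejected.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    approvals[group] += outcome

# Approval rate per group, and the gap between the extremes.
rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)                      # {'A': 0.75, 'B': 0.25}
print(f"parity gap = {gap:.2f}")  # a large gap flags possible data bias
```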
Challenges before the CAG
The CAG faces many challenges in auditing AI systems. AI regulation and data standardisation are critical. Since the data of various government entities come from different sources and are stored on multiple divergent platforms, the AI auditor faces enormous risks and challenges. Audits cannot be based on big data from unauthorised sources, and data integration and cross-referencing become cumbersome. The data platforms of all entities must be synchronised through the government’s IT policies. According to the CAG, ‘One Indian Audit and Accounts Department One System’ (OIOS), a web-enabled IT application, will support multiple languages, offline functionality, and a mobile app, enabling complete digitalisation of the audit process from April 1, 2023; the only exception is the defence audit, because of its security dimensions. The SAI G20 conference emphasised the need for a common international audit framework relating to AI.
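To show why integration across divergent platforms is cumbersome, here is a small sketch that cross-references two hypothetical systems on a shared key. The scheme codes and amounts are invented, and real platforms often lack even such a common key, which is precisely the standardisation problem.

```python
import pandas as pd

# Two hypothetical platforms holding related records; names are assumptions.
disbursements = pd.DataFrame({
    "scheme_code": ["S1", "S2", "S3"],
    "disbursed": [5.0e6, 2.5e6, 1.0e6],
})
utilisation = pd.DataFrame({
    "scheme_code": ["S1", "S3", "S4"],
    "utilised": [4.8e6, 1.4e6, 6.0e5],
})

# An outer join keeps unmatched rows, so gaps between the two systems
# surface as NaN values instead of silently disappearing.
merged = disbursements.merge(utilisation, on="scheme_code", how="outer")
merged["gap"] = merged["disbursed"] - merged["utilised"]
print(merged)  # S2 has no utilisation record; S4 has no disbursement record
```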
At present, auditors can only adopt and adapt existing IT frameworks and regulations. As there are few precedents for AI use, the national audit institution needs to communicate with all stakeholders. The existing definitions and taxonomies of AI must be examined so that what is legally acceptable can be adopted. Since AI systems and solutions vary widely, the auditor must settle on an appropriate AI design and architecture while defining the audit’s objective, scope, approach, criteria, and methodology. Auditors need capacity building in varied aspects of the AI technology landscape so that they are familiar with AI frameworks, tools, and software. In the absence of explicit AI auditing guidance, auditors must focus on ethics, use authentic data sources to ensure transparency, address legal concerns, and look for deficiencies in IT controls and governance. AI audit assignments may require consultation with data scientists, data engineers, data architects, programmers, and AI specialists. Outsourcing AI to third parties that use cloud computing carries the risk of those parties controlling the infrastructure. AI domain risks such as big data, machine learning, and cybersecurity must be documented in a risk and control matrix.
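A risk and control matrix is essentially a table pairing each identified risk with the control that mitigates it and an accountable owner. The sketch below shows one minimal way to represent such a matrix; every entry is an invented example, not an entry from any official CAG or SAI framework.

```python
# A minimal sketch of a risk-and-control matrix for an AI audit.
# Domains follow the article's list; risks, controls, and owners are
# illustrative assumptions only.
risk_control_matrix = [
    {"domain": "Big data",
     "risk": "Unverified data drawn from unauthorised sources",
     "control": "Restrict ingestion to authenticated government platforms",
     "owner": "Data engineer"},
    {"domain": "Machine learning",
     "risk": "Bias inherited from human-labelled training sets",
     "control": "Periodic fairness review of model outputs",
     "owner": "AI specialist"},
    {"domain": "Cybersecurity",
     "risk": "Third party controls the cloud infrastructure",
     "control": "Contractual audit rights and encryption at rest",
     "owner": "IT governance lead"},
]

for row in risk_control_matrix:
    print(f"{row['domain']:<16} | {row['risk']:<50} | {row['control']}")
```

In practice such a matrix would live in the audit working papers, with each control tested and its operating effectiveness recorded.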
Compliance issues
Global organisations have developed many AI auditing frameworks, including the COBIT framework for AI audit, the U.S. Government Accountability Office framework, and the COSO ERM framework. The U.K.’s Information Commissioner’s Office has published draft guidance on its AI auditing framework. Data Protection Impact Assessments are legally required when organisations use AI systems that process personal data, so that potential risks are identified and mitigated. The AI auditor must ensure that personal data is processed in a manner that guarantees appropriate levels of security.
With few frameworks available for auditing AI, auditors can only focus on the risks, controls, and governance structures in place and determine whether they are operating effectively.