Consider these facts: Artificial Intelligence (AI) is here to stay; AI has the capability to fundamentally change the way we work; AI can, by assimilating data from multifarious online sources, present far more powerful (and seemingly creative) solutions than any human can; and AI is a far greater force for either good or evil (or both) than anything that has come before it. Given all this, AI needs to be regulated.
‘East is east and west is west’
Governments across the world are grappling with the regulation of AI. Till recently, the greatest advances in the regulation of AI have been made in the European Union (EU), Brazil, Canada, Japan and now, China. The EU, Brazil and the United Kingdom (referred to hereafter as "western systems") have adopted regulatory measures whose differences, on closer study, are superficial. Regulations in Japan and China, however, are fundamentally different from the western system.
The western systems have all adopted means of regulation which are intrinsically western in character. That is to say, they are founded in the Eurocentric view of jurisprudence. The eastern model is entirely different.
The western model focuses on a risk-based approach. First, lawmakers identify the risks posed by every type of AI-based application and arrange them into a pyramid of risks. The pyramid is then divided into four categories: 'unacceptable risk', 'high risk', 'limited risk' and 'low risk'. In the EU, therefore, lawmakers have prescribed a prohibited class of activities for the 'unacceptable risks', a regulated class of activities for the 'high risks', and a simple set of disclosure-based obligations for the 'low risks'. Brazil too has followed a system of categorising risks and framing regulations to address them, along with strict governance measures to be complied with by all AI applications. Canada follows a similar pattern, identifying activities to be prohibited and laying down clear regulations on how AI-based applications must function.
As far as the eastern models are concerned, the Japanese government's Integrated Innovation Strategy Promotion Council has framed a set of rules called the "Social Principles of Human-Centric AI". Published by the Japanese government in March 2019, it manifests the basic principles of an AI-capable society. The first part contains seven social principles that society and the state must respect when dealing with AI: human-centricity; education/literacy; data protection; ensuring safety; fair competition; fairness, accountability and transparency; and innovation.
The Chinese regulations are even more interesting. The opening lines of Article 4 of these regulations are: “The provision and use of generative artificial intelligence services shall abide by laws and administrative regulations, respect social morality and ethics, and abide by the following provisions”.
The law goes on to prescribe the kind of values that should be upheld and promoted by artificial intelligence services, and the ends that should be achieved through these AI-based applications and services. The difference is stark. While there are areas of overlap, it is readily evident that one set of rules (the western model) spells out what must be done and how to do it; the weight of these rules rests on the means to be adopted and the rationale underlying those means. The other set of rules (the eastern model) focuses on the ends to be achieved and the values to be upheld.
The western model is perfect for the West — a clear set of rules, which a rule-abiding society will undoubtedly comply with, along with a set of proscriptions and punishments for the few who violate the law.
The eastern model is more open, and embraces the overlap between the legality of the rules and their morality. Legal systems which possess this overlap have even been given the name "Hindu Jurisprudence" by the legal philosopher Harrop Freeman.
Why is there this difference between the West and the East? In the 1930s, Professor Northrop of the Yale Law School undertook a very interesting study of legal systems in the East and the West. While he was not particularly focused on Indian legal systems, his conclusions, as outlined below, apply aptly to us as well.
The underlying theories
After examining the eastern and the western systems through the lens of cultural relativism, Professor Northrop concluded that Eurocentric (or western) legal systems created rules of law through "postulation". That is to say, the legal system defines precisely what humans must do in any given social order, and prescribes the penalties for non-compliance. Eastern or oriental legal systems, on the other hand, created rules of law through "intuition". That is to say, the law prescribes the end that is to be achieved, and the morality underlying that law. The subjects intuitively divine for themselves the most appropriate means of achieving those ends, keeping in view the underlying morality of the law.
In India, our ancient legal systems were profoundly successful because there was a clear indication of the end to be achieved, as well as an underlying moral code. People complied with those laws through intuition underpinned by morality. We see repeated echoes of this in the Pandavas' exile to the forest, and in Emperor Ashoka's edicts. This served us well. Neighbouring China has a story in which Emperor Wudi, in 140 BC, punished his errant sister, proclaiming, "The laws are created by my former royal fathers. If I break their laws because of my sister, how can I face them in the royal temple after I die?" Law and morality are synonymous in the East.
The British who came to India transplanted their legal system here. What we have today possesses neither the virtues of our ancient Indian system nor the virtues (which are not inconsiderable) of the English legal system.
Cue from the judiciary
Justice V. Ramasubramanian, who retired recently from the Supreme Court of India, has lamented in more than one judgment our slavish tendency to ape western legal systems. One of his most celebrated judgments, on cryptocurrency in India, has at its core the Sanskrit epigram "neti neti", meaning "it is neither this nor that". One would expect regulators to take the cue from our judges and frame our newest regulations based on our eastern models of jurisprudence.
Thus, systems based on AI must be regulated, and it is for India to frame its regulations. Do we slavishly copy the West and frame lengthy regulations with labyrinthine procedures and Procrustean punishments? Or do we hark back to our roots and approach these regulations from a different perspective?
NITI Aayog has circulated three discussion papers which touch upon AI. In each of these, there are references only to the AI regulations of the EU, the United States, Canada, the United Kingdom, and Australia. It has gone on to state: "The responsible AI principles discussed earlier in this Paper, have been developed by first identifying systemic considerations prevalent among AI systems across the world, and identifying principles that may be used to mitigate the identified considerations." A reading of these papers suggests that NITI Aayog is going to follow the western model.
The time has come for India to frame regulations in a manner consistent with the Indian ethos, by Indians and for Indians. Let us hope that AI regulation is done better than current indications suggest. India must look east.
Srinath Sridevan is a Senior Advocate at the Madras High Court