India and global artificial intelligence governance

VIDUSHI MARDA


ARTIFICIAL Intelligence (AI) systems are increasingly embedded in society – from curating social media feeds and assisting law enforcement, to deciding an individual’s creditworthiness and aiding in healthcare. There are at least two possible explanations for this recent and substantial mainstreaming of AI in everyday life. First, there is more computing power and data available today than ever before. Second, this abundance makes it possible to use AI systems to predict and classify at scale, rendering the field fertile ground for governments and industry alike.

At the outset, it is crucial to ask: what do we mean by AI? Broadly defined, AI refers to the ability of computers to exhibit intelligent behaviour, and it has existed as a field of computer science for over sixty years. The most recent wave of interest has been spurred by one technique, machine learning (ML), in which algorithms train on data, uncover patterns, and use what they have learned to classify new inputs and predict future outcomes.
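To make this concrete, below is a minimal sketch of the train-then-predict pattern, using the scikit-learn library; the lending task and all data are invented purely for illustration.

```python
# A toy supervised machine learning example: the model trains on
# labelled historical data, uncovers a pattern, and predicts outcomes
# for inputs it has never seen. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Past records: [monthly income, existing debt] -> loan repaid (1) or defaulted (0)
X_train = [[40, 5], [25, 20], [60, 10], [15, 18], [55, 2], [20, 25]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)        # 'train': learn a pattern from past data

# 'predict': estimate the outcome for a new, unseen applicant
print(model.predict([[30, 15]]))   # outputs [0] or [1]
```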

Given the speed and scale at which these systems work, a number of competing interests are at play. Some stakeholders view AI as a business opportunity, and look at ways in which scale, efficiency and deployment can be encouraged. The Chinese government, for instance, has laid down plans to be the world leader in AI by 2030, referring to this technology as ‘a new engine of economic development.’

Others view AI as a leveller in society, and look at ways in which inclusion and widespread adoption of AI can help tackle complex problems like financial exclusion and lack of access to healthcare. An example of this at an intergovernmental level is AI for Good, an annual summit organized by the International Telecommunication Union (ITU), which aims at scaling AI applications like sentiment analysis and credit scoring for global and inclusive impact.

Still others view AI systems primarily as sociotechnical systems and look at ways in which they can be regulated to ensure that negative consequences such as discrimination and surveillance do not occur. In its national AI strategy, Germany recognizes the economic potential of AI, but underscores that underlying the strategy is the democratic desire to anchor AI in an ethical, legal, cultural and institutional context which upholds fundamental social values and individual rights.

These competing interests have given rise to a growing and textured conversation around the nature and extent of AI regulation and accountability in societies; a conversation which I broadly term ‘AI governance’ for the purposes of this article. In this article, I will shed light on what AI governance looks like internationally, map key arguments and concerns that have emerged, and finally analyse India’s engagement with this landscape.

 

Given the multiple ways in which AI can be used in societies, one approach to governance has been to erect normative ethical frameworks that guide how AI should be designed, developed, and deployed. These frameworks are used by various stakeholders to indicate their priorities and considerations and, in some cases, to explicitly spell out use cases that will not be pursued.

Governments have discussed ethics to varying degrees of detail in AI strategies (at the time of writing this essay, at least 50 state-led AI strategy documents have been released by countries around the world). The United Kingdom, for instance, has explicitly stated its aspiration to become the world leader in ethical AI. Other states like China and the United States focus more on the competitive edge and economic opportunities that these technologies can generate.

Ethical frameworks are also in place at an intergovernmental level. The European Union has multiple ethics initiatives underway, including the Ethics Guidelines for Trustworthy AI published by the EU High-Level Expert Group on AI. In May 2019, forty-two nation-states signed on to the OECD’s AI Principles. Nordic and Baltic governments issued a joint declaration on AI that explicitly recognized the importance of ‘ethical and transparent guidelines, standards, principles and values to guide when and how AI applications should be used.’

 

Ethics has also been championed by the private sector globally. In June 2018, Google published its ‘AI Principles’, publicly stating its intention to build socially beneficial AI systems that would not create or reinforce bias, and that would be safe and accountable. Google also included a list of applications the company would not pursue, including technologies that cause overall harm or violate human rights. Microsoft published ethical principles under the umbrella of ‘Responsible AI’ and constituted the AI and Ethics in Engineering and Research (AETHER) Committee to make recommendations on which AI technologies the company should deploy. Facebook responded to the chorus of ethical AI by funding an ethics research institute at the Technical University of Munich, alongside a dedicated internal AI ethics team.

Technical institutions like the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM) have produced ethical principles for autonomous systems that ostensibly feed into technical considerations when designing and developing AI systems. Civil society and academia have also engaged with ethical approaches to AI systems – either as respondents to government and corporate consultations, through institutional proceedings at the UN and similarly placed intergovernmental processes, or through multi-stakeholder spaces like the Partnership on AI.

This mushrooming of ethical initiatives is accompanied by skepticism surrounding their utility. Critics point out that ethical frameworks are often posed as an alternative or a preamble to regulation, a phenomenon political scientist Benjamin Wagner has termed ‘ethics washing’. The lack of accountability and redressal mechanisms surrounding ethical frameworks is made worse by the fact that existing principles are vaguely worded, with no grounding in law. Even ethical standards aimed at technical communities do not materially affect the design or development of AI – research has shown that the ACM code of ethics had no observed effect on the work of software engineers who were explicitly asked to consider it. The lack of teeth in ethical frameworks is particularly dangerous in the context of state use of AI systems, as it could lead to a significant dilution of state accountability.

 

The fear of hampering innovation through regulation is one of the main reasons for the popularity of ethical frameworks. This tension has been most visible in the G7’s effort to kickstart the Global Partnership on AI (originally called the International Panel on Artificial Intelligence). The partnership was announced by France and Canada in December 2018 to address ethical concerns, establish shared principles and regulations, and generate international consensus on the human rights impact of AI systems. Most G7 countries have expressed agreement with these overarching goals; the notable exception is the United States, which has responded cautiously. In January 2020, the US published a list of ten principles for government agencies to consider while formulating governance mechanisms for AI, encouraging a light-touch approach to AI governance and cautioning against heavy-handed, innovation-killing models of regulation.

 

As a result, the field of AI regulation is far from homogeneous. At an abstract level, there is a tendency to claim that AI systems are so novel that they operate in a regulatory vacuum. This is not the case. Existing laws (at both national and international levels) can and must find application in the context of AI; the crucial questions now are the extent to which they apply and the ways in which they need to evolve. Jurisdictions like the European Union are grappling with how traditions of democracy, rule of law and human rights can act as a regulatory mechanism for emerging technologies like AI. At the same time, existing regulation, at both domestic and regional levels, is already adapting. The General Data Protection Regulation, for instance, lays down data protection rights broadly and carves out specific safeguards in the context of automated decision-making.

New forms of AI regulation are cropping up elsewhere too. Legislative proposals like the Algorithmic Accountability Act, introduced in the US Senate in April 2019, seek to establish accountability mechanisms, such as mandatory impact assessments, for automated decision systems. Another form of regulation is the use-case-specific ban. San Francisco, for instance, became the first city to ban law enforcement use of facial recognition in May 2019, a move other US cities have since followed. In January 2020, the Trump administration also announced a ban on the export of certain AI systems for reasons of national security.

Calls for regulation also occur at the intergovernmental level. In 2018, the United Nations Special Rapporteur on freedom of expression appealed to UN General Assembly member states to apply existing standards of human rights, constitutional guarantees and sectoral regulation to the design, development and deployment of AI systems. The EU is considering further AI-specific regulation, and companies like Google and Microsoft have themselves expressed the need for regulation.

It is important to note that an often overlooked, but crucial, part of the governance puzzle is the stage of technical specification and standard setting for AI systems. The design and development of AI systems is shaped by standardization bodies, such as working groups within the IEEE, and by technical researchers such as the Fairness, Accountability and Transparency in Machine Learning (FAT/ML, now FAccT) community, whose work refines technical approaches to fairness, accountability and transparency in machine learning.
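As one illustration of what such technical approaches can look like, a common diagnostic in this literature is to compare a model’s rate of favourable predictions across demographic groups (sometimes called demographic parity). A minimal sketch with invented data follows; it is one diagnostic among many, not a definitive fairness test.

```python
# Demographic parity check: compare the rate of favourable model
# predictions across groups. All data below is invented.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 1, 0, 0]                # 1 = favourable outcome
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

totals, favourable = defaultdict(int), defaultdict(int)
for pred, group in zip(predictions, groups):
    totals[group] += 1
    favourable[group] += pred

for group in sorted(totals):
    rate = favourable[group] / totals[group]
    print(f"group {group}: favourable rate = {rate:.2f}")
# A large gap between group rates is one (contested) signal of unfairness;
# which metric to use, and when, is itself an active research question.
```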

 

Mapping India’s engagement with the global governance regime is not a linear or neat process, as there is a patchwork of initiatives, developments and contestation related to domestic governance structures. The Indian government’s prioritization of AI has steadily increased, reflected in rising budgetary allocations towards AI. This is hardly surprising, as AI falls at the intersection of multiple flagship projects of the Indian government. Digital India aims at making India a digitally empowered society by providing every individual digital infrastructure as a core utility. Make in India seeks to transform India into an international manufacturing hub, spurring the incentive for domestic design and development of AI systems. The 100 Smart Cities Mission is another initiative closely related to the Union government’s approach to AI, given its focus on providing ‘smart solutions’ to improve the quality of life of citizens in a sustainable environment.

 

In the last three years, several policy documents have directly addressed the development and deployment of AI. In March 2018, the Ministry of Commerce’s AI Task Force published a report identifying key areas for AI in India, including healthcare, agriculture, national security and retail. While framing AI as a socio-economic problem solver at scale, the report did not attempt to comprehensively discuss the ethical and social implications of these systems, focusing instead on how the government can encourage growth in these sectors.

NITI Aayog’s National Strategy for Artificial Intelligence, published in June 2018, states that India’s approach to AI should be one that will ‘leverage AI for economic growth, social development and inclusive growth, and finally as a "garage" for emerging and developing economies.’ In May 2019, NITI Aayog proposed (and subsequently received approval for) a Rs 7,500 crore budget to set up an AI framework for India, with a view to pushing for greater adoption and institutional oversight. In parallel, the Ministry of Electronics and Information Technology (MeitY) set up four committees in February 2018 to draft a policy framework for AI, recognizing AI’s impact on the economy and society. Unsurprisingly, a turf war between the two agencies, with concerns about duplication of work and funding, reached a peak in August 2019, when MeitY requested the finance ministry to intervene and resolve the issue.

While the exact form and central institution (if any) to govern AI is still evolving, the substantive focus and outlook of India’s approach is clear. AI is primarily seen as a tool to fuel economic growth. However, India’s strategy does not take a cookie-cutter approach. While the economic impact of AI is the biggest motivating factor, aspects such as inclusion and the ‘greater good’ also feature prominently in NITI Aayog’s #AIFORALL strategy.

There is also a steadily growing body of ad hoc regulation in the context of AI systems, usually homing in on the idea of data as a key resource and driving factor of India’s AI future. At the time of writing this article, India does not yet have a data protection law. However, the draft data protection bill provides significant insight into the government’s approach. For instance, data localization was a flashpoint throughout the process of drafting and discussing provisions of the bill, with Section 40 of the draft bill requiring that a government-defined class of ‘critical data’ be stored exclusively in India. A primary justification for this provision was that Indian players – government, private sector and research organizations – should have access to this data to locally develop and deploy emerging technologies like AI.

 

While the bill contemplates principles like purpose and collection limitation, and explicit consent in the case of sensitive personal data, it undercuts these protections in one fell swoop – the government can exempt its agencies from all protections, subject only to procedures and oversight from the agency in question. This approach resonates with the Economic Survey published by the Ministry of Finance, which states that personal data collected by the government becomes a ‘public good’ once anonymized. This is not the appropriate place for an in-depth discussion of the fallacy of anonymized data in the context of machine learning; suffice to say that it will have significant implications for how AI is designed, developed and deployed in India.
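For a flavour of why anonymization so often fails, consider the well-documented linkage attack: records stripped of names can frequently be re-identified by joining the remaining quasi-identifiers against a public auxiliary dataset. A minimal sketch, with entirely invented records:

```python
# Linkage attack sketch: "anonymized" records are re-identified by
# matching quasi-identifiers against a public auxiliary dataset.
# All records below are invented for illustration.
anonymized = [  # names removed, but quasi-identifiers retained
    {"pincode": "560001", "birth_year": 1985, "diagnosis": "diabetes"},
    {"pincode": "110001", "birth_year": 1990, "diagnosis": "asthma"},
]
public = [      # e.g. an electoral roll or social media profiles
    {"name": "A. Sharma", "pincode": "560001", "birth_year": 1985},
    {"name": "R. Gupta",  "pincode": "110001", "birth_year": 1990},
]

# Join the two datasets on (pincode, birth_year) to recover identities
for record in anonymized:
    for person in public:
        if (person["pincode"], person["birth_year"]) == (
            record["pincode"], record["birth_year"]
        ):
            print(person["name"], "->", record["diagnosis"])
```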

The private sector shares the government’s aspirations for AI and plays a significant role in realizing them. The policy initiatives discussed above identify several sectors – spanning healthcare, agriculture, retail, urban development, mobility, education and law enforcement. Discussing plans for future AI prioritization, government officials have said that the economic viability of applications for private actors will determine deployments, bringing to the fore questions about public-private partnerships and their impact on governance.

 

Across AI policy initiatives, ethics and human rights (primarily privacy) are mentioned, but as an afterthought or a formality at best. It is clear from the mushrooming of use cases that the deployment of AI systems will be opportunistic and driven by executive decisions, rather than deliberate and guided by ethical or regulatory norms. The use of AI systems is considered an efficient, desirable and useful step in and of itself, without meaningful engagement with the limitations of these technologies.

The national Automated Facial Recognition System (AFRS) demonstrates these threads of analysis comprehensively. In July 2019, the Home Ministry announced plans for the AFRS – a system that will use images from CCTV cameras, newspapers and police raids to identify criminals by matching them against existing records under the Crime and Criminal Tracking Network and Systems (CCTNS). It would bolster nationwide intelligence sharing between police departments by providing a centralized facial recognition system. Here, the exceptionalism afforded to shiny and ‘efficient’ technology is made apparent. Even in the face of overwhelming evidence that facial recognition is an unreliable technology, the limitations of these systems are ignored in favour of the potential to enhance law enforcement capabilities.
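Schematically, such systems reduce each face image to a numerical ‘embedding’ and declare a match when two embeddings are sufficiently similar; where the similarity threshold is set directly trades false matches against missed ones, which is one source of the unreliability noted above. A simplified sketch follows, with invented vectors and threshold (real systems compute embeddings using deep neural networks):

```python
# Schematic face matching: compare a probe embedding against a gallery
# of enrolled embeddings using cosine similarity. All vectors and the
# threshold are invented; lowering the threshold yields more "matches",
# including false ones.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

gallery = {                      # hypothetical enrolled records
    "record_001": [0.9, 0.1, 0.3],
    "record_002": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.15, 0.28]       # embedding of, say, a CCTV still
THRESHOLD = 0.95                 # similarity cut-off

for record_id, embedding in gallery.items():
    score = cosine_similarity(probe, embedding)
    if score >= THRESHOLD:
        print(f"possible match: {record_id} (score {score:.3f})")
```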

The Home Ministry has taken no clear stance suggesting that it has considered the ethical and legal implications of using facial recognition, which is particularly concerning when governments around the world are putting in place bans or, at the very least, strict regulation. In fact, the legal basis on which the AFRS stands is unclear. Responding to a legal notice from the Internet Freedom Foundation, the Home Ministry traced the legal basis for the AFRS to a Cabinet Note from 2009, which is, at best, a document of procedure, not of legal consequence. Further, the AFRS is afforded an exception to regulatory and ethical standards that the government otherwise adheres to, and runs counter to the fundamental right to privacy reaffirmed by the Supreme Court in 2017 in Puttaswamy v. Union of India.

In Puttaswamy, the court laid down a four-part test that any action infringing the right to privacy must satisfy: it must be demonstrated to be in pursuit of a legitimate aim, bear a rational connection with that aim, and be shown to be necessary and proportionate. The AFRS does not meet this constitutional standard.

 

India’s approach to AI governance is layered and still evolving. Domestic developments suggest how India will engage with international processes as and when they mature. While primarily endorsing the business case and social inclusion narrative of AI, India has a long way to go in understanding AI as a sociotechnical system with the capacity to drive inequality, exclusion, surveillance and an erosion of constitutional and human rights.

As of now, it is clear that opportunistic, ad hoc decisions by the state and private companies reign supreme in the context of AI deployment, development and use. AI governance in India needs to mature to acknowledge the limitations, potential and impact of AI systems on daily life. Civil society is structurally excluded from the AI governance space, with government consultations (if any) being the only window for engagement. A majority of key decisions and deliberations are made by permutations and combinations of industry, government and, sometimes, technical actors. This exclusion is misguided, particularly as the societal impact of AI systems becomes more apparent every day.

While global debates around the impact of AI systems place emphasis on India, particularly in the context of how AI will affect employment and the future of work, there has been limited explicit engagement from New Delhi with these intergovernmental or multilateral initiatives. Beyond the policy and economic realms, there is growing indication that India will engage with AI standardization. The International Telecommunication Union is set to establish an innovation centre in India to incorporate technologies from South Asian countries and emerging economies into standards.

India has the opportunity to position itself as a thoughtful leader on AI by drawing from its democratic foundations and its experience of deploying technologies at scale. Instead of buying into the AI race for economic power, New Delhi should engage with the strengths and limitations of these systems and institute a deliberate and future-proof strategy for the design, development, standardization and deployment of AI technologies. While industry and state interests have played the leading role thus far, effective AI governance must be multi-stakeholder in nature.
