Datums for Peace: Prospects for Cooperation on Artificial Intelligence and the Case of South Asia

Artificial intelligence (AI) is reshaping the global geopolitical landscape. Yet beyond India’s growing AI leadership, South Asian countries face challenges in developing and regulating AI, including shortages of technical skills, limited research and development capacity, and a lack of capital to build AI infrastructure. Can South Asian nations collectively leverage their potential in the AI revolution, or will they be left behind? 

AI is developing at a rapid pace in 2024, but policymaking has not kept up. So far, only the European Union (EU) has passed the AI Act, a comprehensive legal framework governing how AI systems may be developed and used, including the handling of personal data. Ethics guidelines and best practices for AI are now well established thanks to the efforts of the UN and individual governments, but cooperation frameworks remain scarce. The AI race is driven by economic goals, technological innovation, and geopolitical competition. But it also adds a new layer of anxiety over AI’s uncertain potential to harm humankind. Here, the international community may find lessons in the history of nonproliferation treaties and agreements. 

The harnessing of nuclear fission in the late 1930s and early 1940s soon led to the use of atomic bombs for mass destruction in World War II. Since then, the global community has established safeguards and monitoring mechanisms, from the International Atomic Energy Agency (IAEA) to the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Although such arrangements have not always proven effective at containment, they brought nations together to develop international standards, monitoring mechanisms, and tools of accountability for international peace and security. 

Not surprisingly, policy experts have drawn parallels between nuclear and AI safety, with some calling for a Manhattan Project for AI. Technological development in AI appears to be outpacing policy and regulation for AI safety. In May 2023, the heads of the AI labs OpenAI, Anthropic, and Google DeepMind, researchers such as Geoffrey Hinton and Yoshua Bengio, and prominent figures such as Bill Gates signed an open statement declaring, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The document underscores a collective “AI anxiety” in reaction to the uncertainty around AI’s rapid growth. 

Like nuclear weapons, AI safety concerns are transboundary and have incited competition among great powers. Countries are racing to build AI capabilities, strengthen their supply chains for the chips essential to machine learning, and claim global leadership. China aims to lead the field by 2030, the United States wants to revitalize its semiconductor industry, and the EU is pushing ahead with regulatory leadership. AI is also framed as an existential threat comparable to nuclear weapons: some liken it to an apocalyptic takeover, while others predict that remote jobs will cease to exist within five years. 

At its core, AI diplomacy is about three things: knowledge exchange, regulatory coordination on data protection, and fostering innovation. Implementing AI requires complex technical and academic skills in working with data and technology, and this resource is unevenly distributed across the world. Developed countries are better placed to build human capital, invest in research and development, design academic curricula, and pay for supercomputing power. Programs like the Fatima Fellowship, which brings students from developing countries to work on AI-related research projects in US university departments, offer one model for such knowledge exchange. As AI’s regulatory landscape develops in different directions, cooperation among countries and international organizations has become necessary. Collaboration, through the standardization of approaches and guidelines, ensures that nations develop and apply AI responsibly and respond to global challenges. 

Where do South Asian countries fit into this global race? Many lack the resources to invest significantly in AI. A microchip manufacturing plant alone costs tens of billions of dollars, and further investment is needed to reskill workers, fund research and development, support supercomputing, and develop AI safely and sustainably. South Asian countries need to reinvigorate their regional partnerships and alliances to advocate proactively for their role in AI. Organizations like the Bay of Bengal Initiative for Multi-Sectoral Technical and Economic Cooperation (BIMSTEC) and the South Asian Association for Regional Cooperation (SAARC) could enable cross-border communication on AI and ensure that these countries’ voices are heard. However, political tensions rooted in India’s dominant position in the region, its rivalry with Pakistan, and other sensitive issues such as the role of the Taliban government have rendered organizations like SAARC ineffective. More broadly, the region lacks the connectivity and trade infrastructure required for effective regional cooperation. AI regional diplomacy may need to take a step back, reassess the efficacy of existing platforms, and explore new channels for regional integration. 

Development partners like the UN Development Programme can help foster regional cooperation through platforms such as the UN Office for South-South Cooperation. By collaborating on policy, strategy, and a shared vision for the region, South Asian countries can gain greater negotiating power and push collective agendas for a fair distribution of burdens and benefits. India is a natural candidate to lead AI diplomacy in the region: it already has the highest number of AI projects implemented, vast data resources, and a continually growing IT sector. 

More broadly, international cooperation is essential for national security, conflict prevention, and norm setting in the region. AI governance can follow the trajectory of nuclear arms control over the decades: treaties, alliances, and agreements can be developed to address transboundary issues and accelerate the exchange of knowledge and resources. Cooperation can advance shared interests, ensure that inequalities between rich and poor countries are properly addressed, and establish mutually acceptable frameworks to prevent AI from being used harmfully. The Fourth Industrial Revolution, along with ongoing global conflicts, highlights the need to prioritize human achievement over technological achievement. With AI, innovation needs to be driven by necessity and good intent. AI should accelerate the Sustainable Development Goals, foster cooperation, and help humanity identify priorities to build a better future for all. 

South Asia can be incorporated into these international cooperation models in different ways. The IAEA was created out of similar fears about a then-new nuclear technology that could cause irreversible destruction to humanity; an international monitoring body, possibly under the United Nations, could similarly be established to ensure AI is used for peace and prosperity. A Manhattan Project-style research and development effort on AI could also draw on India’s immense IT and technology talent pool, already a major international export, to ensure South Asia’s participation. Such contributions to global efforts may be a more effective way for the region to engage in international AI cooperation than regional collaboration, which faces major challenges due to diplomatic disputes, among other obstacles. Treaties like the NPT also offer a model for legally binding multilateral commitments that South Asian countries could join to build international trust and integration. 

Thus, like nuclear nonproliferation, and more recently climate change, AI requires international cooperation to minimize risks to humankind and maximize shared benefits. International forums, agreements, and treaties can provide common standards and objectives, reduce the risk of conflict, and foster innovation. The paradox is that we do not yet know the parameters and scale of the risks we are dealing with when it comes to AI. But that need not stop us from initiating dialogue, the first step in diplomacy.


Sadman Rahman is a former South Asia Institute and Indo-Pacific Program staff intern at the Wilson Center.

The views expressed are the author's alone, and do not represent the views of the U.S. Government or the Wilson Center. Copyright 2024, Indo-Pacific Program. All rights reserved.

