THE United Nations’ Governing AI for Humanity: Final Report outlines a visionary framework for the governance of artificial intelligence (AI), promising ethical deployment, equitable development, and global cooperation. However, a deeper analysis reveals critical risks associated with the plan, particularly regarding its potential to perpetuate geopolitical divides and marginalize certain nations. As the UN aspires to create a unified governance model, it risks becoming a tool for reinforcing existing power structures, excluding advanced AI players like China, Russia and nations in the Islamic world that face consistent Western scrutiny.

Despite their significant advancements in AI, China and Russia are unlikely to play major roles in the UN’s governance framework due to their strained relations with the West. Chinese and Russian AI capabilities, particularly in military applications and cybersecurity, have drawn global attention, positioning both countries as formidable players in the AI race. However, perceived geopolitical threats, stemming from conflicts such as the war in Ukraine, have effectively sidelined Russia from many international initiatives, including AI governance discussions.

The UN report’s emphasis on inclusive governance contradicts the reality of excluding major AI stakeholders based on political considerations. By sidelining China and Russia, the global AI framework risks creating a fragmented landscape where key players operate outside established norms. This isolation could exacerbate tensions, as China, Russia and similarly excluded nations may develop parallel AI systems with limited interoperability and minimal alignment with international standards.

Islamic nations, particularly those viewed critically by the West, face similar risks of exclusion under the proposed governance model. Countries such as Iran and others in the Middle East have invested heavily in AI research, focusing on fields like healthcare, agriculture, and education. Despite these contributions, Western narratives often frame these nations as security threats, leading to their marginalization in global technology dialogues.

The potential branding of certain AI systems as “rogue” or “terrorist” technologies further underscores this risk. Concepts like “Terrorist AI” (TAI), highlighted as a growing concern, could be weaponized to target AI development in countries deemed adversarial by Western powers. This framing not only stifles innovation in these regions but also risks perpetuating harmful stereotypes that associate Islamic nations with instability and extremism.

For example, AI-driven border surveillance technologies developed by Western nations are often hailed as innovations, whereas similar technologies deployed in Islamic nations might be scrutinized for potential misuse. This double standard highlights the dangers of a governance model influenced by geopolitical biases rather than a genuine commitment to equitable development.

Excluding major players such as China, Russia and Islamic nations has broader implications for global AI governance. A truly effective framework requires the participation of all nations, particularly those with advanced AI capabilities. Fragmentation not only hinders the interoperability of AI systems but also increases the likelihood of an “AI arms race,” in which countries operate outside established norms to gain strategic advantages.

The UN report emphasizes the importance of fostering a “common understanding” and “shared benefits,” yet it overlooks the geopolitical realities that perpetuate exclusion, rendering these goals difficult to achieve. Marginalized nations may resort to developing their own AI ecosystems, potentially in collaboration with other excluded countries, creating parallel frameworks that challenge the legitimacy of the UN’s efforts — much as China has established its own tech giants, which now rival the largest American corporations. This divergence could result in a fractured AI landscape, with competing standards and limited cooperation on critical issues such as safety, ethics and accountability.

Beyond geopolitical exclusion, the UN’s governance model risks imposing Western-centric values on diverse cultural and ethical contexts. The emphasis on “transparent” and “trustworthy” AI systems, while seemingly neutral, often reflects Western ideals that may not align with the priorities of excluded nations. This approach risks marginalizing alternative perspectives on AI ethics, perpetuating a form of “AI colonialism” where the rules are dictated by a few dominant players.

For nations excluded from the governance framework, the lack of representation in defining AI norms exacerbates existing inequalities. Islamic nations, for example, have unique ethical frameworks rooted in Islamic jurisprudence, which could offer valuable insights into the responsible deployment of AI. Exclusion from global dialogues prevents these perspectives from shaping the future of AI, reinforcing the dominance of Western narratives.

To avoid these pitfalls, the UN must adopt a genuinely inclusive approach that transcends geopolitical biases. This requires acknowledging the contributions of all nations, including those perceived as adversaries or threats, and ensuring that their voices are represented in governance discussions. Specific steps could include:

• Decentralized Governance: Instead of a centralized model, regional governance frameworks could allow nations to shape AI norms based on their unique contexts and priorities. These regional frameworks could then interface with a global structure to ensure interoperability and cooperation.

• Neutral Arbitration: Establishing independent bodies to mediate geopolitical tensions and ensure fair representation in governance discussions could prevent the exclusion of nations like China, Russia or those in the Islamic world.

• Respect for Cultural Diversity: The governance model must account for diverse ethical frameworks, ensuring that AI norms are not dictated solely by Western values but reflect the multiplicity of global perspectives.

While the UN’s Governing AI for Humanity report offers a vision of equitable and ethical AI governance, its implementation risks entrenching existing geopolitical divides. The branding of certain technologies as “rogue” or “terrorist” further perpetuates harmful narratives that stifle innovation and exacerbate global tensions. For the governance framework to succeed, it must move beyond the binaries of “good AI” versus “bad AI” and embrace a genuinely inclusive approach.
Dr Rais Hussin is the Founder of EMIR Research, a think tank focused on strategic policy recommendations based on rigorous research.

** The views and opinions expressed in this article are those of the author(s) and do not necessarily reflect the position of Astro AWANI.