On February 6th, 2025, our Mission, in partnership with the Permanent Missions of Monaco, San Marino and the Holy See, hosted a conference at the United Nations Headquarters titled “Can Moral and Ethical Boundaries be Applied to Artificial Intelligence?: A Humanistic and Interfaith Response in Recognition of UN World Interfaith Harmony Week”.
With AI increasingly influencing governance, media, academia, humanitarian efforts, and society at large, this event provided a crucial platform to examine how moral and ethical considerations can help guide AI development. The session was moderated by Dr. David Gibson, Director of the Center on Religion and Culture at Fordham University, who led our panelists through this insightful conversation.
Ambassador Beresford-Hill opened the event, noting how artificial intelligence has captured global attention, sparking extensive contemplation and analysis on its possibilities and implications. The Ambassador highlighted how, in recent meetings of the Economic and Social Council, AI has remained a central theme in major dialogues, often looming over discussions on global governance and ethics. Acknowledging that the majority of the world’s population belongs to faith-based communities, he emphasized the importance of addressing AI not only as a technological phenomenon but also as a subject of interfaith concern.
H.E. Ambassador Isabelle Picco, Permanent Representative of the Principality of Monaco, emphasized the UN’s ongoing efforts in cybersecurity and AI governance to protect individuals in an era of rapid technological advancement, noting that the organization adopted its first-ever resolution on AI last year, making discussions on AI governance particularly timely. Stressing the broader implications beyond personal privacy, she pointed out that, as social media and technology increasingly shape interactions, young people, often unknowingly, are particularly vulnerable to their influence. She underscored the necessity of anticipating consequences and ensuring AI does not undermine human conscience, stating, “We have to think about the consequences, and we must prevent artificial intelligence from spreading falsehoods and threaten our own conscience.”
The conference’s first panelist, Father Paolo Benanti, Advisor to Pope Francis and the Holy See on AI, critically examined how AI and digital innovation are reshaping power structures, democracy, and human agency, drawing an analogy to urban planning in New York, where infrastructure such as bridges has unintentionally created social hierarchies. He argued that society is shifting from a hardware-based reality to one defined by software, where ownership is no longer absolute, as companies retain control over software licenses, extracting economic value while limiting users’ rights. With only a small percentage of the population capable of coding, most people remain passive consumers in a system dominated by a few corporations, he argued. AI is further transforming the economy of attention into an economy of intention, not only monetizing time spent on platforms but actively shaping human behavior, raising concerns about freedom, cognitive rights, and democracy, where algorithms may distort reality and influence political power, he noted. Father Benanti stressed that computational power, once decentralized through personal computing, has become increasingly centralized with the rise of cloud-based AI systems, shifting from being a force for democratic participation, as seen in the Arab Spring, to a potential threat, as evidenced by misinformation, polarization, and political instability such as the Capitol riots. The COVID-19 pandemic accelerated a reliance on computational tools for social participation, deepening this shift toward computational democracy, now dominated by a handful of corporations, with the vast majority of global cloud computing power controlled by a few entities in Seattle, he noted.
Father Benanti warned that these technological shifts demand scrutiny to prevent AI from reinforcing unchecked power hierarchies, and he called for governance mechanisms that preserve democracy in this new digital era, leaving the audience with the challenge of critically questioning technology’s role in shaping the future.
Professor John Tasioulas, Director of the Institute for Ethics in AI at Oxford University, emphasized the necessity of grounding AI ethics in established moral and legal frameworks rather than reinventing them. He argued that AI’s impact on society is shaped by human choices and must ultimately serve the promotion of human flourishing in a just manner. Highlighting the risks of AI’s ideological framing, he warned against blurring the distinction between human and artificial intelligence, stressing that AI lacks genuine understanding and rational autonomy. He also cautioned that AI-driven solutions risk distorting societal values, particularly in areas such as criminal justice, employment, and democracy. To address these challenges, he proposed three key measures: affirming a new human right to a human decision, reassessing the division of power between corporations and governments, and ensuring that AI governance remains subject to democratic accountability.
Professor Nathalie Smuha, Legal Scholar and Philosopher at the KU Leuven Faculty of Law and Criminology, emphasized the ethical and philosophical dimensions of AI governance through the lens of Jewish thought, focusing on three key aspects. First, she highlighted the importance of human relationships as a fundamental source of meaning and referenced the first teaching in Pirkei Avot, which asserts that human-to-human relationships are even more primal than the relationship between humans and God. She warned against the erosion of interpersonal interactions due to increasing reliance on AI and stressed the need to safeguard the sources of meaning in human connections. Second, she underscored the significance of alterity and plurality, cautioning that AI’s tendency to systematize and categorize individuals risks diminishing human dignity and diverse perspectives. Drawing on Jewish rabbinic traditions, she highlighted the value of debate and multiple interpretations, contrasting this with the homogenization of thought facilitated by AI tools such as ChatGPT. Finally, she contrasted the Western emphasis on rights with the Jewish tradition’s focus on duties, stressing that ethical AI governance requires not only protecting rights but also ensuring that developers and stakeholders bear responsibility for the technology they create and deploy.
Dr. Muhammad Aurangzeb Ahmad, Principal Research Scientist at KenSci Inc. and Affiliate Assistant Professor in the Department of Computer Science at the University of Washington Tacoma, highlighted how AI is reshaping the social, moral, economic, and psychological fabric of human life, arguing that questions of morality are increasingly becoming questions of engineering. While Silicon Valley operates within a secular framework, he asserted that religious traditions, particularly Islamic thought, offer valuable ethical insights. Drawing on Islamic philosophy, he explained that the world is a trust from God, with moral agency being central to this trust, yet AI and machine learning increasingly undermine human moral agency by automating decision-making and shaping behavior. Citing Al-Farabi’s ‘The Virtuous City’ and Mulla Sadra’s vision of a just society, he stressed the importance of self-improvement and collective well-being, contrasting this with Silicon Valley’s “move fast and break things” ethos, which he argued is ill-suited to addressing human societal needs. He also pointed to Islamic law’s five key objectives—protecting life, property, health, religion, and dignity—as a robust ethical framework that can guide AI governance. Finally, he emphasized that while AI presents new ethical and legal challenges, society need not reinvent the wheel: historical ethical traditions offer valuable insights, and efforts to address these challenges should incorporate diverse religious and cultural perspectives.
Professor Benedetta Audia, Professor of Procurement in International Development at George Washington University, provided key insights into the United Nations’ approach to artificial intelligence, emphasizing both policy frameworks and practical applications. She highlighted the UN’s cautious yet steadfast stance on AI, advocating for a human-centered approach that enhances rather than replaces human intelligence. Professor Audia outlined critical ethical and moral questions the UN is addressing, such as algorithmic bias, accountability, and accessibility, and pointed to UNESCO’s core principles for AI ethics, including fairness, transparency, and human rights protection. She also underscored the role of various UN bodies, such as the Human Rights Council and the Office for Disarmament Affairs, in shaping responsible AI practices and preventing its misuse. Additionally, she highlighted practical UN initiatives, including UNESCO’s readiness assessment methodology, ethical impact assessment tools, and the “Women for Ethical AI” initiative. Concluding her remarks, she reaffirmed the UN’s pivotal role in guiding international cooperation, capacity-building, and real-world AI implementation, countering the notion that its influence is limited to policy-making by emphasizing its tangible global impact.
In his concluding remarks, Ambassador Beresford-Hill reflected on historical parallels, noting that technological advancements have often been met with fear and resistance, citing the Industrial Revolution and the actions of the Luddites as examples of initial skepticism toward innovations that ultimately reshaped society. Drawing on this perspective, he questioned whether AI would be another disruptive force like the atomic bomb, carrying vast and potentially destructive consequences, or whether it could be harnessed for the benefit of humanity. He emphasized that the responsibility lies with human beings as the creators of AI, stating, “We are the generators of AI. We are the ones who will give it a life that will inhabit and use either for our benefit or not.” Expressing appreciation to all participants, he reiterated the importance of collective wisdom and the need for continued dialogue in shaping AI governance for the common good.