
Location: Waldorf Astoria Doha West Bay Hotel, Doha, Qatar
Format: In-person event
Open to all interested stakeholders free of charge.
Registration required – open until 5 December 2025.
For additional information, please contact the GRACE Initiative Team: 📧 unodc-grace@un.org
Technological innovation continually expands what is possible. Among recent advances, artificial intelligence (AI) stands out for its transformative potential. AI systems can process vast amounts of data, learn with minimal human input, and generate outputs that approximate human reasoning and language. Unlike earlier digital tools, AI can both interpret and produce complex information, enabling new forms of analysis, communication and problem-solving.
Across sectors, AI is already reshaping practices: adaptive learning platforms personalize education; AI-driven diagnostics improve early disease detection; and analytic tools enhance access to information and accountability in governance. These capabilities offer significant opportunities for integrity and transparency.
Applied responsibly, AI can strengthen anti-corruption efforts. By detecting patterns in large datasets, AI tools can identify irregularities in procurement, public spending and financial transactions. Predictive models can flag high-risk cases for audit, while AI-powered transparency tools can help journalists, watchdogs and citizens hold decision-makers accountable. Together, these applications can reinforce integrity systems and promote more transparent governance.
However, AI also introduces new risks. When misused, it can facilitate corruption, for example through manipulation of training data, biased algorithms or opaque decision-making. AI’s ability to scale decisions and automate processes amplifies the potential for abuse, including fraud, misinformation and resource capture. Without robust governance and oversight, these risks may outweigh the benefits.
Rapid advances in AI and its growing use across sectors make this an opportune moment to examine AI-based anti-corruption efforts. While AI tools are being deployed to detect corruption, enhance transparency and strengthen oversight, evidence of their impact remains fragmented. As these applications expand, it is critical to assess what works, identify risks and design systems that uphold integrity.
This first in-person event aims to bridge research and practice in AI for anti-corruption. By convening global experts and practitioners, it will foster knowledge exchange, peer learning, and collaboration. The discussions will explore ways to improve data availability and interoperability, turning “open data” from a policy aspiration into a practical tool for transparency. By defining standards and aligning incentives, this event seeks to catalyse collective advocacy toward governments and institutions, ensuring that AI’s potential to fight corruption is realized safely, inclusively, and effectively.
The Academic Symposium brings together panellists from over 35 countries, representing all regions, and provides a truly global platform for dialogue on AI, anti-corruption, and integrity. The programme features three distinguished female keynote speakers, alongside approximately 60 presenters – 39% women, 58% men, and 3% identifying as diverse.
The event places strong emphasis on engaging young researchers, offering them opportunities to present their work, contribute to discussions, and build skills and leadership in AI and integrity. This focus aligns with the GRACE initiative’s mission to empower youth and ensure their voices shape the future of anti-corruption efforts.
These elements underscore UNODC’s commitment to inclusion, equality, and youth empowerment. By fostering global and gender diversity, the event enriches the exchange of perspectives and experiences and promotes more inclusive and ethical approaches to AI development and governance.
The keynote speakers bring world-class expertise at the intersection of artificial intelligence, data science, and open governance. Professor Somaya Al-Maadeed, Qatar University, is a pioneer in AI and computer vision, recognized for her work on intelligent systems for security, document analysis, and digital heritage preservation. Professor Meeyoung Cha, Scientific Director at the Max Planck Institute for Security and Privacy and Professor at the Korea Advanced Institute of Science and Technology, is a leading scholar in computational social science, with research on misinformation, fraud detection, and human-machine interaction. Ms. Natalia Carfi, Executive Director of the Open Data Charter, is a global leader in open government and data transparency, advancing initiatives that make public data more accessible and accountable.
Together, they offer diverse and complementary perspectives on harnessing AI and data for integrity and good governance.
AI is reshaping how governments uphold integrity, detect wrongdoing, and enhance accountability. This panel examines how governments in various contexts, including Iraq, Nigeria, Qatar, the United States, and the European Union, are integrating AI into anti-corruption frameworks.
The five papers explore the promise, practice, and politics of top-down AI-based governance. Case studies include cross-national comparisons of factors shaping AI innovation in EU public procurement.
These studies highlight both the transformative potential and governance challenges of institutionalizing AI for integrity. By examining real-world experiences, this panel offers actionable insights for policymakers, international organizations, and scholars seeking to build accountable, data-driven, and ethically grounded public institutions.
As AI gains traction as an anti-corruption tool, critical questions remain about its use in grassroots efforts. This panel presents five studies at the intersection of AI, journalism, law, and civic technology in Latin America, Africa, and Turkey.
Topics include the risks of algorithmic silencing in Namibia and Zimbabwe.
Complementing these cases, two talks provide broader perspectives: one examines AI tools in light of shifting power dynamics; another explores how customized generative AI systems can support grassroots monitoring of public spending in Brazil. These papers underscore that combating corruption in the era of AI requires not only improved algorithms, but also inclusive, context-aware, and ethically grounded systems that serve the public interest.
Corruption often leaves visible traces – from ghost construction projects and illegal mining to unreported emissions and deforestation. Advances in AI, satellite imaging, and geospatial analytics now enable real-time detection of such irregularities. This panel explores how Earth observation data is being used to develop tools for transparency, independent verification, and early detection of corruption risks.
The five contributions demonstrate how “AI from above” can expose governance failures across sectors and regions. Case studies include satellite-based detection of sulfur dioxide emissions as an indicator of regulatory capture.
By mapping the spatial footprints of corruption, this panel shows how digital tools can make integrity failures visible from above.
This fast-paced session showcases diverse research at the intersection of AI and anti-corruption. Featuring brief, high-impact presentations from multiple regions and disciplines, it highlights how scholars are reimagining integrity through innovations in data, design, and governance.
Topics range from Bosnia and Herzegovina’s integration of AI into digital identity systems to Nigeria’s ethical and legal challenges in AI adoption; from Ubuntu-inspired youth integrity movements in Africa to compliance innovations in Brazil and digital governance reforms in Albania. Other talks address global data ecosystems, algorithmic audits in diplomacy, comparative frameworks across continents, open educational AI standards, and the repurposing of anti-corruption technologies in Ukraine.
Together, these insights reveal how AI is reshaping the infrastructures of integrity. Designed for an engaged and diverse audience, the session invites reflection, debate, and collaboration, underscoring that the future of AI and anti-corruption is a rapidly evolving landscape of ideas.
This poster session showcases the work of emerging scholars examining how AI can both strengthen and undermine integrity, especially in fragile or rapidly changing political environments.
Covering diverse cases and approaches, the posters explore issues such as algorithmic bias and accountability frameworks; AI-powered whistle-blowing platforms; digital governance reforms in Ethiopia and India; gender and compliance in corporate contexts; environmental monitoring; and the global politics of algorithmic oversight. The session highlights how young researchers are advancing anti-corruption studies through conceptual innovation, empirical inquiry, and critical reflection, posing essential questions about power, ethics, and technology.
Unlike a traditional panel, this interactive format encourages informal dialogue, peer exchange, and mentorship across disciplines and regions. It demonstrates that the next generation of anti-corruption scholars is not only experimenting with cutting-edge technologies but also shaping the ethical and political frameworks that will determine how AI serves the public good.
While much of the global debate on AI and anti-corruption remains conceptual, a growing number of initiatives are moving from theory to implementation. This session showcases practical projects that use AI to enhance transparency, accountability, and public integrity across sectors and regions. Presenters include innovators from government agencies, civil society, and academic-policy collaborations who are testing AI in real institutions, communities, and policy systems.
The featured projects span a wide range: satellite- and AI-based detection of illegal mining in the Venezuelan Amazon; citizen-led reporting tools like Vigilante Cívico in El Salvador; machine learning to uncover illicit corporate networks in the Netherlands; integrity “companion bots” for frontline workers; hybrid human-AI auditing in Chile; early warning systems for education governance in Ukraine; and participatory AI monitoring of local procurement in Kenya. Collectively, they demonstrate how technology can be embedded in anti-corruption infrastructures, while revealing shared challenges related to data quality, privacy, and sustainability.
This interactive, practice-oriented session invites dialogue among practitioners, policymakers, and researchers on scaling, evaluating, and governing AI tools responsibly. It provides empirical evidence of implementation conditions and shows how AI innovations can strengthen integrity systems when grounded in ethics, participation, and accountability.