Charles Martinet, Su Cizem, Jared Perlo, Jérôme Barbier
The rapid advancement of artificial intelligence capabilities has catalyzed international efforts to establish governance frameworks that address associated risks and opportunities. The question is no longer whether AI should be regulated; it is how quickly the world can establish robust governance frameworks before AI capabilities outpace our ability to control them. At this pivotal moment, our aim is to help ensure AI development aligns with humanity’s collective interests, both now and into the future.
As a French non-profit committed to research, education, and public awareness on AI safety, we offer this report to propose a focused agenda for international collaboration on AI governance and risk mitigation. We identify six key areas for action, structured around Three Pillars for Effective Global AI Governance, and offer strategies to close critical governance gaps.
This report is intended to support decision-makers in governments, international organizations, and industry as they navigate critical decisions on global AI governance. We outline the necessary steps to ensure that AI safety governance evolves before, not after, a crisis emerges.