14/2/2025

The Summit for Inaction on AI Safety

The AI Action Summit, which concluded in Paris on Tuesday, February 11, leaves scientists and civil society organizations working on AI safety with a sense of unfinished business. The Centre pour la Sécurité de l'IA (CeSIA) denounces this lack of ambition and calls for the announced investments to be directed toward the research and development of safe and trustworthy AI.

While France can take pride in organizing a broad and inclusive event, bringing together more than 1,000 delegates from various countries and backgrounds, the summit's outcomes fall far short of CeSIA’s expectations regarding AI regulation and governance.

According to Charbel-Raphaël Ségerie, Director of CeSIA: "It is as if France were hosting a COP on climate change and decided to turn it into a fossil fuel trade show. We cannot allow the market and industry to dictate the development of a fundamentally misunderstood technology that poses major risks to our society. We call on the government to resume the work initiated during the previous two summits and urge it to strictly condition announced investments on the development of safe and trustworthy AI."

Despite record public and private investments announced by President Macron and Ursula von der Leyen, no concrete proposals have been made to continue the cooperation and regulation efforts initiated at the first two AI summits (Bletchley Park in 2023 and Seoul in 2024). Even more concerning, the AI Safety Report, initiated at the Bletchley Park Summit and supported by the UN and the EU, was shelved almost immediately upon publication. This synthesis of the scientific literature on AI safety, which engaged 150 researchers for over a year, is the closest equivalent to an IPCC report for AI, yet it was omitted from the summit's final declaration.

The only significant announcement regarding safety concerns the creation of INESIA (National Institute for AI Evaluation and Safety), which CeSIA has welcomed. However, this institute lacks dedicated funding and will have to rely on the existing resources of the structures it brings together to fulfill its crucial mission: serving as a trusted public third party for model evaluation. In stark contrast, the €109 billion in private investments in the AI industry announced by President Macron, from the United Arab Emirates, investment funds, and major corporations, demonstrates a clear imbalance between economic priorities and efforts to regulate AI.

The lack of concrete action on AI safety is all the more concerning given that experts worldwide agree that AI safety research is dangerously neglected. According to the largest survey conducted among machine learning specialists, 70% of respondents want this research field to be given greater priority. Currently, only 2% of scientific publications on AI focus on safety, while teams dedicated to these issues within tech companies are severely underfunded or even disbanded.

To India, which will host the next summit, we recommend continuing the efforts launched at Bletchley Park along three key pillars:

  • Establishing an international scientific consensus to collectively assess risks and define a shared roadmap for AI safety research;
  • Developing common standards and effective governance tools to steer AI development in a controlled direction;
  • Strengthening coherence among various international governance initiatives.

CeSIA calls for a global political awakening to place safety at the heart of AI development before the relentless race for investments irreversibly compromises our ability to control this transformative technology.
