Introduction to French AI Policy

This post was written as part of the AI Governance Fundamentals course by BlueDot. I thank Charles Beasley and the students from my cohort for their feedback and encouragement.

Disclaimer: The French policy landscape is in rapid flux after President Macron called a snap election, held on 30 June and 7 July. The situation is still unfolding, and the state of French AI policy may change significantly.

At various AI governance events, I noticed that most people had a very unclear picture of what was happening in AI policy in France, why the French government seemed dismissive of potential AI risks, and what that would mean for the next AI Safety Summit in France.

The post below is my attempt at giving a quick intro to the key stakeholders of AI policy in France, their positions and how they influence international AI policy efforts.

My knowledge comes from a year and a half spent in AI safety circles in France, and from working with the French government on AI governance since January. I'm therefore confident in the facts, but less so in the interpretations, as I'm no policy expert myself.

Generative Artificial Intelligence Committee

The first major development in AI policy in France was the creation of a committee advising the government on generative AI questions. This committee was created in September 2023 by then Prime Minister Élisabeth Borne.[1]

The goals of the committee were:

  • Strengthening AI training programs to develop more AI talent in France

  • Investing in AI to promote French innovation on the international stage

  • Defining appropriate regulation for different sectors to protect against abuses

This committee was composed of notable academics and industry figures from the French AI field. Here is a list of its notable members:

Co-chairs:

  • Philippe Aghion, an influential French economist specializing in innovation.

    • He thinks AI will give a major productivity boost and that the EU should invest in major research projects on AI and disruptive technologies.

  • Anne Bouverot, chair of the board of directors of ENS, the most prestigious scientific college in France. She was later appointed as lead organizer of the next AI Safety Summit.

    • She is mainly concerned about the risks of bias and discrimination from AI systems, as well as risks of concentration of power.

Notable members:

  • Joëlle Barral, scientific director at Google

  • Nozha Boujemaa, co-chair of the OECD AI expert group and Digital Trust Officer at Decathlon

  • Yann LeCun, VP and Chief AI Scientist at Meta, generative AI expert

    • He is a notable skeptic of catastrophic risks from AI

  • Arthur Mensch, co-founder and CEO of Mistral AI

    • He is a notable skeptic of catastrophic risks from AI

  • Cédric O, consultant, former Secretary of State for Digital Affairs

  • Martin Tisné, board member of Partnership on AI

    • He will lead the “AI for good” track of the next Summit.

See the full list of members in the announcement: Comité de l’intelligence artificielle générative.

“AI: Our Ambition for France”

In March 2024, the committee published a report highlighting 25 recommendations to the French government regarding AI. An official English version is available.

The report makes recommendations on how to make France competitive and a leader in AI, by investing in training, R&D and compute.

The report does not anticipate future developments: it treats the current capabilities of AI as a fixed point to work with, gives no thought to what future models might be able to do, and is overly dismissive of AI risks.

Some highlights from the report:

  • It dismisses most risks from AI, including catastrophic risks, saying that concerns are overblown, and compares fears about AI to the overblown fears that accompanied the development of electricity and trains.

  • It takes a hard pro-open-source stance, dismissing open-source risks on two grounds: models that can amplify disinformation are already open source, so releasing more of them adds no additional risk; and current models don't increase biorisk, so there is no need to worry about it.

  • It recommends that France lead international AI governance, and advocates for an international AI organization.

  • The main fear presented in the report is that of lagging behind the US and becoming irrelevant. “It’s a race against time,” it says.

The AI Action Summit

In November 2023, the UK organized the inaugural AI Safety Summit. At the end of the Summit, France announced it would host the next one. The dates were recently confirmed: 10–11 February 2025. The main organizer is Anne Bouverot, co-chair of the Generative Artificial Intelligence Committee mentioned above.

A major update is that the name was changed to “AI Action Summit”, and the event will now focus on five thematic areas, each led by an “Envoy to the Summit”:

  • AI for good: Martin Tisné, member of the Generative Artificial Intelligence Committee.

  • AI Ecosystem: Roxanne Varza, Director of Station F, the world’s largest startup incubator.

  • AI security and safety: Guillaume Poupard, former director general of the French National Agency for the Security of Information Systems (ANSSI).

  • AI global governance: Henri Verdier, French ambassador for digital affairs since 2018, known for his pro-open-source stance.

  • AI impact on the workforce: Sana de Courcelles, Director and Senior Advisor for Special Initiatives at the International Labour Organization.

None of these envoys seems to think AI could pose catastrophic risks in the coming years, and some have even taken public stances against such concerns. This leads me to fear that the Summit might lose a large part of its AI safety focus if efforts are not made to get safety back on the agenda.

Organizations working on and influencing AI policy

Various companies, non-profits, and governmental agencies influence the direction of AI policy in France. I list only the most influential and most relevant organizations below.

National AI Safety Institute

The French government has decided to create a National Center for AI Evaluation, a joint organization of Inria, the public computer science research center, and LNE, the French national standards lab.[2]

This organization will represent France in the network of safety institutes, which was announced at the Korean AI Safety Summit.

EDIT: Actually, France did not take part in the Korean summit’s announcement of collaboration between national AI safety institutes. However, it announced the creation of the Center for AI Evaluation at Vivatech, which took place at the same time.

Think-tanks

There are not many think tanks influencing AI policy in France. The leading one is Institut Montaigne, one of the most influential French think tanks, which has a division working on AI Governance.

The Future Society, a US- and Europe-based AI governance think tank, also has some influence in France, but France is not its priority.

Leading AI companies in France

A lot of AI companies are popping up in France. Below, I list those that have or could have international influence, and that exert significant influence on policy.

  • Mistral AI: Wants to be the OpenAI of Europe; trains and releases both open and closed models. They have a lot of impact on policy, and don’t believe in the potential for catastrophic risks from AI.

    • Mistral lobbied for the removal of rules on general-purpose AI systems from the EU AI Act, and has been criticized for its partnership with Microsoft.[3]

  • LightOn: Develops models for large companies, now focusing on making more agentic models.

  • Kyutai: Non-profit AI research center, financed by Eric Schmidt, Xavier Niel, and Rodolphe Saadé. What they work on is unclear for now, but given their funding sources, they could become big.

  • Giskard: An AI evaluation startup focused on removing bias and ensuring compliance.

  • PRISM Eval: New startup in AI evaluation, focusing on cognitive evaluation.

  • Helsing & Preligens: Military AI companies that influence the government’s position on military uses of AI.

France is also home to the AI research centers of international tech companies:

  • Google DeepMind. Previously, the Paris location was one of the main offices of Google Brain, before the merger with DeepMind.

  • Meta FAIR research lab, directed by Yann LeCun.

  • OpenAI opened an office in France, mainly focused on policy.

AI Safety and x-risk reduction focused orgs

France has a small AI safety community (~20 people). The only organization working on AI safety with a strong focus on AI risk reduction is CeSIA, a new French center for AI safety, which works on raising awareness of AI risks among both the general public and policy circles, as well as on developing technical benchmarks for AI risks. It is an offshoot of EffiSciences, an organization dedicated to impactful research and reducing catastrophic risks.

Conclusion

As said in the intro, the political situation in France is in flux, and the key stakeholders of AI policy may change soon. If the far-right National Rally party comes to power, its main AI advisor will probably be Laurent Alexandre, a former doctor, transhumanist, and accelerationist, who would likely advocate for more investment, more acceleration, and less focus on safety. There may be changes in the organization of the Summit and its overall direction, but I expect most of the existing stakeholders to stay influential.

Overall, the French government’s position is shaped by actors skeptical of AI risks, who steer both national and international policy towards acceleration and innovation.

Given that such risk-skeptical actors also exist in other countries, my theory for why the French government ended up less focused on AI risks than the UK’s is the lack of prominent actors raising the alarm about those risks. I don’t think the French government is impervious to AI safety arguments; I just think that barely anybody has tried presenting the AI safety side of the debate.

  1. ^
  2. ^
  3. ^