After two years of fractious negotiation, the Geneva Accord on Artificial Intelligence was signed in the early hours of Friday morning by representatives of 78 nations. The treaty creates a permanent international oversight body, mandatory transparency requirements for large AI systems, and — for the first time — enforceable cross-border penalties for violations.
What the Accord Actually Says
The document runs to 847 pages, but its core provisions can be summarised in three pillars. First, any AI system capable of influencing more than one million people must register with the new International AI Monitoring Authority (IAMA) and submit to quarterly audits. Second, developers must publish detailed "model cards" disclosing training data sources, known failure modes, and deployment restrictions. Third, nations may levy fines of up to 4% of global annual revenue against companies found in breach.
"This is the moment the world decided that artificial intelligence would serve humanity — not the other way around."
— Dr. Mireille Fontaine, lead negotiator, EU delegation
The Scientific Community's Response
Reaction from researchers has been sharply divided. Those who worked closely with the negotiating teams expressed cautious optimism. The transparency provisions, they argue, will finally allow independent scientists to audit systems that currently operate as black boxes.
What Happens Next
The IAMA will formally come into existence on July 1, 2026, with an initial budget of $2.4 billion. Its first director will be elected by member states in April. For technology companies, the transition period ends on January 1, 2027. After that date, non-compliant systems will be subject to mandatory shutdown orders pending audit.