At the 2026 World Economic Forum, Singapore unveiled the Model AI Governance Framework for Agentic AI (MGF), setting a new global standard for the regulation and governance of autonomous AI systems. As AI continues to advance rapidly, the MGF addresses the unique challenges posed by agentic AI: systems capable of independently reasoning, planning, and executing tasks on behalf of humans. This pioneering framework marks the first step in establishing comprehensive guidelines for the responsible deployment of agentic AI, ensuring that these technologies evolve in a way that is both safe and beneficial.
The framework builds on Singapore’s previous AI governance initiatives, such as the 2019 Model AI Governance Framework, the AI Verify testing framework, and the 2025 Global AI Assurance Pilot. While the MGF is not legally binding, it provides critical insights into the country’s regulatory approach and offers best practices for industry adoption. By focusing on the specific risks associated with agentic AI, the framework aims to ensure that these technologies are implemented in a manner that promotes trust, accountability, and safety.
What is Agentic AI?
Agentic AI refers to systems that can plan, reason, and act autonomously to achieve objectives with minimal human intervention. Unlike generative AI, which responds to prompts to produce outputs, agentic AI can take proactive actions, adapt to new information, and interact with other systems or agents to complete complex tasks.
At the heart of many agentic AI systems are advanced language models capable of interpreting natural language instructions and activating connected tools such as calendars, application interfaces, and payment processors. These systems can be deterministic (producing consistent results) or non-deterministic (producing variable outputs), with the latter introducing added unpredictability that requires stronger oversight and governance.
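The tool-activation pattern described above can be sketched as a minimal agent loop. Everything here (the `ToolRegistry` class, the calendar function, the hard-coded planning step) is a hypothetical illustration, not part of any specific product or of the MGF itself; in a real system, a language model would translate the user's natural-language instruction into the structured tool call.

```python
# Minimal sketch of an agentic tool-calling setup (all names hypothetical).
from typing import Callable, Dict


class ToolRegistry:
    """Maps tool names to the callables an agent is permitted to invoke."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self._tools[name] = fn

    def invoke(self, name: str, **kwargs) -> str:
        # Refusing unregistered tools keeps the agent's scope explicit.
        if name not in self._tools:
            raise PermissionError(f"Tool '{name}' is not registered")
        return self._tools[name](**kwargs)


def schedule_meeting(date: str, title: str) -> str:
    # Stand-in for a real calendar API call.
    return f"Scheduled '{title}' on {date}"


registry = ToolRegistry()
registry.register("calendar.schedule", schedule_meeting)

# A language model would normally produce this structured call from a
# natural-language instruction; here the planning step is hard-coded.
result = registry.invoke("calendar.schedule", date="2026-03-01", title="Vendor review")
print(result)  # Scheduled 'Vendor review' on 2026-03-01
```

Because every tool must be registered before the agent can call it, the registry itself doubles as a simple scoping mechanism, a point the framework's risk-bounding guidance returns to.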
The deployment of agentic AI systems presents new challenges, particularly as multiple agents work in parallel, increasing both efficiency and risk. Any errors in one part of the system can quickly cascade and impact the entire workflow, requiring careful management and risk mitigation strategies.
Risks of Deploying Agentic AI
The risks associated with agentic AI can have significant consequences, as errors in autonomous systems may replicate across multiple processes. The MGF highlights five primary categories of risks:
- Erroneous Actions: Mistakes, such as scheduling errors or incorrect data processing, can lead to costly consequences.
- Unauthorized Actions: AI systems may act outside their permitted scope, for instance, executing unauthorized transactions.
- Biased or Unfair Actions: AI may produce discriminatory outcomes, such as biased hiring decisions or unfair vendor selection.
- Data Breaches: Sensitive data may be exposed or misused by AI systems if not properly secured.
- Disruption to Connected Systems: Malfunctions or compromises can cause widespread disruptions, affecting critical systems or networks.
How the MGF Addresses These Risks
The MGF outlines several strategies for mitigating the risks associated with agentic AI, organized into four key dimensions:
- Assess and Bound Risks Early
Organizations must assess the potential risks of AI systems before deployment. This includes evaluating the severity and likelihood of errors, considering factors like task complexity, data access, and system exposure. Risk bounding techniques, such as limiting the tools and data available to AI agents, can help minimize risks.
- Human Accountability
Ultimately, responsibility for AI systems lies with the organizations overseeing them. The MGF emphasizes the importance of distributed accountability across leadership, product teams, cybersecurity teams, and end-users. Regular human oversight checkpoints should be established to ensure that sensitive actions, such as financial transactions, are approved by human operators.
- Technical Controls Across the Lifecycle
Technical safeguards must be integrated throughout the development, deployment, and operation of AI systems. This includes rigorous testing, gradual rollouts, and continuous monitoring to detect and address unexpected behavior. Additionally, secure design practices, such as using sandbox environments and whitelisted servers, are recommended to reduce risks before deployment.
- End-User Responsibility
End-users must be empowered to interact responsibly with agentic AI. Transparency is key, ensuring that users understand the scope of the AI’s capabilities and how to escalate issues when necessary. Organizations should also provide training to ensure that users know how to properly oversee AI systems and avoid automation bias.
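Two of the dimensions above, risk bounding and human oversight checkpoints, can be illustrated together in a short sketch. The allowlist, the set of sensitive actions, and the approval flag below are illustrative assumptions of the author, not controls prescribed by the MGF:

```python
# Sketch: a tool allowlist (risk bounding) combined with a human approval
# checkpoint for sensitive actions. All tool names are hypothetical.
ALLOWED_TOOLS = {"calendar.schedule", "email.draft"}  # agent's bounded scope
SENSITIVE_TOOLS = {"payments.transfer"}               # always need human sign-off


def request_action(tool: str, approved_by_human: bool = False) -> str:
    """Gate an agent-requested action through scope and oversight checks."""
    if tool not in ALLOWED_TOOLS | SENSITIVE_TOOLS:
        return "blocked: outside permitted scope"
    if tool in SENSITIVE_TOOLS and not approved_by_human:
        return "pending: awaiting human approval"
    return "executed"


print(request_action("calendar.schedule"))                          # executed
print(request_action("payments.transfer"))                          # pending: awaiting human approval
print(request_action("payments.transfer", approved_by_human=True))  # executed
print(request_action("database.drop"))                              # blocked: outside permitted scope
```

The design choice worth noting is that the sensitive-action check is separate from the scope check: a financial transaction can be inside the agent's permitted scope yet still halt at a human checkpoint, which mirrors the framework's point that oversight is layered on top of, not replaced by, technical bounding.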
Conclusion
Singapore’s Model AI Governance Framework for Agentic AI sets a global precedent for the responsible and safe adoption of autonomous AI systems. While not legally binding, the framework provides critical guidance for organizations seeking to implement agentic AI in a way that fosters trust and mitigates risks. As AI continues to evolve, businesses must stay engaged with this evolving regulatory landscape to ensure that they are prepared for future challenges and opportunities in AI governance.