ISO/IEC 42001 & the EU AI Act

Navigating the New Frontier of AI Regulation

Understanding the EU AI Act and the Role of ISO 42001

The rapid integration of artificial intelligence into our economic and social fabric has been nothing short of revolutionary. From optimising supply chains to personalising healthcare, AI promises a future of unprecedented efficiency and innovation. Yet, this promise comes with a caveat. The very power that makes AI transformative also introduces potential risks—to our fundamental rights, our safety, and the democratic principles we hold dear.

An Overview of the EU AI Act: A Risk-Based Approach to Regulation

At its core, the AI Act is not a blanket law that treats all artificial intelligence the same. Instead, it adopts a practical, risk-based approach, tailoring the level of regulation to the level of risk an AI system poses to society. This framework categorises AI systems into four distinct tiers:

1. Unacceptable Risk:

Certain AI practices are deemed to pose a clear threat to the safety, livelihoods, and rights of people and are therefore banned outright. This includes systems that use manipulative subliminal techniques to distort a person's behaviour in a harmful way, exploit the vulnerabilities of specific groups such as children or persons with disabilities, or are used for social scoring by public authorities.

2. High Risk:

This is where the AI Act places its strongest focus. High-risk AI systems are those that can have a significant adverse impact on people's safety or fundamental rights. These are not exotic, futuristic concepts; they are systems being used today in critical areas. The Act provides a clear list, which includes AI used for:

  • Healthcare, with applications such as supporting emergency response and triage, functioning as regulated medical devices, or providing diagnostic and decision-support capabilities.
  • Critical infrastructure, such as managing the supply of water, gas, and electricity.
  • Access to essential services, like credit scoring systems that determine loan eligibility or systems used by public authorities to evaluate entitlement to social benefits.
  • Education and vocational training, for purposes like assessing students or determining access to institutions.
  • Employment and workforce management, including CV-sorting software for recruitment or systems for evaluating employee performance.
  • Law enforcement, migration, and the administration of justice.

Providers of these high-risk systems must comply with a stringent set of requirements before they can be placed on the market. These obligations include rigorous risk management, high-quality data governance to prevent bias, detailed technical documentation, robust human oversight, and high levels of accuracy and cybersecurity.

3. Limited Risk:

This category covers AI systems that pose a risk of manipulation or deception. The key obligation here is transparency. For instance, when you interact with a chatbot, you must be informed that you are communicating with a machine. Similarly, users must be made aware when they are exposed to AI-generated "deepfake" content.

4. Minimal Risk:

The vast majority of AI systems are expected to fall into this category. Think of AI-enabled video games or spam filters. These systems are not subject to any legal obligations under the Act, though their providers are encouraged to voluntarily adopt codes of conduct.
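To make the four tiers concrete, the Act's classification logic can be sketched as a simple lookup. This is purely illustrative: the use-case labels and the `classify` helper are invented for this sketch, and the Act's actual legal tests are far more nuanced than a table lookup.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict pre-market obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific legal obligations"

# Illustrative mapping of example use cases to tiers (not a legal test).
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "credit scoring for loan eligibility": RiskTier.HIGH,
    "CV-sorting software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known example; default to MINIMAL otherwise."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
```

The point of the sketch is the shape of the regime: one banned tier, one heavily regulated tier, one transparency tier, and a default tier with no specific obligations.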

ISO/IEC 42001: The Blueprint for AI Governance

The AI Act tells you what you must do to be compliant. But for many organisations, the question is how. How do you build a systematic, repeatable, and auditable process to meet these legal demands? ISO/IEC 42001 answers that question. Published in December 2023, it is the world's first international standard for an Artificial Intelligence Management System (AIMS). Just as ISO 9001 provides a framework for quality management and ISO 27001 for information security, ISO 42001 provides the framework for governing the responsible development and use of AI.

Implementing an AIMS based on ISO 42001 is a strategic decision that embeds AI governance into an organisation's DNA. It provides a structured approach that directly aligns with the requirements of the EU AI Act, helping organisations to:

  • Implement a Robust Risk Management Process: A cornerstone of the standard is the requirement to establish a process for AI risk assessment and treatment. This involves identifying, analysing, and evaluating risks not just to the business, but to individuals and society, and then determining the necessary controls to mitigate them. This process directly maps to the risk-based logic of the AI Act.
  • Conduct AI System Impact Assessments: The standard requires organisations to formally assess the potential consequences of their AI systems on individuals and society. This is a critical exercise for identifying potential harms and fulfilling the fundamental rights obligations under the AI Act.
  • Manage the Entire AI System Lifecycle: ISO 42001 provides controls for every stage of an AI system's life, from defining requirements and managing data to verification, validation, deployment, and monitoring. These operational controls are precisely what is needed to build the evidence of compliance for high-risk systems.

By adopting ISO 42001, an organisation isn't just preparing for a single audit; it's building a culture of responsible innovation and a demonstrable record of accountability.

Turning Compliance into a Competitive Advantage: Evidify

The EU AI Act and ISO 42001 represent a new paradigm for technology governance. While essential, compliance can appear to be a formidable task, particularly for the small and medium-sized businesses that are the engine of our economy. The burden of documentation, the complexity of risk assessments, and the need to provide continuous evidence can divert precious resources from innovation to administration.

This is the problem Evidify was built to solve.

  • A Single Source of Truth: Our platform's core is the Knowledge Working Graph (KWG), a dynamic data model that represents and connects all your management system elements—from policies and risk assessments to the technical documentation of your AI systems. This eliminates the data silos that make compliance so difficult, providing a holistic, real-time view of your governance posture.
  • Operationalising Compliance: Evidify translates the requirements of ISO 42001 and the AI Act into tangible, manageable workflows. The platform guides you through risk assessments, impact assessments, and the documentation of your AI system's entire lifecycle. It provides a unified framework where the evidence of compliance is a natural byproduct of your daily operations, not a separate, manual task.
  • Designed for the AI-Driven Organisation: Evidify is built for the "mixed work" environment where human experts and AI agents collaborate. Our platform facilitates the seamless interaction and oversight required by the AI Act's human oversight mandate. The continuous "Act, Review, Improve" cycle at the heart of Evidify ensures that your AI governance system is not a static document, but a living, evolving process that drives continual improvement.

The era of AI regulation has begun. For businesses that embrace it, this is not a burden, but an opportunity—an opportunity to build deeper trust with customers, to mitigate risks more effectively, and to lead in the development of responsible, human-centric technology. With ISO 42001 as the blueprint and Evidify as the platform, you can navigate this new frontier with confidence, turning the complex challenge of AI compliance into a powerful and sustainable competitive advantage.

Sign up to our newsletter for the latest updates on the EU AI Act, ISO standards, and best practices in AI governance.



© 2025 Evidify S.L. - All rights reserved.
