Artificial intelligence is no longer experimental. It powers fraud detection at banks, diagnostic tools in hospitals, hiring algorithms at corporations, and autonomous decision-making in critical infrastructure. Yet until December 2023, there was no international standard for managing AI systems responsibly. ISO/IEC 42001 changed that -- and for tech companies building or deploying AI, it represents both a governance imperative and a competitive opportunity.
At Xcapit, we achieved ISO 27001 certification in 2025 -- an experience that fundamentally changed how we operate. Now we're actively working toward ISO 42001 because AI governance deserves the same rigor we apply to information security. This article shares what we've learned so far: what the standard requires, how it relates to existing frameworks, and practical steps for companies ready to take AI governance seriously.
What Is ISO 42001?
ISO/IEC 42001:2023 is the first international standard specifying requirements for an Artificial Intelligence Management System (AIMS). Published by the International Organization for Standardization in December 2023, it provides a structured, certifiable framework for organizations that develop, provide, or use AI-based products and services.
Think of it as ISO 27001 for AI. Where ISO 27001 provides a management system for information security, ISO 42001 provides one for responsible AI. It doesn't prescribe which algorithms to use or ban specific applications. Instead, it requires you to establish policies, assess risks, implement controls, and continuously improve how your organization governs AI throughout its lifecycle. Because the standard is certifiable, an accredited auditor can verify compliance and issue formal certification -- transforming AI governance from a marketing claim into an independently verified commitment.
Why ISO 42001 Was Created
AI failures made the status quo untenable. Biased hiring algorithms, discriminatory lending models, facial recognition errors disproportionately affecting minorities, and AI-generated disinformation demonstrated that ungoverned AI creates real harm. Meanwhile, regulation accelerated globally -- the EU AI Act entered into force in August 2024, the U.S. issued Executive Order 14110 on AI safety, and Canada, Brazil, China, and Japan introduced AI-specific legislation.
Voluntary AI ethics principles proved insufficient. Nearly every major tech company published guidelines, but without a management system to operationalize them, these principles remained aspirational. An organization can declare commitment to fairness while its production models perpetuate bias -- not out of malice, but because no systematic process exists to detect and correct it. ISO 42001 addresses these gaps by providing an auditable framework that translates principles into practice.
The Structure of ISO 42001
ISO 42001 follows the Annex SL high-level structure used by all modern ISO management system standards. If you're familiar with ISO 27001, ISO 9001, or ISO 14001, you'll recognize the architecture: context of the organization, leadership, planning, support, operation, performance evaluation, and improvement.
What makes ISO 42001 distinct are its AI-specific elements. Annex A defines 38 controls organized across themes including AI policies, internal organization, AI system lifecycle, data management, transparency and information for stakeholders, use of AI systems, and third-party relationships. Annex B provides implementation guidance for each control. These annexes are where the standard moves beyond generic management system territory into genuinely AI-specific governance.
Key Requirements for Tech Companies
Several requirements stand out as particularly significant for organizations building or deploying AI.
AI Policy -- Your organization must establish an AI policy that defines its commitment to responsible AI, addresses principles like fairness, transparency, accountability, safety, and privacy, and is communicated to all relevant parties. This isn't a generic ethics statement; it must be specific, actionable, and reviewed regularly.
AI Risk Assessment -- ISO 42001 requires systematic identification and evaluation of AI-specific risks: bias and discrimination, lack of explainability, unintended behaviors, data quality issues, adversarial attacks, environmental impact, and loss of human autonomy. The assessment must cover the entire AI system lifecycle -- from conception through decommissioning.
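To make this concrete, here's a minimal sketch of what a lifecycle-aware risk register might look like in code. ISO 42001 doesn't prescribe a format; the categories, lifecycle stages, and 1-5 scoring scale below are our own illustration.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    CONCEPTION = "conception"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    DECOMMISSIONING = "decommissioning"

class RiskCategory(Enum):
    BIAS = "bias and discrimination"
    EXPLAINABILITY = "lack of explainability"
    UNINTENDED_BEHAVIOR = "unintended behaviors"
    DATA_QUALITY = "data quality issues"
    ADVERSARIAL = "adversarial attacks"
    ENVIRONMENTAL = "environmental impact"
    HUMAN_AUTONOMY = "loss of human autonomy"

@dataclass
class AIRisk:
    system_id: str
    category: RiskCategory
    stage: LifecycleStage
    description: str
    likelihood: int         # 1 (rare) to 5 (almost certain), our own scale
    impact: int             # 1 (negligible) to 5 (severe)
    treatment: str = "TBD"  # mitigate, transfer, accept, or avoid
    owner: str = "unassigned"

    @property
    def score(self) -> int:
        # A simple likelihood x impact product; use whatever methodology
        # your AIMS documents, but apply it consistently.
        return self.likelihood * self.impact
```

The point isn't the data structure; it's that every risk gets the same fields, the same scale, and a named owner, which is what makes the register auditable.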
AI System Impact Assessment -- Beyond risk, you must evaluate potential impacts on individuals, groups, and society -- including fundamental rights, economic effects, and effects on vulnerable populations. The depth of the assessment must be proportional to the system's potential consequences.
Data Management -- Rigorous practices for data acquisition, quality assessment, provenance tracking, bias documentation, and protection. For companies building AI for clients, this directly affects how you source, process, and document training data.
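For illustration, a lightweight provenance record for one training dataset might look like the sketch below. The standard requires that provenance, quality checks, and known biases be documented but leaves the format open, so every field name here is our own convention.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Provenance record for a single training dataset."""
    name: str
    source: str                 # where and from whom the data came
    license: str                # usage rights covering the data
    collected_on: date
    quality_checks: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)
    contains_pii: bool = False  # True triggers data protection controls

record = DatasetRecord(
    name="loan-applications-2024",
    source="client CRM export",
    license="covered by client data processing agreement",
    collected_on=date(2024, 6, 30),
    quality_checks=["deduplication", "schema validation"],
    known_biases=["underrepresents applicants under 25"],
    contains_pii=True,
)
```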
Human Oversight -- The standard requires defining which AI outputs need human review, establishing intervention procedures, ensuring oversight personnel have adequate competence, and documenting the rationale for each system's level of autonomy.
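A common implementation pattern is a confidence gate that routes uncertain or high-stakes outputs to a human reviewer. This is a sketch under assumed values: the 0.90 threshold and the high-stakes flag are placeholders that, in a real AIMS, would come from each system's documented autonomy rationale.

```python
def route_decision(prediction: str, confidence: float,
                   high_stakes: bool, threshold: float = 0.90) -> str:
    """Route an AI output either to auto-approval or to human review."""
    if high_stakes or confidence < threshold:
        # Uncertain or high-stakes outputs always get a human in the loop.
        return f"REVIEW_QUEUE: {prediction} (confidence={confidence:.2f})"
    return f"AUTO_APPROVED: {prediction}"

# A loan decision is high stakes, so it is reviewed even at 97% confidence.
print(route_decision("approve_loan", 0.97, high_stakes=True))
```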
Transparency and Explainability -- Controls for informing users they're interacting with AI, explaining how systems make decisions, documenting limitations, and providing mechanisms to challenge AI-driven decisions.
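In practice this often means attaching transparency metadata to every automated decision. The response envelope below is purely illustrative; the field names and the 30-day window are our assumptions, not anything the standard specifies.

```python
# Hypothetical response envelope carrying transparency metadata
# alongside an automated decision.
decision_response = {
    "decision": "application_declined",
    "automated": True,  # the user is told an AI system made this call
    "main_factors": [   # plain-language reasons, not raw model features
        "debt-to-income ratio above policy limit",
        "credit history shorter than 12 months",
    ],
    "known_limitations": "model not validated for self-employed applicants",
    "how_to_challenge": "reply within 30 days to request human re-review",
}
```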
Third-Party AI -- If you use foundation models, pre-trained models, or AI-as-a-service APIs, you must assess and manage the associated risks. You can't claim responsible AI while using opaque third-party models without understanding their training data, limitations, and potential biases.
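A repeatable due-diligence gate helps here. The checklist below is our own starting set of questions, not a mapping of Annex A; adapt it to the vendors and model types you actually depend on.

```python
# Hypothetical due-diligence checklist for third-party models and AI APIs.
THIRD_PARTY_CHECKS = [
    "Training data sources documented by the vendor?",
    "Known limitations and failure modes published?",
    "Bias or fairness evaluations available?",
    "Model version pinning supported (no silent updates)?",
    "Data sent to the API excluded from vendor training?",
    "Incident and deprecation notification process defined?",
]

def assess_vendor(answers: dict[str, bool]) -> bool:
    """Pass only if every check is answered yes; print the gaps otherwise."""
    gaps = [q for q in THIRD_PARTY_CHECKS if not answers.get(q, False)]
    for gap in gaps:
        print(f"GAP: {gap}")
    return not gaps
```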
How ISO 42001 Relates to ISO 27001
This is the most common question we hear, and it's especially relevant because we hold ISO 27001 certification. The answer: they're complementary, not redundant.
ISO 27001 protects confidentiality, integrity, and availability of information -- data security, access controls, encryption, incident response. These concerns become more complex with AI but don't disappear. ISO 42001 addresses what ISO 27001 was never designed to cover: algorithmic fairness, AI transparency, impact assessments for automated decisions, data quality for training, and the ethical dimensions of AI deployment. You can have a perfectly secure AI system that is deeply unfair. Both standards are needed.
The practical advantage of holding ISO 27001 is significant. Because both use Annex SL, many elements transfer directly: context analysis, leadership commitment, risk methodology, internal audit, management review, and continual improvement. In our experience, roughly 40-50% of the management system infrastructure carries over. At Xcapit, we're extending our existing ISMS with AI-specific controls rather than building a parallel system -- a natural integration that avoids bureaucratic overhead.
Alignment with Regulatory Frameworks
ISO 42001 doesn't guarantee regulatory compliance, but it provides a robust foundation for multiple frameworks simultaneously.
The EU AI Act classifies AI systems into risk categories and imposes proportional requirements for high-risk systems -- risk management, data governance, documentation, transparency, human oversight, and cybersecurity. ISO 42001's controls map directly to these requirements, and the European Commission has recognized harmonized standards as a pathway to demonstrating compliance.
The NIST AI Risk Management Framework, increasingly referenced in U.S. federal procurement, aligns closely with ISO 42001 through its four functions: Govern, Map, Measure, and Manage. Additional frameworks emerging in Canada, Brazil, the UK, and Japan all converge on common themes that ISO 42001 addresses: risk-based approaches, transparency, accountability, and human oversight. For companies operating globally, one certification efficiently demonstrates governance maturity across multiple jurisdictions.
Benefits for Tech Companies
- Competitive differentiation -- ISO 42001 adoption is still early. Companies certifying now stand out in RFPs, especially in regulated industries like finance, healthcare, and government.
- Client trust -- Certification transforms 'we take AI governance seriously' from a sales claim into an independently verified fact.
- Regulatory readiness -- Proactive governance costs dramatically less than reactive compliance when AI regulations become mandatory.
- Structured governance -- A formal AIMS forces you to document assumptions, assess risks systematically, and create accountability for AI outcomes.
- Reduced liability -- Documented governance, risk assessments, and impact evaluations provide evidence of due diligence if an AI system causes harm.
- Talent attraction -- Engineers and data scientists increasingly seek organizations that take responsible AI seriously. Certification signals that commitment credibly.
The Certification Process
Having been through ISO 27001 certification and now pursuing ISO 42001, we can share what the process looks like in practice.
Phase 1: Gap Analysis (2-4 weeks) -- Inventory all AI systems you develop, deploy, or use. Map existing practices to ISO 42001 controls. Identify gaps. This produces your implementation roadmap.
Phase 2: AIMS Implementation (4-8 months from scratch, 3-5 months if extending existing ISO systems) -- Define your AI policy, establish risk and impact assessment methodologies, implement Annex A controls, develop lifecycle procedures, create documentation processes, and train your team.
Phase 3: Internal Audit and Management Review -- Conduct at least one complete audit cycle to verify conformance and catch issues before the external auditor does.
Phase 4: Certification Audit -- An accredited body conducts a two-stage audit: Stage 1 reviews documentation; Stage 2 verifies effective implementation through interviews, evidence review, and observation.
Challenges and Honest Realities
The auditor ecosystem is still maturing. ISO 42001 was published in December 2023, and accredited auditors with deep AI expertise remain limited. Interpretation of some controls is evolving -- what constitutes 'adequate' transparency or how granular an impact assessment should be for different risk levels are questions the community is still resolving.
Scope definition requires careful thought. Too narrow and the certification lacks credibility; too broad and implementation becomes unwieldy. Start with core AI activities and expand over time. And organizational buy-in matters -- governance introduces new processes that teams may initially view as overhead. The cultural work of showing that governance improves outcomes is as important as technical implementation. This is identical to our ISO 27001 lesson: it's a cultural shift, not a compliance exercise.
Practical Steps to Start Today
You don't need to wait for certification to build AI governance capabilities.
Inventory your AI systems -- every model, API, third-party service, and embedded AI component. Document each system's purpose, data inputs, decision outputs, affected stakeholders, and current governance measures. Many organizations are surprised by how many AI systems they operate once they look systematically.
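A spreadsheet works, but even a small script keeps the inventory consistent and queryable. Here's a minimal sketch; the fields mirror the attributes listed above, and every name is our own convention.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    purpose: str
    kind: str  # e.g. "in-house model", "third-party API", "embedded component"
    data_inputs: list[str] = field(default_factory=list)
    decision_outputs: list[str] = field(default_factory=list)
    stakeholders: list[str] = field(default_factory=list)
    governance_measures: list[str] = field(default_factory=list)

inventory = [
    AISystem(
        name="support-ticket-triage",
        purpose="route incoming tickets by urgency",
        kind="third-party API",
        data_inputs=["ticket text"],
        decision_outputs=["priority label"],
        stakeholders=["support team", "customers"],
    ),
]

# Systems with no documented governance measures are the first gaps to close.
ungoverned = [s.name for s in inventory if not s.governance_measures]
```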
Conduct AI-specific risk assessments across bias, transparency, reliability, privacy, security, societal impact, and human oversight. Use a consistent methodology and document treatment decisions.
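Consistency is the hard part, so encode the methodology once and reuse it everywhere. The band boundaries below are illustrative assumptions; what matters is that every assessment applies the same ones.

```python
def risk_level(likelihood: int, impact: int) -> str:
    """Map a 1-5 likelihood and impact pair to a treatment band."""
    score = likelihood * impact
    if score >= 15:
        return "critical: treat before deployment"
    if score >= 8:
        return "high: treatment plan with owner and deadline"
    if score >= 4:
        return "medium: monitor and review quarterly"
    return "low: accept and document the rationale"

assert risk_level(4, 4) == "critical: treat before deployment"
assert risk_level(2, 2) == "medium: monitor and review quarterly"
```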
Establish an AI governance policy -- articulate principles, define roles and responsibilities, set requirements for risk assessment before deployment, and address third-party AI. Start with what you know and iterate.
If you hold ISO 27001 or another Annex SL certification, build on it. Many AIMS elements -- risk methodology, audit processes, document control, training -- can be extended rather than rebuilt.
Xcapit's Journey: From ISO 27001 to ISO 42001
When we achieved ISO 27001 certification in 2025, passing the IRAM audit with zero non-conformities, we proved that a focused team can build world-class management systems. That experience gave us the confidence and infrastructure to pursue ISO 42001.
Our motivation is straightforward. We build AI agents, machine learning systems, and AI-powered software for clients across finance, energy, and government. If we're going to build AI that affects people's lives, we owe it to our clients -- and to the people their systems serve -- to govern that AI responsibly.
We're extending our ISO 27001 ISMS with AI-specific risk categories, impact assessment processes integrated into our development lifecycle, data management practices for training data, and transparency and human oversight controls. The process has already surfaced valuable insights: our AI system inventory revealed governance gaps in third-party model dependencies, our risk assessments forced productive conversations about automation levels, and writing our AI policy clarified principles that had been implicit but never formalized.
We're not pursuing ISO 42001 because someone told us to. We're pursuing it because the work we do demands it. When you build AI systems for UNICEF, for energy companies managing critical infrastructure, and for financial institutions handling sensitive data, governance isn't optional -- it's professional responsibility.
AI governance is not a future concern -- it's a present requirement. Whether you're building AI agents, deploying machine learning models, or integrating third-party AI services, the question is not whether to govern AI responsibly, but how. ISO 42001 provides the framework. At Xcapit, we're actively pursuing ISO 42001 certification while continuing to build AI systems our clients can trust. If you're looking for a technology partner that combines deep AI expertise with certified governance practices, explore our AI development services or contact our team to discuss how we can help you build AI responsibly.
Fernando Boiero
CTO & Co-Founder
Over 20 years in the tech industry. Founder and director of Blockchain Lab, university professor, and certified PMP. Expert and thought leader in cybersecurity, blockchain, and artificial intelligence.