As Artificial Intelligence rapidly integrates into various sectors, a shared understanding of its foundational concepts is more critical than ever. ISO/IEC 22989:2022 establishes this much-needed common language, offering a comprehensive framework for AI concepts and terminology. For organisations developing, procuring, or regulating AI, this standard is an essential starting point.
Why Terminology Matters
In a field as interdisciplinary and fast-moving as AI, ambiguous terminology leads to misaligned expectations, flawed procurement, and regulatory friction. ISO/IEC 22989 provides clear definitions for over 100 terms, ranging from the definition of an AI system itself to nuanced concepts such as machine learning, bias, and explainability. This standardises communication among technical teams, legal experts, policy makers, and the public.
The AI System Life Cycle
One of the core contributions of ISO/IEC 22989 is its formalisation of the AI system life cycle. Recognising that AI differs significantly from traditional software development, the standard details specific stages:
- Inception: Defining objectives, requirements, and feasibility.
- Design and development: Designing the architecture, building the model, and processing training data.
- Verification and Validation: Checking that the AI capability works as designed and meets objectives.
- Deployment: Releasing the system into the target environment.
- Operation and monitoring: Running the system, handling incidents, and potentially continuous validation for systems that learn on the job.
- Re-evaluation and Retirement: Continuously assessing the operating results against risks and objectives until eventual decommissioning.
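To make the stages concrete, here is a minimal Python sketch that models the life cycle as an enumeration and attaches governance checkpoints to each stage. The stage names mirror the standard; the checkpoints and helper function are purely illustrative assumptions of ours, not requirements of ISO/IEC 22989.

```python
from enum import Enum, auto

class LifeCycleStage(Enum):
    """Stages mirroring the ISO/IEC 22989 AI system life cycle."""
    INCEPTION = auto()
    DESIGN_AND_DEVELOPMENT = auto()
    VERIFICATION_AND_VALIDATION = auto()
    DEPLOYMENT = auto()
    OPERATION_AND_MONITORING = auto()
    RE_EVALUATION = auto()
    RETIREMENT = auto()

# Hypothetical governance checkpoints per stage (our assumption,
# not prescribed by the standard).
GOVERNANCE_CHECKPOINTS = {
    LifeCycleStage.INCEPTION: ["risk assessment", "feasibility review"],
    LifeCycleStage.DESIGN_AND_DEVELOPMENT: ["data provenance review", "bias assessment"],
    LifeCycleStage.VERIFICATION_AND_VALIDATION: ["acceptance criteria sign-off"],
    LifeCycleStage.DEPLOYMENT: ["release approval", "rollback plan"],
    LifeCycleStage.OPERATION_AND_MONITORING: ["incident handling", "drift monitoring"],
    LifeCycleStage.RE_EVALUATION: ["objective reassessment"],
    LifeCycleStage.RETIREMENT: ["decommissioning and data disposal"],
}

def checkpoints_for(stage: LifeCycleStage) -> list[str]:
    """Return the governance checkpoints attached to a given stage."""
    return GOVERNANCE_CHECKPOINTS[stage]

print(checkpoints_for(LifeCycleStage.DEPLOYMENT))
```

Structuring the life cycle this way makes it easy to audit that every stage has at least one oversight activity attached before a system advances to the next stage.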
"Understanding the AI life cycle is not just about engineering; it is about knowing when and where to apply governance, risk management, and human oversight."
Trustworthiness: A Multidimensional Challenge
The standard deeply explores the characteristics of trustworthy AI. Trustworthiness is presented not as a single attribute, but as an umbrella for several critical properties:
- Robustness: The ability to maintain performance under varying circumstances, including atypical data inputs.
- Reliability and Resilience: The capability to perform consistently and recover quickly from incidents.
- Explainability and Predictability: Ensuring that human stakeholders can understand the factors influencing an AI decision and form reliable assumptions about its outputs.
- Controllability: The ability for a human or external agent to intervene in the system's functioning.
- Transparency: Making appropriate information about the system available to relevant stakeholders.
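The characteristics above can be tracked as a simple per-system record. The sketch below is our own illustration, assuming a boolean evidenced/not-evidenced scoring scheme; the field names follow the standard's vocabulary, but the profile structure is not part of ISO/IEC 22989.

```python
from dataclasses import dataclass, fields

@dataclass
class TrustworthinessProfile:
    """Illustrative checklist of the trustworthiness characteristics
    named in ISO/IEC 22989. True means evidence exists for the property."""
    robustness: bool = False
    reliability: bool = False
    resilience: bool = False
    explainability: bool = False
    predictability: bool = False
    controllability: bool = False
    transparency: bool = False

    def gaps(self) -> list[str]:
        """List the characteristics not yet evidenced for this system."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

profile = TrustworthinessProfile(robustness=True, transparency=True)
print(profile.gaps())
```

A record like this makes the "umbrella" nature of trustworthiness explicit: a system is not trustworthy until every constituent property has been addressed, and the `gaps()` view shows what remains.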
The AI Ecosystem and Stakeholder Roles
ISO/IEC 22989 maps out the broader AI ecosystem, acknowledging the roles of AI providers, producers (like developers), customers, partners, and the subjects affected by AI. This mapping is vital for determining accountability. When an AI system produces an adverse outcome, the ecosystem framework helps trace the issue back to data providers, system integrators, or the deployment context itself.
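As a sketch of how that accountability tracing might work in practice, the snippet below registers ecosystem actors against the roles named in the standard and filters them by role. The role labels are drawn from ISO/IEC 22989; the actor names, registry, and trace function are hypothetical examples of ours.

```python
from dataclasses import dataclass

@dataclass
class EcosystemActor:
    """One party in the AI ecosystem, tagged with its ISO/IEC 22989-style role."""
    name: str
    role: str           # e.g. "AI provider", "AI producer", "data provider"
    responsibility: str

def trace_accountability(actors: list[EcosystemActor], role: str) -> list[EcosystemActor]:
    """Return the actors holding a given role, as a first step in
    tracing an adverse outcome back through the ecosystem."""
    return [a for a in actors if a.role == role]

# Hypothetical registry for a deployed system.
actors = [
    EcosystemActor("Acme Data Ltd", "data provider", "training data quality and provenance"),
    EcosystemActor("ModelWorks", "AI producer", "model design, development, and testing"),
    EcosystemActor("DeployCo", "AI provider", "operating the system for customers"),
]
print([a.name for a in trace_accountability(actors, "data provider")])
```

Even a minimal registry like this forces each responsibility to have a named owner, which is exactly the accountability question the ecosystem framework is designed to answer.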
Want to align your AI initiatives with global standards?
We help organisations interpret and apply foundational AI standards like ISO/IEC 22989 to build robust, trustworthy, and compliant AI systems.
Contact us