
Introducing ISO/IEC 42001:2023: Managing AI Responsibly


The idea of businesses (and individuals) using Artificial Intelligence (AI) has caused a degree of apprehension in many communities: a fear that computers will rule the world and that humans will become subservient to, and controlled by, these machines. This fear can be traced back to science fiction, the media (including, more recently, social media), propaganda and the film industry, to mention but a few sources.

AI is not a totally new concept. Its roots can be traced to the 1940s, when Alan Turing helped design the electromechanical 'Bombe' machines used to break the German Enigma codes, and to his later work on the question of whether machines could think and learn.

As AI has developed over the decades, a need to control and regulate its growth and use has emerged. While no formal Act of Parliament relating specifically to AI has yet been passed, UK governments have set out guidelines and principles in an effort to control the proliferation of such systems. This, augmented by the EU AI Act, provides a first line of control to ensure the responsible management of Artificial Intelligence.

The introduction of ISO/IEC 42001:2023 adds a further layer of governance. It requires organisations not only to understand and passively comply with existing requirements, but to actively implement a system in which interaction, review, maintenance and continual improvement are necessary to retain certification, demonstrating that responsible AI management is taken seriously.

The Role of ISO 42001 in Building Trust in AI

ISO/IEC 42001:2023 provides a set of requirements for defining and implementing an Artificial Intelligence Management System. Through guidelines, policies and procedures that all employees and other interested parties are obliged to follow and interact with, an organisation can put in place the means to control its use of AI and begin to build trust and confidence in it.

An AI system should be built around the following principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress
  • AI expertise
  • Availability and quality of training and test data
  • Environmental impact
  • Maintainability
  • Privacy

Building around these principles helps create an environment in which AI can be controlled, instilling trust and confidence in its use. By adopting the requirements and guidance within ISO/IEC 42001:2023, organisations can implement a management system that is regularly reviewed, updated and audited to ensure that all criteria continue to be met.

ISOQAR provides certification audits to ISO/IEC 42001:2023 and training in both the standard and auditing methodologies.
