ISO 42001 – Establishing, implementing and maintaining an artificial intelligence management system


The past few years have seen an explosion in AI use within organisations – AI is becoming the norm, and almost every tool, platform and system we encounter now hosts AI-powered features. Major platforms such as SharePoint, Monday.com, Zoho and Jotform have begun to include such features.

With such widespread adoption of AI tools, the potential for misuse has increased substantially. In response, ISO 42001:2023 deals specifically with the safe, ethical and efficient application of AI within a management system.

ISO 42001 addresses the building and implementation of AI management systems, along with the documentation that allows employees to apply artificial intelligence responsibly, avoiding careless use that can lead to misinformation, false data generation and incorrect analysis.

Key AI misuse risks ISO 42001 helps address

When it comes to AI misuse, there are three key problems that usually surface. ISO 42001 includes four annexes detailing controls, implementation guidance and supplementary material designed to combat these issues, among others.

These problems are:

  1. Lack of transparency & explainability
  2. Inaccurate data analysis
  3. Unexpected behavioural changes

1. Lack of transparency & explainability

A lack of transparency and explainability results when the outputs of artificial intelligence are accepted without explanation.

AI can reach certain conclusions or generate certain processes in unexpected ways that still produce apparently “correct” responses. Using this logic indiscriminately without question could lead to incorrect assumptions, with the potential to snowball – creating major issues down the road.

ISO 42001 establishes controls for this through processes such as the development of a Statement of Applicability (SOA), which records which controls apply to the organisation's AI systems and why, as well as establishing performance measures for evaluation.

2. Inaccurate data analysis

Inaccurate data analysis concerns both the data fed into an AI system and the data that system outputs. Both must be controlled for AI tools to be used without error.

Incorrect data inputs can cause AI to develop biases, generate false information or be exploited, resulting in security risks. ISO 42001 procedures to counter this include thorough documentation and the orderly labelling of training and testing data.

3. Unexpected behavioural changes

Unexpected behavioural changes occur when an AI tool develops certain behaviours that may have unintended consequences. This can happen from improper application, poor quality data and training, and insufficient monitoring.

Detailed documentation and the development of an SOA are key to preventing such behavioural changes.

Managing AI risks and opportunities with ISO 42001 certification

Adopting ISO 42001 certification not only protects an organisation from AI misuse but also allows for more effective, streamlined use of artificial intelligence. With these processes properly controlled, AI tools can be used confidently and easily to achieve strong results and boost productivity within an organisation.

These are just some of the common AI issues that can arise. ISO 42001 also includes recommendations for limiting environmental impacts, minimising security risks and understanding the impact of AI on individuals and groups.

AI use is growing exponentially in almost every industry, and as a result ISO 42001 certification may become one of the most crucial and widely adopted certifications. Getting ISO 42001 certified early could give you a major competitive advantage, while improving reputation and trust within and beyond your organisation.

Get in touch with ICS today for more information regarding certification to ISO standards.
