Can standards help realize the promise of AI?

BSI Staff
20 Nov 2023

We are at the very beginning of the AI revolution. By helping to generate and evaluate new ideas, automate tasks, and collaborate with humans, AI will help organizations solve some of the biggest challenges facing society today.

It therefore comes as no surprise that over 50% of organizations were planning to incorporate AI and automation technology in 2023, according to Deloitte. And, according to a KPMG survey of Global 500 companies, leaders who invest in AI and automation tools expect to see significant growth within the next few years.

As machines are increasingly entrusted with decisions that profoundly affect our lives, ensuring they are being used legally, ethically, and appropriately is one of the most important challenges facing society. Regulators and politicians will require state-of-the-art technical input into their decision-making, to ensure they keep pace.

In this article, we examine three of the most exciting areas of potential for AI, some of the possible dangers, and how standards development is keeping pace to help maximize the potential and mitigate the risk.

Standards and AI in 2023

Here are just three of the ways AI could transform business and society in the coming years:

  • R&D: AI has the potential to significantly speed up data collection and analysis. Ultimately, this is helping to drive innovation, and will lead to the release of transformative new products. For example, algorithms can trawl through vast data sets – including information from patients, the structure of chemical compounds, and animal studies. This could help researchers to better identify what a drug needs to target in the human body, what molecule would be best suited for this, or even how to create a new molecule. We are already seeing medicines designed by AI reach trials in humans for the treatment of conditions including motor neurone disease and cancers, for example. AI can also help companies improve how they produce existing products. Unilever have been using AI to predict the response of biological processes when the skin is exposed to certain chemicals or ingredients, helping them move completely away from animal testing for cosmetics.

  • Carbon reduction: According to the Boston Consulting Group, AI offers the potential to reduce greenhouse gas emissions by 5-10% through improved monitoring, prediction and optimization. AI-powered data engineering can help track emissions from every part of a company’s operations. This includes everything from corporate travel to individual pieces of IT equipment and could even include supply chain and downstream users of products. Harnessing this data, predictive AI has the potential to forecast future emissions and identify areas where further emissions reduction is possible. Google has already applied these techniques to help achieve its goal of running entirely on carbon-free energy by 2030. AI has helped the company predict the combined effect of various procedures in its data centres on energy consumption and identify where efficiencies can be achieved.

  • Supply chain transformation: Global supply chains are becoming longer, more interconnected, and increasingly complex to manage. However, AI-powered tools are expected to provide improvements across the supply chain, from transportation logistics to inventory management. AI-powered forecasting and risk management could even offer the potential to predict and hedge against issues like power outages, port congestion, manufacturing delays and materials shortages. Real-time predictive analytics (rather than analytics based on historical data) also mean that it will be possible for companies to anticipate potential supply bottlenecks much further in advance than was previously possible.

The challenges of AI

However, there are practical and ethical concerns surrounding AI. It can ‘hallucinate’ entirely inaccurate information, which poses obvious problems for companies looking to AI to solve business-critical challenges.

As AI tools are trained on ‘data lakes’ of existing material, there are also significant intellectual property questions yet to be resolved. For example, does copyright infringement apply to an AI creation? If an AI is trained only on an organization’s own proprietary data, this may not be an issue, but what happens when third-party data is used to train it?

Then there is the issue of public trust. A 2023 study revealed that 20% of the public believe there is a real risk that AI “could cause a breakdown in human civilization in the next fifty years.” 64% also think it will increase unemployment.

To help overcome the risks, bring clarity, and ensure AI lives up to its transformative potential, governments around the world are evolving existing regulation or drawing up new laws. The EU has recently voted to approve draft rules for the AI Act. President Biden has issued a far-reaching executive order on safe, secure, and trustworthy artificial intelligence. China has issued a set of measures to manage the generative AI industry, requiring service providers to submit security assessments and receive clearance before releasing mass-market AI products.

Following the world’s first major AI Safety Summit at Bletchley Park, the UK government is building the AI Safety Institute focused on advanced AI safety for the public interest.

Alongside legislative efforts, industry-specific and general standards will also be essential for supporting the development of responsible AI.

The role of AI standards

Standards are developing fast to help ensure that AI systems are not just powering innovation, but doing so accurately, reliably, and safely. By setting out best practices and requirements, they can also help to mitigate some of the most serious risks.

Standards also provide guidelines for responsible AI development and use, which means they can help to combat bias, discrimination, and privacy violations; building trust and adoption in this way is key to realizing the potential of AI.

With many AI-related standards under development, it might be helpful to think of them as a layered model. First, there are the core cross-industry standards like ISO/IEC 42001 that help organizations implement an AI governance structure and management system. These are the foundations helping organizations establish objectives and identify risks relating to their use of AI.

Once those goals are in focus, other technical standards in development cover specific AI themes, topics and technologies. These complement the foundational layer with best practices and risk treatments, alongside sector- or application-specific standards.

Start with the foundation

BS ISO/IEC 42001 should be at the centre of your strategy for standards-based AI development and use.

It is the central management system standard, providing the framework in which an AI risk management system sits, and it can be used for conformity assessment and certification. Designed to increase trust in the complex AI supply chain, it can be compared to, and used concurrently with, BS EN ISO 9001 for quality management or BS EN ISO/IEC 27001 for information security management.

It should ideally be used alongside AI governance guidance such as BS ISO/IEC 38507:2022 Information technology – Governance of IT – Governance implications of the use of artificial intelligence by organizations, which helps an organization set the AI strategy that the management system will implement.

Specific risks might require a more in-depth application of standards. On risk management, for example, BS ISO/IEC 23894:2023 provides guidance on how organizations that develop, produce, deploy or use products, systems and services that use AI can manage associated risks.

There are several useful technical reports summarizing the state of the art in various relevant areas, many of which are already being used to aid AI innovation, for example:

  • Trustworthiness: PD ISO/IEC TR 24028:2020 Information technology – Artificial intelligence – Overview of trustworthiness in artificial intelligence

  • Bias: PD ISO/IEC TR 24027:2021 Information technology – Artificial intelligence (AI) – Bias in AI systems and AI aided decision making

  • Robustness: PD ISO/IEC TR 24029-1:2021 Artificial Intelligence (AI) – Assessment of the robustness of neural networks – Part 1: Overview

  • Societal and ethical concerns: PD ISO/IEC TR 24368:2022 Information technology – Artificial intelligence – Overview of ethical and societal concerns

  • Testing: PD ISO/IEC TR 29119-11:2020 Software and systems engineering – Software testing – Part 11: Guidelines on the testing of AI based systems

Additional technical standards are being published soon to provide a quality framework for data, more guidance around treating unwanted bias, a transparency taxonomy, approaches and methods to achieve explainability, and much more.

Managing the AI wave responsibly

The AI market is large and growing at an extraordinary rate. It will deliver increasing opportunities and benefits, but only if risks are managed responsibly and public trust is maintained. Standards offer your organization a flexible approach to managing all aspects of AI development and use so that you can comply with regulations, improve your processes and products, and meet customer requirements.

Discover BSI Knowledge

Over 100,000 internationally recognized standards are available for simple and flexible access with a BSI Knowledge subscription. Our tailored subscription service allows you to build your own custom collection of standards or opt for access to one of our pre-built modules, keeping you up to date with any changes. With support from a dedicated BSI account manager, our subscription service helps you achieve a more coherent and effective approach to best practice. Request to learn more.
