Artificial intelligence

Building trustworthiness into artificial intelligence systems can help organizations – including those that want to procure and use AI services – prevent misplaced trust and facilitate the safe and secure adoption of AI technologies. Our standards are helping businesses, governments and society realize and experience the benefits of this technology.

Shaping our future with trustworthy AI

Discover how standards are a powerful enabler for AI adoption


Maximizing the value of AI for society with BS ISO/IEC 42001

In today's rapidly evolving digital landscape, businesses are increasingly recognizing the transformative power of artificial intelligence (AI) but are struggling to deploy it in a trusted and responsible way. An international standard has now been published to help organizations use AI responsibly in pursuit of their objectives.

Global AI adoption is growing steadily. In 2022, 35% of companies reported using AI in their business, and an additional 42% reported that they are exploring AI. Its deployment can help organizations of all sizes and sectors to drive operational efficiency, optimize decision-making processes, and gain a competitive edge. However, they must navigate a set of challenges to successfully implement AI and leverage its potential, including:

- Perceived complexity and a lack of understanding surrounding AI technology. Many businesses may not fully comprehend the various applications and benefits that AI can offer their specific industry or operations. This lack of awareness can lead to hesitation to invest in AI solutions.
- Data privacy and security concerns, which can deter businesses from embracing AI. The use of AI often involves collecting and analysing large volumes of data, which raises concerns about protecting sensitive information and complying with relevant regulations.
- Lack of trust in the quality, accuracy and reliability of AI systems. Faulty or biased AI algorithms can lead to incorrect decisions, compromising the quality of products or services and potentially damaging a business's reputation.
- Ethical considerations, such as bias and transparency, which demand careful attention to ensure responsible deployment and gain public trust.

Addressing these challenges requires a systematic approach to managing the transition within businesses. BS ISO/IEC 42001 Information Technology — Artificial intelligence — Management system is the first international standard to provide best practice for governing AI effectively. It aims to build confidence in the technology so that it can be more widely trusted and deployed to the advantage of organizations, as well as wider society.

What AI guidance does BS ISO/IEC 42001 provide businesses with?

Developed by experts from 50 countries, including the UK (via the British Standards Institution), BS ISO/IEC 42001 is an integral part of improving the governance and accountability of AI globally. The standard specifies the requirements and provides guidance for establishing, implementing, maintaining and continually improving an AI management system within the context of an organization.

BS ISO/IEC 42001 is what is known as a 'management system' standard, developed specifically for AI. A management system sets out the processes an organization needs to follow to meet its objectives and provides a framework of good practice. These standards help organizations to put an integrated system in place, including, for example, senior management support, training, governance processes and risk management – all essential to getting AI governance and accountability right.

To learn more about how standards are supporting businesses with their AI adoption, visit our Artificial Intelligence Topic Page.
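To make the idea of an AI management system more concrete, here is a minimal sketch in Python of the kind of register entry an organization might keep for each AI system it develops or uses. The class name, fields and review rule are illustrative assumptions on our part; they are not requirements or terminology taken from BS ISO/IEC 42001.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: the structure below is an assumption, not text from
# BS ISO/IEC 42001. The point is that a management system keeps an auditable
# record of each AI system, its accountable owner, its identified risks, the
# controls applied, and when it was last reviewed.

@dataclass
class AISystemRecord:
    name: str                        # e.g. "Customer-support chatbot"
    business_purpose: str            # why the organization uses this AI system
    senior_owner: str                # accountable person (senior management support)
    identified_risks: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)
    last_review: date | None = None  # continual improvement implies periodic review

    def is_due_for_review(self, today: date, review_cycle_days: int = 180) -> bool:
        """Flag records that have not been reviewed within the review cycle."""
        return self.last_review is None or (today - self.last_review).days > review_cycle_days


# Usage sketch
chatbot = AISystemRecord(
    name="Customer-support chatbot",
    business_purpose="Answer routine customer queries",
    senior_owner="Head of Customer Operations",
    identified_risks=["biased responses", "leakage of personal data"],
    controls=["human review of escalations", "data minimization policy"],
    last_review=date(2024, 1, 15),
)
print(chatbot.is_due_for_review(date.today()))
```

In practice, an organization would tailor such a record to its own governance processes and to the risks and controls identified through its own assessments.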
What are the benefits of implementing an artificial intelligence management system?

From streamlining workflows and automating routine tasks to extracting invaluable insights and personalizing customer experiences, implementing an AI management system has emerged as a strategic imperative for businesses seeking to thrive in the age of intelligent automation. BS ISO/IEC 42001 benefits businesses by:

- Accelerating trust in AI adoption. Its implementation builds trust in how AI innovation is conducted, improving the quality, security, traceability, transparency and reliability of AI applications and reducing regulatory and market confusion.
- Improving capacity for AI implementation, innovation and adoption. A management system can create a more stable and predictable environment for the development and deployment of AI systems.
- Improving AI quality, as the standard can help to ensure that AI systems are developed and deployed consistently.
- Supporting compliance with national and global AI objectives, international regulators, and legislators.
- Saving costs, as the standard can reduce the costs associated with developing and deploying AI systems; businesses can rely on existing frameworks, protocols, and guidelines rather than creating them from scratch.
- Ensuring proper governance by helping clients use AI in a responsible way. BS ISO/IEC 42001 can help businesses promote accountability by establishing clear lines of responsibility.

The impact of BS ISO/IEC 42001 on the AI landscape

The UK government has a ten-year plan to turn the UK into an AI 'superpower' and a National AI Strategy to achieve this, balancing good governance with encouraging innovation. The release of this international standard provides agility in a fragmented market where regulations are still in development. This guidance will help accelerate trusted AI development and use, addressing the risks and building confidence as AI becomes part of our daily lives.

BS ISO/IEC 42001 will be a critical building block for the AI assurance ecosystem outlined in the UK government's roadmap. The National AI Strategy references the standard, and its approach is likely to be supported by other regulators and legislators around the world, encouraging more organizations to implement BS ISO/IEC 42001.

Do you want to maximize the value of your AI technology? Add BS ISO/IEC 42001 to your collection today.

ISO/IEC 27001 or ISO/IEC 42001: The AI and information security standard decision checklist

As artificial intelligence (AI) adoption accelerates across industries, ensuring information security and ethical AI governance has become paramount. According to our research, '81% of business leaders state their organization is already investing in artificial intelligence (AI).' However, with this investment comes a host of new challenges, from managing operational risks to adhering to evolving regulations.

To aid organizations in addressing these challenges, we've developed a free AI and information security standard decision checklist. Designed for decision-makers, consultants, and organizations exploring AI integration, this tool provides guidance on adopting ISO/IEC 42001 for Artificial Intelligence Management Systems (AIMS) or ISO/IEC 27001 for Information Security Management Systems (ISMS). This checklist will help you identify which standard aligns best with your goals, ensuring that your AI initiatives are secure and responsibly managed.

Why information security matters in AI development

AI systems require large datasets to deliver accurate, high-quality outputs, which raises unique information security and privacy concerns. ISO/IEC 27001 is an industry-leading framework for protecting sensitive data from unauthorized access, breaches, and data loss. It establishes a comprehensive management structure based on the principles of confidentiality, integrity, and availability, ensuring data is handled securely at every level.

Key ISO/IEC 27001 components

The ISO/IEC 27001 framework emphasizes:

- Organizational context: understanding specific industry risks and operational factors.
- Central information security policies: defining policies to guide security practices.
- Risk evaluation and treatment: identifying and addressing security risks effectively.
- Resource allocation: ensuring resources for maintaining and improving information security.
- Management involvement: engaging leadership in the continuous improvement of information security.

Learn more about ISO/IEC 27001 by reading our article Achieve better information security management with the revised BS EN ISO/IEC 27001.

Understanding AI risks with ISO/IEC 42001

With the growing focus on AI, ISO/IEC 42001 addresses the unique risks that AI technologies bring, providing an AIMS framework that promotes responsible AI governance across the AI lifecycle, from data collection to model deployment. The standard aids in managing AI-specific risks such as model bias, decision transparency, and unintended social impacts. Learn more about ISO/IEC 42001 by reading our article Maximizing the value of AI for society with BS ISO/IEC 42001.

Key considerations for AI security and governance

For organizations already utilizing ISO/IEC 27001, it's essential to evaluate whether:

- AI risks should be treated separately from traditional information security risks: AI introduces risks that go beyond data protection, affecting model integrity and decision-making processes.
- Existing ISO/IEC 27001 controls are sufficient: AI's unique challenges, such as model evasion and bias, may require the additional controls provided by ISO/IEC 42001.

Determining your path: ISMS, AIMS, or both?

The decision to adopt ISO/IEC 27001, ISO/IEC 42001, or a combination of both should be informed by your organization's data maturity and readiness for AI integration.
For companies with robust data governance practices, ISO/IEC 42001 may provide the added structure needed for responsible AI management, while others may benefit from starting with the foundational security measures in ISO/IEC 27001 (a simplified illustration of these considerations appears at the end of this article).

Take the next step: get your free copy of the checklist

Ready to secure your organization's data and responsibly manage AI? Our checklist walks you through these considerations, allowing you to assess your readiness and understand how each standard fits within your organization's risk management and governance strategy. Download our free AI and Information Security Standard Decision Interactive Checklist now to guide your strategy with ISO/IEC 42001 and ISO/IEC 27001.
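As a rough illustration of the shape of this decision, the considerations above could be expressed as a small helper function. The questions and the decision rule below are our own simplifying assumptions for the purpose of the example; they are not the content of BSI's interactive checklist, which should be used for a real assessment.

```python
# Simplified illustration only: these questions and this rule are assumptions
# made for the example, not the questions in BSI's decision checklist.

def suggest_standards(
    handles_sensitive_data: bool,
    has_isms_in_place: bool,             # an existing ISO/IEC 27001 programme
    develops_or_deploys_ai: bool,
    ai_specific_risks_identified: bool,  # e.g. model bias, lack of transparency
) -> list[str]:
    """Return a rough suggestion of which management system(s) to consider."""
    suggestions: list[str] = []
    if handles_sensitive_data and not has_isms_in_place:
        suggestions.append("ISO/IEC 27001 (ISMS) for foundational information security")
    if develops_or_deploys_ai and ai_specific_risks_identified:
        suggestions.append("ISO/IEC 42001 (AIMS) for AI-specific governance")
    if not suggestions:
        suggestions.append("Existing controls may suffice; reassess as AI use grows")
    return suggestions


# Usage sketch: an organization with an ISMS already in place that is now
# deploying AI with identified bias and transparency risks.
print(suggest_standards(True, True, True, True))
# -> ['ISO/IEC 42001 (AIMS) for AI-specific governance']
```

A real assessment should also weigh regulatory context, contractual obligations and organizational risk appetite, which is what the checklist is designed to walk you through.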

How to tackle hazards in AI medical devices using BS AAMI 34971

As AI becomes a more prevalent technology in medical devices, it has become clear that the associated risks need special consideration. This article charts the development history of a new, jointly produced document that answers this emerging need.

Artificial intelligence (AI) has a strong role to play in healthcare and medical devices. It can improve clinical outcomes as well as efficiency and healthcare management. That said, it can also bring its own unique risks that can jeopardize patient safety or reduce efficiency. Recognizing these issues, BSI and the US standards development organization, the Association for the Advancement of Medical Instrumentation (AAMI), have worked together to develop guidance on the use of AI as a medical technology. Initially, we held a series of joint workshops with stakeholders in the US and UK, and two whitepapers were published on the topic. One of the key outcomes of this work was a recommendation to develop "risk management guidance to assist in applying BS EN ISO 14971 to AI as a medical technology".

BS EN ISO 14971: Risk management guidance

BS EN ISO 14971 is the established international standard that provides a methodology for assessing and managing the risks associated with medical devices. It has been recognized by medical device regulators and adopted as a national standard in countries around the world. It has become the global standard used by medical device manufacturers and regulators to govern risk management, so it made sense to use it as the foundation of the new work.

It was then agreed that the new standard on applying BS EN ISO 14971 to medical technology with AI should be jointly developed and published by BSI and AAMI. A BSI drafting panel worked in conjunction with an AAMI drafting panel to agree on a common text for publication by AAMI and BSI. For simplicity, BSI is publishing this joint work as a British Standard in the UK and AAMI is delivering a Technical Information Report (TIR) for the US market. The culmination of this work in the UK is the British Standard now available as BS/AAMI 34971:2023 Application of ISO 14971 to machine learning in artificial intelligence. Guide. The ambition all along has been for this guidance to become the basis of future ISO/IEC guidance on risk management of AI as a medical technology, so it has been written with that in mind. It may also be the first of a series of jointly developed work items by AAMI and BSI.

What's in BS/AAMI 34971?

BS/AAMI 34971 provides guidance on applying a BS EN ISO 14971 risk management process to the evaluation of medical technology using artificial intelligence, and in particular machine learning. It tackles how hazards already identified in BS EN ISO 14971 could affect the safety and effectiveness of medical technology incorporating machine-learning AI. It also provides examples, suggests strategies for eliminating or mitigating the associated risks, and explores additional unique or emergent hazards and hazardous situations. It is important to state that BS/AAMI 34971 does not modify the risk management process; rather, it provides information and guidance to inform the application of BS EN ISO 14971 to AI/ML medical technology.
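As a purely illustrative aid to the guidance just described, here is a minimal Python sketch of how a hazard for a machine-learning medical device might be recorded using the general BS EN ISO 14971 vocabulary (hazard, hazardous situation, harm, risk control). The field names, the scoring scale and the example content are our own assumptions; they are not drawn from BS/AAMI 34971.

```python
from dataclasses import dataclass

# Illustrative sketch only: the fields and the 1-5 scoring scale are assumptions
# made for this example, not content from BS/AAMI 34971 or BS EN ISO 14971.

@dataclass
class MLHazardRecord:
    hazard: str                # source of potential harm
    foreseeable_sequence: str  # how the hazard could lead to a hazardous situation
    hazardous_situation: str
    harm: str
    severity: int              # 1 (negligible) to 5 (catastrophic), assumed scale
    probability: int           # 1 (improbable) to 5 (frequent), assumed scale
    risk_controls: list[str]

    @property
    def risk_score(self) -> int:
        """Simple severity-times-probability score, used here only for illustration."""
        return self.severity * self.probability


# Usage sketch for a hypothetical ML-based diagnostic aid
record = MLHazardRecord(
    hazard="Training data not representative of the deployed patient population",
    foreseeable_sequence="Model underperforms on an under-represented subgroup",
    hazardous_situation="Clinician relies on an incorrect low-risk classification",
    harm="Delayed diagnosis and treatment",
    severity=4,
    probability=2,
    risk_controls=[
        "Evaluate performance per patient subgroup before release",
        "Monitor real-world performance and define retraining triggers",
    ],
)
print(record.risk_score)  # 8
```

Whatever form such a record takes in practice, the guidance's point is that machine-learning-specific hazards are identified, evaluated and controlled within the existing BS EN ISO 14971 process rather than in a parallel one.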
What are the benefits of using BS/AAMI 34971?

The benefit of using the standard is that it will help medical AI software developers and medical device manufacturers to identify the particular hazards relating to AI that need to be considered. This will ultimately make devices more compliant and safer, plus it will help manufacturers to:

- enter new markets
- innovate more quickly
- develop their expertise and efficiency
- increase market confidence in their devices.

Ensure you are managing the AI risks in your medical devices by adding BS/AAMI 34971 to your collection today.

Discover BSI Knowledge

Over 100,000 internationally recognized standards are available for simple and flexible access with a BSI Knowledge subscription. Build your own custom collection of standards or opt for access to one of our pre-built modules and keep up to date with any relevant changes to your standards strategy. Request to learn more.

Little Book of AI: How your organization can leverage the benefits of artificial intelligence

When it comes to artificial intelligence (AI), organizations of all sizes are finding themselves at the crossroads of opportunity and complexity. Recognizing the potential AI has to transform and improve business operations, BSI has developed the Little Book of AI, a guide designed to support organizations in navigating the intricacies of AI implementation. As organizations strive to harness the transformative power of AI, the need for a comprehensive guide has never been more pressing. The Little Book of AI is an essential resource for businesses of all sizes and sectors.

Demystifying AI complexity

Artificial intelligence, with its vast array of applications, has the power to optimize processes, enhance decision-making, and elevate customer experiences. Yet the complexity of AI implementation can be daunting, leaving organizations grappling with questions about where to begin and how to ensure a seamless integration. From start-ups venturing into the AI landscape for the first time to established enterprises seeking to optimize their existing AI systems, the Little Book of AI serves as a demystifying tool. It provides user-friendly insights that resonate with businesses seeking not only to understand AI but to leverage it effectively.

Understanding the role of AI standards

The Little Book of AI also focuses on how, and which, standards can help organizations facilitate compliance, ensure safety, and uphold ethical considerations in their AI governance. It delves into the current AI standards landscape, looking at how standards such as BS EN ISO/IEC 22989:2023 Information technology. Artificial intelligence. Artificial intelligence concepts and terminology and BS ISO/IEC 25059:2023 Software engineering. Systems and software Quality Requirements and Evaluation (SQuaRE). Quality model for AI systems were developed, as well as how they are used. Businesses that understand how standards support AI implementation can position themselves for success by ensuring quality, mitigating risks, building trust and staying adaptable in a rapidly changing technological landscape. Discover more about how AI standards are supporting organizations across all sectors by visiting our Artificial Intelligence topic page.

BS ISO/IEC 42001 at a glance

As AI becomes more integrated into various sectors and applications, there is a growing need to ensure it is deployed responsibly. BS ISO/IEC 42001:2023 Information Technology — Artificial intelligence — Management system is the international AI management system standard. It guides organizations in continuously improving and iterating responsible processes customized for AI systems. It is a transformative document that can help businesses overcome many of the existing challenges with AI implementation, namely safety, security and complexity.

Navigate your organization's AI journey with confidence. Download your free copy of the Little Book of AI here.

Key Artificial Intelligence Standards

Trending Topics in Artificial Intelligence