

More than 1,000 tech leaders and researchers (including Elon Musk and Apple co-founder Steve Wozniak) signed an open letter calling for an immediate pause of at least six months on the development of “giant” AI systems. While these signatories believe that AI is “going to lead to a much better world than what we can imagine today”, they also argue that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
On the other side of the argument, AI pioneers like Yann LeCun (Meta’s chief AI scientist) argue that talk of AI posing an existential threat to humanity is “preposterously ridiculous.”
While debate rages over the implications of advanced AI, one thing is for sure: there’s no going back. The focus for government, organizations and civil society now needs to be on ensuring that the technology is used responsibly.
Responsible AI is the practice of designing, developing and deploying AI in a safe, trustworthy and ethical way. Responsible AI principles are designed to influence everything from how systems are conceptualized and the purpose they are being created for, to how end-users interact with them.
This helps the users of AI solutions to understand what AI is doing, and why. In turn, it becomes easier to see whether AI is making accurate and bias-aware decisions, whether it is violating privacy, and whether it is being monitored and governed effectively.
AI optimists predict that “AI will fast become one of the most important factors of production in the global economy”, capable of “freeing up billions of hours of human labor.” But with these incredible opportunities comes a heavy responsibility.
Consumers and regulators are raising serious concerns about AI ethics, data governance, trust and legality. Many countries, including the UK, China and Brazil, are already evolving existing regulation, and the EU has voted to approve draft rules for the AI Act. Research by Accenture reveals that only 35% of consumers trust how AI is being implemented, and 77% feel that organizations should be held accountable for any misuse of AI. Failure to respond early to these concerns could put the developers and users of AI on the back foot.
Responsible AI allows organizations to reap the benefits, while minimizing the chances of eroding consumer trust or falling foul of new or pending regulation.
Many of the key figures and organizations behind AI are establishing frameworks for what responsible AI looks like. Microsoft’s Responsible AI Standard sets out a framework for building AI systems, and offers useful food for thought for other organizations looking to develop and/or deploy AI responsibly:
Fairness and inclusiveness: systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways.
Reliability and safety: systems should operate as originally designed, respond safely to unanticipated conditions and resist harmful manipulation.
Privacy and security: systems should deliver transparency around data collection, use and storage, and grant consumers appropriate controls to choose how their data is used.
Transparency: systems should be built in a way that allows users to understand how and why they function the way that they do. This makes it easier to identify performance issues, fairness issues, exclusionary practices, or unintended outcomes.
Accountability: systems should not be the final authority on any decision that affects people’s lives. The people who design and deploy AI systems must be accountable for how their systems operate.
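One way to make principles like fairness concrete is to measure them. As a purely illustrative sketch (the function, the sample data and the single “group” attribute below are hypothetical, and not drawn from any of the standards discussed here), the following computes the gap in approval rates between groups of people, a simple demographic-parity check:

```python
# Illustrative only: a minimal demographic-parity check for a binary
# "approved" decision with one sensitive attribute ("group").
# The data and threshold of acceptability are hypothetical.

def demographic_parity_gap(decisions):
    """Return the largest difference in approval rates between groups.

    `decisions` is a list of (group, approved) pairs.
    """
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions: group A approved 3/4, group B approved 1/4.
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

print(f"Approval-rate gap: {demographic_parity_gap(sample):.2f}")  # prints 0.50
```

A gap near zero suggests the two groups are treated similarly on this one metric; a large gap is a prompt for investigation, not proof of unfairness, since demographic parity is only one of several competing fairness definitions.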
BSI is at the forefront of developing both industry-specific and general standards to support the development of responsible AI. As far back as 2018, BSI consulted practitioners in the field of AI to map the standards landscape and gain an initial understanding of the need for AI standards.
The top three requirements identified by those AI experts were to aid regulatory compliance, to reduce the unintentional bias of AI models, and to improve the protection of privacy. These responses have informed standards development ever since.
More recently, BSI has become a partner in a new AI Standards Hub, which aims to ensure that as many groups as possible have input into the way artificial intelligence develops in the UK and the guidelines it will be held to in the future.
An exciting output of this is the development of a new management system standard specifically for AI. BS EN ISO/IEC 42001 Information technology — Artificial intelligence — Management system will be the first international standard to provide best practice for governing AI effectively. Due to be published later this year, it aims to build trust in the technology so that it is more widely adopted, to the advantage of organizations as well as wider society.
The following standards can also help your business to gain greater clarity and reassurance on the questions around AI (including ethics):
BS ISO/IEC 22989:2022 Information technology. Artificial intelligence. Artificial intelligence concepts and terminology
BS ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
BS ISO/IEC 38507:2022 Information technology. Governance of IT. Governance implications of the use of artificial intelligence by organizations
BS ISO/IEC 23894:2023 Information technology. Artificial intelligence. Guidance on risk management
PD ISO/IEC TR 24028:2020 Information technology. Artificial intelligence. Overview of trustworthiness in artificial intelligence
PD ISO/IEC TR 24368:2022 Information technology. Artificial intelligence. Overview of ethical and societal concerns
PD ISO/IEC TR 24027:2021 Information technology. Artificial intelligence (AI). Bias in AI systems and AI aided decision making
PD ISO/IEC TR 24029-1:2021 Artificial Intelligence (AI). Assessment of the robustness of neural networks - Overview
PD ISO/IEC TR 24372:2021 Information technology. Artificial intelligence (AI). Overview of computational approaches for AI systems
BSI has also developed Flex 236 v2.0:2023, which helps to ensure that standards-makers developing standards in fast-changing areas like AI handle data with inclusion in mind.
Over 100,000 internationally recognized standards are available for simple and flexible access with a BSI Knowledge subscription. Our tailored subscription service allows you to build your own custom collection of standards or opt for access to one of our pre-built modules, keeping you up to date with any changes. With support from a dedicated BSI account manager, our subscription service helps you achieve a more coherent and effective approach to best practice. Request to learn more.