Bloomberg Law ©2024 Bloomberg Industry Group, Inc.
Addressing Transparency & Explainability
When Using AI Under Global Standards
Contributed by Arsen Kourinian, Mayer Brown
Editor's Note: For additional guidance on practice-specific areas of risk associated with the use of generative and
other forms of AI, see our AI Legal Issues Toolkit. For additional information on laws, regulations, guidance, and
other legal developments related to AI, visit In Focus: Artificial Intelligence (AI).
Global artificial intelligence (AI) principles, laws, and guidelines often require the transparent use of AI
and an explanation of how it works. Commonly referred to as “transparency” and “explainability,” these
principles are important components organizations should consider as part of their AI governance plan.
To address these principles, organizations developing or using AI should consider drafting a public-facing
AI notice, using plain and easy-to-understand language that incorporates common requirements
observed under global guidelines and laws.
What Are Transparency & Explainability?
Global AI standards often group transparency and explainability together because they are
interconnected terms. Transparency answers the question “what happened” in the AI system, while
explainability addresses “how” a decision was made using AI. See the National Institute of Standards and
Technology's (NIST) Artificial Intelligence Risk Management Framework (AI RMF 1.0).
Organizations are transparent when they provide meaningful information so that individuals are aware
that they are interacting with AI (e.g., chatbots), content was AI-generated (e.g., output of generative AI),
or a decision was made about the individual using AI (e.g., decision to invite a candidate for an interview).
See the Organisation for Economic Co-operation and Development's (OECD) Recommendation of the
Council on Artificial Intelligence.
Explainability means that organizations should provide individuals with a plain-language explanation of
the AI system's logic and decision-making process so that individuals know how the AI generated the
output or decision. See the White House's Blueprint for an AI Bill of Rights: Making Automated
Systems Work for the American People.
Why Are Transparency & Explainability Important?
Organizations should provide individuals impacted by AI systems with a transparency and explainability
notice for several reasons.
First, an AI notice helps build the public's trust and confidence in an organization's AI systems by
empowering individuals to better understand those systems and by providing a mechanism to exercise
rights, challenge AI decisions, and seek appropriate recourse. See the UK Information Commissioner's
Office's Explaining Decisions Made With AI. An AI notice also fosters an open relationship with the
public by giving the organization an opportunity to solicit external feedback on how its AI systems
function in real-world environments, detect model drift, and establish bug bounty programs to help
improve the AI systems.
Next, transparency and explainability are widely recognized AI principles necessary for trustworthy AI. For
example, the OECD includes transparency and explainability as one of its five values-based principles.
See OECD AI Principles Overview. With countries on six continents committed to the OECD's AI
principles, including the G7 (US, UK, Japan, Canada, France, Germany, and Italy), transparency and
explainability will be important components of future AI regulations in major jurisdictions. See Hiroshima
Process International Guiding Principles for Organizations Developing Advanced AI Systems.