
NIST Publishes Standards Aimed at Beginning to Address Bias in AI

on Friday, 22 April 2022 in Technology & Intellectual Property Update: Arianna C. Goldstein, Editor

As the functionality and widespread use of AI increases, concerns about bias in the outcomes and predictions of these systems persist.  Over the past several years AI has gained traction and is now used for prediction, classification, and recommendation.  However, analysis of these systems shows a pressing need to address bias, which is vital to their continued adoption and use.  The National Institute of Standards and Technology (NIST) issued a Special Publication aimed at identifying bias issues facing AI and establishing the first step toward “detailed socio-technical guidance for identifying and managing AI bias.”  The publication describes three categories of bias in AI and discusses challenges and guidance in addressing these biases.

Categories of Bias in AI

NIST explains that bias in AI may occur at various stages of the AI lifecycle, including the commission, design, development, and deployment of the AI.  A bias introduced at one stage of the AI lifecycle has a propensity to propagate as the AI moves to the next stage.  NIST identified the following categories of bias in AI.

1. Systemic Bias

Systemic bias results from the procedures and practices of particular institutions that operate in ways that advantage or favor certain social groups and disadvantage or devalue others, whether intentionally or unintentionally.  Systemic bias may manifest in AI through the datasets used to train models or the institutions in which AI is deployed.

2. Statistical and Computational Bias

Statistical and computational bias results from errors that arise when the sample is not representative of the population in which the solution is deployed.  This bias manifests in the datasets and algorithmic processes used to develop AI.  For example, it may emanate from heterogeneous datasets, oversimplification of complex data in mathematical representations, over- or under-fitting of algorithms, and the treatment of outliers.
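To make the sampling point concrete, the sketch below (not drawn from the NIST publication) shows one simple way a representativeness gap can be surfaced: comparing the make-up of a training sample against the population where the system will be deployed.  The group names and proportions are hypothetical.

```python
# Minimal sketch: compare training-sample group shares against an assumed
# deployment-population breakdown. Groups and numbers are hypothetical.

from collections import Counter

def representation_gap(training_groups, population_shares):
    """Return, per group, training-sample share minus deployment-population share."""
    counts = Counter(training_groups)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

# Hypothetical deployment population: 50% group A, 30% group B, 20% group C.
population_shares = {"A": 0.50, "B": 0.30, "C": 0.20}

# Hypothetical training sample that over-represents group A.
training_groups = ["A"] * 70 + ["B"] * 20 + ["C"] * 10

for group, gap in representation_gap(training_groups, population_shares).items():
    print(f"group {group}: {gap:+.2f} share vs. deployment population")
```

A large positive or negative gap for a group is one signal, among many, that a model trained on the sample may not behave as intended once deployed.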

3. Human Bias

Human bias results from systematic errors in human thought that arise when people rely on a limited number of heuristic principles and reduce complex judgments to simpler operations.  Bias in this category is often implicit and can therefore affect all aspects of the AI lifecycle.  While heuristic decision making plays an important role in everyday decisions, because it may involve implicit biases it can broadly affect and propagate within an AI system’s lifecycle.

To address and eventually mitigate bias in AI, NIST proposes a socio-technical approach.  Rather than relying on purely technical and statistical solutions, a socio-technical approach holds that an AI system should be considered and evaluated within the broader societal structure in which it operates, and that this context should inform the technical solutions to the bias.  Applying this approach to AI features and outcomes provides a better understanding of how those features and outcomes are both functions of and influences on society.

The NIST publication provides guidance on three aspects of AI where this approach may be deployed: datasets; testing, evaluation, validation, and verification (TEVV) of models; and human factors.  At a high level, the guidance calls for assessing the technical aspects of the AI’s functionality while keeping the desired goal and impact of the AI as deployed in focus, so that the goal and impact inform the technical changes made to the AI.
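As one illustration of what a TEVV-style check might look like in practice, the sketch below (not taken from the NIST publication) evaluates a model’s accuracy separately for each group present in the deployment context rather than only in aggregate, so that technical performance can be weighed against the goals of the deployed system.  The labels, predictions, and group memberships are hypothetical.

```python
# Minimal sketch: break out accuracy by group rather than reporting only an
# aggregate figure. All data and group labels are hypothetical.

from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy broken out by group, plus the overall figure."""
    correct = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        correct[group] += int(truth == pred)
    by_group = {g: correct[g] / totals[g] for g in totals}
    overall = sum(correct.values()) / sum(totals.values())
    return by_group, overall

# Hypothetical labels, predictions, and group membership.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "B", "B", "A", "B", "B", "A"]

by_group, overall = per_group_accuracy(y_true, y_pred, groups)
print("overall accuracy:", overall)
print("accuracy by group:", by_group)
```

A respectable overall number can mask markedly weaker performance for a particular group, which is the kind of gap a socio-technical evaluation is meant to surface and weigh against the system’s intended use.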

A copy of the NIST Special Publication is available here.
