FTC Issues Rulemaking Notice for Privacy, Security and Artificial Intelligence
On December 10, the Federal Trade Commission (“FTC”) issued an Advance Notice of Proposed Rulemaking (the “Notice”), stating that it was “considering initiating a rulemaking…to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.”
The effort could lead to "clear market-wide requirements" addressing "harms that can result from commercial surveillance and other data practices," FTC Chair Lina Khan announced in a letter to Sen. Richard Blumenthal (a copy of which Sen. Blumenthal's office released to media outlets). "Rulemaking may prove a useful tool to address the breadth of challenges and harms that can result from commercial surveillance and other data practices," Khan wrote. "Critically, rules could establish clear market-wide requirements and address potential harms on a broader scale."
As previewed by the Notice, there are a number of privacy, security and artificial intelligence issues that the FTC may seek to regulate. For example, in an April 2021 release, the FTC warned that artificial intelligence may reflect existing racial bias and, when deployed in "medicine, finance, business operations, media" and other sectors, "inadvertently introduc[e] bias or other unfair outcomes." In addition, in a series of resolutions passed earlier this year, the FTC stated that algorithmic and biometric bias would be a focus of enforcement actions. The Notice builds on this focus, with its reference to "unlawful discrimination" likely signaling rulemaking directed at artificial intelligence. Separately, since 2002 the FTC has brought nearly 100 cases against companies that engaged in unfair or deceptive practices involving inadequate protection of consumers' personal data. The rulemaking may also clarify what constitutes "reasonable and necessary" cybersecurity measures.