
The Federal Government Should Not Waste the Opportunity to Address Algorithmic Discrimination

The Federal Trade Commission should adopt binding rules to identify and prevent algorithmic discrimination.
The Federal Trade Commission building in Washington.
Marissa Gerchick,
she/her/hers,
Data Scientist and Algorithmic Justice Specialist,
ACLU
December 6, 2022

Automated decision-making systems are increasingly being used to make important decisions in key areas of people's lives. The vast majority of large employers in the U.S. use software to evaluate job applicants, and once hired, similar software is increasingly used to continuously monitor and evaluate workers. Landlords use screening services, which are often powered by automated systems, to evaluate potential tenants.

These automated tools are a prevalent and entrenched part of our modern economic and social systems, and there is clear evidence that they can create and enable pervasive harm. For example, the automated tools frequently used by landlords to screen potential tenants are notoriously error-prone, shutting people out of housing, with Black and Latinx renters disproportionately affected. In the context of employment, the ACLU and other civil rights and technology organizations have repeatedly raised concerns about hiring discrimination facilitated or exacerbated by these technologies. There are many examples of algorithmically driven or amplified racial discrimination, among other forms of discrimination, in this area. In financial services, discriminatory automated systems are regularly used in high-stakes areas, impacting people's ability to access credit and other financial opportunities.


It is clear that relying on the voluntary efforts of companies building and deploying automated decision-making systems has not been, and will not be, sufficient to address these harms. That's why, last month, the ACLU responded to the Federal Trade Commission (FTC)'s request for comment on harms stemming from commercial surveillance and lax data security practices. We are calling on the commission to adopt binding rules to identify and prevent algorithmic discrimination.

These problems are multi-faceted. Automated decision-making systems are often built and deployed in ecosystems and institutions already marked by entrenched discrimination, including in health care, the criminal legal system, and the family regulation system. Built and evaluated by humans, automated decision-making tools are often developed using data that reflects systemic discrimination and abusive data collection practices. Attempting to predict outcomes based on this data can create feedback loops that further systemic discrimination. These compounding issues can rear their heads throughout an automated decision-making system's design and deployment. Moreover, these systems are often operated and deployed in such a way that impacted individuals and communities might not even know that they are interacting with these systems, let alone how they work, yet could still be materially affected by the system's decision-making process and errors.
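To make the feedback-loop point concrete, the following minimal Python sketch is purely illustrative: the data is synthetic, the numbers are made up, and it is not drawn from any real screening product. It shows how a model trained on the outcomes of historically biased decisions reproduces the disparity for new applicants, and how the model's own decisions, if reused as training data, can compound that gap.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)   # two demographic groups, labeled 0 and 1
    skill = rng.normal(0, 1, n)     # true qualification, identically distributed in both groups

    # Historical decisions: equally qualified people in group 1 were approved less often.
    p_approve = 1 / (1 + np.exp(-(skill - 1.0 * group)))
    approved = rng.random(n) < p_approve

    # A model fit to those historical outcomes learns the group penalty as a "predictive" signal.
    model = LogisticRegression().fit(np.column_stack([group, skill]), approved)

    # Scoring new applicants from the same population reproduces the disparity.
    new_group = rng.integers(0, 2, n)
    new_skill = rng.normal(0, 1, n)
    pred = model.predict(np.column_stack([new_group, new_skill]))
    for g in (0, 1):
        print(f"group {g}: predicted approval rate = {pred[new_group == g].mean():.2f}")

    # If the model's own approvals become the next round of training data,
    # the disparity is fed back in and can widen over successive retrainings.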


The commission should establish tailored requirements for companies to undergo independent external audits of their automated decision-making systems. As highlighted in our comment, the commission can adopt these rules to prevent discrimination without contravening the First Amendment or Section 230 of the Communications Decency Act. The commission should consider requiring companies to adopt a comprehensive auditing framework to govern the use of automated decision-making systems and set clear standards for that framework. That's because the harms of automated decision-making systems are best assessed as part of holistic efforts to understand the selection, design, and deployment of such tools.

These efforts should include engagement with people directly impacted by the deployment of automated decision-making systems. Companies should be required to undergo evaluations of the potential risks and harms of their algorithmic systems both before they are built and continuously if they are deployed. When these evaluations demonstrate the potential for algorithmic bias or other harms, companies can and should decommission or terminate the tools. To promote objectivity in evaluating algorithmic systems, these assessments should be carried out by independent external auditors who are provided with meaningful access to internal company systems under appropriate privacy controls.
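As one hedged illustration of what a narrow slice of such an evaluation could look like in practice, the short Python sketch below compares selection rates across groups and computes a disparate impact ratio on a made-up audit sample. The metric choice, the toy data, and the reference to the "four-fifths" heuristic are assumptions chosen for the example, not standards the commission has adopted.

    def selection_rates(decisions, groups):
        """Share of favorable decisions for each group."""
        rates = {}
        for g in set(groups):
            outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
            rates[g] = sum(outcomes) / len(outcomes)
        return rates

    def disparate_impact_ratio(rates):
        """Lowest group selection rate divided by the highest."""
        return min(rates.values()) / max(rates.values())

    # Hypothetical audit sample: 1 = favorable decision (e.g., an application advanced).
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
    groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

    rates = selection_rates(decisions, groups)
    print(rates)                                    # roughly {'A': 0.67, 'B': 0.17}
    print(round(disparate_impact_ratio(rates), 2))  # 0.25, far below the 0.8 "four-fifths" heuristic

A real audit would go well beyond a single summary statistic, which is why the comment emphasizes holistic review of a system's selection, design, and deployment, with meaningful access for independent auditors.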


To enact these new rules, the commission should also collaborate with external researchers, advocacy organizations, and other government agencies. For example, the ACLU has previously called for the commission to collaborate with other civil rights agencies to address technology-driven housing discrimination and employment discrimination. New requirements established by the commission can and should co-exist with standards and guidance currently being developed by other agencies, such as the National Institute of Standards and Technology's (NIST) AI Risk Management Framework. The ACLU also recently provided recommendations to NIST to ensure that the AI Risk Management Framework centers impacted communities in efforts to address the harms of AI systems.

In a digital age, protecting our civil rights and civil liberties demands that we address the harms of algorithmic systems. Strong protections that address algorithmic discrimination have the potential to benefit all people and can make AI systems work better for everyone. The commission should act now to provide much-needed protection from the harms of automated decision-making systems.
