The Consumer Financial Protection Bureau (CFPB), the Department of Justice's Civil Rights Division (DOJ-CRD), the Federal Trade Commission (FTC), and the Equal Employment Opportunity Commission (EEOC) recently issued a joint statement concerning enforcement efforts to address discrimination and bias in automated systems (often referred to as "artificial intelligence" or "AI").
The federal agencies note that automated systems, often marketed as AI, are becoming increasingly common. Automated systems include software and algorithms "that are used to automate workflows and help people complete tasks or make decisions."
The agencies caution that the use of automated systems "has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes."
The CFPB has stated that "federal consumer financial laws and adverse action requirements apply regardless of the technology being used" to make credit decisions and that technology being "too complex, opaque, or new is not a defense."
The DOJ-CRD "recently filed a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services."
The EEOC has issued a technical assistance document on how using automated systems to make employment-related decisions interacts with the Americans with Disabilities Act (ADA).
Finally, a report issued by the FTC explores how "AI tools can be inaccurate, biased, and discriminatory by design." The FTC also warned that using automated tools could violate the FTC Act.
Potential discrimination risks associated with the use of automated systems include discriminatory outcomes from correlating data with protected classes or incorporating historical bias in datasets; a lack of transparency concerning the fairness of the system; and developers failing to account for how the system will be used in practice. "Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems," www.eeoc.gov.