On 25 April 2023, the U.S. Justice Department’s Civil Rights Division, the Consumer Financial Protection Bureau (CFPB), the Equal Employment Opportunity Commission (EEOC) and the Federal Trade Commission (FTC) released a joint statement on enforcement efforts against discrimination and bias in automated systems (AI). In it, they reaffirmed their responsibility for enforcing civil rights, non-discrimination, fair competition, consumer protection and other fundamental legal protections.
Agencies’ actions
- The CFPB supervises, sets rules for, and enforces numerous federal consumer financial laws, and protects consumers in the financial marketplace from unfair, deceptive, or abusive acts or practices and from discrimination. It has published a Circular confirming that these laws and their adverse action requirements apply regardless of the technology used, and that the complexity, opacity, or novelty of a credit decision-making system is no defence for violating them.
- The Justice Department’s Civil Rights Division enforces constitutional provisions and federal statutes prohibiting discrimination in many areas of life, e.g. education, the criminal justice system, employment, housing, lending, and voting.
- The EEOC enforces federal laws that make it illegal for an employer, union, or employment agency to discriminate against an applicant or employee on the basis of protected attributes – within the meaning of direct discrimination. It has issued a technical assistance document explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about applicants and employees.
- The FTC protects consumers from deceptive or unfair business practices and unfair methods of competition across most sectors of the U.S. economy by enforcing the FTC Act and numerous other laws and regulations. The FTC has warned market participants that they may violate the FTC Act by using automated tools that have discriminatory impacts, making unsubstantiated claims about AI, or deploying AI before taking steps to assess and mitigate risks. It has also required firms to destroy algorithms or other work products trained on data that should not have been collected.
Discrimination sources
In addition to biased data and datasets, and the “black box” opacity that makes models hard to access and scrutinise, the statement also points to a mismatch between what the application context requires and developers’ understanding of it, as well as flawed assumptions about users, relevant context, practices and procedures. In my view, these last sources of bias are the hardest to resolve: communication among developers at each stage of an AI system’s life cycle extends to many actors across the design, development and deployment chain, in some cases spread over several countries.
Sources:
https://www.justice.gov/crt/page/file/1581491/download
For more news about discrimination in AI systems on AstraIA Gear: https://www.astraiagear.com/2023/02/07/can-sensitive-data-cease-discrimination-in-ai-based-systems/
For more short news, follow our Page on LinkedIn, cheers!