
Artificial intelligence has generated significant hype, and many companies are betting on AI and other automated systems for increased growth and efficiency; however, AI is also drawing scrutiny from multiple federal agencies. On April 25, 2023, the Federal Trade Commission (FTC), the Civil Rights Division of the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the Equal Employment Opportunity Commission (EEOC) issued a joint statement on enforcement efforts related to automated systems. The joint statement recognizes that automated systems, which include AI, are increasingly commonplace and are used by private and public entities to automate workflows, and it makes clear that enforcers will be monitoring closely for violations of law.

While the agencies issued the joint statement for informational purposes only, it highlights that artificial intelligence is a key focus for these federal agencies. The FTC, DOJ, CFPB, and EEOC warn that existing laws and regulations apply to the use of innovative technologies such as automated systems just as they do to other, established technologies. These agencies also assert their responsibility for enforcing laws and regulations governing fair competition, consumer protection, non-discrimination, and civil rights as they apply to automated systems.

As referenced in the joint statement, each federal agency has already raised concerns about potentially harmful and illegal uses of automated systems. The FTC, with both consumer protection and competition enforcement authority, has warned market participants that using AI tools with a discriminatory impact, or making unsubstantiated claims about AI, may violate the FTC Act. Developments in technology have produced generative AI products and synthetic media (e.g., chatbots, voice clones, and deepfakes), and the risk is that these products can be used for deceptive or unfair conduct. The FTC has warned that the prohibition on deceptive or unfair conduct can apply to creators of tools that are “effectively designed to deceive,” even if deception is not the sole or intended purpose of those products. The FTC also pointed to previous actions it has taken against companies, including requiring the deletion of algorithms trained on improperly collected data in addition to deletion of the underlying data itself.

In a circular, the CFPB put companies on notice that using complex algorithms that prevent creditors from identifying the specific reasons for denying credit does not excuse those creditors from complying with the Equal Credit Opportunity Act’s requirement to provide applicants with statements of specific reasons for adverse action. The EEOC is similarly focused on discrimination related to automated systems; companies increasingly use AI and other technologies to make employment-related decisions about job applicants, an area of focus for the EEOC. The DOJ’s Civil Rights Division, which has a broad remit that includes enforcement of laws involving discrimination in housing, lending, education, voting, and more, is focused on discrimination issues in these areas, often in conjunction with other agencies.

While each federal agency enforces different laws, they share concerns about data inputs, model opacity, and how automated systems are designed and used. Data is a key input to automated systems, and potential issues include unrepresentative datasets, errors in datasets, and correlations between data and protected classes. Multiple agencies have raised concerns about the “black box” workings of automated systems and the problems that a lack of transparency creates, particularly when consumer protection and other laws require explanations. Finally, the agencies note that developers do not always account for how automated systems will actually be used, which can result in systems built on flawed assumptions.

Key Takeaways

Many companies are eager to embrace the potential of AI and other automated systems, but they must comply with applicable law when doing so. Although the United States has yet to enact a federal law specifically regulating artificial intelligence, that will not prevent federal agencies from applying existing laws to these new technologies. In addition, many state attorneys general may seek to apply state consumer protection and other laws to these technologies. Companies need to assess whether they should be developing such products at all and ensure that they have mitigated the risks by “taking all reasonable precautions” before these products reach the market. The FTC’s expectations serve as a warning to the many companies embroiled in an AI arms race that compliance with applicable consumer protection and competition laws must be considered at every stage of development. The Buchanan team can assist with ensuring compliance with applicable consumer protection and competition laws and regulations.