The Trump administration is taking a pro-innovation approach to AI adoption, meaning that new federal AI laws and regulations are unlikely (at least in our view). But does that mean the compliance burden on companies using AI will decrease?
Not quite. Despite the likelihood of decreasing federal oversight, the compliance burden for companies adopting artificial intelligence (AI) is actually growing, particularly in the area of bias testing. Companies are already required to conduct rigorous assessments of their AI models to ensure they do not discriminate against protected groups. These obligations are becoming more formalized in state regulations, not to mention the EU AI Act, signaling that compliance burdens will only intensify for multinational companies later this year.
For example, businesses are increasingly citing AI-related risks and compliance efforts in their SEC Form 10-K filings, reflecting the growing recognition that AI governance is a material legal and financial concern.
In California, forthcoming regulations under the California Consumer Privacy Act (CCPA) will impose new obligations on companies using automated decision-making technologies (ADMT). These rules will likely require businesses to provide consumers with pre-use notices, disclose their bias-testing methodologies, and offer opt-out rights. These developments track broader national and international trends, most notably the EU AI Act, which carries staggering fines of up to 7 percent of global revenue for noncompliance.
Importantly, these requirements are not just advisory—they are quickly becoming enforceable legal obligations. Laws like New York City’s Local Law 144 already mandate annual bias audits for AI-driven hiring tools, and similar legislation is expected in other jurisdictions. Companies that fail to comply may face regulatory enforcement actions, lawsuits, and reputational damage.
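To give a sense of what a bias audit involves in practice, the short sketch below illustrates the kind of selection-rate and impact-ratio statistics that audits of automated hiring tools commonly report. It is a hypothetical illustration only, with made-up category names and data, not the methodology prescribed by Local Law 144 or any other regulation.

```python
# Hypothetical sketch of an impact-ratio calculation of the kind reported in
# bias audits of automated hiring tools. Categories and data are made up;
# a real audit follows the methodology set out in the applicable regulation.
from collections import defaultdict

# Each record: (demographic category, whether the tool selected the candidate)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

# Selection rate per category: number selected divided by total assessed.
counts = defaultdict(lambda: [0, 0])  # category -> [selected, total]
for category, selected in decisions:
    counts[category][0] += int(selected)
    counts[category][1] += 1

selection_rates = {c: sel / total for c, (sel, total) in counts.items()}

# Impact ratio: each category's selection rate divided by the highest rate.
# Ratios well below 1.0 flag potential adverse impact worth investigating.
highest = max(selection_rates.values())
impact_ratios = {c: rate / highest for c, rate in selection_rates.items()}

for category in selection_rates:
    print(f"{category}: selection rate {selection_rates[category]:.2f}, "
          f"impact ratio {impact_ratios[category]:.2f}")
```

Even a simple calculation like this must be repeated annually under Local Law 144, documented, and summarized publicly, which is where much of the practical compliance burden lies.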
The key takeaway? AI compliance burdens are not plateauing; they are accelerating. Businesses deploying AI must prepare now for an era of heightened legal scrutiny, where failure to conduct proper bias testing and implement transparency measures could result in significant legal and financial repercussions.