Orrick State Attorney General Update | January 2026

State Attorneys General (AGs) are expected to remain active in 2026, with an emphasis on emerging issues surrounding artificial intelligence, consumer protection and civil rights.

New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act, groundbreaking legislation requiring safety frameworks for “AI frontier models.” Under the law, AI developers will be required to establish and publish safety frameworks for advanced AI systems and report incidents of critical harm to the state within 72 hours of discovery. The measure also creates a new oversight office within the New York Department of Financial Services tasked with annual reporting on AI safety and transparency.

Additionally, the law grants the New York Attorney General authority to bring civil actions against AI frontier developers for failing to submit required reporting or making false statements, with penalties of up to $1 million for a first violation and up to $3 million for each subsequent violation.

A bipartisan coalition of 24 state Attorneys General sent a comment letter to the Federal Communications Commission (FCC) opposing an inquiry that could lay the groundwork for federal preemption of state AI regulations. In the letter, the coalition argued that the FCC lacks authority to override state laws addressing AI and that any such preemptive action would limit states’ ability to protect consumers, especially in areas like privacy. Specifically, the state AGs argue that the FCC’s proposal would impinge on core state functions protected by the Tenth Amendment.

The state AGs further noted that states often serve as first responders to consumer complaints and emerging technology risks, highlighting the need for states to retain flexibility to regulate AI responsibly.

In November 2025, the Attorney General Alliance (AGA) launched a bipartisan Artificial Intelligence Task Force working in partnership with major AI developers such as OpenAI and Microsoft. Co-chaired by North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown, the task force aims to address emerging AI-related risks while supporting innovation by establishing an ongoing forum for monitoring and responding to AI developments, identifying new AI challenges and developing baseline safeguards—particularly to protect children.

During the inaugural meeting on January 14, 2026, AG Jackson described AI as “the issue of our time,” highlighting both its significant benefits, such as medical advancements, and its risks, including deepfakes and robocalls. He emphasized the task force’s focus on consumer protection, business innovation and public safety.

AG Brown echoed the need to balance innovation with risk mitigation, noting that while AI presents high rewards, it also carries high risks. He stressed that state AGs often have the authority to act more quickly than legislatures and, in some cases, may need to pursue litigation to address harms.

Tania Maestas, Deputy Executive and General Counsel of the AGA, facilitated a discussion among participating states, which raised shared concerns about child safety, deepfake pornography, privacy, security and confidentiality. At the same time, states recognized AI’s potential to positively transform the workforce and create business opportunities, provided it is deployed responsibly and without causing harm.

During a media question-and-answer session, AGs addressed federal efforts to preempt state action on AI, asserting that such moves raise constitutional concerns and undermine states’ authority. Both co-chairs identified specific priorities, including deepfake pornography, chatbots’ impact on children, and accountability of large technology companies for harmful content. They also discussed AI adoption within their offices, noting growing use and evolving policies, and expressed serious concern about candidate deepfakes and their potential impact on elections.