Senators Push DOE-Led AI Risk Reviews in Ambitious New Bill

What the bill proposes

Senators Josh Hawley and Richard Blumenthal introduced the Artificial Intelligence Risk Evaluation Act, a proposal that would create a federal program to assess risks posed by advanced AI systems. The program would be housed at the Department of Energy and tasked with gathering data on potential AI disasters, including runaway systems, security breaches, and the use of models as weapons by adversaries.

How the review would work

Under the draft, developers would be required to submit models for review before widespread deployment. That marks a major departure from current industry norms that favor rapid iteration and limited external oversight. The aim is to detect vulnerabilities and dangerous failure modes early, and to collect aggregated incident data that could inform future policy and resilience efforts.

Bipartisan concern and recent precedents

One striking detail is the bipartisan nature of the effort. Hawley and Blumenthal have worked together on AI issues before, such as proposals to protect content creators from AI replicas. Their cooperation signals that concern over AI risks crosses typical party lines. The proposal also follows California’s recent AI consumer safety and transparency law, suggesting that both state and federal actors are moving to establish guardrails.

Tension with innovation priorities

The White House has warned that over-regulation could slow innovation and hurt the United States' competitive position in the global AI race, especially relative to China. Industry events and chipmakers continue to push agentic AI and advanced hardware, highlighting the tension between fostering competitiveness and ensuring safety. Lawmakers are trying to strike a balance that protects the public without unduly hampering research and product development.

What this could change

If enacted, the bill could make the Department of Energy an unexpected gatekeeper for AI safety. Mandatory pre-deployment reviews would likely reshape developer timelines, compliance processes, and how companies document model capabilities and risks. At minimum, the proposal would move AI oversight from technical discussions into formal regulatory structures and create clearer expectations for accountability.

The broader significance

This bill represents a larger shift: AI has moved from niche tech conversations to the Senate floor. Whether the bill becomes law or merely sparks further debate, it reflects growing impatience with a wait-and-see approach to emerging risks. The core question remains how to create common-sense oversight that prevents large-scale harm while allowing beneficial innovation to continue.