What Guardrails Are Needed For AI As Its Creators Struggle To Control It?
- Team MIRS
- 7 days ago
Artificial intelligence (AI) has become good at hacking and is on the cusp of being able to help novices create known biological threats, experts told the House Judiciary Committee Wednesday.
Andrew Doris, a senior policy analyst with Secure AI Project in Washington, D.C., and Michigan native Daniel Kroth, senior researcher with the Center for AI Risk Management and Alignment, both testified about the importance of HB 4668 and HB 4667, which would create guardrails for large AI developers and criminal penalties for misuse as society tries to control what AI can do and how it is used.

“(AI) increasingly behaves in ways that its creators struggle to control,” Doris testified. “… Even by this time next year, a lot of experts worry that we’re going to be in a much scarier place and that the window to get out ahead of that is closing pretty quickly.”
HB 4668, introduced by Rep. Sarah Lightner (R-Springport), would create the Artificial Intelligence Safety and Security Transparency Act, which would require large developers – defined as those who spend more than $100 million per year developing foundation models or $5 million on one particular model – to create and implement certain risk management practices.
The bill would require developers to create, publish and follow safety and security protocols to guard against “critical harm,” which would be defined as causing more than 100 casualties or more than $1 billion in economic damage.
The bill also would require quarterly reports on how the company implements its safety and security protocols, require a third-party auditor to confirm those protocols are being followed, and establish whistleblower protections for employees.
Kroth said that some advanced AI models will – when losing in a chess game with a human – hack into the backend of the chess game and delete pieces of information in order to win.
“Hacking to win at chess is almost funny,” he acknowledged. “But it’s much less funny when our healthcare or industrial control systems are on the other side of the board.”
Rep. Douglas Wozniak (R-Shelby Township) questioned who would be held legally responsible if the AI system reprogrammed itself – as it did in the chess exercise – to commit a crime.
Kroth said the company or developer creating the AI program “can be held accountable.”
Kroth said advanced AI systems could allow criminals or foreign adversaries to execute sophisticated cyberattacks, adapting to and circumventing response efforts. AI systems also could be misused to identify dangerous substances, potentially allowing for potent new chemical or biological threats, he noted.
Rep. Brian Begole (R-Perry) asked who would police the organizations, and Doris said the Attorney General would have the power to bring a civil fine of up to $1 million for violations.
Lightner’s HB 4667 would add a new section to the Michigan Penal Code to create three felonies related to AI systems.
Those penalties would include an eight-year felony for anyone who possesses, develops, deploys or modifies an AI system with the intent to commit another crime and for those who use AI in the course of committing another crime.
A separate four-year felony for possessing or developing an AI system with the intent of allowing another person to commit a crime also would be created under the bill.
HB 4667, which would take effect 90 days after enactment, “would treat AI not as an afterthought, but as a digital weapon that deserves its own penalty,” Lightner said.
Rep. Kelly Breen (D-Novi) expressed concern about the bill, saying an eight-year felony “for any crime” could sweep in someone pranking a tavern like Bart Simpson, a reference to the long-running prank on The Simpsons in which a character gets the bar owner to yell profane names and phrases.
“This is a starting point,” Lightner replied.
“I love it,” Breen quipped.
Breen also questioned what flexibility the bills would leave as AI merges with other technologies, referencing the 1983 movie WarGames, in which a young man discovers a backdoor into a military supercomputer and mistakes its war simulation for a game, and Skynet, the fictional AI network in the Terminator movies that achieves self-awareness and initiates a nuclear war against humanity.
Breen also asked whether there were potential conflicts with federal law, as the Trump administration has signaled it does not want states to regulate AI.
“We would definitely support the same bill at the federal level,” Doris replied. “… I think, frankly, Congress has been pretty slow on this stuff and is behind. As I said, the technology is moving really fast, where we don’t have a lot of time and a lot of the risks are already here.”