As US states begin to respond to growing voter concerns about the risks associated with the use of artificial intelligence, Republican Senator Marsha Blackburn of Tennessee said progress on a federal preemption standard is “inevitable.”
Earlier this week, California Governor Gavin Newsom signed a series of bills aimed at these concerns, requiring safeguards around chatbots, warning labels about the mental health risks of social media apps, and age-verification tools in device makers' app stores, while vetoing some of the stricter AI measures lawmakers had hoped for.
Utah and Texas have also enacted laws implementing AI safeguards for minors, and other states have indicated that similar regulations are in the offing.
“The reason the states have stepped in, whether it's to protect consumers or to protect children, is because the federal government has not been able to pass any federal preventive legislation to date,” Blackburn said Wednesday at the CNBC AI Summit in Nashville. “We have to keep an eye on the states until Congress says no to the big tech platforms.”
Blackburn has long been a supporter of legislation around children's online safety and the regulation of social media, introducing the Kids Online Safety Act in 2022, which aims to establish guidelines to protect minors from harmful material on online platforms. The bipartisan legislation passed the Senate overwhelmingly, and Blackburn said that while major tech companies have worked to stop it in both chambers, “We are hopeful that the House will take it up and pass it.”
The concerns the bill was intended to address on social media have now emerged in tandem with the rise of AI, Blackburn said.
Senator Marsha Blackburn (R-TN) speaks during a rally hosted by Accountable Tech and Design It For Us to hold technology and social media companies accountable for taking steps to protect children and teens online, on January 31, 2024, in Washington, D.C.
“One of the things we've heard from so many people involved in this is that you need to have an online consumer privacy protection law so that people have the ability to set up those firewalls and protect the virtual you, as I call it,” she said, adding that “once an LLM has the scoop [your data and information], then they use that to train that model.”
Blackburn is also focused on several other ways to protect the information AI uses, including a bill that addresses how AI can use a person's name, image or likeness without consent.
“We need to have a way to protect our information in the virtual spaces just as we do in the physical space,” she said.
With the rapid advancement of AI, Blackburn said regulation will need to focus on “end-use practices and legislate that framework in that way rather than focusing on a particular delivery system or technology.”
That also means responding to the ways AI companies are changing their products. Earlier this week, OpenAI CEO Sam Altman said the company will be able to “safely relax” most restrictions now that it has been able to mitigate “serious mental health issues,” adding that the company is “not the world's chosen moral police.”
Blackburn said lawmakers are increasingly hearing from “parents who know what's happening to their children and they can't un-experience or un-see something that they've experienced with these chatbots or in the virtual world or the metaverse.”
“I've talked to so many people now who say kids won't get a cell phone until they're 16, and a lot of parents think that's like driving a car,” she said. “They are not going to allow their children to have that because we as a society need to put in place rules and laws that protect children and minors.”