Washington:
The Biden administration is poised to open a new front in its efforts to protect U.S. AI from China and Russia, with tentative plans to place guardrails around the most advanced AI models, Reuters reported on Wednesday.
Government and private sector researchers worry that U.S. adversaries could use the models, which collect vast amounts of text and images to summarize information and generate content, to launch aggressive cyberattacks or even create powerful biological weapons.
Here are some of the threats posed by AI:
DEEPFAKES AND DISINFORMATION
Deepfakes – realistic but fabricated videos created by AI algorithms trained on abundant online footage – are popping up on social media, blurring fact and fiction in the polarized world of American politics.
While such synthetic media have been around for several years, they have gained momentum in the past year with a slew of new “generative AI” tools such as Midjourney, which make it cheap and easy to create convincing deepfakes.
Image creation tools powered by artificial intelligence from companies such as OpenAI and Microsoft can be used to create photos that could promote election- or voting-related disinformation, even though each company has policies against creating misleading content, researchers said in a report in March.
Some disinformation campaigns simply exploit AI's ability to mimic real news articles as a means to spread false information.
Although major social media platforms such as Facebook, Twitter and YouTube have made efforts to ban and remove deepfakes, their effectiveness in policing such content varies.
For example, last year a Chinese government-controlled news site using a generative AI platform pushed a previously circulated false claim that the United States is running a laboratory in Kazakhstan to create biological weapons for use against China, the Department of Homeland Security (DHS) said in its 2024 homeland threat assessment.
National security adviser Jake Sullivan, speaking Wednesday at an AI event in Washington, said there are no easy solutions to the problem because it combines the capacity of AI with “the intent of state and non-state actors to use disinformation on a massive scale, to disrupt democracies, to promote propaganda, to shape world perception.”
“Right now the offense is beating the defense big time,” he said.
BIOWEAPONS
The U.S. intelligence community, think tanks and academics are increasingly concerned about the risks posed by foreign bad actors gaining access to advanced AI capabilities. Researchers from Gryphon Scientific and Rand Corporation noted that advanced AI models can provide information that could help create biological weapons.
Gryphon studied how large language models (LLMs) – computer programs that draw from vast amounts of text to generate answers to questions – can be used by hostile actors to wreak havoc in the life sciences domain and found that they can “provide information that can assist a malicious actor in creating a biological weapon by providing useful, accurate and detailed information every step of the way.”
For example, they found that an LLM could provide postgraduate-level knowledge to solve problems when working with a virus that can cause pandemics.
Rand's research showed that LLMs could help plan and execute a biological attack. They found that an LLM could, for example, propose methods for aerosol delivery of botulinum toxin.
CYBER WEAPONS
In its 2024 homeland threat assessment, the DHS said cyber actors will likely use AI to “develop new tools” to “enable larger, faster, more efficient, and more evasive cyberattacks” against critical infrastructure, including pipelines and railroads.
China and other adversaries are developing AI technologies that could undermine U.S. cyber defenses, the DHS said, including generative AI programs that support malware attacks.
Microsoft said in a February report that it had tracked hacking groups linked to the Chinese and North Korean governments, as well as Russian military intelligence and Iran's Revolutionary Guards, as they tried to perfect their hacking campaigns using large language models.