US DHS Warns of AI-Fueled Chemical and Biological Threats

New Report Urges Public-Private Collaboration to Reduce Chemical, Nuclear AI Risks
The U.S. federal government warned that artificial intelligence lowers the barriers to conceptualizing and conducting chemical or biological attacks. (Image: Shutterstock)

Artificial intelligence is lowering the barriers of entry for global threat actors to create and deploy new chemical, biological and nuclear risks, warns the U.S. Department of Homeland Security.

The department on Monday published a report on reducing risks at the intersection of AI and chemical, biological, radiological and nuclear threats, after teasing a draft in April. The report calls on Congress, federal agencies and the private sector to be "adaptive and iterative" in their AI technology governance and "to respond to rapid or unpredictable technological advancements."

"The revolutionary pace of change in the biotechnology, biomanufacturing, and AI sectors compounds existing regulatory challenges," the report states. It recommends "continued interaction among industry, government, and academia."

Current regulations and export controls fail to account for risks posed by potentially harmful nucleic acid sequences created with the assistance of AI, DHS said. Those sorts of gaps in current controls could allow threat actors to misuse AI to develop dangerous biological agents, evade detection and potentially cause widespread harm.

The report says the integration of AI into CBRN prevention, detection, response and mitigation efforts "could yield important or emergent benefits," but it also says that current regulatory limitations hinder the government's ability to properly oversee AI research, development and implementation. DHS also acknowledged in the report that the federal government "currently does not have an overarching legal or regulatory framework to comprehensively regulate or oversee AI," and it warned that various AI governance approaches could result in compliance challenges for many developers.

Among its recommendations is encouraging AI developers to voluntarily release source code and AI model weights for models used in biological or chemical research.

The report says engagement with international stakeholders such as governments, global organizations and private entities "is needed to develop approaches, principles and frameworks to manage AI risks, unlock AI's potential for good, and promote common approaches to shared challenges in light of worldwide development and spread of AI technologies."

DHS Secretary Alejandro Mayorkas said in a statement accompanying the report that it was meant "to provide longer-term objectives around how to ensure safe, secure, and trustworthy development and use" of AI technologies.


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.
