
Report: Agency Needed for 'Extinction-Level' AI Threat

By    |   Friday, 15 March 2024 09:51 PM EDT

A consulting firm commissioned by the U.S. State Department published a report this week recommending the creation of a new government agency to mitigate the impending threat posed by artificial intelligence.

According to Gladstone AI, government involvement in the development of AI is needed to prevent "urgent and growing risks to national security" that could eventually result in an "extinction-level threat to the human species."

Titled "An Action Plan to Increase the Safety and Security of Advanced AI," the 250-page report was commissioned by the State Department just prior to the release of ChatGPT, which was the first interaction many Americans had with publicly available artificial intelligence.

Gladstone AI observed the public's interaction with ChatGPT and drew several conclusions, particularly regarding the next stage of AI evolution. AGI, or artificial general intelligence, is described in the report as a "transformative technology with profound implications for democratic governance and global security."

AGI is an advanced AI system that can "outperform humans across all economic and strategically relevant domains, such as producing practical long-term plans that are likely to work under real world conditions."

The nightmare scenario the report envisions would be the "loss of control" — "a potential failure mode under which a future AI system could become so capable that it escapes all human effort to contain its impact."

Gladstone AI recommended to the State Department the creation of a new federal agency not only to control AI research but also to limit the amount of computing power that can be used in any given AI system. The authors of the report contend "frontier" companies will engage in reckless tactics to remain competitive.

"Frontier AI labs face an intense and immediate incentive to scale their AI systems as fast as they can. They do not face an immediate incentive to invest in safety or security measures that do not deliver direct economic benefits, even though some do out of genuine concern," the report said.

In May 2023, 300 signatories, including OpenAI CEO Sam Altman, signed a public statement alerting the public to the dangers of AI. It read, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
