Center for New American Security: “The arrival of ChatGPT in November 2022 sparked both great excitement and fear around the world about the potential and risks of artificial intelligence (AI). In response, several AI labs, national governments, and international bodies have launched new research and policy efforts to mitigate large-scale AI risks. However, these growing efforts have also produced a divisive and often confusing debate about how to define, distinguish, and prioritize severe AI hazards. This categorical confusion could complicate policymakers’ efforts to discern the unique features and national security implications of the threats AI poses, and could hinder efforts to address them. Specifically, emerging catastrophic risks with weighty national security implications are often lost between the two dominant strands of AI concern in public discourse: present-day systemic harms from AI related to bias and discrimination on the one hand, and contentious, future-oriented debates about existential risks from AI on the other. This report aims to:
- Demonstrate the growing importance of mitigating AI’s catastrophic risks for national security practitioners
- Clarify what AI’s catastrophic risks are (and are not)
- Introduce the dimensions of AI safety that will most shape catastrophic risks
Catastrophic AI risks, like all catastrophic risks, demand attention from the national security community as critical threats to the nation’s health, security, and economy. In scientifically advanced societies like the United States, powerful technologies can pose outsized catastrophic risks, especially when, as with AI, the technology is novel, fast-moving, and relatively untested. Given the wide range of potential applications for AI, including in biosecurity, military systems, and other high-risk domains, prudence demands proactive efforts to distinguish, prioritize, and mitigate risks. Indeed, past incidents in finance, biological and chemical weapons, cybersecurity, and nuclear command and control all hint at possible AI-related catastrophes in the future, including AI-accelerated production of biological weapons of mass destruction (WMD), financial meltdowns from AI trading, or even accidental weapons exchanges from AI-enabled command and control systems. In addition to helping initiate crises, AI tools can also erode states’ ability to cope with them by degrading their public information ecosystems, potentially making catastrophes more likely and their effects more severe…”