On October 24, 2024, the Biden administration released a National Security Memorandum (NSM) titled "Memorandum on Advancing the United States' Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence."
This document, nearly 40 pages long, is the most comprehensive articulation to date of U.S. national security strategy toward artificial intelligence (AI). Its publication fulfilled a requirement of the administration's October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI.
The NSM is accompanied by a closely related document, the Framework to Advance AI Governance and Risk Management in National Security, released the same day.
Together, these texts lay out a detailed vision of AI's role in securing the United States against emerging threats and strengthening national defense capabilities. OpenPolicy continues to work closely with the relevant agencies to inform implementation of the Executive Order and the NSM.
Key NSM objectives include:
1. U.S. Leadership in AI: By focusing on semiconductor supply chains and the National AI Research Resource, the NSM seeks to expand U.S. AI capabilities with an emphasis on transparency, security, and accountability. The AI Safety Institute (AISI) serves as the government's primary point of contact with the private sector, promoting safe AI model development.
2. AI in National Security: The NSM directs national security agencies to establish governance frameworks, ensuring that AI use aligns with democratic values, human rights, and transparency. It mandates regular risk assessments, procurement guidelines, and streamlined processes for collaboration with non-traditional AI vendors.
3. Global AI Governance: Building on the International Code of Conduct for AI, the NSM commits the United States to shaping AI norms internationally. The U.S. will work with allies to establish frameworks that uphold human rights and prevent authoritarian misuse of AI, building on initiatives such as the Political Declaration on Responsible Military Use of AI and Autonomy and a UN General Assembly resolution on AI.
4. AI Safety and Security: The AISI, in collaboration with other agencies, will oversee voluntary testing of AI models, focusing on those with misuse potential in cybersecurity, biological weapons, and autonomous systems. Guidelines will address AI bias, privacy, and civil liberties protections, and agencies will conduct evaluations to mitigate risks associated with dual-use technologies.
The NSM reflects the administration's commitment to harnessing AI's potential responsibly for national security, prioritizing privacy, civil rights, and democratic values, and strengthening U.S. competitiveness through a proactive approach to AI governance and risk management.