Two weeks ago, U.S. President Biden issued an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The document is some 60 pages long and structured into 11 sections, namely (1) purpose, (2) policy and principles, (3) definitions, (4) ensuring the safety and security of AI technology, (5) promoting innovation and competition, (6) supporting workers, (7) advancing equity and civil rights, (8) protecting consumers, patients, passengers, and students, (9) protecting privacy, (10) advancing federal government use of AI, and (11) strengthening American leadership abroad.
I've divided the analysis of the executive order into four parts for better digestibility. In this second part, I summarise subsections 4.3 through 4.8 of section 4, as well as section 5. In my opinion, the key points are:
Section 4.3 - Managing AI in Critical Infrastructure and in Cybersecurity:
On an annual basis, each agency with relevant regulatory authority over critical infrastructure will have to evaluate and mitigate potential risks related to the use of AI in critical infrastructure, and report its assessment to the Secretary of Homeland Security.
Best practices for financial institutions to manage AI-specific cybersecurity risks will be published.
The NIST AI Risk Management Framework AI 100-1 will be incorporated into relevant safety and security guidelines for use by critical infrastructure owners and operators.
An Artificial Intelligence Safety and Security Board will be established.
Pilots to leverage large-scale foundation models to detect vulnerabilities in U.S. Government software, systems, and networks will be launched.
For all of the above activities, as well as those listed below, tight timelines are given, ranging from 90 days (Jan 28, 2024) to 240 days (Jun 26, 2024).
Section 4.4 - Reducing Risks at the Intersection of AI and CBRN Threats:
The risk potential of AI with regard to chemical, biological, radiological, and nuclear (CBRN) threats will be assessed, safeguards suggested, and reported to the President.
A study will be conducted to assess the risk of biothreats through AI.
Special consideration will be given to threats through synthetic nucleic acids, the building blocks of DNA. The executive order points towards the establishment of procurement screening mechanisms.
Sections 4.5 - 4.8:
Tools and methods for identifying and watermarking synthetic content and preventing the synthesis of content deemed particularly harmful will be investigated.
These techniques will also be used for authenticating governmental digital content.
Information about publicly available dual-use foundation models (risks, benefits, safeguards, and suggestions for regulatory approaches) will be collected.
Since some data poses no security risk in isolation but does so when combined with other available information, releases of public data will undergo security reviews.
A National Security Memorandum about the use of AI in national security systems will be published.
Section 5 - Promoting Innovation and Competition:
To attract foreign AI talent, access to various types of visas will be sped up.
A National AI Research Resource (NAIRR) pilot program will be launched to integrate distributed resources into an infrastructure for AI research and development.
There will be additional NSF-funded AI research programs.
At least four additional National AI Research Institutes will be established, on top of the existing 25.
At least 500 researchers will be trained in AI by 2025.
AI-related patents will be promoted through guidance for AI patent publication and examination.
AI-related IP theft with regard to national security will be investigated and prosecuted.
IP theft mitigation guidance and technology will be developed.
AI-enabled tools that develop personalized immune-response profiles for patients will be funded.
Initiatives that explore ways to improve healthcare-data quality will be incentivized.
The potential for AI to improve planning, permitting, investment, and operations for electric grid infrastructure will be investigated.
Tools to use AI for basic and applied research, including the mitigation of climate-change risks, will be developed.
The Department of Energy’s computing capabilities and AI testbeds will be utilized to build foundation models that support new applications in science and energy and for national security.
Competition will be promoted, and opportunities for small businesses in AI and the semiconductor industry will be provided through consortia and funding.
Stay tuned for part III of this mini-series coming up soon.