The White House issued an executive order on Monday outlining the initial steps for regulating the safe, secure, and trustworthy development and use of artificial intelligence (AI) in the United States. The Order aims to create new standards and requirements for American companies and government agencies, operationalizing several principles articulated in the administration’s Blueprint for an AI Bill of Rights. While the Order primarily applies to federal agencies and contractors, it sets a precedent for future AI development and regulation with significant implications for companies and individuals in every sector.

Scope of Impact

The Executive Order, which leverages the U.S. government’s position as a top customer for AI companies and its ability to condition federal funding on compliance, primarily focuses on addressing the national security risks, privacy concerns, safety issues, and civil rights implications associated with AI technology. Notably, the Order will not immediately affect most private companies or state and local government entities, which make up the majority of early adopters of AI technology for purposes such as facial recognition, license plate reading, and gunshot detection.

While the reach of the Order is limited to federal agencies and private government contractors, its requirements and principles are expected to impact the direction of future AI development and regulation. 

Key Takeaways

Protection of Dual-Use Foundation Models

The Executive Order focuses on protecting critical dual-use technologies by requiring developers of AI foundation models with potential dual-use applications to report how they protect their technology from malicious cyber actors. The Order specifically requires AI developers to share safety test results and other critical information with the government under the Defense Production Act. Private sector companies involved in AI development must ensure compliance with these reporting requirements and establish robust measures to protect their technology from potential threats. Notably, the Order does not provide details on reporting criteria, penalties for noncompliance, or the secure handling of disclosed information to mitigate security risks.

AI For Cybersecurity

The Executive Order recognizes the importance of leveraging AI to enhance cybersecurity. The Department of Homeland Security (DHS) and the Cybersecurity and Infrastructure Security Agency (CISA) will assess cross-sector risks and collaborate on mitigating vulnerabilities. DHS is also tasked with establishing an AI Safety and Security Board that can provide valuable guidance to critical infrastructure sectors to bolster their cybersecurity measures. Lastly, DHS and the Department of Defense (DoD) will jointly evaluate AI models and capabilities to protect government and national security systems. Private sector organizations should closely follow the initiatives undertaken by DHS, CISA, and DoD to stay current on cybersecurity best practices.

New National Security Memorandum

The Order directs the development of a National Security Memorandum to ensure the safe and effective use of AI by the DoD and Intelligence Community. Private sector companies involved in defense-related AI applications should align their systems and practices with the forthcoming memorandum. Continued collaboration between the government and private sector is crucial in countering adversary use of AI and addressing national security concerns.

Sensitive Technology Oversight

The Executive Order highlights the involvement of multiple agencies in overseeing sensitive AI technologies. Cloud infrastructure providers must report instances of foreign entities using their services to train AI models that could be put to malicious use.

NIST Red-Teaming

The Order tasks the National Institute of Standards and Technology (NIST) with developing red-team testing standards to identify and exploit potential vulnerabilities in AI systems. Private sector companies should review NIST’s AI Risk Management Framework to enhance the security and resilience of their AI applications in preparation for this new requirement.

Collaboration and Balancing Innovation

The Executive Order underscores the importance of government and private sector collaboration in AI development and regulation. Private sector companies should actively engage with regulatory bodies, policymakers, and industry associations to shape the evolving AI landscape. While compliance with regulations is essential, it is equally important to strike a balance that fosters innovation and avoids unnecessary bureaucracy.

Upcoming Government Initiatives for Shaping AI Regulatory Frameworks

  • Watermarking: The Order calls for the Department of Commerce to develop watermarking standards for AI-generated content, such as audio or images, ensuring clear identification of content created by AI systems.
  • Cloud Service Providers: The Order also directs the Department of Commerce to develop regulations requiring cloud computing providers to report instances of foreign individuals using their services to train AI models that could be used for malicious cyber activities. These regulations will require providers to prohibit foreign resellers from offering their services unless the resellers report such transactions. The Order also sets criteria for identifying AI models with the potential to enable malicious cyber activity.
  • NIST Standards: The National Institute of Standards and Technology (NIST) is tasked with developing AI safety and security standards, which DHS will apply to critical infrastructure sectors and draw on in establishing an AI Safety and Security Board. NIST is also responsible for publishing a companion to its Secure Software Development Framework that incorporates secure development practices for generative AI.
  • Protection Against Biological and Nuclear Weapons: Through the Defense Production Act, developers of advanced AI systems must conduct mandated safety testing to prevent the misuse of their models in producing biological or nuclear weapons, and they must share the test results with the federal government.
  • Immigration: The Order aims to ease visa requirements for overseas talent seeking to work at American AI companies, helping to attract global AI talent and position the United States as a leader in AI development.
  • Health: The Department of Health and Human Services and other agencies will establish safety standards for AI use in healthcare and facilitate the acquisition of AI tools.
  • Labor: The Department of Labor and the National Economic Council will conduct studies to assess the impact of AI on the labor market and make recommendations based on their findings.
  • Civil Rights: Federal agencies will issue guidance to prevent discrimination resulting from the algorithms used in AI tools, targeting areas such as housing, government contracts, and federal benefits programs.
  • Privacy: The Order emphasizes the need for federal privacy legislation to address concerns related to the vast amount of data used to train AI models and the potential privacy implications of AI technology.
  • Cybersecurity: A program will be established to develop AI tools that address cybersecurity vulnerabilities in critical software, enhancing the security of software and networks.

Companies should begin to consider the potential impact on their current and future AI applications. Compliance with federal rules and recommendations is essential, particularly in areas such as privacy, civil rights, and data collection. Early issue-spotting and planning can help ensure AI systems and implementations align with regulations and best practices.

Buchanan Artifex - Our Proprietary AI Platform

Buchanan has a powerful team of cybersecurity and data privacy attorneys, government relations and policy advisors, and expert technologists ready to assist organizations in evaluating their existing and planned AI applications so they are prepared to meet their new obligations.

To navigate the complexities of AI technologies and regulations, BuchananInnovate has developed Buchanan Artifex, a powerful in-house AI platform designed to assist our attorneys and policy advisors in delivering results for our clients. This immersive platform equips our team with the knowledge, confidence, and competence needed to handle AI-related matters effectively.

By immersing themselves in a controlled environment, our legal and government relations professionals gain valuable insights into AI capabilities and limitations. Buchanan Artifex supports informed conversations and client education on AI-related matters, enabling us to provide tailored guidance that aligns with best practices and regulatory requirements.

Next Steps for Navigating AI Compliance

The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence marks a significant step in addressing the challenges and opportunities presented by AI technology. Organizations should take note of the Order’s key elements and their potential implications for their operations.

Early engagement with Buchanan’s legal counsel and government policy advisors can help clients navigate the evolving AI landscape, ensure compliance, and leverage the benefits of AI technology while protecting their interests and addressing potential legal risks.

We will provide regular updates as the AI legal and regulatory landscape evolves. Our attorneys and policy advisors stand ready to provide legal guidance and policy advice rooted in technological expertise. Reach out to our team today at cyber@bipc.com.