
Federal AI Executive Order Compliance: What Government Agencies Must Know

Navigate federal AI compliance requirements, Chief AI Officer mandates, and governance obligations under evolving executive guidance.

By Hashi S.

Federal agencies face a complex and evolving landscape of AI compliance requirements. Executive Order 14110, signed in October 2023, established 150 distinct requirements across government agencies.

While the order was rescinded in January 2025, many core requirements persist through subsequent Office of Management and Budget guidance. Agencies must navigate this shifting regulatory environment while continuing to innovate with AI technologies that improve public services.

The challenge for federal agencies is not simply understanding what was required under Executive Order 14110. It's recognizing which obligations remain in effect, what new guidance has emerged, and how to build sustainable AI governance structures that survive political transitions.

With Chief AI Officer appointments, AI Governance Boards, impact assessments, and rights-protection requirements still mandated, agencies need clear guidance on compliance priorities and implementation approaches.

Key Takeaways

  • Core requirements persist despite EO 14110 rescission through OMB Memorandum M-25-21
  • Chief AI Officer appointments remain mandatory with clear executive sponsorship and decision authority
  • AI Governance Boards must provide strategic direction, risk oversight, and accountability enforcement
  • Impact assessments are required for all AI tools, covering technical performance, rights impacts, and monitoring
  • Balance innovation with compliance through risk-based approaches and fast-track processes for low-risk AI

What Were the Key Requirements of Executive Order 14110?

Executive Order 14110 represented the most comprehensive federal AI policy to date. It established eight guiding principles and 150 specific requirements across agencies.

The order addressed **AI safety and security** through mandated testing and evaluation protocols. It required companies developing dual-use foundation models to report to the federal government. It directed NIST to develop AI safety guidelines and red-teaming procedures for high-risk systems.

The order emphasized **innovation and competition** by directing investments in AI education and research. It protected intellectual property rights and promoted fair marketplace competition.

It required agencies to support American workers through job training programs. It directed protections for collective bargaining rights and guarded against harmful workplace surveillance enabled by AI systems.

**Equity and civil rights protections** formed a central pillar, building on the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework. The order mandated technical evaluations to prevent discrimination.

It required oversight to detect bias and established accountability standards for AI systems affecting civil rights. Consumer protection provisions directed agencies to enforce existing laws against AI-enabled fraud and discrimination, with particular attention to healthcare, financial services, education, and housing.

**Privacy and civil liberties requirements** emphasized lawful data collection, secure retention practices, and deployment of privacy-enhancing technologies. The order directed agencies to modernize IT infrastructure, attract AI talent to federal service, and ensure workforce AI literacy.

Internationally, it positioned the United States to lead global AI governance discussions. It promoted responsible AI principles with allies and developed frameworks for managing AI risks across borders. While the US pursued this comprehensive approach, the European Union advanced its own regulatory framework through the EU AI Act, creating parallel but distinct compliance obligations for organizations operating globally.

Which Requirements Remain in Effect After the Order's Rescission?

The rescission of Executive Order 14110 in January 2025 created confusion about which requirements federal agencies must still meet. The answer is that many core obligations persist through subsequent guidance.

OMB Memorandum M-25-21, "Accelerating Federal Use of AI through Innovation, Governance, and Public Trust," issued in April 2025, maintains most of the order's critical requirements.

**Chief AI Officer appointments remain mandatory.** Agencies must designate a Chief AI Officer to champion AI goals, advise on implementation strategies, and coordinate governance activities.

The CAIO serves as the primary point of contact for AI matters. The role is critical for ensuring consistent leadership and accountability across the AI lifecycle, from procurement through deployment and monitoring.

**AI Governance Boards continue to be required** within 90 days of the guidance's issuance. These boards must include senior leadership from technology, policy, legal, and operational functions.

They oversee AI strategy, review high-risk AI systems, ensure compliance with federal requirements, and make decisions about AI adoption and termination. Effective governance boards balance innovation enablement with risk management.

**AI impact assessments remain a core requirement** for all AI tools used by federal agencies. These assessments must evaluate risks to rights, safety, and security.

They must document data sources and quality, describe decision-making roles and automation levels, identify affected populations, assess fairness and bias risks, and establish monitoring and accountability measures. Rights-impacting AI systems face heightened assessment requirements and ongoing review obligations.

**Real-world testing before deployment** continues to be mandated. Agencies must validate AI system performance in realistic conditions, identify failure modes and edge cases, and assess accuracy across different populations.

They must evaluate human-AI interaction patterns and document testing results before operational use. Testing requirements are particularly stringent for systems affecting rights, safety, or critical government functions.
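To make the testing requirement concrete, here is a minimal sketch of comparing accuracy across population subgroups before deployment. The record keys (`group`, `label`, `prediction`) and the disparity threshold are illustrative assumptions, not a prescribed federal testing protocol.

```python
from collections import defaultdict

def accuracy_by_group(records, disparity_threshold=0.05):
    """Compare accuracy across subgroups and flag large gaps.

    `records` is an iterable of dicts with hypothetical keys:
    'group' (subgroup label), 'label' (ground truth), 'prediction'.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["label"])
    if not total:
        raise ValueError("no test records supplied")

    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return {
        "accuracy_by_group": accuracy,
        "max_disparity": gap,
        # A gap above the threshold should trigger deeper review
        # before the system is approved for operational use.
        "requires_review": gap > disparity_threshold,
    }
```

A flagged gap would feed directly into the documented testing results agencies must produce before operational use.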

Navigating the evolving federal AI compliance landscape requires expertise in both technical AI governance and federal regulatory requirements. DigiForm helps federal agencies build sustainable AI governance structures that meet current requirements while remaining adaptable to future policy changes.

How Should Agencies Approach Chief AI Officer Appointments?

The Chief AI Officer role represents a critical leadership position that many agencies struggle to fill effectively. The challenge stems from the unique combination of skills required.

The role demands technical AI expertise, federal policy knowledge, senior leadership experience, and the ability to navigate complex organizational dynamics. Finding individuals who possess all these capabilities at government salary levels proves difficult.

**Successful CAIO appointments begin with realistic role definition.** Agencies should resist the temptation to create job descriptions requiring impossible combinations of qualifications.

Don't demand PhD-level AI research credentials, decades of federal policy experience, and C-suite leadership background all in one person. Instead, prioritize core competencies: understanding AI capabilities and limitations, ability to assess AI risks and opportunities, experience building governance structures, and strong communication skills to translate between technical and policy audiences.

**The CAIO role requires clear executive sponsorship and decision-making authority.** CAIOs positioned as mid-level coordinators without budget authority or direct access to senior leadership cannot succeed.

Effective CAIOs report to agency CIOs or directly to deputy secretaries. They participate in senior leadership meetings, influence budget decisions, and have authority to halt non-compliant AI deployments. Without this organizational positioning, the CAIO title becomes ceremonial rather than functional.

Agencies should leverage available hiring authorities to offer competitive compensation. The AI in Government Act and other authorities provide flexibility beyond standard GS scales.

Additionally, agencies can attract talent by emphasizing mission impact, opportunities to shape AI policy at scale, professional development through the federal AI community of practice, and the chance to work on challenging problems affecting millions of Americans.

What Should AI Governance Boards Actually Do?

Many agencies establish AI Governance Boards to meet compliance requirements but struggle to make them effective. Boards that meet quarterly to review PowerPoint presentations about AI initiatives provide little value.

**Effective governance boards serve four critical functions:** strategic direction, risk oversight, resource allocation, and accountability enforcement.

**Strategic direction** involves setting agency AI priorities, identifying high-value use cases, establishing risk appetite and tolerance levels, and aligning AI initiatives with mission objectives.

The board should answer questions like: Which agency functions could benefit most from AI? What level of risk is acceptable for different use case categories? How should we balance innovation speed with safety requirements? What capabilities do we need to build versus buy?

**Risk oversight** requires the board to review high-risk AI systems before deployment. They must assess whether impact assessments adequately identify and mitigate risks.

They ensure rights-impacting systems have appropriate human oversight, monitor ongoing AI system performance, and make decisions about continuing, modifying, or terminating problematic systems. This function demands that boards receive substantive information about AI risks, not sanitized summaries that obscure real issues.

**Resource allocation decisions** determine which AI initiatives receive funding, staffing, and organizational support. Effective boards evaluate AI investment proposals against strategic priorities.

They assess whether agencies have necessary technical capabilities, ensure adequate resources for governance and oversight functions, and make difficult decisions about stopping initiatives that aren't delivering value or pose unacceptable risks.

**Accountability enforcement** means holding individuals and teams responsible for AI outcomes. Boards should establish clear ownership for AI systems and define success metrics and monitoring requirements.

They review incidents and failures to identify systemic issues and ensure consequences when teams bypass governance processes or deploy non-compliant systems. Without accountability, governance becomes theater.

Building effective AI governance requires more than checking compliance boxes—it demands operational structures that actually influence decisions. DigiForm assists federal agencies in designing AI governance boards that provide real strategic value while meeting federal requirements.

How Do Agencies Conduct Effective AI Impact Assessments?

AI impact assessments represent a critical compliance requirement. Yet many agencies treat them as paperwork exercises rather than genuine risk evaluation tools.

**Effective impact assessments require systematic analysis across multiple dimensions:** technical performance, rights and safety impacts, data quality and governance, human oversight mechanisms, and ongoing monitoring approaches.

**Technical performance assessment** begins with understanding what the AI system actually does. What decisions does it make or influence? What level of automation is involved?

What are the system's accuracy rates overall and across different populations? What are known failure modes? How does performance degrade under edge cases or adversarial inputs? Agencies should demand evidence from vendors rather than accepting marketing claims about AI capabilities.

**Rights and safety impact analysis** identifies who is affected by the AI system and how. Does the system influence access to benefits, services, or opportunities? Could it affect employment, education, or legal outcomes?

Does it impact freedom of movement or expression? What are the consequences of false positives and false negatives? Impact assessments must evaluate differential impacts across demographic groups, recognizing that AI systems often perform differently for different populations.

**Data governance evaluation** examines the training data, ongoing input data, and data retention practices. Is training data representative of the populations the system will affect?

Are there known biases or quality issues in the data? How is input data validated before use? What data is logged and for how long? Who has access to data generated by the AI system? Data governance failures represent a common source of AI system problems.
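Input validation in particular lends itself to a lightweight automated gate. The schema below is hypothetical, a sketch of rejecting malformed records before they reach the model rather than any agency's actual intake rules.

```python
def validate_input(record: dict) -> list[str]:
    """Return validation errors for one input record (illustrative schema)."""
    errors = []
    if not record.get("applicant_id"):
        errors.append("missing applicant_id")
    age = record.get("age")
    if not isinstance(age, int) or not 0 <= age <= 120:
        errors.append("age missing or out of plausible range")
    # Records with errors are quarantined for review, never scored.
    return errors
```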

**Human oversight mechanisms** determine how humans remain involved in AI-assisted decisions. Who has final decision authority? Under what circumstances can humans override AI recommendations?

How are overrides documented and reviewed? What training do human decision-makers receive? Are there safeguards against automation bias, where humans defer to AI recommendations without critical evaluation? Meaningful human oversight requires more than having a person "in the loop"—it demands genuine decision authority and accountability.
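One concrete piece of this is structured override logging, so that review patterns, including possible automation bias, become measurable. The record shape below is an assumption for illustration, not a mandated format.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OverrideRecord:
    """One human decision that departed from (or confirmed) the AI output."""
    case_id: str
    ai_recommendation: str
    human_decision: str
    rationale: str
    reviewer: str
    timestamp: datetime

def override_rate(records: list[OverrideRecord]) -> float:
    """Share of decisions where the human departed from the AI.

    A rate near zero across many consequential decisions can be
    a signal of automation bias worth investigating.
    """
    if not records:
        return 0.0
    overridden = sum(r.ai_recommendation != r.human_decision for r in records)
    return overridden / len(records)
```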

**Ongoing monitoring approaches** establish how agencies will track AI system performance after deployment. What metrics indicate the system is working as intended? How frequently are these metrics reviewed?

What triggers a deeper investigation or system modification? Who is responsible for monitoring? What happens when performance degrades? Effective monitoring catches problems before they cause significant harm and provides feedback for continuous improvement.
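As a minimal sketch of such a monitoring trigger, the check below compares current metrics against a deployment baseline and flags degradation for investigation. The metric names and tolerances are illustrative, not values drawn from federal guidance.

```python
def check_performance_drift(baseline, current, tolerances):
    """Flag metrics that have degraded past their tolerance.

    All three arguments are dicts keyed by metric name, e.g.
    {'accuracy': 0.94}; names and tolerances are illustrative.
    """
    findings = []
    for metric, base_value in baseline.items():
        drop = base_value - current.get(metric, 0.0)
        if drop > tolerances.get(metric, 0.02):
            findings.append(
                f"{metric} dropped {drop:.3f} from baseline "
                f"{base_value:.3f}; exceeds tolerance"
            )
    return findings  # non-empty findings should trigger investigation

# Example: a weekly review comparing dashboard metrics to the launch baseline
alerts = check_performance_drift(
    baseline={"accuracy": 0.94, "recall": 0.88},
    current={"accuracy": 0.90, "recall": 0.87},
    tolerances={"accuracy": 0.02, "recall": 0.05},
)
```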

What Are the Most Common AI Governance Failures?

Federal agencies repeatedly make predictable mistakes when implementing AI governance. Understanding these common failures helps agencies avoid them.

**Treating governance as compliance theater** represents the most fundamental failure. Agencies create governance processes that look impressive on paper but don't actually influence AI deployment decisions.

Impact assessments become checkbox exercises. Governance boards rubber-stamp initiatives without substantive review. Testing requirements get waived under schedule pressure. This theater provides the appearance of governance without the substance.

**Insufficient technical expertise** prevents agencies from effectively evaluating AI systems. Governance bodies lack members who understand machine learning fundamentals, can assess vendor claims critically, or recognize when AI systems are inappropriate for specific use cases.

Without technical expertise, agencies cannot distinguish between responsible AI deployment and reckless experimentation. They cannot evaluate whether proposed safeguards will actually work or whether testing protocols adequately stress-test systems. This challenge is particularly acute in regulated industries like healthcare, where agencies must understand both federal AI governance and sector-specific requirements such as FDA AI/ML compliance for medical devices.

**Inadequate human oversight** occurs when agencies automate decisions without maintaining meaningful human involvement. Systems make consequential decisions about benefits, enforcement actions, or resource allocation with minimal human review.

When human review exists, it often suffers from automation bias—humans defer to AI recommendations without critical evaluation. Effective human oversight requires training, clear decision authority, and organizational support for overriding AI recommendations when appropriate.

**Poor incident response** means agencies lack processes for handling AI system failures. When systems produce discriminatory outcomes, make significant errors, or fail in unexpected ways, agencies don't have clear procedures for investigation, remediation, and prevention of future incidents.

Without incident response processes, agencies cannot learn from failures or demonstrate accountability to affected individuals and oversight bodies.

**Lack of accountability** occurs when no one faces consequences for bypassing governance processes or deploying non-compliant systems. Teams that rush systems into production without proper assessment face no repercussions.

Vendors that overpromise capabilities or underdeliver on safeguards continue receiving contracts. Without accountability, governance requirements become suggestions rather than mandates.

How Can Agencies Balance AI Innovation with Compliance Requirements?

Many agencies view AI governance as a barrier to innovation. This perspective creates unnecessary tension between compliance and progress.

**Effective governance enables innovation** by providing clear processes, reducing uncertainty, and building public trust that allows continued AI adoption.

**Risk-based approaches** allow agencies to focus governance resources where they matter most. Not all AI applications pose equal risks. A chatbot answering frequently asked questions requires different oversight than a system making eligibility determinations for benefits.

Agencies should establish clear risk categories with proportionate oversight requirements. Low-risk applications get streamlined approval processes. High-risk applications receive intensive review and ongoing monitoring. This approach prevents governance from becoming a bottleneck for low-risk innovation while ensuring adequate scrutiny of consequential systems.
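One way to operationalize this tiering is a simple classification rule that routes each proposed use case to a proportionate review track. The tiers and criteria below are illustrative assumptions; an agency would substitute its own definitions of rights- and safety-impacting AI from OMB guidance and internal policy.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "streamlined approval"
    MODERATE = "standard impact assessment"
    HIGH = "full governance board review"

def classify_use_case(affects_rights: bool, affects_safety: bool,
                      fully_automated: bool) -> RiskTier:
    """Route a proposed AI use case to a review track.

    Criteria here are illustrative; real tiering would follow the
    agency's adopted definitions of rights- and safety-impacting AI.
    """
    if affects_rights or affects_safety:
        return RiskTier.HIGH
    if fully_automated:
        return RiskTier.MODERATE
    return RiskTier.LOW

# An FAQ chatbot with a human handling escalations:
tier = classify_use_case(affects_rights=False, affects_safety=False,
                         fully_automated=False)  # -> RiskTier.LOW
```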

**Fast-track processes for low-risk applications** reduce governance overhead without compromising safety. Agencies can create pre-approved AI use cases, standard impact assessment templates for common applications, and delegated approval authority for routine deployments.

These processes allow innovation to proceed quickly while maintaining appropriate oversight. The key is distinguishing genuinely low-risk applications from those that merely appear low-risk.

**Reusable compliance artifacts** reduce duplicative work across AI initiatives. Agencies should develop standard assessment templates, testing protocols, monitoring dashboards, and governance documentation that can be adapted for different applications.

When multiple teams work on similar AI use cases, they can leverage shared compliance work rather than starting from scratch. This approach improves both efficiency and consistency.
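A reusable artifact can be as simple as a shared record structure that every team fills in the same way. The cut-down impact assessment template below uses hypothetical field names; a real version would map to the agency's full assessment requirements.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    """Cut-down, shareable impact assessment record (illustrative)."""
    system_name: str
    decision_influenced: str        # what decision the AI makes or informs
    automation_level: str           # e.g. "advisory", "human-approved", "automated"
    affected_populations: list[str]
    data_sources: list[str]
    known_bias_risks: list[str] = field(default_factory=list)
    human_override_process: str = ""
    monitoring_metrics: list[str] = field(default_factory=list)
    system_owner: str = ""
```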

**Governance automation** uses technology to support compliance processes. Automated testing tools can continuously monitor AI system performance. Compliance dashboards can track whether systems meet requirements. Documentation systems can ensure impact assessments are complete before deployment.

These tools don't replace human judgment but make governance processes more efficient and effective.
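Documentation completeness is among the easiest of these checks to automate. The sketch below gates deployment on a filled-in assessment record; the required fields are assumptions drawn from the dimensions discussed above, not an official checklist.

```python
REQUIRED_FIELDS = [
    "system_name", "decision_influenced", "affected_populations",
    "human_override_process", "monitoring_metrics", "system_owner",
]

def deployment_gate(assessment: dict) -> list[str]:
    """Return the missing or empty required fields.

    An empty result means the documentation gate passes; it does
    not replace substantive human review of the content.
    """
    return [f for f in REQUIRED_FIELDS
            if not assessment.get(f)]  # missing or empty -> incomplete

missing = deployment_gate({"system_name": "benefits-triage-assist"})
if missing:
    print("Blocked: incomplete assessment ->", missing)
```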

**Cultural change** matters as much as processes and tools. Agencies should foster a culture where compliance enables rather than blocks innovation. This requires early engagement between governance teams and AI developers, collaborative problem-solving rather than adversarial relationships, and recognition that governance catches problems before they cause harm rather than creating unnecessary bureaucracy.

Frequently Asked Questions

What were the key requirements of Executive Order 14110?

Executive Order 14110 established 150 requirements across eight guiding principles. These covered AI safety and security through testing protocols and NIST guidelines.

They addressed innovation and competition through AI education investments and IP protection. They supported workers through job training and collective bargaining rights.

The order mandated equity and civil rights protections including bias detection and accountability standards. It required consumer protection measures for fraud prevention and sector-specific safeguards.

Privacy protections emphasized lawful data collection and privacy-enhancing technologies. Federal AI capacity requirements covered IT modernization and talent attraction. International leadership provisions addressed global governance discussions and cross-border risk frameworks.

Which requirements remain in effect after the order's rescission?

Core requirements persist through OMB Memorandum M-25-21. Chief AI Officer appointments remain mandatory as a leadership role.

AI Governance Boards are required within 90 days. AI impact assessments must be conducted for all AI tools. Real-world testing before deployment continues to be mandated.

Rights-protection requirements and ongoing monitoring obligations remain in effect. These requirements continue to apply despite the executive order's rescission.

How should agencies approach Chief AI Officer appointments?

Successful CAIO appointments require realistic role definition. Prioritize core competencies over impossible qualification combinations.

Ensure clear executive sponsorship with reporting to CIO or deputy secretary level. Provide decision-making authority including budget influence and deployment halt authority.

Offer competitive compensation by leveraging AI in Government Act hiring authorities. CAIOs need organizational positioning that enables real influence, not ceremonial titles.

What should AI Governance Boards actually do?

Effective AI Governance Boards serve four critical functions. They provide strategic direction by setting AI priorities, establishing risk appetite, and aligning initiatives with mission.

They conduct risk oversight by reviewing high-risk systems, assessing impact assessments, and monitoring performance. They handle resource allocation by evaluating investment proposals, ensuring governance funding, and stopping underperforming initiatives.

They enforce accountability by establishing ownership, defining metrics, reviewing incidents, and ensuring consequences for non-compliance.

How do agencies conduct effective AI impact assessments?

Effective impact assessments require systematic analysis across five dimensions. Technical performance assessment covers accuracy rates, failure modes, and edge case behavior.

Rights and safety impact analysis identifies affected populations, differential impacts, and consequences of errors. Data governance evaluation examines training data quality, input validation, and retention practices.

Human oversight mechanisms define decision authority, override procedures, and accountability. Ongoing monitoring establishes performance tracking, incident response, and continuous improvement processes.

What are the most common AI governance failures in federal agencies?

Common governance failures include treating governance as compliance theater—checkbox exercises without real risk evaluation. Insufficient technical expertise prevents agencies from assessing vendor claims or system capabilities.

Inadequate human oversight leads to automated decisions without meaningful review. Poor incident response means no processes for handling AI failures.

Lack of accountability results in no consequences for bypassing governance processes or deploying non-compliant systems.

How can agencies balance AI innovation with compliance requirements?

Balance requires risk-based approaches with clear risk categories and proportionate oversight. Create fast-track processes for low-risk AI applications.

Build reusable compliance artifacts including assessment templates, testing protocols, and monitoring dashboards. Invest in governance automation through compliance checking tools and monitoring dashboards.

Foster a culture where compliance enables rather than blocks innovation through early engagement and collaborative problem-solving.

What resources are available to help agencies with AI compliance?

Federal agencies can access OMB guidance documents including M-25-21 and subsequent memoranda. NIST provides the AI Risk Management Framework and related resources.

The federal AI community of practice offers peer learning opportunities. Agency-specific AI centers of excellence provide specialized support.

GSA maintains AI portfolio resources. Specialized consulting firms offer federal AI governance expertise. Agencies can leverage shared services and cross-agency collaboration to reduce duplicative compliance work.


About the Author

Hashi S.

AI Governance & Digital Transformation Consultant at DigiForm. Expert in federal AI compliance, enterprise AI strategy, and regulated industries. Led 60+ AI projects with zero compliance incidents across government agencies and Fortune 500 companies.


Ready to build sustainable AI governance? Contact DigiForm to learn how we help federal agencies navigate AI compliance requirements while enabling innovation that serves the public mission.