The US AI regulatory landscape in 2026 is a patchwork: one federal executive order revoked, a new one with different priorities in place, dozens of state bills at various stages of passage, voluntary industry commitments of uneven enforceability, and the EU AI Act creating compliance requirements for US companies selling in Europe. For most individuals, this complexity is background noise. For businesses deploying AI, developers building AI products, and anyone interested in how AI is being governed, understanding the actual regulatory situation is increasingly essential. This guide cuts through the noise: what rules actually exist, what they require, and what they mean in practice.
The Federal Level: From Biden's Safety Order to Trump's Acceleration Order
In October 2023, President Biden signed a sweeping Executive Order on AI that established safety testing requirements for frontier AI models, directed federal agencies to develop AI governance frameworks, and put AI safety at the center of US AI policy. In January 2025, President Trump revoked Biden's executive order on his first day in office, describing it as an obstacle to American AI leadership. In its place, Trump signed his own executive order emphasizing AI dominance and explicitly prioritizing AI development over safety-first governance.
- What changed under Trump's AI policy: the revocation of mandatory safety evaluations for frontier AI models; reduced emphasis on AI safety standards from NIST; removal of reporting requirements for companies training large AI models; a posture of accelerating AI deployment rather than cautiously governing it.
- What did not change: the US does not have a comprehensive federal AI law. Congress has not passed any legislation creating AI-specific rights, liability frameworks, or content standards that apply uniformly across all sectors. The US regulatory approach remains sectoral — existing laws (FTC consumer protection, FDA medical device regulations, financial services regulations) apply to AI in those sectors, but there is no unified AI regulatory framework.
- The NIST AI Risk Management Framework: the National Institute of Standards and Technology published an AI Risk Management Framework (AI RMF) that provides voluntary guidance for AI governance. Many large companies have adopted it as a baseline, even though it is not legally mandatory. It remains an active resource for AI governance in 2026.
- AI safety and security provisions in appropriations: even without a specific AI law, Congress has included AI safety provisions in various budget and authorization bills, particularly for defense applications and critical infrastructure.
State-Level AI Regulation: The California Effect and Beyond
In the absence of comprehensive federal legislation, states have been the primary source of AI-specific legal requirements. California's regulatory journey has had the most significant national impact.
- California SB 1047 (vetoed): California's most ambitious AI safety bill, SB 1047, would have required safety testing, emergency shutdown capabilities, and whistleblower protections for developers of large AI models. Governor Newsom vetoed it in September 2024, citing concerns about stifling innovation. The bill's defeat signaled that even in California, AI safety legislation faces significant industry resistance.
- California AB 2013 (signed): requires generative AI training data transparency. Developers must publicly post documentation describing the datasets used to train their generative AI systems. Took effect January 1, 2026.
- California SB 1047's successor legislation: after SB 1047's defeat, Senator Wiener introduced a narrower successor, SB 53 (the Transparency in Frontier Artificial Intelligence Act), which Governor Newsom signed in September 2025. It requires large frontier-model developers to publish safety frameworks, report critical safety incidents, and protect whistleblowers, and its transparency-first approach is already influencing AI regulation nationally.
- Texas AI regulation: Texas has taken a more business-friendly approach. Its Responsible Artificial Intelligence Governance Act, effective January 2026, focuses on government use of AI and a short list of prohibited practices rather than broad private-sector restrictions.
- Colorado's AI consumer protection law: with the Colorado AI Act (SB 24-205), Colorado became the first state to pass a comprehensive law specifically addressing high-risk AI systems. It requires impact assessments and consumer notification when AI makes consequential decisions affecting employment, education, housing, and credit.
- Illinois and New York: Illinois requires employers to notify applicants when AI analyzes video interviews and, under amendments to its Human Rights Act effective in 2026, when AI is used in employment decisions generally. New York City's Local Law 144 requires bias audits and advance candidate notice for automated employment decision tools.
- Ohio's education AI mandate: Ohio law requires every school district to publish an AI policy by July 2026 — reflecting the state-level education AI governance trend.
The EU AI Act and What It Means for US Companies
The European Union AI Act, the world's first comprehensive AI regulatory framework, reaches its main compliance deadlines in 2026: most high-risk obligations apply from August 2026, after prohibitions and general-purpose AI rules phased in during 2025. It applies to AI systems whose output affects people in the EU, wherever the provider is based. For US companies with EU customers, employees, or operations, the EU AI Act creates real compliance requirements.
- The risk-tier framework: the EU AI Act classifies AI systems into four tiers by risk level. Unacceptable-risk systems (social scoring, mass surveillance, certain biometric applications) are banned outright. High-risk systems (AI in healthcare, education, employment, credit, law enforcement) require conformity assessments, documentation, human oversight mechanisms, and registration in a public database. Limited-risk systems (chatbots) require transparency disclosures. Minimal-risk systems (spam filters, AI in games) carry no specific requirements. A simplified sketch of this tier logic, and of the fine ceilings covered below, follows this list.
- What counts as high-risk AI: AI systems used in hiring and HR (including resume screening, interview analysis), credit scoring, medical device software, educational assessment, law enforcement, immigration, and critical infrastructure are all classified as high-risk and require full compliance by August 2026.
- The GPAI (General Purpose AI) rules: large foundation models like GPT-5.4, Claude, and Gemini — which the EU calls General Purpose AI Models — face transparency obligations including publishing training data summaries, evaluating potential systemic risks, and (for the most powerful models) undergoing adversarial testing.
- Fines: violations of the EU AI Act can draw fines of up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices, with lower ceilings (up to €15 million or 3%) for most other violations. For OpenAI or Google, 7% of global revenue is a very large number.
- The Brussels Effect: the EU AI Act is already influencing AI practices globally. US companies with EU operations are implementing EU AI Act compliance frameworks that they often apply globally rather than maintaining separate practices by geography — effectively exporting EU standards to US operations.
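To make the tier-and-penalty mechanics concrete, here is a minimal Python sketch. The use-case lists, tier labels, and function names are illustrative assumptions, not an official EU AI Act mapping; the fine function encodes only the ceiling structure described above (fixed cap or revenue percentage, whichever is higher).

```python
# Illustrative sketch only -- not legal advice, and not an official EU AI Act
# mapping. Use-case lists and tier labels are simplified assumptions based on
# the four-tier structure described above.

# Non-exhaustive example use cases per tier.
UNACCEPTABLE = {"social scoring", "mass surveillance"}
HIGH_RISK = {"hiring", "credit scoring", "medical device software",
             "educational assessment", "law enforcement"}
LIMITED_RISK = {"chatbot"}

def risk_tier(use_case: str) -> str:
    """Return a (simplified) EU AI Act risk tier for a use case."""
    if use_case in UNACCEPTABLE:
        return "unacceptable: banned"
    if use_case in HIGH_RISK:
        return "high: conformity assessment, documentation, human oversight"
    if use_case in LIMITED_RISK:
        return "limited: transparency disclosure"
    return "minimal: no specific obligations"

def max_fine_eur(global_annual_revenue_eur: float,
                 prohibited_practice: bool = False) -> float:
    """Penalty ceiling: fixed cap or revenue percentage, whichever is HIGHER
    (EUR 35M / 7% for prohibited practices; EUR 15M / 3% for most other
    violations)."""
    cap, pct = (35e6, 0.07) if prohibited_practice else (15e6, 0.03)
    return max(cap, pct * global_annual_revenue_eur)

if __name__ == "__main__":
    print(risk_tier("hiring"))
    # A firm with EUR 50B global revenue: 7% = EUR 3.5B, dwarfing the 35M floor.
    print(f"EUR {max_fine_eur(50e9, prohibited_practice=True):,.0f}")
```

Note that the ceiling uses max(), not min(): for large firms the percentage raises the cap rather than discounting it, which is why the revenue-based figure is the one that matters for frontier labs.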
What AI Regulation Means for Individuals in the USA in 2026
- You have limited formal AI rights under current US federal law: unlike EU citizens, who have rights under the EU AI Act (including explanation rights for consequential AI decisions), US citizens have no comprehensive federal AI rights framework. Your protections come from sector-specific laws (FCRA for credit, anti-discrimination law enforced by the EEOC for employment) and state-level provisions where applicable.
- High-risk AI decisions affecting you should be disclosed in states with disclosure laws: if you are applying for a job in Illinois or New York City, employers must disclose AI use in hiring. In Colorado, high-risk AI decisions in employment and credit must include consumer notification.
- You can request explanation in the EU — and this may apply if you are subject to a US company's EU-compliant AI system: US companies that have implemented GDPR and EU AI Act compliance globally may apply explanation rights and objection rights to US users as a matter of internal policy, even where not legally required.
- Voluntary safety commitments: major AI companies including OpenAI, Google, Anthropic, and Microsoft have made voluntary AI safety commitments to the White House. These include watermarking AI-generated content, implementing safety testing before model releases, and sharing security information. These are voluntary — enforcement depends on company culture and reputation incentives, not legal consequences.
Pro Tip: For small business owners deploying AI in the US: your most immediate practical compliance obligation is likely disclosure. If you use AI in hiring (resume screening, interview scheduling), lending decisions, or marketing targeting, check your state's AI disclosure requirements. Illinois, New York City, and Colorado have the most active enforcement, and more states are passing similar laws in 2026. The compliance cost of proper disclosure is low; the legal exposure of non-disclosure in states with active enforcement is real. A minimal lookup sketch of these obligations follows.
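As a rough way to operationalize that advice, here is a minimal Python lookup sketch. The jurisdiction keys and one-line rule summaries are simplified assumptions for illustration, not legal advice, and should be checked against current statutes and effective dates.

```python
# Illustrative compliance checklist sketch, not legal advice. Jurisdiction
# keys and one-line rule summaries are simplified assumptions; verify current
# statutes and effective dates before relying on them.

STATE_DISCLOSURE_RULES = {
    "IL": "Notify applicants when AI analyzes video interviews or informs "
          "employment decisions (AI Video Interview Act; Human Rights Act "
          "amendments).",
    "NYC": "Bias audit plus advance candidate notice for automated "
           "employment decision tools (Local Law 144).",
    "CO": "Impact assessments and consumer notice for high-risk AI in "
          "consequential decisions (Colorado AI Act).",
}

def disclosure_obligations(jurisdictions: list[str]) -> list[str]:
    """List known disclosure obligations for the places you hire or lend in."""
    return [f"{j}: {STATE_DISCLOSURE_RULES[j]}"
            for j in jurisdictions if j in STATE_DISCLOSURE_RULES]

# Example: a business hiring in Illinois and Colorado, lending in Texas.
for obligation in disclosure_obligations(["IL", "CO", "TX"]):
    print(obligation)
```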