AI Legislation in the US: A 2025 Overview
Summary
As AI legislation evolves globally, the United States is introducing a growing number of AI bills, acts, and guiding principles at both the federal and state levels. This legislative momentum underscores the country’s commitment to adapting to artificial intelligence, a technology reshaping industries worldwide.
However, with the start of Trump’s second term, much of what was introduced under Biden has now been revoked. This article aims to provide a complete and up-to-date overview of the evolving AI legislation in the US.
Federal legislation
The federal landscape of AI legislation has been marked by contrasting approaches between administrations. The U.S. lacks a comprehensive AI Act; instead, its strategy revolves around fragmented policies aimed at fostering innovation and managing risks. Key milestones include:
The National Artificial Intelligence Initiative Act of 2020 (NAII)
Signed during President Trump’s first term, this act promotes and subsidizes AI innovation efforts across key federal agencies.
Bipartisan House Task Force Report on AI (2024)
Published in December 2024, the ‘Bipartisan House Task Force Report on AI’ articulates guiding principles, key findings, and recommendations to guide future actions Congress can take to address advancements in artificial intelligence.
President Joe Biden’s Executive Orders (2023-2025)
Former President Biden’s policies included an Executive Order on “Safe, Secure, and Trustworthy Development and Use of AI” and the AI Bill of Rights. These sought to regulate AI risks while encouraging ethical use. When President Trump took office in January 2025, many of these efforts were revoked. However, not all AI efforts undertaken by the Biden administration were rolled back; exceptions include Executive Order 14141 (the Biden 2025 AI Infrastructure EO) and Executive Order 14144 (the Biden 2025 Cybersecurity EO).
President Donald Trump’s New Executive Order (2025)
On January 23, 2025, President Trump signed a new Executive Order, titled “Removing Barriers to American Leadership in Artificial Intelligence.” This policy focuses on revoking directives perceived as restrictive to AI innovation, paving the way for “unbiased and agenda-free” development of AI systems.
State-level legislation
Given the decentralized nature of U.S. governance, much of the actionable AI legislation is emerging at the state level. States like Colorado, Illinois, and California are shaping the legislative framework for AI compliance in business. Recent examples include:
The Colorado AI Act of 2024
Drawing inspiration from the EU AI Act, this legislation uses a risk-based approach to regulate the deployment of high-risk AI systems. It emphasizes transparency, risk mitigation, and proper documentation of AI system development.
Illinois Supreme Court’s AI Policy (2025)
Effective January 1, 2025, this policy focuses on integrating AI responsibly into judicial systems. It provides guidelines on ethical use, accountability, and safeguarding judicial integrity, positioning Illinois as a leader in judicial AI governance.
These efforts highlight how states are stepping in to provide clarity where federal regulation lags. For business leaders, understanding state-level requirements is crucial to ensuring compliance and mitigating potential risks.

When technology meets legislation
Artificial Intelligence was once the stuff of science fiction, but at the time of writing, it is the fastest-adopted business technology in history. Today, a quarter of all US businesses are integrating AI into their operations, and a further 43% are considering AI implementation soon.
While the potential benefits of AI for business, society, healthcare, transport, and culture are significant, these advantages are accompanied by real risks, including security breaches, misinformation, and flawed decision-making processes.
As with any emerging technology, AI’s rapid evolution has outpaced regulatory frameworks. However, the European Union’s AI Act, which entered into force on August 1, 2024 as the world’s first comprehensive AI law, marks a turning point in the legal landscape.
While the EU leads in AI regulation, other countries are also developing their own frameworks.
As individual regions like the EU continue to develop their own frameworks for AI legislation, multilateral coordination is also on the rise. For example, the ISO has published a number of standards to benefit businesses adopting AI, whilst the Organization for Economic Co-operation and Development released a similar series of AI principles.
The number of discussions about AI taking place within the United Nations and G7 is increasing, too, with emphasis on balancing AI’s potential risks against the many benefits it offers.
This article serves as a general and up-to-date overview of AI legislation in the US, so that business leaders in America and beyond can better prepare for compliance while embedding safer, more secure, and more trustworthy AI use into their operations.
American AI legislation and the complexity of federalism
In the United States, the complexity of federalism has made it challenging to implement a unified AI policy. Currently, there is no overarching AI Act. The closest initiative is the National Artificial Intelligence Initiative Act of 2020 (NAII), introduced during President Trump’s first term.
AI regulation in the U.S. consists of various state and federal bills, often addressing only specific aspects, such as the California Consumer Privacy Act, which governs AI in automated decision-making. In other words, America’s AI policy is more akin to a jigsaw puzzle of individual approaches and narrow legislation than it is a centralized strategy. Until a comprehensive AI Act is passed in the US, businesses operating in or with the country will need to be extra vigilant regarding compliance.
American AI legislation and the complexity of the Two-Party system
Up until very recently, there was also the Executive Order (EO) on the ‘Safe, Secure, and Trustworthy Development and Use of AI,’ issued on October 30, 2023, by former President Biden along with the AI Bill of Rights.
Biden’s EO directed agencies to implement new guidelines, rules, and policies, appoint AI officers, participate in international collaborations, and, in some cases, advance regulatory proposals.
However, as of January 20, 2025, these efforts have been revoked, as promised by President Donald Trump during his election campaign.
At the Republican National Convention in July 2024, Trump stated:
“We will repeal Joe Biden’s dangerous Executive Order that hinders AI innovation and imposes radical left-wing ideas on the development of this technology. In its place, Republicans support AI development rooted in free speech and human flourishing.”
However, it is important to note that not all Biden administration AI efforts were rolled back: Executive Order 14141 (the Biden 2025 AI Infrastructure EO) and Executive Order 14144 (the Biden 2025 Cybersecurity EO), for example, remain intact to date, as the Trump administration has not revoked them.
At the same time, Trump and Vice President JD Vance have acknowledged that AI is “very dangerous” and noted the vast resources required to develop and operate it, suggesting this administration may address the growing risks associated with AI applications.
And with the January 23, 2025 Executive Order signed by President Trump, titled “Removing Barriers to American Leadership in Artificial Intelligence,” it is now the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security, and to revoke certain pre-existing AI policies that hinder American AI innovation.
While policy developments continue to evolve, the Trump administration’s focus on technological leadership and reduced regulatory oversight is a significant shift from past approaches taken by the former Biden administration.
Either way, from what we can tell, the Trump administration has AI on the agenda, for example by prioritizing AI infrastructure investments such as the recent Stargate Project with OpenAI. In addition, according to Reuters, the Trump administration is expected to pursue military AI development, initiating a “Manhattan Project-style” effort for AI in defense.
But only time will tell how AI legislation will develop further under the current administration led by Donald Trump.
Let’s look at the current AI legislative landscape.
A 2025 overview of US legislative AI measures and principles
By studying the following key legislative measures and principles, organizations can better ensure their AI systems are safe, fair, and compliant with emerging US regulations.
National Artificial Intelligence Initiative Act of 2020 (NAII)
The National Artificial Intelligence Initiative Act of 2020, introduced under the Trump administration, was one of the first major national efforts specifically targeting artificial intelligence. However, its primary focus is less on regulating AI and more on fostering research and development in the field. The Act aims to solidify the United States’ position as a global leader in AI innovation.
Purpose of the NAII Act
The primary purpose of the 2020 act is to guide AI research, development, and evaluation at various federal science agencies, to drive American R&D into AI technology, and to champion AI use in government. The Trump-era Act advocated for “a more hands-off, free market–oriented political philosophy and the perceived geopolitical imperative of ‘leading’ in AI.”
Impact of the NAII Act
The National AI Initiative’s central impact on business in the US has been the coordination of AI activities across different federal agencies. Below we list the main agencies affected and their directives, with emphasis on those affecting business across the country.
- The National Science and Technology Council is to establish an Interagency Committee to coordinate federal programs and activities in support of the initiative.
- The Department of Energy (DOE) is to establish the National Artificial Intelligence Advisory Committee to advise the President and the Initiative Office on matters related to the initiative.
- The DOE must also carry out an artificial intelligence research and development program to:
  - (1) advance artificial intelligence tools, systems, capabilities, and workforce needs; and
  - (2) improve the reliability of artificial intelligence methods and solutions relevant to DOE’s mission.
- The National Science Foundation (NSF) is to enter into a contract with the National Research Council of the National Academies of Sciences, Engineering, and Medicine to conduct a study of the current and future impact of artificial intelligence on the workforce of the United States.
- The National Institute of Standards and Technology (NIST) is to develop voluntary standards for artificial intelligence systems, among other things.
  - Crucially, the goal of these standards is not, as is the case in the EU, to make AI technology safer, more secure, and more trustworthy, but instead “to advance US AI leadership.”
- The NSF is also directed to fund research and education activities in artificial intelligence systems and related fields.
- Finally, the National Artificial Intelligence Initiative Act of 2020 is to provide regulatory guidance on AI which “reflects American values.”
Executive Order: Removing Barriers to American Leadership in Artificial Intelligence
The recent executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” signed by President Trump on January 23, 2025, aims to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.
Purpose
This order revokes certain AI policies and directives that act as barriers to American AI innovation, essentially clearing a path for the United States to act decisively to retain global leadership in artificial intelligence. In order to maintain this leadership, AI systems that are developed must be “free from ideological bias or engineered social agendas.”
Key aspects
- Developing an Artificial Intelligence Action Plan: Within 180 days of this order, key advisors on science, technology, AI, crypto, national security, economic policy, and domestic policy, along with relevant government officials, must create and submit a plan to the President to carry out the policy in section 2 of this order.
- Implementation of order revocation: Key officials, including the Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), must immediately review all actions taken under the revoked Executive Order 14110 on AI. They will identify any actions that conflict with the new policy and work with relevant agencies to suspend, revise, or rescind them as necessary. If changes cannot be made immediately, exemptions will be applied until revisions are finalized.
Bipartisan House Task Force Report on Artificial Intelligence
On December 17, 2024, the U.S. House of Representatives published its Bipartisan House Task Force Report on AI. This report serves as a guide for Congress on how to handle advances in artificial intelligence in the future. It articulates guiding principles, 66 key findings, and 89 recommendations, organized into 15 chapters.
At the time of writing, this initiative appears to remain active under the current Trump administration.
We’ve created a summarized overview of some of the key principles from the report. However, it should be noted that, according to the U.S. House of Representatives:
Note: This report is certainly not the final word on AI issues for Congress. Instead, it should be viewed as a tool for identifying and evaluating AI policy proposals.
Identify AI issue novelty
Policymakers can avoid duplicative mandates by examining whether problems caused by AI are actually new or whether they resemble existing issues that already have laws addressing them.
Promote AI innovation
To keep leading and fully benefit from AI, policymakers should encourage innovation in this field.
Protect against AI risks and harm
Americans must be protected against both unintentional harms and malicious uses of AI. Meaningful governance of AI will require combining technical fixes and policy efforts to understand and reduce the risks tied to AI. A thoughtful, risk-based approach to AI governance can promote innovation rather than stifle it. And while AI creates challenges, it can also be used to help solve those problems, so policymakers should consider how AI technology can assist in addressing issues as they create regulations.
Government leadership in responsible use
Trust is essential for people to feel comfortable using AI in businesses and everyday life. The federal government can build that trust by creating responsible guidelines and policies that maximize AI benefits while minimizing risks, and by setting a good example.
Support sector-specific policies
To create effective AI policies, federal agencies with sector-specific expertise (and other parts of government) should use their existing authority to respond to AI use within their individual domains of expertise and the context of the AI’s use. This allows for better communication and action between government bodies and those using AI.
Take an incremental approach
AI is changing quickly, and it’s unrealistic to have Congress create a one-time policy that will cover everything about AI. Developing a good policy framework requires careful planning and should evolve as AI progresses.
Keep humans at the center of AI policy
AI systems are influenced by the values of those who create them, and they need human guidance for training. The U.S. must also invest in attracting and training talent to stay competitive in AI. As AI automates tasks, it will affect jobs, so as laws and regulations are made, it’s important to consider how they impact people and their freedoms.
State-level legislation
We have just examined the three most significant nationwide legislation-related efforts concerning artificial intelligence in the U.S. and found that the first (the NAII Act) focuses primarily on innovation rather than regulation, the second (the new Executive Order) revokes prior policy rather than creating new rules, and the third (the Task Force report) is a non-binding guide for Congress.
Given America’s unique political landscape and the historical reluctance of the White House to impose heavily on state autonomy, it is at the state level that AI legislation may offer business leaders a clearer vision of what a future U.S. AI Act could entail.
Several states, led by Colorado, Maryland, and California, have already passed AI-related laws to further regulate AI use. Recently, there has also been a judicial development related to AI: the Illinois Supreme Court announced its policy on artificial intelligence, effective as of January 1, 2025.
Let’s take a look at some comprehensive state-level legislation below.

The Colorado AI Act
The Colorado AI Act has arguably established the foundational framework for a comprehensive US AI Act, borrowing several elements from the EU AI Act.
Principles of the Colorado AI Act
As the first comprehensive AI legislation in the US, the Colorado AI Act adopts a risk-based approach to AI, similar to the EU’s recent AI Act, primarily targeting the developers and deployers of high-risk AI systems.
Business leaders in IT and other sectors can prepare for compliance with the Colorado AI Act, and other acts that may follow in its footsteps, by doing the following (a code sketch of the documentation duties appears after the list):
- Ensuring proper transparency by making disclosures to consumers interacting with high-risk AI systems.
- Developers of high-risk AI systems should:
  - Establish documentation around the purpose, intended uses, benefits, and limitations of each system, including:
    - High-level summaries of the data used to train each system, a record of the data governance measures used to ensure that this data was suitable, and proof that the developer has mitigated biases.
    - Documentation covering the purpose of each AI system, including information on their benefits, current and foreseeable intended applications, outputs and limitations, and the potential risks associated with inappropriate use of the system.
  - Establish procedures to mitigate any identified risks.
  - Provide instructions to deployers of high-risk AI systems on how they should be used and monitored.
  - Implement risk management policies and procedures which incorporate standards from industry guidance, such as the NIST AI Risk Management Framework or relevant ISO standards.
- Developing AI impact assessments to identify and disclose known and foreseeable risks inherent in different AI systems.
- Implementing key compliance indicators to show that reasonable care was used to mitigate algorithmic discrimination when developing and deploying high-risk AI systems.
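To make those documentation duties more concrete, the sketch below shows one way a developer might keep a machine-readable record of a high-risk system. This is a minimal illustration only: the class and field names are our own assumptions about how to organize the Act’s documentation themes, not statutory terms, and the annual-review check reflects the Act’s periodic impact assessment idea rather than exact statutory deadlines.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingDataSummary:
    """High-level summary of training data and its governance (illustrative)."""
    sources: list[str]              # where the training data came from
    governance_measures: list[str]  # checks that the data was suitable
    bias_mitigations: list[str]     # evidence that known biases were addressed

@dataclass
class HighRiskSystemRecord:
    """Hypothetical documentation record for one high-risk AI system.

    Field names are illustrative groupings of the Colorado AI Act's
    documentation themes, not terms defined by the statute.
    """
    system_name: str
    purpose: str
    intended_uses: list[str]
    known_limitations: list[str]
    foreseeable_misuse_risks: list[str]
    training_data: TrainingDataSummary
    deployer_instructions: str   # how deployers should use and monitor the system
    risk_framework: str          # e.g. "NIST AI RMF 1.0" or a relevant ISO standard
    last_impact_assessment: date

    def assessment_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag records whose impact assessment looks stale (illustrative cadence)."""
        return (today - self.last_impact_assessment).days > max_age_days

# Example: a record for a hypothetical hiring tool
record = HighRiskSystemRecord(
    system_name="resume-screener-v2",
    purpose="Rank job applications for human review",
    intended_uses=["pre-screening of incoming applications"],
    known_limitations=["not validated for non-English resumes"],
    foreseeable_misuse_risks=["fully automated rejection without human review"],
    training_data=TrainingDataSummary(
        sources=["historical hiring data, 2018-2023"],
        governance_measures=["provenance audit", "removal of protected attributes"],
        bias_mitigations=["reweighting of underrepresented groups"],
    ),
    deployer_instructions="Use only as a ranking aid; a human reviews every rejection.",
    risk_framework="NIST AI RMF 1.0",
    last_impact_assessment=date(2025, 1, 15),
)
print(record.assessment_overdue(today=date(2025, 7, 1)))  # False: reviewed recently
```

Keeping such records in version control alongside the system itself makes it easier to show, on request, that reasonable care was exercised.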
Illinois Supreme Court Policy on Artificial Intelligence
The Illinois Supreme Court, recognizing the rapid advancements in Generative AI (Gen-AI) technologies capable of creating human-like text, images, videos, audio, and other content, has published its artificial intelligence policy, effective January 1, 2025.
The policy outlines certain key guidelines designed to address the integration of artificial intelligence into judicial and legal systems to ensure responsible and effective use while safeguarding the integrity of court processes.
Overall, the document advocates for responsible AI integration in the legal system while prioritizing ethics and trust. Guidelines in the policy relate to topics such as ethical standards, authorized use in judicial and legal proceedings, accountability and professional conduct, education and adaptation, and judicial responsibility.
American AI Legislation in a global context
As one of the wealthiest and most powerful nations in the world, the U.S. is understandably expected to align its AI legislation with the approaches taken by other leading entities. Allowing the technology to go unregulated could expose businesses across the nation to significant risks.
And yet, so far, Congress’s approach to AI legislation has been to avoid straying into the territory of regulating AI in business (i.e., the private sector) and instead to champion America’s status as a leader in AI R&D and governmental AI deployment. In part, this is due to the conflicting approaches to AI law taken by the Biden and Trump administrations.
But as AI use in business inevitably continues to expand, and the associated risks begin to weigh on business leadership, the challenge for US governance will be to develop a comprehensive, nationwide AI Act.
More challenging still is to develop an Act which clearly defines AI, measures and categorizes its risks, accounts for its application across all sectors, and establishes clear strategies for risk mitigation whilst preserving AI’s benefits, all while gathering bipartisan support so that the Act might pass Congress, at a time when the future of US politics is deeply uncertain.
On the global front, the Trump administration is also expected to take a more assertive approach to AI policy, influenced by current geopolitical pressures.
US AI legislation and its implications for business
Though legislation on AI in the US is piecemeal, businesses must still take note of the current, emerging, and potential future regulations discussed in this article.
According to the National Law Review (NLR), “Businesses should stay informed of policy developments while maintaining robust AI governance and compliance frameworks that can adapt to changing federal priorities while ensuring compliance with any applicable legal and regulatory obligations and standards.”
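As one illustration of what such an adaptable governance framework could look like in practice, here is a minimal sketch of a jurisdiction-aware compliance register. Everything in it, the class names, fields, and the sample entry’s review date, is a hypothetical illustration rather than an established tool or legal template:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Obligation:
    """One duty a regulation imposes on the business (all fields illustrative)."""
    regulation: str    # e.g. "Colorado AI Act"
    jurisdiction: str  # e.g. "Colorado", "Federal"
    requirement: str   # plain-language summary of the duty
    owner: str         # team accountable for meeting it
    next_review: date  # when this entry must be re-checked against current law

class ComplianceRegister:
    """Tiny register that surfaces obligations due for re-review,
    so the framework adapts as federal and state priorities shift."""

    def __init__(self) -> None:
        self._items: list[Obligation] = []

    def add(self, item: Obligation) -> None:
        self._items.append(item)

    def due_for_review(self, today: date) -> list[Obligation]:
        return [o for o in self._items if o.next_review <= today]

# Example usage with a single, hypothetical entry
register = ComplianceRegister()
register.add(Obligation(
    regulation="Colorado AI Act",
    jurisdiction="Colorado",
    requirement="Disclose high-risk AI use to consumers and run impact assessments",
    owner="AI governance team",
    next_review=date(2025, 6, 1),
))
for item in register.due_for_review(today=date(2025, 7, 1)):
    print(f"Review due: {item.regulation} ({item.jurisdiction}): {item.requirement}")
```

A register like this is deliberately simple; its value is that each legislative change becomes a dated review task with a named owner, rather than an open-ended monitoring duty.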
Moreover, the penalties for AI non-compliance in the US can be hefty.
AI non-compliance penalties in the US
At present, whilst there is no comprehensive federal Act governing AI use and risk mitigation, there is still a range of laws regulating AI whose breach can result in severe financial penalties, as the following examples show:
- In 2022, the USA’s Equal Employment Opportunity Commission (EEOC) charged China-based iTutor Group over automated age discrimination in its AI hiring software; the company paid USD 365,000 over the AI bias charge.
- Also in 2022, California-based FinTech firm Hello Digit was fined $2.7 million by the US Consumer Financial Protection Bureau (CFPB) for a faulty AI algorithm in its app, which left users paying unnecessary overdraft fees; the company was penalized on grounds of AI efficacy.
- In 2023, two US lawyers were fined $5,000 for submitting court citations that had been falsely generated by ChatGPT, breaching professional duties of transparency and accuracy.

Conclusion
AI legislation in the US differs significantly from that in other parts of the world. So far, the focus has primarily been on innovation, government AI use, and reinforcing “traditional American values.”
For organizations operating within (and with) the US, navigating the myriad of AI bills, acts, and proposals can be challenging. Nevertheless, it’s essential to keep an eye on what is needed for US AI compliance, especially during a period with so much change.
Learn more about AI use and regulation in business by exploring the Software Improvement Group blog.
Are you ready for the complexities of US AI legislation? Our AI readiness guide, authored by Rob van der Veer, simplifies compliance with evolving regulations like the Colorado AI Act. With 19 steps covering governance, security, and IT, it helps organizations minimize risk and harness AI’s full potential while staying ahead of regulatory changes.
Don’t wait for AI regulations to catch up. Download our AI readiness guide now to navigate U.S. legislation with confidence.