Updated: 03-01-2025
Reading time: 6-7 minutes

AI Legislation in the US: A 2025 Overview

Software Improvement Group

Summary

In tandem with the evolution of AI legislation elsewhere in the world, the USA has been introducing a variety of AI bills, acts, and guiding principles over the past few years, both at the federal and state levels.

Federal

  • The National Artificial Intelligence Initiative Act of 2020 promotes and subsidizes AI innovation efforts across key federal agencies.
  • The Blueprint for an AI Bill of Rights, introduced in 2022, sets out non-binding guiding principles for the safe and secure development of AI.
  • Published in December 2024, the ‘Bipartisan House Task Force Report on AI’ articulates guiding principles, key findings, and recommendations to inform future congressional action on artificial intelligence.

State-level

  • State AI legislation like the Colorado AI Act of 2024 takes inspiration from earlier US AI legislation and the EU AI Act, drafting a more comprehensive set of legally binding laws governing safe, secure, and transparent AI implementation in business.
  • The Illinois Supreme Court published its AI policy, effective January 1, 2025, which outlines key guidelines for integrating artificial intelligence into judicial and legal systems, ensuring responsible and effective use while safeguarding the integrity of court processes.

Compliance with these pieces of US AI legislation will be key to avoiding financial penalties, maximizing the benefits of AI in business, minimizing its risks, and ensuring the proper safe, secure, and trustworthy development and deployment of new AI systems.

This is only the beginning, and the future is far from certain. With the re-election of Donald Trump, U.S. AI policy is expected to be reshaped.


When technology meets legislation

Artificial Intelligence was once the stuff of science fiction, but at the time of writing it is the fastest-adopted business technology in history. Today, a quarter of all US businesses are integrating AI into their operations, and a further 43% are considering AI implementation soon.

While the potential benefits of AI for business, society, healthcare, transport, and culture are significant, these advantages come with real risks, including security breaches, misinformation, and flawed decision-making processes.

As with any emerging technology, AI’s rapid evolution has outpaced regulatory frameworks. However, the entry into force of the European Union’s AI Act on August 1, 2024, the world’s first comprehensive AI law, marks a turning point in the legal landscape.

While the EU leads in AI regulation, other countries are also developing their own frameworks.

As individual regions like the EU continue to develop their own frameworks for AI legislation, multilateral coordination is also on the rise. For example, the ISO has published a number of standards to benefit businesses adopting AI, whilst the Organization for Economic Co-operation and Development (OECD) has released a similar series of AI principles.

The number of discussions about AI taking place within the United Nations and G7 is increasing, too, with emphasis on balancing AI’s potential risks against the many benefits it offers.

This article serves as a general, up-to-date overview of AI legislation in the US, so that business leaders in America and beyond can better prepare for compliance while embedding safer, more secure, and more trustworthy AI use into their operations.

American AI legislation and the complexity of federalism

In the United States, the complexity of federalism has made it challenging to implement a unified AI policy. Currently, there is no overarching AI Act. The closest initiative is President Joe Biden’s executive order (EO) on the ‘Safe, Secure, and Trustworthy Development and Use of AI,’ issued on October 30, 2023.

AI regulation in the U.S. consists of various state and federal bills, often addressing only specific aspects, such as the California Consumer Privacy Act, which governs AI in automated decision-making. In other words, America’s AI policy is more akin to a jigsaw puzzle of individual approaches and narrow legislation than it is a centralized strategy. Until a comprehensive AI Act is passed in the US, businesses operating in or with the country will need to be extra vigilant regarding compliance.

An overview of US legislative AI measures and principles

By studying the following key legislative measures and principles, organizations can better ensure their AI systems are safe, fair, and compliant with emerging US regulations.

National Artificial Intelligence Initiative Act of 2020 (NAII)

The National Artificial Intelligence Initiative Act of 2020, introduced under the Trump administration, was one of the first major national efforts specifically targeting artificial intelligence. However, its primary focus is less on regulating AI and more on fostering research and development in the field. The Act aims to solidify the United States’ position as a global leader in AI innovation.

Purpose of the NAII Act

The primary purpose of the 2020 act is to guide AI research, development, and evaluation at various federal science agencies, to drive American R&D into AI technology, and to champion AI use in government. The Trump-era Act advocated for “a more hands-off, free market–oriented political philosophy and the perceived geopolitical imperative of ‘leading’ in AI.”

Impact of the NAII Act

The American AI Initiative’s central impact on business in the US has been the coordination of AI activities across different federal agencies. Below, we list the main agencies affected and their directives, with emphasis on those affecting business across the country.

  • The National Science and Technology Council is to establish an Interagency Committee to coordinate federal programs and activities in support of the initiative.
  • The Department of Commerce is to establish the National Artificial Intelligence Advisory Committee to advise the President and the Initiative Office on matters related to the initiative.
  • The Department of Energy (DOE) is to carry out an artificial intelligence research and development program to:
    • (1) advance artificial intelligence tools, systems, capabilities, and workforce needs; and
    • (2) improve the reliability of artificial intelligence methods and solutions relevant to DOE’s mission.
  • The National Science Foundation (NSF) is to enter a contract with the National Research Council of the National Academies of Sciences, Engineering, and Medicine to conduct a study of the current and future impact of artificial intelligence on the workforce of the United States.
  • The National Institute of Standards and Technology is to develop voluntary standards for artificial intelligence systems, among other things.
    • Crucially, the goal of these standards is not (as is the case in the EU) to make AI technology safer, more secure, and more trustworthy, but instead “to advance US AI leadership.”
  • The NSF is also ordered to fund research and education activities in artificial intelligence systems and related fields.
  • And finally, the National Artificial Intelligence Initiative Act of 2020 is to provide regulatory guidance on AI which “reflects American values.”

Blueprint for an AI Bill of Rights

Building on the Trump-era initiatives, the Biden administration introduced the Blueprint for an AI Bill of Rights in October 2022. This proposal sought to establish core principles to guide how federal agencies and other entities could approach AI.

A legal disclaimer at the top of the document states that it is not legally binding. Instead, it serves as a voluntary framework that agencies, independent organizations, and businesses can choose to follow. In essence, the Blueprint is not official U.S. policy but, as its name suggests, a forward-looking guide for the future of AI.

Principles of the Blueprint for an AI Bill of Rights

The driving mantra of the AI Bill of Rights Blueprint is to make “automated systems work for the American people.” The Blueprint seeks to achieve this by establishing five key principles for a future, legally binding AI Act.

  • Safe and effective systems
    • Ensure AI systems are tested and monitored pre-deployment and throughout deployment.
    • Ensure they are developed in consultation with diverse communities.
    • Ensure they are not developed with the intent, or potential, to inflict harm on users.
    • Ensure they protect users against inappropriate or irrelevant data use.
  • Protection against algorithmic discrimination
    • Prevent unjustified discrimination of users by algorithms.
    • Designers, developers and deployers of AI systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination.
    • Ensure accessibility to algorithms and AI systems for people with disabilities.
    • Encourage independent evaluation and regular reporting on algorithmic impact.
  • Protection against abusive data practices
    • Give users agency over how their data is used.
    • Protect users from abusive data practices with in-built protections.
    • Note: This is akin to the ‘privacy by design’ principle of ISO/IEC 31700.
    • User permission should be sought and respected regarding collection, use, access, transfer, and deletion of user data.
    • Data consent requests should be brief and written in clear, plain, understandable language.
    • Continuous surveillance and monitoring should not be used in education, work, housing, or other similar contexts.
  • Transparency
    • Inform users about AI use and its impact; i.e., clearly communicate to people when AI is in operation and how and why it affects them (a minimal sketch of such a notice follows this list).
    • Again, the language used in communications must be clear, simple, and straightforward.
    • Users should be informed of how and why an outcome impacting them was decided upon or determined by AI.
  • Opt-out and human alternatives, consideration, and fallback
    • Allow users to opt out of AI use.
    • Provide users with quick, easy access to human support to help them resolve any issues they encounter when using AI technology.
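
To make the transparency, consent, and human-fallback principles above concrete, here is a minimal sketch, in Python, of how a deployer might render a plain-language AI use notice. The `AIUseNotice` structure and all of its field names are our own illustration, not a format the Blueprint prescribes:

```python
# Illustrative only: the Blueprint does not prescribe any notice format.
from dataclasses import dataclass


@dataclass
class AIUseNotice:
    """Plain-language notice shown to a person when an AI system is in use."""
    system_purpose: str        # what the AI does, in plain language
    impact_on_user: str        # how and why its output affects the person
    data_collected: list[str]  # categories of personal data involved
    opt_out_available: bool    # human alternatives, consideration, and fallback
    human_contact: str         # how to reach a human for support or review

    def render(self) -> str:
        """Build the notice in clear, plain, understandable language."""
        lines = [
            f"This service uses an automated system to {self.system_purpose}.",
            f"It affects you because {self.impact_on_user}.",
            "Data used: " + ", ".join(self.data_collected) + ".",
        ]
        if self.opt_out_available:
            lines.append("You may opt out and ask for a human decision instead.")
        lines.append(f"Questions or problems? Contact {self.human_contact}.")
        return "\n".join(lines)


# Hypothetical usage for an automated screening system.
notice = AIUseNotice(
    system_purpose="screen rental applications",
    impact_on_user="it recommends approval or denial of your application",
    data_collected=["credit history", "income"],
    opt_out_available=True,
    human_contact="support@example.com",
)
print(notice.render())
```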

Bipartisan House Task Force Report on Artificial Intelligence

On December 17, 2024, the U.S. House of Representatives published its Bipartisan House Task Force Report on AI. This report serves as a guide for Congress on how to handle future advances in artificial intelligence. It articulates guiding principles, 66 key findings, and 89 recommendations, organized into 15 chapters.

We’ve created a summarized overview of some of the key principles from the report. It should be noted, however, that according to the U.S. House of Representatives, the report is certainly not the final word on AI issues for Congress; instead, it should be viewed as a tool for identifying and evaluating AI policy proposals.

Identify AI issue novelty

Policymakers can avoid duplicative mandates by examining whether problems attributed to AI are genuinely new or whether they resemble existing issues already addressed by law.

Promote AI innovation

To keep the US leading in AI and to realize its full benefits, policymakers should encourage innovation in the field.

Protect against AI risks and harm

Americans must be protected against both unintentional harms and deliberately harmful uses of AI. Meaningful governance of AI will require combining technical fixes with policy efforts to understand and reduce the risks tied to AI. A thoughtful, risk-based approach to AI governance can promote innovation rather than stifle it. And while AI creates challenges, it can also help solve them: policymakers should consider how AI technology itself can assist in addressing issues as they draft regulations.

Government leadership in responsible use

Trust is essential for people to feel comfortable using AI in businesses and everyday life. The federal government can build that trust by creating responsible guidelines and policies that maximize AI benefits while minimizing risks, and by setting a good example.

Support sector-specific policies

To create effective AI policies, federal agencies with sector-specific expertise (and other parts of government) should use their existing authority to respond to AI use within their individual domains of expertise and the context of the AI’s use. This allows for better communication and action between government bodies and those using AI.

Take an incremental approach

AI is changing quickly, and it’s unrealistic to have Congress create a one-time policy that will cover everything about AI. Developing a good policy framework requires careful planning and should evolve as AI progresses.

Keep humans at the center of AI policy

AI systems are influenced by the values of those who create them, and they need human guidance for training. The U.S. must also invest in attracting and training talent to stay competitive in AI. As AI automates tasks, it will affect jobs, so as laws and regulations are made, it’s important to consider how they impact people and their freedoms.

State-level legislation

We have just examined the three most significant nationwide legislation-related efforts concerning artificial intelligence in the U.S. and found that the first primarily focuses on innovation rather than regulation, the second is not legally binding, and the third is intended to guide Congress.

Given America’s unique political landscape and the historical reluctance of the White House to impose heavily on state autonomy, it is at the state level that AI legislation may offer business leaders a clearer vision of what a future U.S. AI Act could entail.

Several states, led by Colorado, Maryland, and California, have already passed AI-related laws to further regulate AI use. More recently, there has also been a judicial development: the Illinois Supreme Court announced its policy on artificial intelligence, effective January 1, 2025.

Let’s take a look at some comprehensive state-level legislation below.


The Colorado AI Act

The Colorado AI Act has arguably established the foundational framework for a comprehensive US AI Act, borrowing several elements from the EU AI Act.

Principles of the Colorado AI Act

As the first comprehensive AI legislation in the US, the Colorado AI Act adopts a risk-based approach to AI, similar to the EU’s recent AI Act, primarily targeting the developers and deployers of high-risk AI systems.

Business leaders in IT and other sectors can prepare for compliance with the Colorado AI Act, and with acts that may follow in its footsteps, by taking the steps below (a minimal sketch of a developer-side documentation record follows the list):

  • Ensuring proper transparency by making disclosures to consumers interacting with high-risk AI systems.
  • Ensuring that developers of high-risk AI systems:
    • Establish documentation around the purpose, intended uses, benefits, and limitations of each system, including:
      • High-level summaries of the data used to train each system, a record of the data governance measures used to ensure that this data was suitable, and proof that the developer has mitigated biases.
      • Documentation covering the purpose of each AI system, including information on its benefits, current and foreseeable intended applications, outputs and limitations, and the potential risks associated with inappropriate use of the system.
    • Establish procedures to mitigate any identified risks.
    • Provide instructions to deployers of high-risk AI systems on how they should be used and monitored.
    • Implement risk management policies and procedures which incorporate standards from industry guidance, such as the NIST AI Risk Management Framework or relevant ISO standards.
  • Developing AI impact assessments to identify and disclose known and foreseeable risks inherent in different AI systems.
  • Implementing key compliance indicators to show that reasonable care was used to mitigate algorithmic discrimination when developing and deploying high-risk AI systems.
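
To make these duties concrete, here is a minimal sketch, in Python, of a developer-side documentation record for one high-risk system. The `HighRiskSystemDossier` structure and its field names are our own paraphrase of the documentation steps listed above, not a format the Colorado AI Act itself prescribes:

```python
# Illustrative only: the Colorado AI Act does not mandate any specific data format.
from dataclasses import dataclass


@dataclass
class HighRiskSystemDossier:
    """Documentation a developer maintains for one high-risk AI system."""
    purpose: str                         # what the system is for
    intended_uses: list[str]             # current and foreseeable applications
    benefits: list[str]                  # claimed benefits
    limitations: list[str]               # known limitations and misuse risks
    training_data_summary: str           # high-level summary of training data
    data_governance_measures: list[str]  # suitability checks, bias mitigation
    deployer_instructions: str           # how deployers should use and monitor it
    risk_management_framework: str       # e.g., "NIST AI RMF" or an ISO standard
    impact_assessment_done: bool         # known and foreseeable risks disclosed

    def missing_items(self) -> list[str]:
        """Name the documentation duties that are still unfilled."""
        return [name for name, value in vars(self).items()
                if value in ("", [], False, None)]


# Hypothetical usage: an incomplete dossier flags its gaps before release.
dossier = HighRiskSystemDossier(
    purpose="score loan applications",
    intended_uses=["consumer credit decisions"],
    benefits=["faster, more consistent decisions"],
    limitations=[],                       # not yet documented
    training_data_summary="anonymized 2015-2023 loan outcomes",
    data_governance_measures=["bias audit on protected attributes"],
    deployer_instructions="monitor approval-rate drift monthly",
    risk_management_framework="NIST AI RMF",
    impact_assessment_done=False,         # assessment still pending
)
print(dossier.missing_items())  # ['limitations', 'impact_assessment_done']
```

In practice, an organization would map a record like this onto the NIST AI Risk Management Framework or a relevant ISO standard rather than invent its own checklist from scratch.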

Illinois Supreme Court Policy on Artificial Intelligence

The Illinois Supreme Court, recognizing the rapid advancements in Generative AI (Gen-AI) technologies capable of creating human-like text, images, videos, audio, and other content, has published its artificial intelligence policy, effective January 1, 2025.

The policy outlines certain key guidelines designed to address the integration of artificial intelligence into judicial and legal systems to ensure responsible and effective use while safeguarding the integrity of court processes.

Overall, the document advocates for responsible AI integration in the legal system while prioritizing ethics and trust. Guidelines in the policy relate to topics such as ethical standards, authorized use in judicial and legal proceedings, accountability and professional conduct, education and adaptation, and judicial responsibility.

American AI legislation in a global context

As one of the wealthiest and most powerful nations in the world, the U.S. is understandably expected to align its AI legislation with the approaches taken by other leading entities. Allowing the technology to go unregulated could expose businesses across the nation to significant risks.

And yet, so far, Congress’s approach to AI legislation has avoided regulating AI in business (i.e., the private sector), instead championing America’s status as a leader in AI R&D and governmental AI deployment. In part, this stems from the conflicting approaches to AI law taken by the Biden and Trump administrations.

But as AI use in business inevitably continues to expand and the associated risks begin to weigh on business leadership, the challenge for US governance will be to develop a comprehensive, nationwide AI Act.

More challenging still will be developing an Act which clearly defines AI, measures and categorizes its risks, accounts for its application across all sectors, and establishes clear strategies for risk mitigation whilst preserving AI’s benefits, all while gathering bipartisan support so that the Act might pass Congress, at a time when the future of US politics is deeply uncertain.

US AI legislation and its implications for business

Though legislation on AI in the US is piecemeal, businesses must still take note of the current, emerging, and potential future regulations discussed in this article.

The careful regulation, measurement, assessment, and risk mitigation of AI, as promoted by the USA’s mosaic of AI-related bills, acts, and principles, can actively help business leaders develop and deploy AI safely, securely, and in a manner which fosters trust among stakeholders.

Moreover, the penalties for AI non-compliance in the US can be hefty.

AI non-compliance penalties in the US

At present, whilst there is no comprehensive federal Act governing AI use and risk mitigation, there is still a range of laws regulating AI whose violation can result in severe financial penalties; the California Consumer Privacy Act, discussed above, is one example.


Implications of US legislation on AI implementation in business

Current US legislation around AI emphasizes either AI as a tool for the country’s economic growth and continued innovation in the field, or AI risk mitigation.

The effects of Trump’s second term

Donald Trump’s second term is poised to significantly reshape U.S. AI policy.

During his campaign, Trump strongly indicated his intent to dismantle the AI framework established by President Biden, particularly the 2023 AI Executive Order.

The 2023 Executive Order had introduced voluntary guidelines for model transparency and safety measures, but it has faced criticism from Trump’s allies. They argue that its requirements place a burden on innovators and enforce what they perceive as politically biased oversight.

At the Republican National Convention in July 2024, Trump stated:

“We will repeal Joe Biden’s dangerous Executive Order that hinders AI innovation and imposes radical left-wing ideas on the development of this technology. In its place, Republicans support AI development rooted in free speech and human flourishing.”

At the same time, Trump (and JD Vance) acknowledged that AI is “very dangerous” and noted the vast resources required to develop and operate it, suggesting this administration may address the growing risks associated with AI applications.

The Trump administration is expected to pursue a “light-touch” regulatory approach, aiming to minimize federal intervention in AI and bolster research and development investment.

The Illinois Supreme Court’s AI policy is a recent example of this approach. Critics argue that the policy is too lenient: for example, it promotes AI use without adequate transparency and disregards key OECD guidelines on AI transparency and explainability.

Only time will tell how AI legislation will develop after Trump takes office on January 20, 2025.

Conclusion

AI legislation in the US differs significantly from that in other parts of the world. So far, the focus has primarily been on innovation, government AI use, and reinforcing “traditional American values.” However, the introduction of comprehensive state-level laws, such as the Colorado AI Act, is beginning to shift this landscape.

For organizations operating within (and with) the US, navigating the myriad AI bills, acts, and proposals can be challenging. Nevertheless, it’s essential to keep an eye on what is needed for US AI compliance.

Not only will compliance help companies avoid the often-severe penalties for non-compliance, but it also helps maximize the benefits of AI use whilst minimizing its risks.

Readers are encouraged to explore different AI strategies, reassess their current stance on AI in the context of national and international compliance, and align themselves with current and future US AI regulations.

Learn more about AI use and regulation in business by exploring the Software Improvement Group blog.

Are you ready for the complexities of US AI legislation? Our AI readiness guide, authored by Rob van der Veer, simplifies compliance with evolving regulations like the Colorado AI Act. With 19 steps covering governance, security, and IT, it helps organizations minimize risk and harness AI’s full potential while staying ahead of regulatory changes.

Don’t wait for AI regulations to catch up. Download our AI readiness guide now to navigate U.S. legislation with confidence.