11 October 2024
AI Legislation in the US: A 2024 Overview
7 min read
Written by: Software Improvement Group
Summary
In tandem with the evolution of AI legislation elsewhere in the world, the USA has been introducing a variety of AI bills, acts, and guiding principles over the past few years, at both the federal and state levels.
The National Artificial Intelligence Initiative Act of 2020 promotes and subsidizes AI innovation efforts across key federal agencies.
The Blueprint for an AI Bill of Rights, introduced in 2022, introduces a set of guiding, non-binding principles for the safe and secure development of AI.
State AI legislation like the Colorado AI Act of 2024 takes inspiration from earlier US AI legislation and the EU AI Act, drafting a more comprehensive set of legally binding laws governing safe, secure, and transparent AI implementation in business.
Compliance with these pieces of US AI legislation will be key to avoiding financial penalties, maximizing the benefits of AI in business, minimizing its risks, and ensuring the proper safe, secure, and trustworthy development and deployment of new AI systems.
When technology meets legislation
Artificial Intelligence was once the stuff of science fiction, but at the time of writing, it is the fastest-adopted business technology in history. Today, a quarter of all US businesses are integrating AI into their operations, with a further 43% considering AI implementation soon.
While the potential benefits of AI for business, society, healthcare, transport, and culture are significant, these advantages are overshadowed by real risks, including security breaches, misinformation, and flawed decision-making processes.
As with any emerging technology, AI’s rapid evolution has outpaced regulatory frameworks. However, the introduction of the European Union’s AI Act on August 1, 2024—the world’s first comprehensive AI law—marks a turning point in the legal landscape. While the EU leads in AI regulation, other countries are also developing their own frameworks.
As individual regions like the EU continue to develop their own frameworks for AI legislation, multilateral coordination is also on the rise. For example, the ISO has published a number of standards to benefit businesses adopting AI, whilst the Organization for Economic Co-operation and Development released a similar series of AI principles.
The number of discussions about AI taking place within the United Nations and G7 is increasing, too, with emphasis on balancing AI’s potential risks against the many benefits it offers.
This article serves as a general, up-to-date overview of AI legislation in the US, so that business leaders in America and beyond can better prepare for compliance whilst baking safer, more secure, and more trustworthy AI use into their operations.
American AI legislation and the complexity of federalism
In the United States, the complexity of federalism has made it challenging to implement a unified AI policy. Currently, there is no overarching AI Act. The closest initiative is President Joe Biden’s executive order (EO) on the ‘Safe, Secure, and Trustworthy Development and Use of AI,’ issued on October 30, 2023.
AI regulation in the U.S. consists of various state and federal bills, often addressing only specific aspects, such as the California Consumer Privacy Act, which governs AI in automated decision-making. In other words, America’s AI policy is more akin to a jigsaw puzzle of individual approaches and narrow legislation than it is a centralized strategy. Until a comprehensive AI Act is passed in the US, businesses operating in or with the country will need to be extra vigilant regarding compliance.
An overview of US legislative AI measures and principles
By studying the following key legislative measures and principles, organizations can better ensure their AI systems are safe, fair, and compliant with emerging US regulations.
National Artificial Intelligence Initiative Act of 2020 (NAII)
The National Artificial Intelligence Initiative Act of 2020, introduced under the Trump administration, was one of the first major national efforts specifically targeting artificial intelligence. However, its primary focus is less on regulating AI and more on fostering research and development in the field. The Act aims to solidify the United States’ position as a global leader in AI innovation.
Purpose of the NAII Act
The primary purpose of the 2020 act is to guide AI research, development, and evaluation at various federal science agencies, to drive American R&D into AI technology, and to champion AI use in government. The Trump-era Act advocated for “a more hands-off, free market–oriented political philosophy and the perceived geopolitical imperative of ‘leading’ in AI.”
Impact of the NAII Act
The NAII Act’s central impact on business in the US has been the coordination of AI activities across different federal agencies. Below we list the main agencies affected and their directives, with emphasis on those affecting business across the country.
- The National Science and Technology Council is to establish an Interagency Committee to coordinate federal programs and activities in support of the initiative.
- The Department of Commerce is to establish the National Artificial Intelligence Advisory Committee to advise the President and the Initiative Office on matters related to the initiative.
- The Department of Energy (DOE) must carry out an artificial intelligence research and development program to
- (1) advance artificial intelligence tools, systems, capabilities, and workforce needs; and
- (2) improve the reliability of artificial intelligence methods and solutions relevant to DOE’s mission.
- The National Science Foundation (NSF) is to enter a contract with the National Research Council of the National Academies of Sciences, Engineering, and Medicine to conduct a study of the current and future impact of artificial intelligence on the workforce of the United States.
- The National Institute of Standards and Technology is to develop voluntary standards for artificial intelligence systems, among other things.
- Crucially, the goal of these standards is not—as is the case in the EU—to make AI technology safer, more secure, and more trustworthy, but instead “to advance US AI leadership.”
- The NSF is also ordered to fund research and education activities in artificial intelligence systems and related fields.
- And finally, the Act directs the provision of regulatory guidance on AI which “reflects American values.”
Blueprint for an AI Bill of Rights
Building on the Trump-era initiatives, the Biden administration introduced the Blueprint for an AI Bill of Rights in October 2022. This proposal sought to establish core principles to guide how federal agencies and other entities could approach AI.
A legal disclaimer at the top of the document states that it is not legally binding. Instead, it serves as a voluntary framework that agencies, independent organizations, and businesses can choose to follow. In essence, the Blueprint is not official U.S. policy but, as its name suggests, a forward-looking guide for the future of AI.
Principles of the Blueprint for an AI Bill of Rights
The driving mantra of the AI Bill of Rights Blueprint is to make “automated systems work for the American people.” The Blueprint seeks to achieve this by establishing five key principles for a future, legally binding AI Act.
- Safe and effective systems
- Ensure AI systems are tested and monitored pre-deployment and throughout deployment.
- Ensure they are developed in consultation with diverse communities.
- Ensure they are not developed with the intent, or potential, to inflict harm on users.
- Ensure they protect users against inappropriate or irrelevant data use.
- Protection against algorithmic discrimination
- Prevent unjustified discrimination of users by algorithms.
- Designers, developers and deployers of AI systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination.
- Ensure accessibility to algorithms and AI systems for people with disabilities.
- Encourage independent evaluation and regular reporting on algorithmic impact.
- Protection against abusive data practices
- Give users agency over how their data is used.
- Protect users from abusive data practices with in-built protections.
- Note: This is akin to the ‘privacy by design’ principle of ISO 31700.
- User permission should be sought and respected regarding collection, use, access, transfer, and deletion of user data.
- Data consent requests should be brief and written in clear, plain, understandable language.
- Continuous surveillance and monitoring should not be used in education, work, housing, or other similar contexts.
- Transparency
- Inform users about AI use and its impact; i.e., clearly communicating to people when AI is in operation and how and why it impacts them.
- Again, the language used in communications must be clear, simple, and straightforward.
- Users should be informed of how and why an outcome impacting them was decided upon or determined by AI.
- Opt-out and human alternatives, consideration, and fallback
- Allow users to opt out of AI use.
- Provide users with easy, fast access to human support to help them resolve any issues they encounter when using AI technology.
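As a purely illustrative sketch (none of this code comes from the Blueprint itself, and all names and logic are hypothetical), the transparency and opt-out principles above might translate into a pattern where every automated decision carries a plain-language disclosure and users who opt out are routed to a human reviewer:

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    """A decision produced by an automated system, packaged with the
    disclosures the Blueprint's transparency principle calls for."""
    outcome: str                 # the decision itself
    plain_language_reason: str   # why this outcome was reached, in plain terms
    ai_was_used: bool = True     # disclosed to the user up front

def decide_with_fallback(user_opted_out: bool) -> AIDecision:
    """Route to a human reviewer when the user opts out of AI processing."""
    if user_opted_out:
        return AIDecision(
            outcome="pending human review",
            plain_language_reason="You opted out of automated processing, "
                                  "so a person will review your case.",
            ai_was_used=False,
        )
    # Hypothetical model call; a real system would invoke its own classifier here.
    return AIDecision(
        outcome="approved",
        plain_language_reason="Your application met the published criteria.",
    )
```

The design choice here is that the disclosure travels with the decision object itself, so no downstream consumer can present an AI-derived outcome without also having the explanation and the AI-use flag at hand.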
State-level legislation
We have just examined the two most significant nationwide legislation-related efforts concerning artificial intelligence in the U.S. and found that while one is not legally binding, the other primarily focuses on innovation rather than regulation.
Given America’s unique political landscape and the historical reluctance of the White House to impose heavily on state autonomy, it is at the state level that AI legislation may offer business leaders a clearer vision of what a future U.S. AI Act could entail.
Several states, led by Colorado, Maryland, and California, have already passed AI-related laws to further regulate AI use. We’ll look specifically at Colorado’s state legislation, as it is the most comprehensive.
The Colorado AI Act
The Colorado AI Act has arguably established the foundational framework for a comprehensive US AI Act, borrowing several elements from the EU AI Act.
Principles of the Colorado AI Act
As the first comprehensive AI legislation in the US, the Colorado AI Act adopts a risk-based approach to AI—similar to the EU’s recent AI Act—primarily targeting the developers and deployers of high-risk AI systems.
Business leaders in IT and other sectors can prepare for compliance with the Colorado AI Act and other acts which may follow in its footsteps by:
- Ensuring proper transparency by making disclosures to consumers interacting with high-risk AI systems.
- Developers of high-risk AI systems should:
- Establish documentation around the purpose, intended uses, benefits, and limitations of each system, including:
- High-level summaries of the data used to train each system, a record of the data governance measures used to ensure that this data was suitable, and proof that the developer has mitigated biases.
- Documentation covering the purpose of each AI system, including information on their benefits, current and foreseeable intended applications, outputs and limitations, and the potential risks associated with inappropriate use of the system.
- Establish procedures to mitigate against any identified risks.
- Provide instructions to deployers of high-risk AI systems on how they should be used and monitored.
- Implement risk management policies and procedures which incorporate standards from industry guidance, such as the NIST AI Risk Management Framework or relevant ISO standards.
- Developing AI impact assessments to identify and disclose known and foreseeable risks inherent in different AI systems.
- Implementing key compliance indicators to show that reasonable care was used to mitigate algorithmic discrimination when developing and deploying high-risk AI systems.
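The documentation duties above lend themselves to a simple internal checklist. The following sketch is hypothetical (the field names are illustrative, not taken from the statute) and only shows how a developer of a high-risk system might track which required documentation sections remain unwritten:

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskSystemDossier:
    """Hypothetical record of the documentation the Colorado AI Act
    expects from developers of high-risk AI systems.
    Field names are illustrative, not statutory terms."""
    purpose: str = ""
    intended_uses: list = field(default_factory=list)
    training_data_summary: str = ""
    known_limitations: list = field(default_factory=list)
    bias_mitigations: list = field(default_factory=list)
    deployer_instructions: str = ""

    def missing_items(self) -> list:
        """Return the names of documentation sections still left empty."""
        return [name for name, value in vars(self).items() if not value]
```

For example, `HighRiskSystemDossier(purpose="credit scoring").missing_items()` would list every section other than `purpose`, giving compliance teams a quick view of outstanding gaps before a system ships.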
American AI Legislation in a Global Context
As one of the wealthiest and most powerful nations in the world, the U.S. is understandably expected to align its AI legislation with the approaches taken by other leading entities. Allowing the technology to go unregulated could expose businesses across the nation to significant risks.
And yet, so far, Congress’s approach to AI legislation has been to avoid straying into the territory of regulating AI in business—i.e., the private sector—and instead to champion America’s status as a leader in AI R&D and governmental AI deployment. In part, this is due to the conflicting approaches to AI law taken by the Biden and Trump administrations.
But as AI use in business inevitably continues to expand and its associated risks begin to weigh on business leadership, the challenge for US governance will be to develop a comprehensive, nationwide AI Act.
More challenging still will be crafting an Act that clearly defines AI, measures and categorizes its risks, accounts for its application across all sectors, and establishes clear strategies for risk mitigation whilst preserving AI’s benefits, all while gathering bipartisan support so that the Act might pass Congress at a time when the future of US politics is deeply uncertain.
US AI Legislation and its Implications for Business
Though legislation on AI in the US is piecemeal, businesses must still take note of the current, emerging, and potential future regulations discussed in this article.
The careful regulation, measurement, assessment, and risk mitigation of AI—as promoted by the USA’s mosaic of AI related bills, acts, and principles—can actively help business leaders to develop and deploy AI safely, securely, and in a manner which fosters trust among stakeholders.
Moreover, the penalties for AI non-compliance in the US can be hefty.
AI non-compliance penalties in the US
At present, whilst there is no comprehensive federal Act governing AI use and risk mitigation, there is still a range of laws regulating AI which, when breached, can result in severe financial penalties, as in the following examples:
- In 2022, the USA’s Equal Employment Opportunity Commission (EEOC) sued China-based iTutor Group over automated age discrimination in its AI hiring software; the company agreed to pay USD 365,000 to settle the AI bias charges.
- Also in 2022, California-based FinTech firm Hello Digit was fined $2.7 million by the US Consumer Financial Protection Bureau (CFPB) on grounds of AI efficacy, after a faulty algorithm in its app left users paying unnecessary overdraft fees.
- In 2023, two US lawyers were fined $5,000 by a federal judge for submitting fictitious case citations generated by ChatGPT, underscoring the transparency and efficacy concerns that AI regulation seeks to address.
Implications of US legislation on AI implementation in business
Current US legislation around AI emphasizes either AI as a tool for the country’s economic growth and continued innovation in the field, or AI risk mitigation.
The future of US AI legislation and the upcoming elections
The future of U.S. legislation concerning the integration of AI in business could depend on the outcome of the upcoming 2024 presidential election.
Painting with broad brush strokes, we could say that if Donald Trump is reelected, US AI legislation will likely remain less stringent than in other countries, prioritizing investment in AI research and development. Conversely, if Kamala Harris wins, she may seek to build upon the Biden administration’s focus on AI safety, security, and trustworthiness.
In July 2024, Trump said the following during the Republican National Convention:
“We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation and imposes Radical Leftwing ideas on the development of this technology, in its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”
However, Aaron Cooper, Senior Vice President of Global Policy for BSA The Software Alliance, highlights that there are many similarities between how the Trump and Biden administrations approached AI policy.
Voters haven’t yet heard much detail about how a Harris administration, or a second Trump administration, would change the development of AI legislation.
“What we’ll continue to see as the technology develops and as new issues arise, regardless of who’s in the White House, they’ll be looking at how we can unleash the most good from AI while reducing the most harm, that sounds obvious, but it’s not an easy calculation.” – Aaron Cooper, Senior Vice President of global policy for BSA The Software Alliance
Conclusion
AI legislation in the US differs significantly from that in other parts of the world. So far, the focus has primarily been on innovation, government AI use, and reinforcing “traditional American values.” However, the introduction of comprehensive state-level laws, such as the Colorado AI Act, is beginning to shift this landscape.
For organizations operating within (and with) the US, navigating the myriad of AI bills, acts, and proposals can be challenging. Nevertheless, it’s essential to keep an eye on what is needed for US AI compliance.
Not only will compliance help companies avoid the often-severe penalties for non-compliance, but it also helps maximize the benefits of AI use whilst minimizing its risks.
Readers are encouraged to explore different AI strategies, reassess their current stance on AI in the context of national and international compliance, and align themselves with current and future US AI regulations.
Learn more about AI use and regulation in business by exploring the Software Improvement Group blog.