
We Need a New Definition of AI


Senators Hawley (R-MO) and Blumenthal (D-CT) have recently been pushing proposed legislation to exclude generative artificial intelligence from section 230 liability protection, as part of wider efforts to regulate AI. The proposal has been criticized as a way to sneak in section 230 reform by attaching it to concerns about AI. Their definition of generative AI is written so broadly that it could strip protection from vast swaths of the internet, rather than being a carefully considered attempt to maximize the benefits of AI while safeguarding against its harms.

Still, their clumsy attempt to change the rules governing almost the entire internet highlights the difficulty of creating sensible regulation of new technologies.

Before the current AI moment, which started with improvements to generative AI and the launch of ChatGPT, Congress passed the National Artificial Intelligence Initiative Act of 2020. The Act instructed the President to create an initiative to promote the development and use of artificial intelligence. The Act defined “artificial intelligence” as:

… a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—

(A) perceive real and virtual environments;

(B) abstract such perceptions into models through analysis in an automated manner; and

(C) use model inference to formulate options for information or action. (15 U.S.C. § 9401(3))

This definition seems appropriate for the purposes of the act, which was to coordinate AI usage among government departments and promote its development and usage. However, this definition is now being used in legislative proposals, the Biden administration’s AI Executive Order, and the FTC’s AI Omnibus. While that definition might have seemed appropriate at the time, even just a few years later it seems both too narrow and too broad: It does not readily include generative AI unless you interpret it broadly, which would cause it to also encompass computer systems few would consider to be AI.

Generative AI systems, like large language models such as GPT-4 and Gemini, or image models such as Stable Diffusion or DALL-E 3, can be construed as “making predictions” about the next word or pixel to output – that is, after all, how they are trained. But for this argument to work, you have to take a wide reading of the phrase “predictions, recommendations, or decisions.” You could consider the text box in which the user enters a prompt to be a “virtual environment,” but that phrase sounds like it is meant to refer to simulations, not just anything happening on a computer. And are the “human-defined objectives” the prompts, or the loss function the model was originally trained on? If these terms are not read very broadly, most generative AI falls outside the definition, which would make the definition irrelevant to what is happening now and to what looks likely to happen in the future.

But if we use the broad reading necessary to capture LLMs, then many other computer systems that few would call “artificial intelligence” also fit the definition. A procedurally generated video game makes decisions influencing a virtual environment based on a human objective and input. Search engines make predictions as you type and recommend websites. The recommender systems used by Netflix, Spotify, and TikTok are complex machine learning systems that some would consider AI, but even a basic “frequently bought together” feature like Amazon’s would also fall within this definition.

An earlier definition may lead to more coherent rules. The 2019 John S. McCain National Defense Authorization Act defined “Artificial intelligence” as any of the following:

(1) Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.

(2) An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.

(3) An artificial system designed to think or act like a human, including cognitive architectures and neural networks.

(4) A set of techniques, including machine learning, that is designed to approximate a cognitive task.

(5) An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.

This definition easily includes generative AI, seems flexible enough to cover future iterations of what we will consider to be AI, and better excludes the kinds of computer systems few would consider AI. There are still quibbles to be had. Since “machine learning” is itself a broad term that can be construed to include simple linear regression, this definition could sweep in many systems built on learned equations, such as image filters. For a use such as Senator Schatz’s AI Labeling Act (S.2691), which currently uses the 2019 definition, that breadth could prove problematic: it could lead to overlabeling and to people ignoring the labels, as happened with California’s Prop 65 warning labels.


It remains to be seen what regulation of AI, if any, will be warranted. At a minimum, though, any regulatory definition should be drafted to target the systems where people actually see a need for regulation.
