Author: Christopher D. Warren, NYC Managing Partner | January 21, 2025
The rapid growth of artificial intelligence (AI) has regulators scrambling to craft new laws to govern the technology. Businesses that develop AI solutions, as well as the companies that deploy them, should keep a close eye on regulatory developments.
Under a recent bipartisan proposal, AI companies could lose key protections under Section 230 of the Communications Decency Act that other Internet companies enjoy. Separately, the attorneys general of several states are calling for a regulatory framework that addresses the risks associated with AI without hampering the development of trustworthy applications.
The proposed “No Section 230 Immunity for AI Act” seeks to clarify that Section 230 immunity will not apply to claims based on generative AI. The bipartisan legislation was introduced on June 14, 2023, by Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), the Ranking Member and Chair, respectively, of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.
“AI companies should be forced to take responsibility for business decisions as they’re developing products — without any Section 230 legal shield,” Sen. Blumenthal said in a statement. “This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era. AI platform accountability is a key principle of a framework for regulation that targets risk and protects the public.”
As discussed in greater detail in prior articles, Section 230 provides:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In basic terms, the far-reaching law shields online platforms from being sued for content posted by a third-party user.
The “No Section 230 Immunity for AI Act” would amend the statute to add the following:
Nothing in this section (other than subsection (c)(2)(A)) shall be construed to impair or limit any claim in a civil action or charge in a criminal prosecution brought under Federal or State law against the provider of an interactive computer service if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service.
The legislation defines the term “generative artificial intelligence” as “an artificial intelligence system that is capable of generating novel text, video, images, audio, and other media based on prompts or other forms of data provided by a person.” ChatGPT, DALL-E, and Bard are currently among the most well-known generative AI interfaces.
On June 12, 2023, a coalition of 23 attorneys general (AGs) wrote a letter to the chief counsel for the National Telecommunications and Information Administration (NTIA), calling for the creation of a risk-based regulatory framework for AI. The letter was drafted by the AGs of Colorado, Connecticut, Tennessee, and Virginia and joined by their colleagues from other states, including California, New York, and New Jersey.
The AGs emphasized the importance of fostering the proper development of dynamic and trustworthy tools without hampering innovation. “This means, for example, that a prescriptive regulatory regime may not be best suited to this challenge,” the letter states. “By contrast, commitments to robust transparency, reliable testing and assessment requirements, and after-the-fact enforcement is a very promising approach.”
The AGs recommended a risk-based approach to regulation, highlighting that some AI use cases (e.g., routes for package delivery) present lower risks compared to others (e.g., health care delivery options). They also emphasized the need for nuanced evaluation of risks in the context of AI.
The AGs specifically called for NTIA to establish independent standards for transparency, including testing, assessments, and audits of AI solutions.
Additionally, the AGs argued for states to enjoy concurrent enforcement authority in a federal AI regulatory regime:
“Significantly, State AG authority can enable more effective enforcement to redress possible harms. Consumers already turn to state Attorneys General offices to raise concerns and complaints, positioning our offices as trusted intermediaries that can elevate concerns and take action on smaller cases.”
Since the introduction of the “No Section 230 Immunity for AI Act,” the debate has intensified.
Support from Consumer Advocacy Groups: Many consumer advocacy organizations support limiting Section 230 protections for AI companies, arguing that it is necessary to hold companies accountable for the risks posed by AI-generated content, including misinformation and bias. According to John Davison, director of the Center for AI Accountability:
The lack of accountability for AI outputs risks leaving consumers unprotected from harmful or misleading information. This legislation is an important step in closing that gap.
Industry Pushback: AI companies argue that stripping Section 230 protections could stifle innovation and lead to an avalanche of litigation, potentially hampering the development of new AI applications. OpenAI’s CEO Sam Altman recently stated:
While accountability is important, overly broad legislation risks discouraging startups and smaller players from entering the field. This is a space where thoughtful, balanced regulation is critical.
Broader AI Regulation Proposals: In addition to federal legislation, various states are exploring AI-specific regulatory frameworks, further complicating the compliance landscape for businesses. For example, California’s proposed “AI Accountability Act” would require AI developers to publicly disclose the datasets used to train their systems and submit regular bias audit reports.
Like most emerging technologies, AI carries both enormous risks and enormous benefits. How regulators will strike the balance between protecting the public and fostering innovation remains to be seen, but the need for federal and state rules that safeguard the public without stifling innovation is clear.