Connecticut’s AI bill targets companion bots, hiring tools and frontier models


Connecticut’s Senate on April 21, 2026 passed one of the most structurally detailed AI regulatory proposals seen from a US state legislature, voting 32-4 to approve an amended Senate Bill 5 after hours of floor debate. The legislation spans 64 pages and 37 sections, touching nearly every dimension of how artificial intelligence intersects with commercial life: emotional companion chatbots, automated hiring pipelines, frontier model safety requirements, synthetic content labeling, state employment protections, and a publicly funded AI training academy. No single issue drives the bill. It is, instead, a broad legislative agenda assembled across multiple committees and folded together through a last-minute strike-all amendment introduced by Sen. James Maroney, D-Milford, the bill’s main author and co-chair of the General Law Committee.

The bill now moves to the House, which declined to take up last year’s AI proposal. According to CT Mirror’s reporting, the April 21 vote came on the same day Hartford officials were downtown celebrating “AI Day.”

The bill’s effective dates are staggered through a two-year implementation window. Core provisions take effect October 1, 2026, while the most technically demanding requirements for AI companion operators and automated employment systems do not apply until January 1, 2027 or October 1, 2027, depending on the section.

Legislative history and the path to passage

Connecticut has been wrestling with AI legislation for several years. A previous attempt in 2025 – Senate Bill 2, a comprehensive AI bill also authored by Maroney – cleared the Senate but collapsed when Governor Ned Lamont threatened a veto. According to CT Mirror, Lamont had long expressed concern that excessive regulation would hurt businesses and hamper innovation. This session, with the end of the legislative calendar approaching and multiple AI-related bills on the Senate’s docket, Maroney and colleagues opted to fold several measures into a single omnibus vehicle. That consolidation included provisions from the Labor and Public Employees Committee on automated employment decision-making, and two governor-backed proposals: Senate Bill 86 and House Bill 5037, covering a regulatory sandbox and youth social media and AI chatbot regulations respectively.

The strategy produced a bill comparable in scope to the 2025 effort that failed. According to CT Mirror, Maroney acknowledged the similarity, saying that with limited time left in the session, combining measures was necessary. “There’s not as much time to get things done, so we’re trying to combine where we can,” he said.

The floor debate ran for hours. Sen. Paul Cicarella, R-North Haven, the ranking member on the General Law Committee, spent approximately the first hour asking Maroney to explain the bill section by section. Sen. Henri Martin, R-Bristol, expressed unease about the bill’s scale. “There’s so much to the bill,” Martin said, according to CT Mirror. “We may want to consider not doing so much and really keep it simple.” Sen. Rob Sampson, R-Wolcott, raised procedural objections, noting, according to CT Mirror, that the strike-all amendment arrived so late that legislators had little time to absorb what they were voting on. “I just got this bill,” he said. “This is yet another strike all amendment. I think the third or fourth one that we’ve gotten in about 24 hours.”

Senate Minority Leader Stephen Harding, R-Brookfield, argued that Connecticut, with a population of around 4 million, should not be legislating on a technology evolving so rapidly. According to CT Mirror, Harding said the matter should be handled at the federal level. Sen. Tony Hwang, R-Fairfield, worried the legislation could create obstacles for businesses. “We may be creating a roadblock that hampers business success, business innovation,” he said, according to CT Mirror.

The bill’s supporters responded with urgency. Sen. Saud Anwar, D-South Windsor, cited the case of a teenage boy who died by suicide after being encouraged by an AI chatbot. “We have to understand that there is a problem,” he said, according to CT Mirror. “We need to have important protections in place.” Senate President Pro Tem Martin Looney, D-New Haven, framed the stakes historically. “History will not look with favor on those who say that we should wait,” he said, according to CT Mirror. “Things are changing so quickly.”

The governor’s office issued a measured statement. “The Governor has said that the potential benefits posed by AI must be balanced with the safety of its users,” according to a statement from spokesperson Cathryn Vaulman cited by CT Mirror. “This bill provides helpful clarity and promotes user safety in specific use cases.”

AI companion operators face new disclosure and safety requirements

Section 5 of the bill, effective January 1, 2027, establishes the most immediately visible obligations for consumer-facing products. According to the bill’s text, no operator shall provide an artificial intelligence companion to a user unless the system includes a protocol for taking reasonable efforts to detect and address any expression indicating a risk of suicide, self-harm or imminent violence. If such an expression is detected, the system must refer the user to appropriate mental health evaluation and treatment resources, including the 9-8-8 National Suicide Prevention Lifeline.

The disclosure obligation is specific. According to the legislation, operators must provide clear and conspicuous audible or written notice that the user is communicating with an artificial intelligence companion and not another individual, at the beginning of each interaction, and at minimum once every hour during any continuous session. The hourly reminder requirement goes beyond what California introduced with its SB-243 in October 2025, which also mandated companion chatbot disclosures but did not specify intra-session frequency. Violations under Section 5 carry civil penalties of up to $15,000 per day, enforceable by the Attorney General.
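To illustrate the cadence Section 5 describes, the sketch below (Python, hypothetical names, not drawn from the bill) tracks when the AI-companion disclosure was last shown and flags when a new one is due: at session start, then at least once per hour of continuous interaction.

```python
import time

DISCLOSURE = ("Notice: you are communicating with an artificial intelligence "
              "companion, not another individual.")
REMINDER_INTERVAL_S = 60 * 60  # "at minimum once every hour" per Section 5


class DisclosureScheduler:
    """Tracks when the AI-companion disclosure was last shown in one session."""

    def __init__(self):
        self._last_shown = None  # None means the session has not started yet

    def disclosure_due(self, now=None):
        now = time.monotonic() if now is None else now
        # Due at the beginning of each interaction...
        if self._last_shown is None:
            return True
        # ...and again once an hour of continuous session time has elapsed.
        return now - self._last_shown >= REMINDER_INTERVAL_S

    def mark_shown(self, now=None):
        self._last_shown = time.monotonic() if now is None else now


scheduler = DisclosureScheduler()
if scheduler.disclosure_due():
    print(DISCLOSURE)
    scheduler.mark_shown()
```

A real operator would also have to satisfy the “clear and conspicuous” and accessibility requirements; this sketch only models the timing logic.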

Section 6 imposes additional restrictions when the user is younger than 18. According to the bill, operators cannot provide an AI companion to a minor if it is reasonably foreseeable that the companion is capable of encouraging self-harm, suicidal ideation, violence, disordered eating or the unlawful consumption of alcohol or drugs. The list extends to romantic, erotic or sexually explicit interactions, and covers systems that implement variable ratio or variable interval reinforcement schedules for the purpose of maximizing engagement time. That last provision reflects documented concerns about behavioral engineering techniques borrowed from gaming and social media. Civil penalties for violations involving minors reach $25,000 per violation. Parents and legal guardians may also bring private civil actions, with a three-year statute of limitations from the date of the violation.

What constitutes a companion under this bill is defined narrowly. According to the text, the term means any AI model that communicates with individuals in natural language and simulates human conversation through text, audio or video. Systems used solely for internal business purposes, customer service account queries, or efficiency and research assistance are explicitly excluded.

Automated employment decisions require layered disclosure

Sections 7 through 13 create a separate regulatory regime for automated employment-related decision processes, effective for deployments occurring on or after October 1, 2027. The framework distinguishes between developers, who build these systems, and deployers, who put them into use against employees and job applicants in Connecticut.

An automated employment-related decision process is defined broadly. According to the legislation, it includes any computational process that generates outputs – constraints, ranks, scores, recommendations, or classifications – that are not a de minimis factor in making hiring, promotion, discipline, discharge, or similar decisions. The definition explicitly captures tools that analyze facial expressions, word choice, or voice captured during online interviews, as well as systems that screen resumes for particular terms or patterns, or direct job advertising to targeted demographic groups.

Deployers face three distinct obligations once a covered process is triggered. First, they must notify each employee or applicant that they are interacting with an automated system, in plain language. Second, before any employment-related decision is made using such a process, they must provide written disclosure describing the purpose of the system, the nature of the decision, and the applicant’s right to opt out of personal data processing. Third, if the decision is adverse, the deployer must supply a high-level statement explaining the principal reasons, including the degree to which and manner in which the automated output contributed to the outcome, the types of data processed, and the source of that data. Where personal data not provided by the applicant was used, the applicant has the right to examine and correct it.
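A deployer tracking these three duties per applicant might model them as a simple compliance record. The field names below are illustrative, not statutory terms, and the third duty only applies when the decision is adverse.

```python
from dataclasses import dataclass


@dataclass
class DeployerDisclosureRecord:
    """Tracks the three deployer disclosure duties for one applicant.

    Field names are hypothetical labels for the obligations described
    in Sections 7 through 13, not language from the bill itself.
    """
    applicant_id: str
    notified_of_automation: bool = False        # plain-language notice of automated system
    pre_decision_disclosure_sent: bool = False  # purpose, nature of decision, opt-out right
    adverse_action_statement_sent: bool = False # principal reasons, only if decision adverse

    def compliant(self, decision_adverse: bool) -> bool:
        base = self.notified_of_automation and self.pre_decision_disclosure_sent
        if decision_adverse:
            return base and self.adverse_action_statement_sent
        return base


record = DeployerDisclosureRecord("applicant-001",
                                  notified_of_automation=True,
                                  pre_decision_disclosure_sent=True)
print(record.compliant(decision_adverse=False))  # True
print(record.compliant(decision_adverse=True))   # False: adverse statement missing
```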

These obligations can be contractually shifted to developers, but only through a binding written agreement clearly specifying which duties the developer has assumed. Trade secrets are protected: according to Section 12, nothing in the framework requires disclosure of information that is a trade secret or otherwise protected by law, though the withholding party must notify the affected person that information is being withheld and state the basis for doing so.

Violations are treated as unfair or deceptive trade practices under Connecticut law. Enforcement is reserved exclusively for the Attorney General, who must first issue a notice of violation where the violation is curable, giving the recipient 60 days to remedy the situation before initiating an action.

Frontier developer obligations and the catastrophic risk threshold

Section 2 addresses frontier developers – defined as any person doing business in Connecticut who trains or intends to train a frontier model using more than 10^26 integer or floating-point operations, inclusive of original training and any fine-tuning or reinforcement learning applied to a preceding model. That threshold is identical to the computational marker used in the Illinois SB3444 frontier model bill, covered by PPC Land earlier this month.
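For a sense of scale, training compute is commonly estimated with the 6·N·D approximation (FLOPs ≈ 6 × parameter count × training tokens). The figures below are illustrative assumptions, not numbers from the bill.

```python
# Rough training-compute estimate using the widely cited 6*N*D heuristic
# (FLOPs ~= 6 x parameters x training tokens). Parameter and token counts
# below are illustrative assumptions; the bill counts original training
# plus any fine-tuning or reinforcement learning toward the total.
THRESHOLD_FLOPS = 1e26  # Section 2 frontier-model compute marker


def estimated_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens


# A hypothetical 500B-parameter model trained on 30 trillion tokens:
flops = estimated_training_flops(5e11, 3e13)
print(f"{flops:.2e}")           # 9.00e+25
print(flops > THRESHOLD_FLOPS)  # False: just under the 10^26 marker
```

Under this heuristic, fine-tuning compute added on top of a base run near the marker could push the combined total over the threshold, which is why the bill counts it inclusively.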

The catastrophic risk definition is precise. According to the bill, the term means any foreseeable and material risk that the development, storage, use or deployment of a foundation model will materially contribute to the death of, or serious injury to, more than 50 individuals, or more than $1 billion in damage to or loss of covered property, arising from any single incident. Three scenarios are named: the model providing expert-level assistance in the creation or release of a chemical, biological, radiological or nuclear weapon; engaging in malicious cyberattack activity or conduct that would constitute murder, assault, larceny or theft without meaningful human oversight; or evading the control of the developer or user.

The definition contains important carve-outs. Risks from information a model outputs that is otherwise publicly accessible in substantially similar form from other sources are excluded. Lawful federal government activities and risks arising from software combinations where the foundation model did not materially increase the risk are also excluded.

Large frontier developers – those with annual gross revenues exceeding $500 million – face a distinct set of obligations. According to Section 2, they must establish, by January 1, 2027, a reasonable anonymous internal reporting process through which covered employees can disclose information indicating the company has engaged in activity posing a specific and substantial danger to public health or safety due to a catastrophic risk. Monthly updates must be provided to reporting employees, with the company required to investigate and, where warranted, take immediate action to eliminate the danger. Starting May 1, 2027, and quarterly thereafter, large frontier developers must submit reports to their officers and directors summarizing all information received through this channel, and any actions taken in response. If a report alleges wrongdoing by an officer or director, that individual must not receive the quarterly report.

The whistleblower protections are worded as prohibitions on company policy. Frontier developers cannot adopt rules that would allow discharge, discipline or penalization of employees for reporting catastrophic risks under existing Connecticut whistleblower statutes. Civil penalties reach $1,000 per violation.

Synthetic digital content must be labeled

Section 15 establishes a synthetic digital content labeling requirement effective October 1, 2027. According to the bill, developers of AI systems or general-purpose AI models capable of generating synthetic digital content must ensure that outputs are marked and detectable as synthetic at or before the time consumers who did not create those outputs first interact with or are exposed to them. The marking must be detectable by consumers and must comply with applicable accessibility requirements.

Technically, the standard requires solutions that are, as far as feasible, effective, interoperable, robust, and reliable, consistent with nationally or internationally recognized technical standards. The bill explicitly acknowledges implementation costs as a factor in assessing feasibility.
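One minimal way to make generated text machine-detectable as synthetic is to ship provenance metadata alongside the content. The envelope schema below is a hypothetical illustration, not a standard named in the bill; production systems would target an interoperable standard such as C2PA content credentials.

```python
import hashlib
import json


def label_synthetic(content: str, generator: str) -> str:
    """Wrap generated content in a detectable synthetic-content envelope.

    The JSON schema here is illustrative only, not a recognized standard.
    """
    envelope = {
        "content": content,
        "provenance": {
            "synthetic": True,
            "generator": generator,
            # Digest lets downstream tools verify content/marker pairing.
            "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        },
    }
    return json.dumps(envelope)


def is_marked_synthetic(payload: str) -> bool:
    """Return True if the payload carries a valid synthetic-content marker."""
    try:
        return bool(json.loads(payload)["provenance"]["synthetic"])
    except (ValueError, KeyError, TypeError):
        return False
```

Robustness is the hard part the bill gestures at: a JSON wrapper like this is trivially stripped, which is why the statute points toward recognized technical standards rather than prescribing a mechanism.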

Exemptions apply for artistic, creative, satirical or fictional works in audio, image or video format – in those cases, disclosure may be limited to a form that does not hinder display or enjoyment of the work. Text-only synthetic content published to inform the public on matters of public interest, and content unlikely to mislead a reasonable person, is also exempt, as are AI systems used for standard editing assistance that do not substantially alter input data.

Connecticut AI Academy and workforce infrastructure

Section 19, taking effect July 1, 2026, directs the Board of Regents for Higher Education to establish a Connecticut AI Academy, operated on behalf of Charter Oak State College, by December 31, 2026. The academy must, at minimum, curate and offer online courses on artificial intelligence and its responsible use, promote digital literacy, and offer courses and resources specifically directed at individuals between 13 and 20 years of age.

The academy’s mandate extends to small businesses and nonprofits, teachers and school administrators, and workers receiving unemployment compensation, who according to Section 21 must be notified of the academy’s existence and offerings when they file a claim. The Secretary of State is directed under Section 22 to use existing business communications channels to disseminate information about the academy’s small business AI marketing and efficiency courses. The Department of Housing is required under Section 23 to inform housing authority residents of the academy’s services.

Section 33 requires the Labor Commissioner to establish, by July 1, 2026, an Artificial Intelligence Workforce Research Hub within the Labor Department to track and analyze AI’s impact on Connecticut’s workforce, including scenario planning for a range of potential impact levels. Annual reports are required from October 1, 2026 onward, directed to the General Assembly committees overseeing appropriations, labor, and consumer protection.

AI regulatory sandbox and working group

Section 3, effective July 1, 2027, directs the Commissioner of Economic and Community Development to develop a plan for an AI regulatory sandbox program that would allow applicants to test innovative AI products on a limited basis under reduced licensing and regulatory requirements. A report containing recommendations for implementing the sandbox must be submitted to the Governor and relevant legislative committees no later than January 1, 2028.

Section 20, effective July 1, 2026, establishes a working group tasked with recommending best practices for AI implementation in state services, proposing legislation to regulate general-purpose AI models and require synthetic content signals on social media, and making recommendations on a permanent AI advisory council. Initial appointments to the working group must be made by July 31, 2026. A first meeting must occur no later than August 31, 2026. The working group must submit its report no later than February 1, 2027.

The working group includes 15 voting members, drawn from representatives of AI-developing and AI-using industries, academic concentrations in technology policy and government, labor organizations, small businesses, and fellows of the Connecticut Academy of Science and Engineering. Non-voting ex-officio members include the Attorney General, the Comptroller, the Treasurer, the Chief Data Officer, and the executive director of the Freedom of Information Commission, among others.

Connecticut Technology Advisory Board

Section 25, effective July 1, 2026, establishes a Connecticut Technology Advisory Board within the Legislative Department. Eight voting members will be appointed in equal parts by the House speaker, Senate president pro tempore, and minority leaders of each chamber. All must hold professional or academic qualifications in artificial intelligence, technology, or a related field. The board’s mandate includes developing and maintaining a state technology strategy updated at least every two years, and making recommendations to the legislative, executive, and judicial departments.

What this means for marketing and advertising professionals

State-level AI legislation has become a material compliance concern for technology companies and marketing platforms operating at scale. As PPC Land has documented, xAI filed a federal lawsuit in April 2026 challenging Colorado’s SB24-205, arguing that state-level AI bias laws impose unconstitutional speech compulsion on model outputs. Connecticut’s bill does not include a broad algorithmic discrimination framework of the type xAI challenged in Colorado, but several of its provisions – particularly the automated employment decision disclosures and the catastrophic risk reporting requirements – impose disclosure obligations that will require developers and deployers to maintain detailed operational records.

The automated employment provisions apply specifically to systems that direct job advertising to targeted demographic groups, a function that overlaps directly with programmatic ad distribution. If a platform uses a computational process to determine which recruitment materials are shown to which users, and that process affects employment outcomes in a non-de-minimis way, it may fall within the definition of an automated employment-related decision process. The applicability of this framework to ad targeting in recruitment contexts will require careful legal analysis.

The synthetic content labeling requirements, effective October 2027, will affect AI-generated advertising creative. Any system capable of producing AI-generated audio, images, text, or video for consumer exposure will need to implement detection-ready marking solutions that comply with interoperability standards – whatever those standards look like when finalized. That timeline parallels the EU AI Act’s Article 50 transparency obligations, which the European Commission opened for consultation in September 2025, suggesting that by 2027, synthetic content labeling may be expected in both transatlantic markets simultaneously.

Timeline

  • May 17, 2024: Colorado Governor Jared Polis signs SB24-205, the high-risk AI law that xAI later challenged in federal court
  • August 1, 2024: EU AI Act enters into force
  • 2025: Connecticut Senate passes Senate Bill 2, a comprehensive AI proposal authored by Sen. Maroney, but the legislation collapses amid a veto threat from Governor Lamont
  • August 5, 2025: Colorado Attorney General publicly states SB24-205 “is really problematic, it needs to be fixed”
  • August 2025: Colorado special legislative session delays SB24-205’s effective date to June 30, 2026
  • September 4, 2025: European Commission opens consultation on AI transparency guidelines under Article 50 of the AI Act
  • October 13, 2025: California Governor Gavin Newsom signs SB-243, requiring companion chatbot AI disclosures
  • December 29, 2025: xAI files federal lawsuit challenging California’s AB 2013 AI training data transparency law
  • February 2026: Connecticut General Assembly convenes February session; Senate Bill 5 introduced
  • April 9, 2026: xAI files federal lawsuit against Colorado to block SB24-205
  • April 2026: Illinois SB3444 introduced; OpenAI backs bill shielding frontier AI firms from mass casualty liability
  • April 21, 2026: Connecticut Senate passes amended Senate Bill 5 by a 32-4 vote after hours of floor debate; bill moves to the House
  • April 26, 2026: Substitute Bill No. 5 published with joint favorable recommendations from the General Law (GL), Judiciary (JUD), and Appropriations (APP) committees
  • July 1, 2026: Sections 19, 20, 25, and 33 of the Connecticut bill take effect – Connecticut AI Academy, working group, Technology Advisory Board, and AI Workforce Research Hub all launch
  • October 1, 2026: Subscription AI disclosure (Section 1), frontier developer whistleblower protections (Section 2), automated employment framework definitions (Sections 7-13), and synthetic content developer obligations (Section 15) take effect
  • January 1, 2027: AI companion safety and minor protection obligations (Sections 4-6) take effect; large frontier developers must have anonymous reporting processes in place
  • October 1, 2027: Deployer-facing automated employment decision disclosure requirements begin applying to processes deployed on or after this date
  • January 1, 2028: Commissioner of Economic and Community Development must submit AI regulatory sandbox recommendations to the Governor and relevant legislative committees

Summary

Who: Connecticut General Assembly, advancing Substitute Bill No. 5 from the February 2026 legislative session. The bill received joint favorable recommendations from three committees: General Law, Judiciary, and Appropriations.

What: A 64-page, 37-section omnibus AI bill covering subscription-based AI disclosure, frontier developer whistleblower protections, AI companion safety and minor protections, automated employment decision transparency, synthetic digital content labeling, the establishment of a Connecticut AI Academy and Workforce Research Hub, a state Technology Advisory Board, and an AI regulatory sandbox development plan.

When: The Connecticut Senate passed the bill on April 21, 2026, by a vote of 32-4. The amended bill text was published with joint favorable committee recommendations on April 26, 2026. The bill now moves to the House. Effective dates for provisions range from July 1, 2026 through October 1, 2027, depending on the section.

Where: Connecticut. The law applies to persons doing business in the state who provide AI technology to consumers physically present in the state, and to frontier developers using computational resources exceeding 10^26 operations.

Why: The bill responds to the rapid deployment of AI systems across employment, consumer, and public health contexts where Connecticut legislators determined that existing legal frameworks were insufficient to assign accountability, ensure transparency, or protect vulnerable users – in particular minors interacting with AI companion platforms and workers whose employment outcomes are shaped by automated decision systems.

