How New California AI Laws Affect Businesses Using AI in 2026

March 18, 2026

Posted in Business Litigation

By Tony Liu, Founder and Principal Business Trial Attorney 

In Summary
Artificial intelligence is transforming how companies price products, market services, and automate operations. But new California AI laws effective January 1, 2026, introduce regulations on pricing algorithms, AI accountability, and disclosure requirements that businesses must understand. If an AI system causes harm or misleading outcomes, courts will likely hold the business responsible—not the software. Companies facing disputes involving automated decision-making may benefit from guidance from an experienced Irvine, CA business litigation lawyer before problems escalate. 

Why California Is Regulating Artificial Intelligence

Artificial intelligence has quietly become part of daily business operations. Companies now rely on AI tools to analyze competitors, automate pricing decisions, generate marketing content, and streamline operational workflows.

But lawmakers are increasingly concerned that the speed of adoption has outpaced accountability.

California has historically taken a leadership role in technology regulation—from privacy laws to gig economy legislation—and AI regulation is following a similar path. Policymakers believe businesses deploying AI must remain responsible for its consequences.

In fact, the federal government and state legislatures are already examining how algorithmic decision-making may impact competition and consumer protection. The Federal Trade Commission has repeatedly warned businesses that automated systems do not excuse unlawful conduct, emphasizing that companies remain responsible for the tools they deploy.

This regulatory mindset is reflected in several California laws that took effect in 2026.

What New California AI Laws Mean for Businesses in 2026

Several statutes passed during California’s 2025 legislative session directly address how companies develop, deploy, and manage artificial intelligence systems.

Together, these laws signal a broader shift: AI is no longer treated purely as software—it is increasingly treated as a regulated business activity. 

Below are the most significant developments. 


SB 53 – The Transparency in Frontier Artificial Intelligence Act

SB 53 created the Transparency in Frontier Artificial Intelligence Act, a regulatory framework designed to govern highly advanced AI systems.

The law requires large AI developers to implement safety frameworks and transparency reporting obligations. Companies developing frontier models must disclose risk assessments, cybersecurity practices, and mitigation strategies.

The statute also requires reporting certain safety incidents to California’s Office of Emergency Services.

Although this law primarily targets developers of powerful AI models, businesses using advanced AI tools should understand that vendors may now be required to disclose risk information and transparency reports.

For executives evaluating AI vendors, these disclosures may become a critical due diligence tool.


AB 325 – Algorithmic Pricing and Antitrust Risk

AB 325 addresses a rapidly growing concern in competition law: AI systems that coordinate pricing behavior between competitors.

The law targets what it calls a “common pricing algorithm.” This refers to software used by multiple businesses that analyzes competitor data to recommend or influence pricing decisions.

In traditional antitrust cases, plaintiffs typically must prove a “meeting of the minds” between competitors. However, this statute lowers the pleading standard by allowing claims based on plausible evidence of coordinated pricing behavior through algorithms.

This change reflects growing concern about “algorithmic collusion,” where automated systems respond to market data in ways that stabilize prices across competitors.

For companies relying on automated pricing tools—especially in sectors like e-commerce, hospitality, or real estate—this creates new compliance considerations.


AB 316 – Businesses Cannot Blame AI for Harm

One of the most consequential legal principles introduced by the new legislation is contained in AB 316.

The law prevents defendants from arguing that an AI system acted independently and therefore broke the chain of legal responsibility.

In practical terms, this means businesses cannot escape liability by saying: “The software made the decision.”

Under California Civil Code §1714, individuals and companies already owe a duty of care to avoid causing harm through negligence. The new law reinforces that businesses remain accountable for how their technology operates.

For business leaders, this creates an important operational reality: AI governance is no longer optional.


AB 723 – Disclosure for AI-Altered Real Estate Images

Another law that may affect businesses in Southern California is AB 723, which targets digitally altered real estate marketing images.

If a property listing includes a materially altered image generated by artificial intelligence, the marketing materials must disclose the alteration. Additionally, an unaltered version of the image must be made available online.

The law is designed to prevent deceptive advertising practices as AI-generated images become increasingly realistic.

Organizations such as the National Association of Realtors have also begun discussing the ethical implications of AI-generated listing images.

For developers, brokerages, and real estate investment groups, this means marketing teams must ensure that AI-generated visuals comply with disclosure rules.


AB 489 – Preventing AI Impersonation of Healthcare Professionals

Another statute addresses the growing concern that artificial intelligence systems may impersonate healthcare providers.

AB 489 strengthens enforcement of title protections for licensed professionals. AI tools cannot present themselves in ways that suggest they are licensed healthcare providers or authorized to give medical advice.

Although this law primarily affects healthcare companies, it reflects a broader regulatory trend: technology cannot misrepresent authority or professional credentials.


Can a Business Be Liable for Decisions Made by AI?

Yes—and under California law, that risk is becoming clearer.

Depending on the claim, courts may examine whether a company exercised reasonable care, complied with applicable statutes, or maintained adequate oversight when deploying an AI system. If an AI tool produces harmful outcomes—such as misleading advertising, discriminatory decisions, or anticompetitive pricing—businesses may still face liability.

Several legal theories could apply, including:

  • negligence claims
  • deceptive business practices
  • antitrust violations
  • breach of fiduciary duty in shareholder disputes

What makes AI-related disputes particularly complex is that decision-making processes may not always be transparent.

Executives often rely on third-party vendors that provide “black box” systems—software whose internal logic cannot easily be explained.

When disputes arise, questions quickly emerge:

  • Who controlled the algorithm?
  • Who reviewed its output?
  • Did the company implement oversight policies?

These issues can become central to litigation.

Businesses confronting disputes involving technology decisions, governance conflicts, or automated systems may need experienced guidance from a business litigation lawyer in Irvine when conflicts escalate.

The Hidden Antitrust Risk of AI Pricing Algorithms

One of the least discussed risks of artificial intelligence involves automated pricing systems.

Many businesses now use algorithms that analyze market data and automatically adjust prices in response to competitor behavior.

This technology can improve efficiency, but regulators are concerned that algorithms may unintentionally produce coordinated pricing outcomes across an industry.

For example, imagine several competitors using the same pricing software.

If the algorithm continuously adjusts prices based on competitor data, the result may resemble coordinated pricing—even if no executives ever communicated with each other.

This scenario is known as tacit collusion, and regulators are increasingly studying how AI systems could facilitate it.

For executives, the risk is not necessarily intentional misconduct. Instead, the danger lies in deploying technology whose behavior may not be fully understood.
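The convergence dynamic described above can be illustrated with a toy simulation (not drawn from the article or any real pricing product; the rule, prices, and cost floor are hypothetical): when two sellers run the same undercutting algorithm against each other's observed prices, their prices drift to the same stable level with no communication between the firms.

```python
# Toy illustration of tacit algorithmic convergence. Two sellers use an
# identical rule: price at 99% of the rival's last observed price, but
# never below a shared cost floor. All numbers are hypothetical.

FLOOR = 90.0   # assumed cost floor for both sellers
STEP = 0.99    # each seller undercuts the rival by 1%

def next_price(rival_price: float, floor: float = FLOOR) -> float:
    """Undercut the rival slightly, but never price below the floor."""
    return max(floor, rival_price * STEP)

def simulate(p_a: float, p_b: float, rounds: int) -> tuple[float, float]:
    """Run simultaneous price updates for a number of rounds."""
    for _ in range(rounds):
        p_a, p_b = next_price(p_b), next_price(p_a)
    return p_a, p_b

# Starting from different prices, both sellers settle at the same
# stable price (the floor) without ever agreeing to do so.
final_a, final_b = simulate(120.0, 150.0, rounds=100)
print(final_a, final_b)  # both converge to 90.0
```

The point of the sketch is that the stable, uniform pricing outcome emerges purely from the shared rule reacting to market data, which is precisely the pattern regulators describe as tacit collusion.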

New Disclosure Rules for AI-Generated Real Estate Images

Real estate marketing has rapidly embraced artificial intelligence.

Developers and brokers now use AI to:

  • digitally stage empty homes
  • generate improved landscaping
  • simulate renovations or upgrades

While these images can help buyers visualize potential, regulators worry that AI-generated visuals may mislead consumers.

Under California law, real estate listings must disclose when images are digitally altered in ways that change how the property is represented, including through AI tools.

Failure to disclose alterations could expose businesses to claims involving:

  • misrepresentation
  • deceptive advertising
  • consumer protection violations

For companies operating in Southern California’s competitive property market, marketing compliance may now require additional oversight.

Early Warning Signs Your Company’s AI Use Could Create Legal Risk

Executives often discover AI-related legal risk only after a dispute emerges.

Several warning signs suggest that companies should review their governance policies.

  1. AI tools automatically adjust prices based on competitor data
  2. Marketing materials are generated entirely by AI without human review
  3. Operational decisions are delegated to algorithms without oversight
  4. Vendors cannot explain how the AI system reaches conclusions
  5. The company lacks written policies governing AI use
  6. No internal audit process exists to review algorithmic decisions

When these issues appear, companies should consider evaluating their risk exposure before regulators, investors, or competitors raise questions.

How Business Owners Can Reduce AI-Related Liability

Reducing legal exposure does not require abandoning artificial intelligence. Instead, companies should treat AI like any other operational risk.

Several steps can significantly reduce exposure:

  • implement internal AI governance policies
  • require transparency from technology vendors
  • conduct periodic audits of automated decision systems
  • maintain human oversight for high-impact decisions
  • document how algorithms influence business outcomes

Businesses facing disputes involving AI-driven operations, automated pricing tools, or investor conflicts may benefit from consulting with an experienced business litigation lawyer in Irvine to evaluate potential exposure.


Frequently Asked Questions About California AI Laws

1. What are the new AI laws in California for businesses?

Several laws effective January 1, 2026, regulate artificial intelligence in areas including transparency requirements, algorithmic pricing practices, liability for AI-generated harm, and disclosure rules for digitally altered marketing materials.

2. Can a company be sued for relying on AI software?

Yes. Businesses remain responsible for the outcomes produced by the technology they deploy. Courts typically examine whether the company exercised reasonable care in supervising and managing the AI system.

3. Are AI pricing algorithms illegal in California?

Not necessarily. However, algorithms that coordinate pricing behavior using competitor data may raise antitrust concerns if they produce outcomes that restrict competition.

4. Do real estate listings have to disclose AI-altered images?

Yes. California law now requires disclosure if marketing images are materially altered using artificial intelligence. Unaltered versions must also be made available when images are posted online.

5. What should businesses do before deploying AI tools?

Companies should evaluate legal risk, implement governance policies, maintain human oversight, and ensure compliance with applicable laws regulating artificial intelligence.


The Legal Risk of AI Often Appears After the Technology Is Deployed

Artificial intelligence offers remarkable opportunities for efficiency and innovation. But many companies adopt these tools long before considering the legal consequences.

California’s new AI laws reflect a broader regulatory shift: businesses must remain accountable for the technology they use.

For executives, the greatest risk is rarely the technology itself. The real danger is deploying powerful systems without the governance structures necessary to manage them responsibly.

When disputes arise—from pricing algorithms to marketing claims—they can quickly evolve into regulatory investigations, shareholder conflicts, or commercial litigation.

Businesses operating in Orange County that face disputes involving automated systems, governance conflicts, or technology-driven business decisions may benefit from guidance from an experienced business litigation team in Irvine before problems escalate. Contact Focus Law today for help.