Table of Contents
- Why California Is Revisiting the Rulebook Yet Again
- What Changed in 2025 and Why It Matters in 2026
- The Priorities Have Been Revisited, Not Replaced
- What the Late-2025 Policy Proposals Reveal About the Next Wave
- Where Businesses Still Slip on the Banana Peel
- What Smart Companies Should Do Now
- Why This Matters Beyond California
- Real-World Experiences From the Privacy Front Lines
- Conclusion
California privacy law never really sits still. It paces, mutters to itself, rewrites the agenda, and then asks businesses whether they have a minute to discuss compliance. Again. That is exactly why the latest conversation around California privacy protection matters so much. The California Privacy Protection Agency, or CPPA, has already finished a major phase of rulemaking tied to the California Consumer Privacy Act and the California Privacy Rights Act. But instead of taking a victory lap and heading to the snack table, the agency has started revisiting what comes next.
The result is a privacy landscape that feels less like a single law and more like an operating system that keeps getting updates. Some updates are structural, like rules for automated decisionmaking technology, risk assessments, and cybersecurity audits. Others are deeply practical, such as making privacy rights easier to use, making browser-based opt-out signals work more consistently, and deciding whether deletion rights should reach more data collected from third parties. In other words, California is no longer just asking whether a business has a privacy policy. It is asking whether the policy actually works in the real world, for real people, on real websites, with real data flows, and without requiring a scavenger hunt plus a law degree.
Why California Is Revisiting the Rulebook Yet Again
The reason is pretty simple: the state has moved from building the privacy framework to stress-testing it. Early California privacy regulation focused on foundational rights and baseline obligations. That phase was always important, but it left a pile of hard questions on the table. How should businesses handle high-risk data processing? What counts as meaningful human review when software helps make decisions? When does a privacy request become so burdensome that a “right” starts looking suspiciously like a maze?
That unfinished business became the heart of the agency’s later rulemaking agenda. In 2025, the CPPA finalized a major package of regulations covering updates to existing CCPA rules, cybersecurity audits, privacy risk assessments, automated decisionmaking technology, and insurance-related issues. Those rules became effective at the start of 2026, although some obligations phase in over a longer timeline. So yes, the regulatory train has left the station. No, it did not leave a forwarding address for teams still treating privacy as a side project.
What makes the current moment especially interesting is that the CPPA appears to be shifting from “finish the mandated rules” to “improve how privacy rights actually function.” That is why the agency’s newest preliminary topics matter so much. California is no longer content with privacy rights that look lovely on paper and slightly tragic in practice.
What Changed in 2025 and Why It Matters in 2026
Automated Decisionmaking Rules Became Real
For years, discussions about California and AI often sounded like a panel where everyone agreed the issue was important, then ran out of time before defining anything. The final ADMT rules put some real guardrails around that debate. Rather than regulating every tool with a pulse and a spreadsheet, the rules focus on technology that replaces or substantially replaces human decision-making in significant decisions.
That matters because the final version is narrower and more practical than some earlier drafts. The emphasis is not on fashionable buzzwords. It is on actual consumer impact. If a system is used to make major decisions involving lending, housing, education, employment, independent contracting, compensation, or health care, California wants the business to slow down, provide notice, offer rights, and explain itself. In privacy terms, that is California saying: “You may automate, but you do not get to disappear behind the machine.”
Businesses using covered ADMT for significant decisions need to think about pre-use notices, access rights, opt-out rights in many cases, appeals, and the human review process. The deeper point is that California has taken the AI conversation and translated it into privacy governance. Less science fiction. More documentation, fairness, and accountability.
Risk Assessments Moved From Theory to Obligation
If ADMT rules are the flashy headliner, risk assessments are the disciplined drummer keeping the whole band together. The new framework pushes businesses to evaluate processing that presents significant risk to consumers’ privacy before the wheels come off. That includes activities such as selling or sharing personal information, processing sensitive personal information, using ADMT for significant decisions, and training certain systems with personal data in sensitive contexts.
This is a big deal because it changes the compliance question from “Can we do this?” to “Should we do this, and can we defend that answer?” A meaningful risk assessment is not supposed to be a ceremonial PDF written five minutes before a board meeting. It is supposed to weigh benefits against harms, document safeguards, and force decision-makers to confront whether the business upside actually outweighs the privacy downside.
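As a rough illustration of what "not a ceremonial PDF" might mean in practice, the sketch below shows one possible shape for a risk assessment record in TypeScript. The field names are assumptions chosen for illustration, not language taken from the regulations.

```typescript
// Illustrative shape for a risk assessment record, capturing the elements
// described above: the activity, its purpose, expected benefits, potential
// harms, safeguards, and an explicit decision with an accountable owner.
// Field names are hypothetical, not drawn from the regulatory text.

interface RiskAssessment {
  activity: string;              // e.g. "behavioral advertising via ad-tech partners"
  purpose: string;
  categoriesOfPersonalInfo: string[];
  benefits: string[];            // benefits to the business and to consumers
  potentialHarms: string[];      // privacy risks the processing creates
  safeguards: string[];          // measures that mitigate those risks
  decision: "proceed" | "proceed_with_changes" | "do_not_proceed";
  decisionOwner: string;         // the person accountable for the call
  reviewedOn: string;            // ISO date of the most recent review
}
```

The point of a structured record like this is less the format than the forcing function: someone has to name the harms, name the safeguards, and sign the decision.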
For companies that operate across multiple jurisdictions, this also nudges California closer to the governance style seen in other privacy regimes. The difference is that California is doing it in its own characteristically sharp-edged way. The state is not merely suggesting mature governance. It is building a record-keeping and submission framework that makes performative compliance much harder to hide.
Cybersecurity Audits Got a Longer Runway, Not a Hall Pass
The cybersecurity audit rules are another sign that California privacy protection is increasingly inseparable from security. This should not surprise anyone, because a privacy promise backed by flimsy security is basically a screen door on a submarine. The rules require independent annual cybersecurity audits for certain businesses whose processing presents significant security risk. Compliance deadlines are phased in based on business size, which gives companies time to prepare.
That extra time should not be mistaken for permission to procrastinate. The structure of the rule tells businesses exactly where this is headed: formalized audit expectations, certifications to the agency, and a clearer regulatory view of what “reasonable security” looks like in practice. Privacy teams and security teams that still operate like distant cousins who only meet at holidays may want to become actual roommates.
The Delete Act and DROP Changed the Data Broker Story
The Delete Act may be the most eye-catching development in California privacy because it takes a long-standing consumer complaint and answers it with a giant red button. The new Delete Request and Opt-out Platform, or DROP, gives Californians a state-hosted way to send deletion requests to registered data brokers through a single mechanism. Beginning August 1, 2026, data brokers must check the system at least every 45 days, process matching requests, delete associated data including inferences unless an exemption applies, and report status back through the platform.
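To make that cadence concrete, here is a hypothetical TypeScript sketch of a recurring DROP cycle: pull pending deletion requests, match and delete, report status back. The base URL, endpoint paths, and payload shapes are invented placeholders for illustration; the CPPA's actual DROP interface and credentialing process may look quite different.

```typescript
// Illustrative sketch of the DROP cadence described above: pull pending
// deletion requests at least every 45 days, match them against internal
// records, delete what matches (including inferences) unless an exemption
// applies, and report status back. All endpoints below are hypothetical.

interface DropRequest {
  requestId: string;
  hashedIdentifiers: string[]; // e.g. hashed emails or phone numbers used for matching
}

const DROP_BASE = "https://drop.example.invalid"; // hypothetical base URL

async function runDropCycle(
  matchAndDelete: (req: DropRequest) => Promise<"deleted" | "no_match" | "exempt">,
): Promise<void> {
  // 1. Pull the current batch of deletion requests (hypothetical endpoint).
  const res = await fetch(`${DROP_BASE}/deletion-requests`);
  const requests: DropRequest[] = await res.json();

  // 2. Process each request against internal systems and report the outcome.
  for (const req of requests) {
    const status = await matchAndDelete(req);
    await fetch(`${DROP_BASE}/deletion-requests/${req.requestId}/status`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ status }),
    });
  }
}

// A scheduler (cron job, workflow engine, etc.) would invoke runDropCycle on
// a cadence comfortably inside the 45-day statutory window.
```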
That one feature changes the tone of the data broker conversation. Instead of requiring consumers to chase down dozens or hundreds of obscure companies one by one, California is trying to centralize the process. It is a practical move, but also a symbolic one. The state is telling the market that privacy rights should not depend on how much free time a consumer has on a Tuesday afternoon.
It also highlights a deeper policy trend: California is becoming more skeptical of the gap between data collection at scale and consumer control at scale. The larger that gap gets, the more likely regulators are to reach for system-wide tools rather than case-by-case hopes and prayers.
The Priorities Have Been Revisited, Not Replaced
Here is where the “revisited priorities” angle becomes especially important. The CPPA appears to have finished the heavy lift of core mandated rulemaking, but it has not stopped moving. In March 2026, the agency opened preliminary comment on two new topics: reducing friction in the exercise of privacy rights and opt-out preference signals. Those sound modest, but do not be fooled. They go directly to the question that keeps tripping businesses up: are privacy rights easy enough to use that they actually function?
Priority One: Reduce Friction in the Exercise of Privacy Rights
“Friction” is an unusually honest regulatory word. It recognizes that many privacy failures are not dramatic scandals. They are tiny obstacles, layered on top of one another, until a consumer gives up. Maybe the request link is buried. Maybe the interface nudges the person toward the choice the business prefers. Maybe verification asks for too much. Maybe an authorized agent process is technically available but practically absurd. Maybe changing a prior privacy choice is far harder than making it in the first place.
California has already signaled where it sees these problems. The agency’s first enforcement advisory emphasized data minimization in consumer requests, warning against collecting more information than necessary when people try to exercise their rights. The Honda matter reinforced the same message by targeting excessive information requirements and uneven choice architecture. So when the CPPA now asks for comment on reducing friction, that is not abstract brainstorming. It is a preview of where rules and enforcement could tighten next.
The likely direction is greater standardization, simpler workflows, and a lower tolerance for verification practices that feel more like cross-examination than customer service. In plain English, privacy rights may need to start working like modern online tools instead of fax machines with emotional baggage.
Priority Two: Make Opt-Out Preference Signals Actually Work
The second emerging priority is opt-out preference signals, including tools like Global Privacy Control. This is one of those ideas that sounds wonderfully simple until it meets the internet. A consumer sends a browser-level signal saying, in effect, “Please do not sell or share my information.” Then businesses must decide how to recognize it, apply it across devices or profiles, and treat it for known versus pseudonymous users.
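For teams wondering what "recognize it" looks like in code, here is a minimal TypeScript sketch of server-side handling. It assumes only the published Global Privacy Control mechanics, namely the Sec-GPC request header (the browser-side navigator.globalPrivacyControl property exposes the same signal to scripts); the user model and persistence hook are illustrative placeholders, not any particular platform's API.

```typescript
// Minimal sketch of server-side GPC handling, assuming only the Sec-GPC
// header from the GPC specification. The User type and persistOptOut hook
// are hypothetical placeholders.

interface User {
  id: string;
  optedOutOfSaleOrSharing: boolean;
}

/** Returns true when the request carries a GPC opt-out signal. */
function hasGpcSignal(headers: Record<string, string | undefined>): boolean {
  // Per the GPC spec, the header value is the single character "1".
  return headers["sec-gpc"] === "1";
}

/**
 * Apply an incoming GPC signal. For a logged-in user the choice is persisted
 * to the profile; for an anonymous visitor it is scoped to the current
 * browser (e.g. an opt-out cookie the ad-tech layer respects).
 */
async function applyGpc(
  headers: Record<string, string | undefined>,
  user: User | null,
  persistOptOut: (userId: string) => Promise<void>, // placeholder persistence hook
): Promise<{ optedOut: boolean; scope: "profile" | "browser" | "none" }> {
  if (!hasGpcSignal(headers)) {
    return { optedOut: false, scope: "none" };
  }
  if (user) {
    await persistOptOut(user.id); // record the opt-out on the known profile
    return { optedOut: true, scope: "profile" };
  }
  // Anonymous visitor: the caller should set an opt-out cookie and stop
  // firing sale/share tags for this browser.
  return { optedOut: true, scope: "browser" };
}
```

The hard part is rarely reading the header; it is deciding, and documenting, how the signal maps onto known profiles, shared devices, and downstream ad-tech partners.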
California’s interest here makes perfect sense. Opt-out preference signals are one of the cleanest ways to reduce friction, because they allow a consumer to express a privacy choice once rather than on every site, every page, and every pop-up box that leaps out like a caffeinated jack-in-the-box. The trouble is that implementation is messy, and enforcement is already heating up. California joined Colorado and Connecticut in a multistate sweep focused on companies that may not be honoring Global Privacy Control signals, while earlier actions involving Sephora, DoorDash, Honda, Todd Snyder, and Tractor Supply underscore that opt-out mechanics are no longer a niche compliance issue.
If the CPPA continues down this road, businesses should expect more clarity on signal handling and less patience for half-working implementations. California seems increasingly interested in outcomes, not excuses.
What the Late-2025 Policy Proposals Reveal About the Next Wave
The agency’s late-2025 policy proposals offer another clue about where California privacy protection could be heading next. Three themes stand out: whistleblower protections, extending deletion rights to cover all personal information a business holds about a consumer rather than only data collected directly from that consumer, and requiring alternative methods for submitting privacy requests.
Each of those proposals tells a story. Whistleblower protections suggest the agency wants more visibility into hidden practices inside complex organizations. Expanded deletion rights reflect a reality of modern data ecosystems: businesses often enrich profiles with purchased or third-party data, so a narrow deletion right may leave a large chunk of the profile untouched. Alternative request methods focus on usability and accessibility, particularly for online-only businesses that may currently offer too little support to consumers trying to act on their rights.
Taken together, these proposals show California moving beyond privacy formalities. The state is not just asking whether a right exists. It is asking whether the right reaches the data that matters, whether people can actually use it, and whether insiders can expose violations when the system is too opaque for outsiders to see. That is a much more muscular concept of privacy protection.
Where Businesses Still Slip on the Banana Peel
The recent enforcement record reads like a greatest-hits album of avoidable mistakes. Some businesses make opting out harder than opting in. Some demand more information than necessary when handling requests. Some fail to maintain proper contracts with advertising or service partners. Some publish privacy notices that either omit key rights or bury them in legal wallpaper. And some data brokers still appear to behave as though registration is an optional hobby.
The important lesson is that these are not exotic violations. They are operational failures. They happen at the messy intersection of legal interpretation, product design, marketing technology, vendor management, and internal ownership. In many companies, privacy still falls into the organizational crack between “someone should handle that” and “I thought another team owned it.” California regulators have become very interested in that crack.
What Smart Companies Should Do Now
The smartest response is not panic. It is preparation. Businesses should inventory where they use automated decision tools, map which workflows trigger risk assessments, test whether opt-out preference signals work as intended, simplify privacy request flows, reduce verification data collection, review contracts with partners, and determine whether any business unit may qualify as a data broker. They should also pay attention to comment opportunities, because California’s preliminary rulemaking stage is one of the few moments when industry can help shape the practical details before the rules harden.
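A lightweight way to start that inventory is to map each processing activity to the obligations it plausibly triggers. The TypeScript sketch below is a hypothetical illustration; the trigger names mirror the categories discussed earlier rather than any official checklist.

```typescript
// Hypothetical sketch of the inventory step: flag which new-rule obligations
// a given processing activity plausibly triggers. Not an official checklist.

interface ProcessingActivity {
  name: string;
  sellsOrSharesPersonalInfo: boolean;
  processesSensitiveInfo: boolean;
  usesAdmtForSignificantDecision: boolean;
}

function obligationsFor(a: ProcessingActivity): string[] {
  const obligations: string[] = [];
  if (a.sellsOrSharesPersonalInfo || a.processesSensitiveInfo || a.usesAdmtForSignificantDecision) {
    obligations.push("risk assessment");
  }
  if (a.usesAdmtForSignificantDecision) {
    obligations.push("ADMT pre-use notice, access and opt-out handling, human review");
  }
  if (a.sellsOrSharesPersonalInfo) {
    obligations.push("honor opt-out preference signals such as GPC");
  }
  return obligations;
}

// Example: a resume-screening model surfaces both the risk assessment and
// the ADMT obligations.
console.log(obligationsFor({
  name: "resume screening model",
  sellsOrSharesPersonalInfo: false,
  processesSensitiveInfo: false,
  usesAdmtForSignificantDecision: true,
}));
```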
Most of all, companies should stop treating privacy as a one-time notice exercise. California has made it increasingly clear that privacy compliance is a living operational system. If the system is clunky, asymmetric, opaque, or overly hungry for data, regulators may decide the problem is not the consumer’s patience. It is the business model.
Why This Matters Beyond California
Even companies that are not headquartered in California should pay attention, because California’s privacy influence rarely stays put. The state has a long history of setting de facto national baselines. When California defines meaningful opt-out tools, data minimization, risk documentation, or acceptable handling of automated decisions, those concepts tend to spread through vendor contracts, product design standards, and multistate compliance playbooks. The famous California effect is not famous because it whispers.
So when the CPPA revisits rulemaking priorities, the ripple effect goes well beyond Sacramento. It shapes how national brands build interfaces, how privacy engineers design workflows, how boards think about data risk, and how other states decide what strong privacy enforcement should look like. California is revisiting priorities, yes. The rest of the country is quietly revising checklists.
Real-World Experiences From the Privacy Front Lines
In practice, the experience of dealing with California privacy rulemaking often feels less like reading a statute and more like renovating an old house while still living in it. Privacy teams discover that the walls are full of wires nobody labeled. Marketing tools talk to analytics tools, analytics tools talk to ad tech vendors, and somewhere in the middle a consumer tries to click “Do Not Sell or Share My Personal Information” and expects the message to travel through the whole system without drama. That expectation sounds reasonable because it is reasonable. The challenge is that many organizations built their data environments for growth and convenience, not for graceful privacy controls.
One common experience involves request handling. A company may believe it offers consumers a straightforward path to exercise rights, but once the process is tested end to end, the journey gets bumpy fast. The intake form may ask for too much information. The identity-verification process may be designed by security teams that are trying to prevent fraud, while the privacy team is trying to reduce burden. Customer support may not know how to help authorized agents. Product teams may have no idea how a browser-based opt-out preference signal should be applied to a logged-in account versus a browser cookie. Individually, each issue looks manageable. Together, they form the kind of friction the CPPA is now openly scrutinizing.
Another repeated experience is discovering that automated decisionmaking is broader than expected. Many businesses hear “ADMT” and imagine a futuristic AI robot in a blazer making loan decisions. Then someone maps actual workflows and realizes that screening tools, ranking systems, scoring models, or automated eligibility logic may already play a substantial role in consumer decisions. That moment is usually followed by a very long meeting. The practical lesson is that California’s rules are not only about flashy AI. They are about the ordinary software logic that shapes important outcomes for real people.
Data broker questions create a different kind of experience: surprise. Organizations sometimes assume they are not data brokers because the term sounds like it belongs to a shadowy company operating out of a trench coat. Then someone reviews the statutory definition, looks at a business unit that buys and sells access to personal information without a direct relationship to consumers, and suddenly the room gets quiet. The Delete Act and DROP make that silence even louder, because data broker compliance now carries more visibility, more operational obligations, and more enforcement risk.
Perhaps the most useful experience-based takeaway is this: the companies that adapt best are usually the ones that stop treating privacy as a legal memo and start treating it as product design, engineering, governance, and customer experience all at once. California’s revisited rulemaking priorities reward that mindset. The state is pushing toward privacy rights that are easier to find, easier to use, harder to evade, and more meaningful across the full data lifecycle. Businesses that understand that shift early will not just survive the next round of California privacy protection. They will probably sleep better, too.
Conclusion
California privacy protection has entered a new phase. The state is no longer just drafting broad rights and hoping the market figures out the details. It is refining the machinery. The CPPA finished a major round of rulemaking in 2025, brought key requirements into force in 2026, and is already exploring the next frontier: lowering the friction of privacy rights and sharpening opt-out preference signals. Add in Delete Act implementation, tougher data broker scrutiny, and a steady stream of enforcement actions, and the message is unmistakable. California wants privacy rights to be practical, scalable, and difficult to dodge. Businesses that treat this as a temporary compliance weather pattern may want an umbrella. And maybe a canoe.