Imagine that taxi drivers could only use BMW cars and that, suddenly, BMW removed the passengers’ seat belts.
Would the headline “BMW simplifies taxis” seem correct to you?
Today the headline is: Google “simplifies” its privacy controls in Analytics.
The honest translation would be: Google kills off an essential control for website publishers, who are the ones who will end up holding the liability in the EU.
Let’s see:
Since June 15, 2026, Google Analytics will stop using the Google Signals setting as a control for the flow of advertising data toward Google Ads.
From that date, the only mechanism governing that transfer will be the ad_storage parameter of Consent Mode. This is what Google announces in its own Help Center, calling it a “simplification” with its usual barefaced nerve.
What changes? (sorry: what gets “simplified”?)
Until now, by disabling Google Signals it was possible to prevent Google Analytics 4 from sharing advertising cookies and user identifiers with Google Ads. Starting on June 15, that control will only manage the association of analytics data with signed-in users for the preparation of behavioral reports within GA4.
Everything else — that is, what matters for Google’s advertising business — will be managed by the ad_storage control.
· If ad_storage is in “granted” mode, Google Ads will be able to use all available advertising signals, including associating user activity with their Google account.
· If it is “denied”, Google will be limited to less persistent signals, such as the gclid URL parameter.
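For reference, this is roughly what that single point of control looks like in a standard gtag.js setup. A minimal sketch: the consent parameters are Google's documented Consent Mode v2 names, but `onUserAccepted` is a hypothetical stand-in for whatever callback your CMP actually fires.

```javascript
// Minimal sketch of Consent Mode defaults with gtag.js conventions.
// After 15 June 2026, ad_storage becomes the only gate on the
// analytics-to-Ads data flow described above.
const dataLayer = [];                      // stand-in for window.dataLayer
function gtag() { dataLayer.push(arguments); }

// Default state, set BEFORE any Google tag loads: everything denied.
gtag('consent', 'default', {
  ad_storage: 'denied',            // the control that now does all the work
  ad_user_data: 'denied',
  ad_personalization: 'denied',
  analytics_storage: 'denied'
});

// Hypothetical CMP callback: flip to "granted" only after a valid,
// recorded user choice -- never as the banner's default.
function onUserAccepted() {
  gtag('consent', 'update', { ad_storage: 'granted' });
}
```

The order matters: the `default` command must run before any Google tag fires, which is precisely the implementation step where, as the sanction history shows, things tend to go wrong.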
Google has always boasted about transparency and compliance in its data processing, but there are few examples as clear as this one of the exact opposite: its specialty is generating an illusion of control by restricting the options available to the user.
And rarely has a binary choice been less neutral: either you feed Google’s business with your users’ data, or you assume the performance cost of your campaigns.
Last but not least: Google grants a 90-day grace period for its users to update their privacy policies and cookie banners. If nothing had changed in the real processing of the data, updating the information clauses would be unnecessary.
The very existence of this deadline is the best refutation of the euphemism “simplification”.
Problems for website publishers
In Fashion ID (C-40/17), the CJEU clarified that inserting third-party code into a website is a design decision of the data controller that enables that third party to access users’ personal data. The fact that Fashion ID did not control what Facebook did with the data afterward did not exempt it from responsibility; it merely delimited the stretch of the data lifecycle attributable to it.
Well then, the controller that embeds the GA4 tag on its website makes a design decision that enables collection. If its CMP is badly configured, if the banner defaults to “granted”, or if the implementation of Consent Mode is technically incorrect — a very frequent scenario, as shown by the history of sanctions against Google Analytics in several Member States — the data flows toward Google Ads without a valid legal basis. And the responsibility, in accordance with Fashion ID and Wirtschaftsakademie, falls entirely on the controller, not on Google.
Does this have anything to do with the Digital Omnibus thing, miss?
Ah, the Digital Omnibus, that other “simplification” maneuver...
Where the Omnibus proposes to simplify consent through automated technical signals, Google simplifies its own architecture by making Consent Mode the only point of control. Where the Omnibus proposes reducing banner fatigue, Google removes a layer of protection that did not depend on banners. Where the Omnibus wants more legal certainty for the data controller, Google shifts the technical responsibility to a layer — the CMP implementation — where errors are endemic and commercial pressure to configure “granted” is at its highest.
At the end of the day, when the Omnibus enters into force — in the best-case scenario, not before 2027-2028 after negotiation in the Council and Parliament — Google will already have consolidated an architecture where ad_storage is the central control:
.- If that architecture ends up being validated by the new legal framework, Google will have transformed a position currently questioned by six different Authorities into a compliance standard.
You are reading ZERO PARTY DATA. The newsletter on current affairs and technology law by Jorge García Herrero and Darío López Rincón.
In the free time this newsletter leaves us, we solve complicated messes related to personal data protection and artificial intelligence regulations. If you have one of those, give us a little wave. Or contact us by email at jgh(at)jorgegarciaherrero.com
🗞️News from the Data World 🌍
.- China leads the AI talent race. China not only graduates more top researchers; it now also retains, more easily than before, the young profiles who used to migrate to the U.S. To no one’s surprise. The less obvious angle is geopolitical-labor: the Chinese advantage seems to come less from star hires than from early retention conditions, creating a domestic pipeline that is hard to break from the outside.
.- Yesterday the European Commission confirmed that the age-verification app is ready. It would have been nice if the political statement had clarified whether the whole zero-knowledge-proof side of things is actually solid, but shall we bet on how long it will take Member States to make it actually work on the ground? Will Discord come back before it is fully operational?
.- Booking has suffered yet another security breach. Almost one a year, and more when you drop your guard. And it does not seem to dent its position as number one in the hotel-booking business. The Guardian pokes it in the eye with the €475,000 fine it took in 2021 over another breach. Fair enough: it is the only one that can be traced (or the only one we are capable of finding).
.- Tremendous report on the harvesting of precise geolocation by the Webloc app, its consequences for the “anonymization” claims of the companies with access to this dataset, and its use by such delightful authorities as ICE.
And let’s not forget the precision of Strava geolocation around military assets. We forgot to mention the marvelous episode in which the French aircraft carrier Charles de Gaulle (nuclear-powered, the flagship, and the only one they have) was revealed to be off Cyprus.
That is what happens when someone can be seen running around a strange floating perimeter at a specific point in the Mediterranean. And, the military being the military, it was trivially obvious which aircraft carrier it was just from the shape of the flight deck.
📖 High density docs for data junkies ☕️
.- The CNIL publishes its final recommendations on tracking pixels in emails, updating its regulatory position on a practice that is widespread in digital marketing and, at the same time, invisible to recipients. Tracking pixels are one-pixel images inserted into emails that, when loaded, tell the sender the time the email was opened, the device, the approximate location, and certain reading-behavior patterns.
The CNIL clarifies the applicable legal bases, the information requirements that must be met, and the opt-out mechanisms that must be offered. Luis Montezuma provides us with the automatic translation.
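The mechanics are simple enough to sketch in a few lines. A minimal, hypothetical server-side handler in Node.js; the `/open` endpoint and the `r` recipient parameter are illustrative, not taken from the CNIL document:

```javascript
// Sketch of the sender's side of an email tracking pixel.
// The standard 1x1 transparent GIF, base64-encoded:
const TRANSPARENT_GIF = Buffer.from(
  'R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7', 'base64'
);

// Called when the mail client fetches the pixel image.
function handleOpen(req) {
  // Everything the sender learns the instant the image loads:
  const event = {
    recipient: new URL(req.url, 'http://t.example').searchParams.get('r'),
    openedAt: new Date().toISOString(),          // time of opening
    userAgent: req.headers['user-agent'],        // device / mail client
    ip: req.socket?.remoteAddress                // approximate location
  };
  return { body: TRANSPARENT_GIF, contentType: 'image/gif', event };
}
```

Note that the recipient never sees anything: the “image” is a single transparent pixel, and the tracking happens as a side effect of the HTTP request itself, which is exactly why the CNIL insists on prior information and opt-out.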
.- The PIA template that the EDPB has put out (in public consultation) does not look bad for quick, one-off use. It deserves more time to review in detail, but it even includes a box for secondary purposes. And a direct link to all the DPAs’ guides on the subject (in the additional PDF explaining the template).
It also works as a LinkedIn social experiment revealing the two big camps of professionals: those who see the glass half full, and those who knock it off the table after spotting the debatable bits (no need to say which of the two we are in). In the second camp, Robert Bateman and Peter Craddock argue that there are some niggles. It is true that the template arrives very late and is overly complicated for what is, in the end, a mere form the DPA will tear apart when it comes asking for it.
💀Death by Meme🤣
Our Two Cents
.- The good people at the Legal4Tech podcast had the terrible idea of letting me ramble in English (that is not one but two MISTAKES in a row) about data protection in general, the Digital Omnibus, and the Scania doctrine, which, quite sensibly, they did not let me dig into because I would have bored everyone to death.
🤖NoRobots.txt or The AI Stuff
.- Google DeepMind’s watermarking system for AI-generated content has been deciphered and neutralized by an unemployed engineer using 200 black images and basic signal processing. Arrosh Denny’s “reverse-SynthID” project identified through spectral analysis that the watermark operates on a carrier-frequency structure consistent across images generated by the same Gemini model, without needing access to Google’s encoders.
.- The Belgian DPA has put out a document on data protection applied to AI (The impact of artificial intelligence on privacy). It does not bring big novelties, but its first part offers a clear explanation of the difference between an AI model and an AI system. The most striking thing is the culinary simile used to explain that difference:
The Cooking analogy is helpful to grasp the difference. AI model training consists of the creation of a cake recipe, while the AI system bakes the cake. The baked cake (the output) depends on the quality of the ingredients (the data), the reliability of the recipe (the AI model architecture), and the steps to follow (the algorithm). The integration of that model into the AI system enables the cake to be baked by following the recipe, with the correct ingredients, executing the steps in order, and using the oven, cake pan, mixing bowls, etc. (the infrastructure, APIs, interfaces, etc.) to complete the process.
📃The paper of the week
.- However striking the headline, the first paragraph, and in general the whole of the article by Jordi Perez Colome, the essential aside is, IMHO, the following:
”Thousands of users share photos or murky fantasies with other men in groups that the messaging application Telegram DOES NOT MODERATE, BUT DOES MONETIZE, thanks to its subscription system.”
The paper coordinated by Silvia Semenzin and Salvatore Romano here.
.- Another paper well worth reading (a total classic by Steven Kerr with two decades behind it) was recovered by Ethan Mollick on the occasion of one of Meta’s latest great ideas: incentivizing its employees to consume tokens like crazy.
The paper walks through a long list of striking examples of badly designed incentives that end up promoting the opposite of the result pursued.
🙄 Da-Tadum-bass
Silliness, but damn, how annoying it is when it happens to you. And surely Microsoft already considers it a “feature” worth preserving.
If you think this newsletter might appeal to someone and even be useful to them, forward it to them.
If you miss any doc, comment, or dumb thing that clearly should have been in this week’s Zero Party Data, write to us or leave a comment and we’ll consider it for the next one.