The EDPS-SRB case is one of the main points of interest in the data protection community, as it could provide the definitive endorsement of what is known as the Scania doctrine or the “subjective doctrine” of the concept of personal data.
Last week, Advocate General Spielmann’s opinion was published. The text is somewhat convoluted, so I won’t dwell on it too much.
Key Insights:
Fundamentally, and to no one’s surprise, the Advocate General (AG) confirms the subjective doctrine.
He does so in the second part of the document. However, in the first and third parts, he enigmatically entangles himself in (i) the concept of opinion as data (first part) and (ii) the consent of data subjects—something that neither appeared nor was expected in the previous instance. Oh well.
But again, the core takeaway is that the AG reaffirms the subjective doctrine (Scania, for those in the know), which has already been applied at least twice by the CJEU.
But I Came to Talk About Something More Interesting:
Rethinking the Scania Doctrine: Why Is It So Controversial?
I’ve been a fan of this subjective doctrine from day one. In fact, when I read the CJEU ruling in 2023, I thought: Of course! This makes total sense.
Let’s recall that, according to this doctrine, a dataset may be personal for SRB and non-personal for Deloitte if SRB pseudonymizes it before handing it over to Deloitte, and the risk of Deloitte re-identifying it is negligible because:
It is legally prohibited from doing so.
It would require disproportionate efforts in terms of time, cost, or human resources.
The consequence is that if the data is not personal for Deloitte, that transfer (and hence, Deloitte) is not subject to the GDPR.
It’s Not About If, But How
In my opinion, we should be focusing less on whether we like this doctrine and more on how to manage it—after all, the CJEU has already applied it three times: Breyer, IAB Europe, and Scania.
Much of the data protection community is up in arms because applying this doctrine would exempt a significant number of large-scale data processing activities from GDPR compliance.
But let’s be clear: exempt from immediate application, not from protection. The GDPR could still apply later (potential application: this is the key).
What makes this doctrine interesting (and tricky in practice) is that a dataset could easily be non-personal for your mother-in-law but personal for the tax authorities. It could be non-personal on Tuesday and personal on Thursday.
With that perspective in mind, consider what the AG says in his opinion:
“The fact that the rules stemming from Regulation 2018/1725 do not apply to data relating to non-identifiable persons would not preclude entities that are at the origin of misconduct from incurring legal liability where appropriate, for example in the event of disclosure of data resulting in harm.” (paragraph 58).
Sh*t Happens
The problem with lawyers who enjoy advocating for innovative interpretations (myself included) is the constant temptation, and risk, of getting high on your own supply.
The habit of too easily jumping from step 1: “This is unusual, but it makes sense; let’s see how it works in practice” to step 2: “Hold my beer.”
And like any risk, it must be managed.
Hold My Belgian Beer
There’s a widespread and problematic “Hold my beer” approach to this issue in Belgium: an indiscriminate application of the “legal prohibition” criterion to extend the concept of pseudonymous data indefinitely.
One popular interpretation reads Breyer as follows:
An IP address in Germany is pseudonymous data (personal for a telecom provider, which has subscriber data; personal for authorities that can legally compel the provider to disclose the subscriber’s identity, as happens in cases of illegal streaming of football matches). However, it is not personal for anyone else because the law prohibits the provider from sharing that information with third parties, such as online advertising companies.
In other words, if the controller cannot legally disclose the necessary information for third parties to re-identify a pseudonymized dataset, then there is nothing more to discuss—the dataset is not personal for those third parties. And just like that, we sidestep all the burdensome GDPR obligations.
But this interpretation is as absurd as the consequences it leads to.
Protecting individuals from illegal re-identification (which is often covert or disguised) in big data processing is one of the primary objectives of personal data protection regulations—and this objective becomes even more critical as the number of affected individuals and/or the intensity of processing increases.
Example: Online Personalized Advertising Auctions
Yes, we could interpret CJEU case law as follows: “The illegality of re-identification excludes identifiability,” meaning the dataset is not personal, and thus, the GDPR does not apply.
But that would be tantamount to saying that individuals are protected against the legal re-identification of their data, but not against illegal re-identification.
Meanwhile in Real World©
And the truth is, especially in the world of digital advertising, there is an entire category of companies specializing in cross-referencing databases to re-identify individuals (or at least to “single out” the person behind each cookie) to better personalize ads. And that’s just for starters.
For heaven’s sake, they even call themselves “identity providers.”
I wonder if the professionals defending this interpretation would leave their wallet lying in the middle of Brussels’ Gare du Nord, relying solely on the general legal prohibition against theft.
Exactly.
The CJEU, in Breyer, carefully prepared a sandwich.
Inside, we find the assessment of the prohibition on identification and whether the means to do so are proportionate. But then there’s the bread—on top (whether it is reasonable to expect that a third party could re-identify the data) and underneath (whether the risk of re-identification is insignificant).
And you have to eat the whole sandwich, kids. Not just the tasty middle part you love.
It is the controller’s responsibility to assess, under their own accountability, whether the risk of re-identification is insignificant. And that risk must be evaluated and mitigated by the controller, not ignored under the excuse of “If it’s illegal, it’s not my problem, WEEEEEEEEEE.”
Of course, there are many more issues to unpack and refine in this discussion, but with a bit of goodwill, we’ll get there. Together.
CJEU vs. CJEU
Where do I get this argument from? The CJEU itself. It’s the same logic that helped dismantle another scam circulating in 2018.
The old-school LOPD-99 experts used to argue (some still do) that if an exception under Article 9(2) GDPR applied, then there was no need to worry about meeting and documenting compliance with a legal basis under Article 6.
The CJEU’s response was irrefutable: if that were true, special category data—which enjoys greater legal protection—would end up being less protected than regular data.
So… no.
Wishing you all a great week.
Jorge García Herrero
Lawyer and Data Protection Officer