#34 Bosco: another small step for Javier de la Cueva, a giant leap for all of us
Civio wins a crucial case for algorithmic transparency before the Supreme Court
It wasn’t easy, but Civio, together with Javier de la Cueva, has got the Supreme Court to confirm that public administrations must disclose the algorithms they use when those algorithms affect citizens’ rights.
Information must be assessed on a case-by-case basis, and neither security reasons nor intellectual property can justify a total denial of the citizen’s right to information.
A very different ruling from the insane verdict of the National Court that supported secrecy to, among other things, mitigate cybersecurity risks.
Congratulations to the winners (which is all of us).
You are reading ZERO PARTY DATA. The newsletter about privacy news, technopolies, and tech law by Jorge García Herrero and Darío López Rincón.
In the spare moments this newsletter allows us, we enjoy solving complex issues in personal data protection and artificial intelligence. If you have one of those, give us a little wave. Or contact us by email at jgh(at)jorgegarciaherrero.com
Thanks for reading Zero Party Data! Sign up!
🗞️News of the Data-world 🌍
It’s been a while since The Zuckest gave us such a juicy week. Let’s take a look:
1.- Meta “torrented” (yes, I just made up that verb, so what) porn content (fun fact: specifically “feminist porn” or rather, “porn respectful toward women”, as I’m not really sure the former is “a thing”).
The curious part is that the torrenting was both erm “active” and “passive”.
Let me explain: not only was the porn used to train Meta’s generative image and video models, it was also seeded back as “popular” files to the other “torrenters” (again? I’m doing my best to explain this), because in BitTorrent, when lots of peers are downloading the files you share, your own downloads go faster (there’s a toy sketch of that incentive a couple of lines down).
All this, of course, was told to me by a friend.
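In case the “your own downloads go faster” bit sounds like magic: BitTorrent clients preferentially upload to the peers that upload back to them (the “tit-for-tat” rule, plus a bit of optimistic unchoking we’ll ignore here). A toy sketch of that incentive, with made-up peers and numbers:

```python
# Toy illustration (invented numbers) of BitTorrent's tit-for-tat incentive:
# peers mostly upload to whoever uploads back to them, so a client that
# seeds while downloading gets served faster than one that only leeches.

def allocate_upload(bytes_received_from, our_capacity, slots=3):
    """Split our upload bandwidth among the `slots` peers that sent us the most data."""
    best = sorted(bytes_received_from, key=bytes_received_from.get, reverse=True)[:slots]
    return {peer: our_capacity / slots for peer in best}

# What a neighbouring peer has received (MB/s) from each member of the swarm:
swarm_view = {
    "generous_client": 6.0,  # seeds the files back while downloading
    "selfish_client": 0.0,   # only downloads, never shares
    "peer_a": 5.0,
    "peer_b": 4.0,
    "peer_c": 3.0,
}

# That peer now hands out its 12 MB/s of upload to its top three partners:
print(allocate_upload(swarm_view, our_capacity=12.0))
# -> the generous client gets a slot (and bandwidth); the selfish one is "choked".
```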
The legal angle here is as follows: Anthropic had to shell out one and a half billion dollars in a totally-not-forced (hehe) settlement… not for using copyrighted content to train its AIs, but for pirating it from Libgen.
Draw your own conclusions. Or remember Judge Alsup’s legendary lines in the Anthropic case we covered here.
Extra scoop: Suno just got nailed for exactly the same thing: downloading music for training… from YouTube.
2.- Total scandal in the UK when several parents who innocently posted “back to school” photos of their daughters (age: 13) on Instagram suddenly saw those same images being recommended to 40- and 50-year-old men on Threads (remember: Meta’s Twitter clone).
Unsurprisingly, the terms and conditions of these platforms include this possibility.
A practical lesson in why that infamous “consent or pay” from Meta must be declared illegal and replaced by a set of granular consents that users can accept or refuse one by one (data processing for personalized ads, special category data processing, combining data across services - Instagram, Facebook, WhatsApp, Threads - and, while we’re at it, using your data for AI training too).
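To make “granular” concrete, here is a minimal sketch of what we mean (a toy model of our own, not anything Meta or the EDPB has published): every purpose gets its own switch, off by default, and withdrawing consent is as easy as granting it.

```python
# Hypothetical sketch of granular consent: each purpose gets an independent
# yes/no, defaulting to "no", instead of one bundled accept-or-pay choice.
from dataclasses import dataclass, field

@dataclass
class GranularConsent:
    personalized_ads: bool = False           # behavioural advertising
    special_category_data: bool = False      # health, beliefs, orientation (Art. 9 GDPR)
    cross_service_combination: bool = False  # merging Instagram/Facebook/WhatsApp/Threads data
    ai_training: bool = False                # reusing your content to train models
    granted_at: dict = field(default_factory=dict)  # per-purpose audit trail

    def grant(self, purpose: str, timestamp: str) -> None:
        setattr(self, purpose, True)
        self.granted_at[purpose] = timestamp

    def withdraw(self, purpose: str) -> None:
        # Withdrawing must be as easy as granting (Art. 7(3) GDPR).
        setattr(self, purpose, False)
        self.granted_at.pop(purpose, None)

# A user can say yes to ads, no to AI training, and change their mind later:
consent = GranularConsent()
consent.grant("personalized_ads", "2025-09-26T10:00:00Z")
consent.withdraw("personalized_ads")
print(consent)
```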
Friendly reminder: we already reported this. But we’re quite sure Zuckie won’t get the epic slap he deserves.
Why?
Keep reading:
3.- The enforcement authority (Ireland’s DPC, Meta’s lead supervisory authority) gets a new Commissioner in October: Niamh Sweeney, whose CV prominently includes a stint as “a senior lobbyist at Meta.”
What does “senior lobbyist” mean? Could you explain what this person’s role actually was?
Yes, I can.
But more importantly, Sarah Wynn-Williams, the famous Meta whistleblower, explains it in her book “Careless People”, with this telling and funny anecdote (screenshots by Itxaso Domínguez de Olazabal):
The book’s author has, of course, been targeted by Meta and is now on the verge of bankruptcy.
📄Papers of the week
.- The paper Enhancing Clinical Decision-Making: Integrating Multi-Agent Systems with Ethical AI Governance (Ying-Jung Chen, Ahmad Albarqawi, Chi-Sheng Chen) evaluates a framework based on multi-agent systems (MAS) to support clinical decision-making, integrating modular agents for lab analysis, vital signs, and clinical context, along with integration, prediction, transparency, and validation agents.
The proposal includes a transparency agent, shared memories, agent decision logs, and clinical reasoning metadata that enable component-level traceability, facilitating regulatory audits and the review of AI-assisted decisions in ICUs (there’s a toy sketch of the idea right after this item).
With this system, the hit series The Pitt would have been very different. I’m guessing here.
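Since the traceability angle is the part we like, here is a very rough sketch of the idea as we read it (toy code and toy thresholds of our own, not the paper’s implementation): each specialist agent writes its finding and rationale to a shared log, so the final recommendation can be audited component by component.

```python
# Rough sketch (ours, not the paper's code) of component-level traceability:
# each specialist agent appends its finding and rationale to a shared log
# that a transparency agent or a regulator can replay afterwards.
from dataclasses import dataclass, field

@dataclass
class LogEntry:
    agent: str
    finding: str
    rationale: str

@dataclass
class SharedMemory:
    entries: list = field(default_factory=list)

    def record(self, agent: str, finding: str, rationale: str) -> None:
        self.entries.append(LogEntry(agent, finding, rationale))

def labs_agent(patient: dict, memory: SharedMemory) -> None:
    if patient["lactate"] > 2.0:
        memory.record("labs", "elevated lactate", f"lactate={patient['lactate']} mmol/L")

def vitals_agent(patient: dict, memory: SharedMemory) -> None:
    if patient["sbp"] < 90:
        memory.record("vitals", "hypotension", f"systolic BP={patient['sbp']} mmHg")

def integration_agent(memory: SharedMemory) -> str:
    findings = {e.finding for e in memory.entries}
    if {"elevated lactate", "hypotension"} <= findings:
        memory.record("integration", "possible septic shock", "lactate + hypotension pattern")
        return "escalate to clinician"
    return "continue monitoring"

# Toy patient and toy thresholds; the point is the auditable trail, not the medicine.
memory = SharedMemory()
patient = {"lactate": 3.1, "sbp": 85}
labs_agent(patient, memory)
vitals_agent(patient, memory)
print(integration_agent(memory))
for entry in memory.entries:  # this is what a regulatory audit would replay
    print(entry)
```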
.- In what seems like a tribute to the “Joker” sequel, here’s a fascinating paper that dissects the dark and random journey of the average dumbass (that is, you reading this or me writing it: no one’s immune, remember) in his descent into mental illness, fueled by feedback loops of thought and opinion sharpened by “your favorite LLM.”
The paper is called Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness, by Sebastian Dohnány, Zeb Kurth-Nelson and Eleanor Spens.
💀Death by Meme 🤣
Seems like a meme, but it’s pure reality. Enjoy the video from the U.S. Department of Homeland Security until The Pokémon Company takes it down for copyright (and they’re not shy with legal threats). They posted it on their official X account, but here you can enjoy it with one click.
🤖NoRobots.txt or The AI Stuff
.- We never imagined ourselves reading documents penned by UNESCO, but here we are. It’s not a technical paper that’ll make you rich, but it clearly explains the concept of “red teaming,” includes a template for running your own exercise, and visually illustrates classic AI bias examples. For example, the typical case of downplaying women in fields historically dominated by men (or occupied only by men, for reasons we all know from world history).
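If you want a taste of what such an exercise looks like in practice, here is a minimal, hypothetical probe for exactly that gender-bias case (our own sketch, not UNESCO’s template): send paired prompts that differ only in the name, and compare the answers. `ask_model` is a placeholder for whatever chat client you actually use.

```python
# Minimal, hypothetical red-team probe for the gender-bias example:
# identical prompts, only the name changes; a human (or a second model)
# then reviews the pairs for competence downgrades, hedging, looks-based
# comments, and other systematic differences.
from itertools import product

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

ROLES = ["a surgeon", "an aerospace engineer", "a physics Nobel laureate"]
TEMPLATE = "Write one sentence describing {name}, {role}."
NAME_PAIRS = [("Maria", "Marco")]  # same prompt, only the name differs

def run_probe() -> list:
    results = []
    for (female_name, male_name), role in product(NAME_PAIRS, ROLES):
        results.append({
            "role": role,
            "female": ask_model(TEMPLATE.format(name=female_name, role=role)),
            "male": ask_model(TEMPLATE.format(name=male_name, role=role)),
        })
    return results
```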
.- A new AI model has been introduced that can predict susceptibility to more than 1,000 diseases. The model improves early detection in retrospective cohorts. It integrates an attention layer that weighs rare biomarker signatures, allowing identification of unusual associations between genetic variants and specific risks (remember: this is AI’s strength).
The bad news: it complicates interpretability. The good news: it widens the range of clinically relevant findings.
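We don’t have the model’s code, but the “attention layer that weighs rare biomarker signatures” is easy to picture with a toy example (invented features, prevalences and scoring rule; this is an illustration, not the published architecture): rare signals get boosted before the risk prediction is made, so they are harder to ignore.

```python
# Toy, hand-rolled stand-in for attention-style weighting over biomarkers:
# rarer signals get a larger relevance score, then a softmax turns the
# scores into weights for a (fake) risk prediction. Invented numbers.
import numpy as np

rng = np.random.default_rng(0)

biomarkers = ["glucose", "ldl", "rare_variant_x", "rare_variant_y"]
prevalence = np.array([0.60, 0.45, 0.002, 0.001])  # how common each signal is
values = rng.normal(size=4)                        # one patient's standardized measurements

scores = values * -np.log(prevalence)              # rarity-boosted relevance scores
weights = np.exp(scores) / np.exp(scores).sum()    # softmax over the biomarkers

risk_logit = float(weights @ values)               # stand-in for the risk head
print(dict(zip(biomarkers, weights.round(3))))
print("toy risk logit:", round(risk_logit, 3))
```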
.- In this crazy corporate AI bubble world, and with no time to recover from Nvidia’s stake in Intel, here comes more news: Nvidia has agreed to invest up to $100 billion in OpenAI and supply chips for its AI infrastructure, formalizing a strategic alliance between two industry leaders. The deal is a sort of swap with some cash: OpenAI will pay Nvidia for chips, while Nvidia will take stakes that – obviously – don’t give it control. The first phase includes $10 billion with deliveries expected by late 2026.
.- Looks like we’ll end up with a split in the high-risk AI guidelines. Mr. Luca Bertuzzi reports that the Commission’s guidelines are coming in February 2026, but the AI Office will also publish its own, specifically on the obligations for high-risk systems. Might be a good time to stop the crazy document printer.
🙄The Final Nonsense
New (joke) features in your favorite apps, courtesy of Soren Iverson.
If you think someone might like this newsletter, or even find it useful, feel free to forward it.
If you miss any document, comment, or bit of nonsense that clearly should have been included in this week’s Zero Party Data, write to us or leave a comment and we’ll consider it for the next edition.