After IDFA and Cookies, the Identity Stack Is a Surveillance Upgrade
Apple's App Tracking Transparency and Google's cookie deprecation were meant to end cross-site tracking, but the ad-tech industry simply rebuilt it deeper, with less oversight and fewer ways to opt out.
On 22 December 2025, Italy's competition regulator fined Apple €98.63 million, roughly $116 million, over the company's App Tracking Transparency framework. SiliconANGLE reported that the Autorità Garante della Concorrenza e del Mercato found Apple's ATT regime gave the company's own advertising business preferential treatment while imposing stricter consent requirements on third-party developers. The fine was not about privacy. It was about whether the architecture of consent was also an architecture of competitive advantage.
The ruling landed four and a half years after Apple shipped ATT with iOS 14.5 in April 2021, a change that the mobile advertising industry claimed would destroy the in-app economy. It did not. What it did was reorganise the money. Between 2021 and 2026, ad spending migrated from user-level targeting on the open exchange toward three walled gardens (Apple, Google, Meta) and a sprawling secondary market of alternative identifiers, data clean rooms, and probabilistic matching that no regulator has yet fully mapped. The Identifier for Advertisers, or IDFA, was effectively retired; the data flows were not.
That reorganised money is now the subject of a new standards war at the World Wide Web Consortium. In an April 2026 column for AdExchanger, Don Marti, principal at the consultancy Aloodo, described what he called an "attribution cartel" forming around a proposed W3C specification that would centralise ad-effectiveness measurement under rules set by the three platform companies that already dominate digital advertising. "A proposed W3C standard aims to redefine how ad effectiveness is measured," the piece warned. "But it would centralize measurement under the control of Google, Apple and Meta." The W3C Attribution proposal, as described, would standardise the collection of conversion data in a browser-mediated framework that, by design, limits independent verification.
The phrase that matters is "by design." Privacy Sandbox, Apple's SKAdNetwork, and Meta's Aggregated Event Measurement all share a structural feature: they move the point of measurement from the advertiser's server to the platform's controlled environment. An ad impression is logged. A conversion happens later. The platform reports back an aggregate, delayed signal: a conversion happened, or did not, within some privacy budget. The advertiser cannot verify the claim independently because the raw event-level data never leaves the platform. What the platform reports becomes the ground truth of the transaction. The platform is the buyer, the seller, the exchange, and now also the auditor.
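To make the asymmetry concrete, here is a toy sketch of the pattern these systems share: the platform holds the event-level join and hands back only a delayed, thresholded, noised aggregate. The function names, threshold, and noise model are illustrative assumptions, not the actual SKAdNetwork, Privacy Sandbox, or Aggregated Event Measurement mechanics.

```python
# Toy sketch of platform-mediated attribution reporting (illustrative only;
# not any platform's real implementation). The platform holds event-level
# joins of impressions and conversions; the advertiser receives only a
# delayed, noised aggregate it cannot independently verify.
import random

def platform_report(conversions_by_campaign, min_count=10, noise_scale=2.0):
    """Return what the advertiser is allowed to see: per-campaign totals,
    suppressed below a crowd threshold and perturbed with random noise."""
    report = {}
    for campaign, count in conversions_by_campaign.items():
        if count < min_count:            # small cohorts are dropped entirely
            continue
        noisy = count + random.gauss(0, noise_scale)
        report[campaign] = max(0, round(noisy))
    return report

# Platform-side ground truth the advertiser never sees
ground_truth = {"campaign_a": 412, "campaign_b": 7, "campaign_c": 95}
print(platform_report(ground_truth))
# e.g. {'campaign_a': 414, 'campaign_c': 93}; campaign_b vanished below the threshold
```

Whatever the report says becomes the campaign's ground truth, because nothing outside the enclosure can reproduce the calculation.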
That architecture has regulatory implications that extend far beyond competition law. Under the General Data Protection Regulation in Europe, an advertiser that relies on a platform's measurement system to determine how much to pay for a campaign is, in effect, making an automated decision with legal effects. GDPR Article 22 covers decisions based solely on automated processing. If the platform's measurement output determines budget, and the advertiser cannot audit the inputs, there is a question about whether meaningful human intervention exists anywhere in the loop. No data protection authority has yet opened a formal inquiry framed this way, but the legal scaffolding is visible to anyone who reads the regulation alongside the technical specification.
Meanwhile, the post-cookie identity market has consolidated around a handful of deterministic identifiers that operate with only the thinnest veneer of consent. The Trade Desk's Unified ID 2.0, or UID2, uses hashed email addresses as a persistent cross-site identifier. LiveRamp's RampID links offline identity data to online browsing through partnerships with data brokers and publishers. Both systems require a user to provide an email address during authentication; both rely on consent flows that bundle the "sign in" and "track me across the web" decisions into a single interaction. If you want to read that article behind a login wall, you are also agreeing to be stitched into an identity graph. The choice is not a choice.
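The mechanics are simple enough to sketch. The snippet below, with illustrative function names, shows why a hashed email behaves as a persistent cross-site identifier: any party holding the same address can derive the same token. It is a simplified sketch of the general pattern, not the UID2 or RampID specification, which adds salting, encryption, rotation, and operator infrastructure.

```python
# Minimal sketch of the general pattern behind email-based identifiers
# (illustrative; not the actual UID2 or RampID specification).
import hashlib

def normalise(email: str) -> str:
    """Lowercase and trim so the same inbox always yields the same string."""
    return email.strip().lower()

def email_token(email: str) -> str:
    """Hash the normalised address into a stable cross-site token."""
    return hashlib.sha256(normalise(email).encode("utf-8")).hexdigest()

# Any site the user logs into with this address can derive the same token,
# which is what makes it a persistent identifier rather than a secret.
print(email_token("Jane.Doe@example.com "))
print(email_token("jane.doe@example.com"))   # identical output
```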
The European digital rights organisation NOYB, founded by the privacy activist Max Schrems, has filed multiple complaints against the real-time bidding infrastructure that undergirds this system. Those complaints, lodged with data protection authorities across the EU beginning in 2018 and continuing through 2025, argue that the broadcast of personal data in bid requests to hundreds of potential advertisers cannot be consented to in any meaningful way. The French CNIL and the Belgian DPA have each issued rulings that align, in part, with this analysis. Yet the targeting industry has not shrunk. It has relocated. Where once a bid request carried raw identifiers to an open auction, it now carries a pseudonymised token to a data clean room, where matching happens in a controlled environment operated by the platform or the cloud provider. The data changes hands fewer times, but the hands that touch it are larger.
The clean room has become the defining infrastructure of the post-cookie era. Amazon Web Services launched its AWS Clean Rooms service in early 2023. Google offers Ads Data Hub, restricted to its own ad inventory. Snowflake, a data warehouse company, positioned its Data Clean Room product as a neutral environment for advertiser-publisher matching. Each clean room provider charges based on query volume; each benefits from more data flowing through its pipes. The incentive structure is not to minimise data collection. It is to maximise the number of queries that can be run without exposing raw data to the counterparty. Privacy, in this framing, is a property of the enclosure, not a property of the data subject.
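A minimal sketch of the kind of overlap query a clean room permits, assuming hashed tokens on both sides and an operator-set aggregation threshold. The segment names, tokens, and threshold are invented for illustration; real services such as AWS Clean Rooms or Ads Data Hub enforce this inside their own environments with their own query languages and policies.

```python
# Sketch of a clean-room-style overlap query (illustrative assumptions only).
# Each side contributes hashed identifiers; only aggregate overlap counts
# above a minimum cell size are allowed to leave the environment.

advertiser_crm = {"tok_01", "tok_02", "tok_03", "tok_07", "tok_09"}
publisher_audience = {
    "mortgage_intent": {"tok_02", "tok_03", "tok_05", "tok_09", "tok_11",
                        "tok_12", "tok_13", "tok_14", "tok_15", "tok_16"},
    "travel_intent":   {"tok_07", "tok_21"},
}

MIN_OVERLAP = 3   # aggregation threshold set by the clean room operator

def overlap_report(crm, segments, min_overlap=MIN_OVERLAP):
    """Return per-segment overlap counts, suppressing small cells.
    Neither party sees the other's row-level identifiers."""
    return {
        name: len(crm & members)
        for name, members in segments.items()
        if len(crm & members) >= min_overlap
    }

print(overlap_report(advertiser_crm, publisher_audience))
# {'mortgage_intent': 3}; the travel segment is suppressed as too small
```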
At the same time, the IAB Tech Lab, an industry standards body, has pushed a framework called Seller-Defined Audiences, which allows publishers to label their inventory with audience attributes derived from first-party data. A publisher tells the supply-side platform, "this page view belongs to a user we believe is interested in mortgages." The SSP passes that label to the demand-side platform. The DSP bids. No third-party cookie is required. No identity sync is required. The entire transaction is semantic: a label, a bid, a price. But the label was produced by a publisher-side algorithm trained on browsing behaviour that the user has no way to inspect, correct, or delete. The data subject does not know they have been labelled "mortgage-intent," let alone whether the label was derived from a session on a news article about housing policy or a form they never submitted. Under GDPR Article 15, that is a right-of-access question. Under the current architecture, it is also unanswerable.
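For illustration, the fragment below is loosely modelled on the OpenRTB data objects that Seller-Defined Audiences uses to carry such labels from publisher to SSP to DSP; the segment ID and taxonomy value are placeholders, not a real bid request.

```python
# Loosely modelled on the OpenRTB "data" objects that Seller-Defined Audiences
# rides on; segment IDs and taxonomy values here are illustrative placeholders.
bid_request_fragment = {
    "user": {
        "data": [
            {
                "name": "publisher.example",            # who produced the label
                "segment": [{"id": "mortgage_intent"}],  # the label itself
                "ext": {"segtax": 4},                    # which taxonomy the id refers to
            }
        ]
    }
}

# The DSP reads the label and bids; nothing in the request says how the
# publisher's model derived "mortgage_intent", or from which sessions.
for d in bid_request_fragment["user"]["data"]:
    print(d["name"], [s["id"] for s in d["segment"]])
```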
The ExchangeWire series that ran through April 2026 captured the industry's own framing of this moment. In the final instalment, published on 29 April under the headline "Rewriting the Rules of Ad Tech: From Black Boxes to AI Operating Systems," the publication surveyed ad-tech executives on what was shaping the sector. The answers clustered around artificial intelligence: predictive bidding, automated creative generation, dynamic audience segmentation. One thread running through the piece was the idea that the ad-tech stack is becoming an "AI operating system" in which human decision-making functions primarily at the level of budget and objective-setting. The machine determines the targeting, the creative, the price, and the measurement. The human approves the invoice.
This is the through-line connecting the post-IDFA identity stack, the W3C attribution standard, and the AI operating system narrative. Each layer removes a human audit point. The identifier is no longer the IDFA or the third-party cookie, both of which a technically literate user could inspect and block. It is now a vector: a probability distribution over demographic and behavioural categories, produced by a model that was trained on data collected before the consent banner was even rendered. The user sees "Accept All" or "Reject All" and believes those are the choices. They are not. The real choices were made several infrastructure layers deeper, in a bidding algorithm that decided which ad to show based on a profile the user cannot access, maintained by a vendor whose name does not appear on the consent management platform.
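A crude sketch of what that looks like in practice, with invented categories, scores, and weights: the "identifier" is a set of probabilities, and the only artefact of the decision is a number the user never sees.

```python
# Illustrative profile-as-vector sketch (the categories, scores, and scoring
# rule are assumptions for illustration, not any vendor's model).
profile = {"mortgage_intent": 0.72, "travel_intent": 0.05, "age_25_34": 0.61}
campaign_weights = {"mortgage_intent": 1.0, "age_25_34": 0.4}

# The bid score is a weighted combination of inferred category probabilities.
bid_score = sum(profile.get(k, 0.0) * w for k, w in campaign_weights.items())
print(round(bid_score, 3))   # 0.964: the number that decides which ad is shown
```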
The data flow, end to end, now runs through roughly five layers. Layer one: the publisher's first-party data collection, often through authenticated sessions, subscription forms, or on-site behavioural signals. Layer two: the identity resolution provider, such as LiveRamp or The Trade Desk, which matches that pseudonymous signal to a persistent identifier. Layer three: the data clean room, where the publisher and the advertiser run matching queries without exposing raw data to each other but under terms governed by the clean room operator. Layer four: the demand-side platform or supply-side platform, which applies machine-learning models to predict conversion likelihood and set bids. Layer five: the attribution reporting API, whether SKAdNetwork, Privacy Sandbox, or the proposed W3C Attribution standard, which closes the loop. At no point in this chain can the data subject request deletion and be certain the deletion propagated to layers three, four, and five.
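The consequence for erasure rights can be sketched in a few lines. The layer names and behaviour below are illustrative assumptions, not any vendor's architecture; the point is only that each layer keeps its own copy of the signal, and a deletion request reaches some layers but not others.

```python
# Toy model of the five-layer flow described above (illustrative assumptions).
# Each layer keeps its own copy, which is why deletion at the publisher does
# not guarantee deletion downstream.

LAYERS = ["publisher", "identity_resolver", "clean_room", "dsp_models", "attribution_api"]

stores = {layer: {} for layer in LAYERS}

def propagate(user_id, signal):
    """Push one behavioural signal through every layer, each logging a copy."""
    for layer in LAYERS:
        stores[layer][user_id] = signal

def delete_request(user_id, reachable=("publisher", "identity_resolver")):
    """Honour an erasure request only at the layers the data subject can reach."""
    for layer in reachable:
        stores[layer].pop(user_id, None)

propagate("user-123", {"page": "housing-policy", "label": "mortgage_intent"})
delete_request("user-123")
leftover = [layer for layer in LAYERS if "user-123" in stores[layer]]
print(leftover)   # ['clean_room', 'dsp_models', 'attribution_api']
```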
Researchers at the Irish Council for Civil Liberties have tracked the number of data brokers active in the European programmatic advertising ecosystem. Their public tracker, updated through 2025, lists more than 500 companies that process bid-stream data. Each broker operates under a legitimate interest claim or a consent framework that chains through a consent management platform with a 14 percent average opt-in rate, according to a 2024 study published by the European Data Protection Board's own measurement taskforce. The numbers do not add up: 500 brokers, each claiming legitimate interest or consent, but only one in seven users affirmatively opting in. The gap between claimed lawful basis and actual user choice is the space where the post-cookie identity stack lives.
The enforcement gap is wider still. Apple's privacy manifests, introduced in 2023 for apps and SDKs engaged in tracking and attribution, created a declarative framework in which developers must list the tracking domains their apps contact. Declaring the domain is not the same as proving lawful basis for the data transfer. Apple's review process checks for correspondence between the manifest and the app's network behaviour; it does not assess GDPR compliance. Google's Privacy Sandbox, which AdExchanger described as a project that is both "over" and "not quite" finished in 2026, operates on a similar model: the browser enforces technical privacy boundaries, but the legal basis for the data processing within those boundaries is left to the data controller. The regulator cannot audit the browser. The browser vendor cannot audit the data controller. In between sits the user, holding a consent form that explains none of this.
"With everything else going on in 2026, at least we don't have Google's 'Privacy Sandbox' to worry about. The high-profile project is over. Or is it? Not quite." (AdExchanger, April 2026)
That uncertainty, whether the Sandbox is dead or merely dormant, is itself a market signal. Ad-tech vendors have spent six years preparing for cookie deprecation, then for cookie retention, then for a hybrid model in which the Topics API and Protected Audience API coexist with whatever alternative identifiers the industry builds on top. The capital expenditure on identity resolution infrastructure, measured in billions of dollars across the sector, cannot be written down just because Google delayed the timeline. The pipes are laid. The clean rooms are operational. The machine-learning models are trained. What remains is the slow regulatory work of determining whether any of it is lawful.
The European Data Protection Board is expected to release updated guidance on the "consent or pay" model in the second half of 2026, following the Court of Justice of the European Union's ruling in Case C-252/21, which addressed Meta's use of contractual necessity as a basis for behavioural advertising. The CJEU found that personalised advertising cannot be justified under the performance-of-contract legal basis. The ruling pushed the industry toward consent, which in turn pushed publishers toward "consent or pay" paywalls, which in turn attracted fresh regulatory scrutiny. The cycle is now in its third iteration: each regulatory intervention narrows the lawful basis; the industry finds a new interpretation; the regulator intervenes again. The identity stack does not disappear. It just moves to shakier legal ground.
What is missing from this cycle is any mechanism for systemic accountability. A data subject who wants to know every identifier associated with their browsing in the past thirty days has no portal to query. A publisher who wants to verify that the audience labels attached to its inventory by a downstream SSP are accurate has no audit right in most standard contracts. An advertiser who wants to confirm that the attribution numbers reported by a platform reflect actual conversions, rather than modelled conversions, has no independent measurement path. The entire structure runs on trust in black boxes, and the black boxes are operated by the same companies that sell the ads, own the inventory, and set the measurement methodology.
Whose body produces the data?
The question that runs through every layer of the post-cookie stack is whose body is generating the signal. A single page load on a news site triggers, on average, between twenty and seventy bid requests, depending on the market and the ad density. Each bid request contains the user's IP address, device fingerprint components, and a publisher-provided content category. The bid request is broadcast to multiple demand sources. Each demand source logs the request, enriches it with data from third-party brokers, and returns a bid. After the auction closes, the SSP logs the winning bid, the clearing price, and the creative that was served. Twenty to seventy copies of the user's device signature are now distributed across the ad-tech supply chain. The user consented to a single "cookie" on a consent banner. What they actually consented to, according to the IAB Europe's Transparency and Consent Framework, was the processing of their personal data for purposes including "personalised advertising," "content measurement," and "audience research" by any vendor that has registered with the framework and declared a lawful basis. In 2026, the Global Vendor List contains several hundred such vendors. Most users cannot name one.
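A back-of-the-envelope sketch of that fan-out, with invented numbers in the twenty-to-seventy range described above, makes the multiplication visible. The structure and figures are illustrative assumptions, not measurements of any real exchange.

```python
# Back-of-the-envelope sketch of bid-request fan-out (illustrative only).
import random

def page_load(device_signature, ad_slots=4, demand_sources_per_slot=None):
    """Broadcast one bid request per slot to a set of demand sources and
    count how many copies of the device signature end up logged."""
    demand_logs = []
    for slot in range(ad_slots):
        n_sources = demand_sources_per_slot or random.randint(5, 18)
        request = {"slot": slot, "ip_and_device": device_signature,
                   "content_category": "news/housing"}
        for source in range(n_sources):
            demand_logs.append((f"dsp_{source}", dict(request)))  # each DSP keeps a copy
    return demand_logs

logs = page_load(device_signature="203.0.113.7|Safari17|1920x1080")
print(len(logs), "copies of the device signature now sit in demand-side logs")
```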
The discrepancy between what a consent banner communicates and what the ad-tech stack executes is not an accident. It is a structural feature of an industry whose revenue depends on processing more data than any consumer would voluntarily provide. Every new privacy restriction, from ATT to cookie deprecation to the DMA's data-portability requirements, has been met not with compliance in spirit but with architectural restructuring that satisfies the letter. The identifier is removed from one layer and reinserted at a lower layer. The user is told tracking was blocked; the vendor knows it was merely rerouted.
The path forward depends on whether regulators begin auditing the data flows rather than the consent banners. A consent banner is a user-interface problem. A data flow is a systems problem. The EU's Digital Services Act includes provisions for researcher access to platform data, which could, if enforced rigorously, allow independent researchers to trace an identifier from the bid request through the clean room to the attribution report. The Digital Markets Act's interoperability requirements could require gatekeeper platforms to offer an independent measurement API that advertisers and researchers can use to verify conversion claims. Neither of these provisions has been meaningfully tested in the ad-tech context. The W3C Attribution standard, if adopted in its current trajectory, could make that testing harder, not easier, by standardising a measurement model that bakes platform control into the browser.
Watch for the European Data Protection Board's updated guidance in the second half of 2026. Watch for the first GDPR complaint framed specifically around data clean rooms and whether the matching query constitutes a separate processing operation requiring a separate lawful basis. Watch for the French CNIL and the Irish DPC to issue rulings on whether Seller-Defined Audiences labels are personal data or merely metadata, a distinction that determines whether the right of access applies. And watch for the W3C Attribution working group to publish its first draft. The identity stack is being rebuilt in real time, and the people whose identities it carries are still not in the room.