Ad Tech’s Identity Stack Ate Consent: How It Rebuilt Itself Post-Cookies
With third-party cookies and Apple's IDFA gone, a system of AI-powered identity graphs, browser fingerprinting, and data brokerage has replaced them, one that regulators barely see, let alone control.
In March 2026, NPR reported that U.S. Immigration and Customs Enforcement was purchasing commercial data about Americans in bulk, without a warrant. The data came from the same supply chain that serves targeted advertising: cell phone app signals, browser telemetry, location pings harvested by SDKs embedded in everyday software. The story was not about a secret government programme. It was about a commercial data-broker market so vast and so unregulated that a federal agency could shop for intimate behavioural profiles the way a brand buys a lookalike audience on a demand-side platform. The architecture that made this possible is the post-cookie, post-IDFA identity stack, and it is operating largely outside the view of the laws that were supposed to constrain it.
The narrative the ad-tech industry told itself, and told regulators, was that the deprecation of third-party cookies and Apple's Identifier for Advertisers would usher in a privacy-respecting era. Google's Privacy Sandbox would replace individual tracking with cohort-based interest groups. Apple's App Tracking Transparency would give users a binary choice that most would decline. Brands would return to context. None of that happened the way the press releases described. Instead, the industry rebuilt the identity stack underneath the consent layer, swapping browser cookies for a far more opaque set of signals: hashed email addresses, device fingerprinting, IP-based household graphs, and AI models that infer identity from behavioural residue. The system did not become more private. It became harder to audit.
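To see why a hashed email is an identifier rather than an anonymisation step, consider a minimal sketch of the technique, assuming the common industry recipe of lowercasing, trimming, and unsalted SHA-256; the function name is illustrative, and individual vendors vary the normalisation rules.

```python
import hashlib

def hashed_email_id(email: str) -> str:
    """Normalise an email address and hash it, as hashed-email
    identity schemes commonly do. The digest is deterministic and
    unsalted, so every party holding the same address derives the
    same value: a cross-site join key, not anonymisation."""
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# The same inbox yields the same identifier everywhere it logs in:
print(hashed_email_id("Jane.Doe@example.com"))
print(hashed_email_id("  jane.doe@example.com "))  # identical digest
```

A cookie can be cleared; the hash of an email address survives every device, browser, and privacy setting its owner will ever use.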
The ExchangeWire series on the transformation of ad tech, published in late April 2026, captured the industry's own framing of this shift. Under the headline "Rewriting the Rules of Ad Tech: From Black Boxes to AI Operating Systems," the outlet gathered views from ad-tech experts on what they described as a move from deterministic identity matching, built on cookies and mobile IDs, to probabilistic identity resolution driven by machine learning. One recurring theme was the shift from selling audiences based on known identifiers to selling audiences based on predicted attributes, an approach that removes the need for a user to log in or consent to anything. The data flow no longer begins with a cookie drop. It begins with a statistical inference.
The computational plumbing that makes this work is the identity graph. A typical graph ingests dozens of raw signals per device, per session: screen resolution, installed fonts, battery level, accelerometer drift, WebRTC leak data, IP address prefix, and the timing of keystrokes. None of these signals is a cookie. None of them triggers a consent banner in practice. Together, processed through a model trained on billions of known identity pairs, they produce a persistent identifier that can follow a person across sites, apps, and devices. LiveRamp's ATS and The Trade Desk's Unified ID 2.0 anchor this resolution logic on hashed emails, while ID5 folds in probabilistic signals, though the precise feature sets and training data remain proprietary in every case. Regulators cannot inspect the models. Users cannot see the signals being collected. Advertisers see only the match rate.
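In schematic form, the probabilistic side of that graph works something like the sketch below. The signal values are invented, and production systems score candidate matches with trained similarity models rather than an exact hash, but the structural point holds: no single signal identifies anyone, while the combination does.

```python
import hashlib
import json

# Illustrative signals of the kind a script can read without
# setting a cookie or touching device storage.
signals = {
    "screen": "2560x1440",
    "fonts": ["Arial", "Calibri", "Helvetica Neue"],
    "timezone": "Europe/Dublin",
    "ip_prefix": "84.203.0.0/16",
    "hw_concurrency": 8,
    "canvas_hash": "b41f9c02",  # rendering quirks of this GPU/driver
}

def device_key(sig: dict) -> str:
    """Collapse many weak signals into one strong key. Real identity
    graphs use ML-based fuzzy matching instead of an exact hash,
    which lets the key survive a signal changing over time."""
    canonical = json.dumps(sig, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

print(device_key(signals))  # stable across sites, sessions, consent states
```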
Browser fingerprinting was supposed to be the thing the industry abandoned. The European Data Protection Board has issued guidance classifying fingerprinting as processing that generally requires consent. Apple's Safari and Mozilla's Firefox have shipped anti-fingerprinting protections for years. Yet TechRepublic reported in April 2026 that Google's Chrome browser still exposes fingerprinting surfaces to scripts, and a privacy expert warned that the combination of Privacy Sandbox APIs with unaddressed fingerprinting vectors creates what amounts to a secondary identity channel operating in parallel with the official, supposedly privacy-safe ones. Chrome holds roughly 65% of the global browser market. A fingerprinting gap in Chrome is not a niche vulnerability; it is a structural feature of the web's largest gateway.
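The arithmetic behind that concern is unforgiving. Singling out one browser among the world's few billion requires only about 32 bits of distinguishing information, and each exposed surface contributes several bits. The per-signal figures below are illustrative assumptions in the spirit of the EFF's Panopticlick measurements, not measurements of Chrome itself.

```python
import math

population = 5_000_000_000           # rough global browser count (assumption)
bits_needed = math.log2(population)  # ~32.2 bits to single out one browser

# Assumed per-signal entropy, loosely following published
# fingerprinting studies; exact values vary by population.
signal_bits = {
    "user_agent": 10.0,
    "installed_fonts": 13.9,
    "screen_and_colour_depth": 4.8,
    "timezone": 3.0,
    "canvas_rendering": 8.0,
}

print(f"bits needed:    {bits_needed:.1f}")
print(f"bits available: {sum(signal_bits.values()):.1f}")
# Five exposed surfaces already clear the identification threshold.
```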
The Chrome issue is especially significant because Google spent years positioning Privacy Sandbox as the answer to third-party cookie deprecation, originally scheduled for 2022, then delayed repeatedly, and finally resolved in 2024 with a decision to let users choose. The Topics API, which assigns browsers to interest cohorts, was meant to replace individual tracking. The Protected Audience API, formerly FLEDGE, was meant to run ad auctions inside the browser rather than on remote servers. One user's first-person account, published in April 2026, detailed the experience of turning off Chrome's "Ad Privacy" feature after discovering that it was not blocking tracking but rather enabling Google's own alternative tracking infrastructure under a different name. The feature's label, the user wrote, was misleading enough that most people would never investigate what it actually did.
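Schematically, Topics-style cohort assignment reduces to the sketch below: the browser matches recently visited hostnames against a fixed interest taxonomy and exposes only the top few matches per epoch. The taxonomy entries, epoch handling, and selection rule here are simplified stand-ins for Chrome's actual implementation, which uses a far larger taxonomy and an on-device classifier.

```python
from collections import Counter

# Toy slice of an interest taxonomy; hostnames are hypothetical.
TAXONOMY = {
    "kayaks-direct.example": "Sports/Water Sports",
    "mortgage-rates.example": "Finance/Loans",
    "grill-reviews.example": "Home & Garden/Grilling",
}

def topics_for_epoch(visited_hosts: list[str], k: int = 3) -> list[str]:
    """Return the k most frequent matched topics for one epoch.
    The browser computes this locally; callers receive coarse
    topics, not the browsing history that produced them."""
    counts = Counter(TAXONOMY[h] for h in visited_hosts if h in TAXONOMY)
    return [topic for topic, _ in counts.most_common(k)]

history = ["kayaks-direct.example", "mortgage-rates.example",
           "kayaks-direct.example", "unlisted-blog.example"]
print(topics_for_epoch(history))  # ['Sports/Water Sports', 'Finance/Loans']
```

The complaint in that user's account is not that this mechanism leaks browsing history; it is that the mechanism runs, labelled as a privacy feature, alongside the fingerprinting channel described above.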
The labelling problem is not cosmetic. It is the regulatory fault line of the entire identity stack. In Europe, the GDPR requires that data processing be transparent, specific, and based on one of six lawful bases. The ePrivacy Directive requires consent for storing or accessing information on a terminal device. In the United States, a patchwork of state laws, led by California's CPRA, imposes opt-out rights and data-minimisation obligations that depend, again, on disclosure. If the identity graph does not use cookies, does not store anything on the device, and operates entirely through server-side inference from signals that are not classified as personal data under any given regime, then the disclosure obligation is unclear. The industry has been engineering toward that ambiguity for a decade.
The data-broker supply chain documented by NPR makes the stakes concrete. Location data brokers such as Venntel, a subsidiary of Gravy Analytics, and Babel Street aggregate mobile location data from SDKs embedded in weather apps, prayer apps, coupon apps, and games. The apps pass location to the SDK. The SDK passes it to an aggregator. The aggregator sells it to a data broker. The data broker sells it to ICE, the FBI, the Internal Revenue Service, or a hedge fund. The user who tapped "Allow While Using App" on a location prompt was consenting to a weather forecast, not to warrantless government surveillance. But the consent architecture makes no distinction, because the data flow was never designed to stop at the point of sale.
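The payload that travels down that chain is mundane. The sketch below shows the shape of a single location event as an SDK might emit it; the field names and values are hypothetical, but the pairing of a mobile advertising ID with coordinates and a timestamp is the standard shape, and it survives every hop intact.

```python
import json
import time

# Hypothetical location event of the kind an embedded SDK emits.
event = {
    "maid": "38400000-8cf0-11bd-b23e-10b96e40000d",  # mobile ad ID
    "lat": 53.3498,
    "lon": -6.2603,
    "ts": int(time.time()),
    "app": "weather.example.app",
    "ip": "84.203.12.7",
}

# Each hop (SDK collector -> aggregator -> broker -> buyer) forwards
# the record essentially unchanged: provenance is appended, but the
# identifier and coordinates are never stripped.
for hop in ["sdk-collector", "aggregator", "data-broker", "end-buyer"]:
    event.setdefault("path", []).append(hop)

print(json.dumps(event, indent=2))
```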
This is the step nobody is regulating: the moment when a behavioural profile assembled for advertising crosses into the general data-broker market and becomes available for non-advertising purposes. The European Union's Digital Services Act and Digital Markets Act impose obligations on very large platforms. The GDPR imposes obligations on data controllers and processors. But the data broker that buys from an ad-tech intermediary and resells to a government agency sits in a regulatory gap, often claiming it is neither a controller nor a processor in the GDPR sense, or that its data is de-identified and therefore outside the law's scope. The claim of de-identification rarely survives scrutiny: multiple academic audits, including work by researchers at the Irish Council for Civil Liberties and University College London, have demonstrated that individuals can be re-identified with over 90% accuracy from location datasets containing as few as 15 data points per day.
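The mechanism behind those audits is a linkage attack, and it fits in a few lines. In the toy version below, with entirely synthetic data, the dataset carries pseudonyms rather than names, yet knowing just two space-time points about a target, from a doorbell camera, a card swipe, or a tagged photo, is enough to pick their trace out of the set.

```python
# Toy linkage attack on a "de-identified" location dataset.
# All data is synthetic; pseudonyms stand in for stripped names.
dataset = {
    "user_0413": [("08:10", "home_cell"), ("09:02", "office_cell"),
                  ("12:31", "cafe_cell")],
    "user_0907": [("08:14", "gym_cell"), ("09:05", "office_cell"),
                  ("12:33", "deli_cell")],
}

# Two points the attacker knows about the target from outside sources.
known_points = {("08:10", "home_cell"), ("12:31", "cafe_cell")}

matches = [uid for uid, trace in dataset.items()
           if known_points <= set(trace)]
print(matches)  # ['user_0413'] -- a single candidate: re-identified
```

At fifteen points per person per day, the number of people sharing a full trace collapses toward one almost immediately, which is why the de-identification claim fails.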
The AI layer compounds the opacity. The ExchangeWire series reported that ad-tech firms are increasingly deploying large language models and predictive AI not merely to optimise bids but to generate entire audience segments from sparse or noisy inputs. A model might take a handful of page-visit timestamps, a truncated IP prefix, and a device type, and output a probability that the user is a parent, a homeowner, a voter, or someone experiencing financial distress. The advertiser buys the segment, never seeing the raw signals or the model's confidence intervals. The user never knows the inference was made. The model's accuracy is measured by the advertiser's conversion rate, not by any standard of fairness or privacy protection. And because the model is proprietary, regulators cannot audit it without commercial confidentiality claims triggering years of litigation.
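A stripped-down version of that inference step looks like the sketch below. The weights are invented and a real system would run a trained model over thousands of features, but the shape is the point: sparse behavioural residue in, a saleable sensitive attribute out, with no identifier and no consent event anywhere in the path.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical hand-set weights standing in for a trained model.
WEIGHTS = {"night_visits": 0.8, "payday_loan_pages": 1.6,
           "price_comparison_pages": 0.9}
BIAS = -2.5

def p_financial_distress(features: dict[str, int]) -> float:
    """Score sparse behavioural signals into a probability that the
    user belongs to a 'financially distressed' audience segment."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return sigmoid(z)

observed = {"night_visits": 3, "payday_loan_pages": 2,
            "price_comparison_pages": 1}
print(f"{p_financial_distress(observed):.2f}")  # 0.98: sold into the segment
```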
Consent, in this stack, is a shell. The Interactive Advertising Bureau's Transparency and Consent Framework, which governs much of programmatic advertising in Europe, has been criticised repeatedly by privacy regulators. The Belgian Data Protection Authority ruled in 2022 that IAB Europe's framework violated the GDPR. The framework was revised, but its underlying logic remains: consent is collected once, at the top of the supply chain, and then passed down through hundreds of intermediaries in a bid request that contains dozens of data fields the user has never seen. A 2025 study by the Irish Council for Civil Liberties tracked a single bid request through the programmatic ecosystem and found that it had been shared with over 1,200 companies before an ad was served. One consent click. Twelve hundred recipients.
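The mechanics of that fanout are easy to sketch. In OpenRTB-style programmatic, the consent string captured by the publisher's dialog rides along inside the bid request, which is then broadcast; the endpoint names below are hypothetical, the consent value is a placeholder rather than a real TC string, and a real request fans out through exchanges that fan out again.

```python
import copy

# One consent event, captured once at the top of the supply chain.
bid_request = {
    "id": "req-7f3a",
    "site": {"domain": "news.example"},
    "device": {"ip": "84.203.12.7", "ua": "Mozilla/5.0 ..."},
    "user": {"ext": {"consent": "TC_STRING_PLACEHOLDER"}},
    "regs": {"ext": {"gdpr": 1}},
}

# Hypothetical first-tier recipients; each may forward onward.
recipients = [f"dsp-{n}.example" for n in range(1, 6)]

for endpoint in recipients:
    outbound = copy.deepcopy(bid_request)
    # Every recipient gets the same consent string and the same user
    # data; none of them was named in the dialog the user clicked.
    print(endpoint, "<-", outbound["user"]["ext"]["consent"])
```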
The civil-liberties response is growing but fragmented. NOYB, the privacy advocacy group founded by Max Schrems, has filed over 800 GDPR complaints since 2018, many targeting the advertising technology sector. The Electronic Frontier Foundation has published technical analyses of browser fingerprinting and maintains the Privacy Badger browser extension. European Digital Rights (EDRi) coordinates advocacy across member states. But the enforcement gap is structural. Data protection authorities are under-resourced. The Irish Data Protection Commission, which handles many of the largest cases because of Dublin's role as the European headquarters for Meta, Google, and others, has been criticised by the European Parliament for the pace and scale of its enforcement. Cross-border cases take years. By the time a fine is issued, the technology has moved on.
The regulatory conversation in the United States has intensified around the Fourth Amendment implications of commercial data access. In March 2026, members of Congress questioned representatives from data brokers and technology companies about government purchases of commercial surveillance data, NPR reported. The hearing surfaced a question that has been building since at least the 2018 Carpenter v. United States Supreme Court decision, which held that accessing historical cell-site location records requires a warrant. The commercial data-broker pipeline is, in effect, a workaround for that ruling: if the government cannot compel a carrier to hand over location data without a warrant, it can buy the same data from a broker who collected it through an ad SDK and faces no Fourth Amendment constraint.
The practical consequence for an ordinary person is a surveillance architecture that operates silently and at scale. A person opens a free app. The app contains an SDK from a company they have never heard of. The SDK collects location, accelerometer data, and IP address. That data flows to an aggregator, then to an identity graph, then into a bid stream, then into a data-broker catalogue. The person never saw a privacy policy that named the SDK vendor, the aggregator, the identity graph operator, or the broker. They tapped "Agree" on a 12,000-word terms-of-service document they could not reasonably be expected to read. The system calls this informed consent. The user calls it a morning commute.
What makes this architecture stable is not technology but the absence of a clear legal obligation to stop. The technology to build a consent-respecting ad system exists. Contextual advertising, which targets based on the content of the page rather than the identity of the visitor, is technically straightforward and was the dominant model before programmatic exchanges took over. Privacy-preserving measurement techniques such as differential privacy and on-device processing are well understood. What is missing is not a better sandbox. What is missing is a rule that says an identity graph assembled from involuntary signals, traded through unaudited intermediaries, and made available to buyers with no accountability mechanism, is illegal until proven otherwise.
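Neither alternative requires exotic machinery. As one illustration, a minimal sketch of differentially private measurement using the standard Laplace mechanism shows the shape of a system that reports campaign performance without exposing any individual; the epsilon value and conversion count here are illustrative.

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Report an aggregate count with Laplace noise calibrated to
    sensitivity 1 (one person changes the count by at most 1), so
    no individual's presence can be confidently inferred."""
    scale = 1.0 / epsilon
    # Laplace noise as the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

conversions = 1_842  # true aggregate across a campaign (illustrative)
print(round(dp_count(conversions)))  # noisy, but decision-grade
```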
The European Data Protection Board is expected to issue updated guidance on the interplay between fingerprinting, identity graphs, and the ePrivacy Directive later this year. The UK Information Commissioner's Office has an ongoing investigation into the ad-tech real-time bidding ecosystem, now in its seventh year. In the United States, the Federal Trade Commission has signalled interest in regulating commercial surveillance under its Section 5 authority, but rulemaking is slow and subject to litigation and political reversal. The identity stack has moved faster than every institution tasked with governing it. The reforms, when they arrive, will be regulating the 2024 version of a system that will already be running on a 2027 architecture.
The next twelve months will test whether transparency can catch up to inference. The browser vendors control the most effective choke point: the APIs that expose fingerprinting surfaces to scripts. If Chrome, Safari, and Firefox were to treat fingerprinting surfaces with the same severity they treat third-party cookie access, a large slice of the probabilistic identity market would collapse. But browser vendors are also advertising companies, or depend on advertising revenue, and their incentives to close every gap are misaligned with their users'. Regulators can investigate identity graphs, but they need the technical staff to audit machine-learning models, the legal authority to demand access, and the political backing to impose remedies that affect billion-dollar revenue streams. None of those conditions is currently met in any major jurisdiction. The identity stack will not dismantle itself. Somebody will have to order it open. The public records portals for those orders do not yet exist.