The data loss that followed iOS 14 and Consent Mode v2 is not a temporary problem waiting to be solved by a platform update. It is the new baseline. The advertisers who have accepted this and rebuilt their measurement infrastructure around what is actually observable are operating with a structural advantage over those still waiting for the ecosystem to return to 2019.
What Counts as First-Party Data
First-party data is any information collected directly from your audience through interactions with your own properties. This includes CRM data from purchases and sign-ups, behavioral data from your website and app, email engagement history, offline sales and transaction data, and customer support interactions.
What it does not include: inferred data from third-party platforms, lookalike audience models built by ad platforms, or data purchased or licensed from data brokers. The distinction matters because first-party data is both the highest-quality signal available and the one least affected by privacy restrictions, since the consent that governs it was given directly to you.
The quality hierarchy for ad platform signals runs: customer match lists from your CRM at the top, website event data collected with a robust server-side setup second, platform-side behavioral data third. The further you move from your own collected data, the more dependent you are on signals that can disappear with a platform policy change or a browser update.
Consent Mode v2 and What It Actually Changes
Consent Mode v2, which became mandatory for Google advertising in the EU in early 2024, changes the relationship between user consent and conversion reporting in ways most advertisers still underestimate.
When a user declines tracking consent, Google uses modeled conversions to fill the gap. The platform estimates how many conversions occurred based on aggregate patterns and statistical modeling rather than direct measurement. For accounts where a significant share of traffic is consent-declined, this means a meaningful portion of reported conversions are modeled rather than observed.
The practical implications: conversion totals look more stable than the underlying measurement quality warrants. CPA and ROAS metrics are partly real and partly model outputs. And the ratio of modeled to observed conversions varies by market, device, and demographic in ways that are not fully transparent.
This is not an argument against using Google Ads in the EU. It is an argument for building verification systems: server-side tracking to maximize observed conversion rates, regular reconciliation against backend transaction data, and incrementality testing to ground-truth your aggregate performance.
Server-Side Tracking: What It Solves and What It Does Not
Server-side tracking moves the conversion measurement logic from the user's browser to your own server. Instead of relying on a browser pixel that can be blocked by Safari's ITP, Firefox's Enhanced Tracking Protection, or ad blockers, events are sent from your server directly to the platform's API.
What this solves: browser-based tracking loss from ITP and similar restrictions, ad blocker interference with client-side pixels, and some of the signal loss from iOS app tracking restrictions for web conversions.
What it does not solve: user-consent-declined tracking (GDPR restrictions apply regardless of where the event is fired), cross-device attribution (a user who clicks on mobile and converts on desktop is still a challenge), and app-to-web attribution gaps.
Server-side tracking is necessary but not sufficient. It recovers some of the measurement signal that was lost to browser restrictions, but it does not return you to 2019-era tracking completeness. Plan for 15 to 30 percent structural data loss even with a well-implemented server-side setup.
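As a back-of-envelope sketch of what that structural loss means for reported efficiency metrics, the following assumes a loss rate you have estimated for your own setup; the numbers are illustrative, not benchmarks:

```python
# Back-of-envelope adjustment for structural tracking loss.
# The loss rate is an assumption you estimate for your own setup.
def adjust_for_signal_loss(reported_conversions: int, loss_rate: float) -> float:
    """Estimate total conversions when a known share is never observed."""
    if not 0 <= loss_rate < 1:
        raise ValueError("loss_rate must be in [0, 1)")
    return reported_conversions / (1 - loss_rate)

spend = 10_000.0
reported = 400           # conversions the platform actually observed
loss_rate = 0.20         # assumed 20% structural loss (midpoint of 15-30%)

estimated_true = adjust_for_signal_loss(reported, loss_rate)   # 500.0
reported_cpa = spend / reported                                # 25.00
effective_cpa = spend / estimated_true                         # 20.00
print(f"Reported CPA: {reported_cpa:.2f}, loss-adjusted CPA: {effective_cpa:.2f}")
```

The point of the exercise is that a campaign hitting a 25 euro CPA target on reported data may in reality be performing at 20, which changes where the bid cap should sit.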
Conversion APIs: Meta, Google, and TikTok
Each major platform now has a server-side Conversion API that accepts event data directly from your infrastructure. The implementation specifics differ, but the principle is the same: you fire events from your server with as many matching keys as possible (email, phone, IP address, user agent) and the platform uses those keys to match events to users and campaigns.
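A minimal sketch of such an event body, using Meta's Conversions API field names as an example (the pixel ID, token, and all values here are placeholders; the actual endpoint is a POST to the platform's events API):

```python
import hashlib
import time

def sha256_norm(value: str) -> str:
    """Normalize then SHA-256 hash a matching key before transmission."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# Sketch of one server-side event following Meta's Conversions API schema.
# Values are illustrative; IP and user agent are sent unhashed per the spec.
event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "website",
    "user_data": {
        # The more hashed matching keys you include, the better the match rate.
        "em": [sha256_norm("Jane.Doe@Example.com")],
        "ph": [sha256_norm("4915112345678")],
        "client_ip_address": "203.0.113.7",
        "client_user_agent": "Mozilla/5.0",
    },
    "custom_data": {"currency": "EUR", "value": 49.90},
}
payload = {"data": [event]}
# In production this payload would be POSTed to the platform's events endpoint
# with your pixel ID and access token.
```

Google's Enhanced Conversions and TikTok's Events API accept the same kind of hashed identifiers under different field names, so the hashing and normalization layer can be shared across platforms.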
Match rates are the key performance indicator for Conversion API setup quality. For Meta, the target is an event match score above 80 percent; Google's Enhanced Conversions has similar benchmarks. Low match rates (below 60 percent) indicate that the events you are sending lack sufficient matching keys, which defeats much of the purpose.
The most common setup mistake is sending Conversion API events without hashing or with inconsistent hashing across properties. All personal data sent to platform APIs should be SHA-256 hashed before transmission, and the hashing implementation needs to be consistent: normalize email addresses to lowercase before hashing, include phone numbers with country codes, and use the same format consistently across all events.
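The normalization rules above can be sketched as follows. The phone handling is deliberately simplified (a real implementation should use a dedicated library such as `phonenumbers`); the key point is that identical inputs must always produce identical hashes:

```python
import hashlib
import re

def normalize_email(email: str) -> str:
    """Lowercase and trim: 'Jane.Doe@Example.COM ' -> 'jane.doe@example.com'."""
    return email.strip().lower()

def normalize_phone(phone: str, default_country_code: str = "49") -> str:
    """Keep digits only and ensure a country code prefix. Simplified sketch:
    real setups should use a phone parsing library for edge cases."""
    digits = re.sub(r"\D", "", phone)
    if digits.startswith("0"):             # national format -> prepend code
        digits = default_country_code + digits[1:]
    return digits

def hash_key(value: str) -> str:
    """SHA-256 hex digest over the already-normalized value."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# The same user must hash identically everywhere, or match rates drop:
a = hash_key(normalize_email("Jane.Doe@Example.com"))
b = hash_key(normalize_email("  jane.doe@example.com"))
assert a == b   # consistent normalization -> identical hashes
```

Inconsistent casing or phone formatting between, say, your checkout events and your CRM upload produces different hashes for the same person, which the platform cannot match.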
Clean Rooms and Data Matching
Clean rooms are privacy-preserving environments where two parties can match datasets without either party seeing the other's raw data. In advertising, this typically means matching your CRM data against a platform's user data to measure performance, plan campaigns, or identify audience overlap.
Google's Ads Data Hub, Meta's Advanced Analytics, and Amazon Marketing Cloud all offer clean room functionality at different levels of maturity and accessibility. For large advertisers, they provide a legitimate path to audience insights and measurement that complies with privacy requirements.
For smaller advertisers, the practical utility is limited by minimum data requirements, technical complexity, and cost. The more accessible version of the same principle is Customer Match: uploading hashed CRM data directly to platforms for audience matching, which can be done without clean room infrastructure and produces meaningful signal improvement at most budget levels.
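A Customer Match upload file can be prepared with a few lines; the sketch below assumes a hypothetical CRM export and uses `Email` as the column header, though the exact template varies by platform and should be checked against the current upload spec:

```python
import csv
import hashlib
import io

def hash_email(email: str) -> str:
    """SHA-256 over the normalized address, as Customer Match uploads require."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical CRM export -> a one-column hashed-email file for upload.
crm_rows = [
    {"email": "Jane.Doe@Example.com", "ltv": 420},
    {"email": "max@example.org ", "ltv": 95},
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Email"])                  # header name varies by platform
for row in crm_rows:
    writer.writerow([hash_email(row["email"])])

print(buf.getvalue())
```

Segmenting the export by lifetime value or recency before uploading produces separate match lists that can carry different bids, which is where most of the practical value sits.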
The Over-Reliance on Modeled Conversions
The risk worth naming directly is this: as platforms model more of the conversion data they report, the feedback loop between your actual business results and your campaign optimization decisions gets weaker.
If your account has 30 percent modeled conversions and you are optimizing bidding, budget allocation, and creative decisions based on reported CPA, you are partly optimizing against statistical estimates rather than observed behavior. The estimates are better than nothing, but they are not the same as measurement.
The safeguard is a regular reconciliation practice: compare platform-reported conversions to actual transactions in your backend system at least monthly. If the ratio is stable, the models are tracking reality reasonably well. If it drifts, something in your measurement setup has changed and the platform reports may be misleading.
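The reconciliation check itself is simple enough to sketch in a few lines; the figures and the 10 percent tolerance below are illustrative assumptions to tune against your own account's history:

```python
# Monthly reconciliation sketch: platform-reported vs. backend conversions.
# All numbers are illustrative; the alert threshold is an assumption to tune.
monthly = [
    ("2024-01", 940, 1000),   # (month, platform_reported, backend_actual)
    ("2024-02", 955, 1010),
    ("2024-03", 990, 1005),
    ("2024-04", 1120, 1000),  # ratio jumps -> something changed
]

ratios = [reported / actual for _, reported, actual in monthly]
# Trailing average of prior months as the baseline ratio.
baseline = sum(ratios[:-1]) / len(ratios[:-1])

latest = ratios[-1]
drift = abs(latest - baseline) / baseline
if drift > 0.10:   # assumed 10% tolerance
    print(f"Drift {drift:.1%}: check tagging, consent rates, or modeling changes")
```

A stable ratio does not prove the platform numbers are right, only that they are consistently wrong in the same way, which is still enough to optimize against. A drifting ratio means the relationship itself has broken.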