Every major ad platform over-reports its own contribution to your results. This is not a conspiracy; it is the inevitable outcome of attribution models that count what they can see and ignore what they cannot. The more you optimize purely on platform-attributed numbers, the more budget you shift toward channels with aggressive attribution and away from channels doing real work that is harder to measure.
Attribution vs Incrementality: The Core Difference
Attribution answers the question: which touchpoints were present before a conversion? It gives credit to the channels and campaigns a user interacted with on their path to converting.
Incrementality answers a different question: which touchpoints caused the conversion? It measures whether the conversion would have happened without the advertising.
The gap between these two questions is where most budget waste lives. A user who was going to convert regardless of whether they saw your retargeting ad will still be attributed to that retargeting campaign when they do convert. The campaign looks efficient. It is not actually driving incremental sales.
Retargeting campaigns are the most common example. They routinely show strong CPA and ROAS in platform attribution because they target users who already have high purchase intent. Incrementality testing on retargeting campaigns consistently shows that 40 to 60 percent of attributed conversions would have occurred without the retargeting ads. The platform numbers look good; the actual incremental contribution is much smaller.
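The gap is easiest to see in the unit economics. The sketch below applies an incrementality factor of 0.5, an illustrative assumption drawn from the 40 to 60 percent range above; all campaign numbers are made up for the example.

```python
# Incrementality-adjusted view of a retargeting campaign.
# All numbers are illustrative assumptions, not benchmarks.
attributed_conversions = 500   # what the platform reports
spend = 10_000.0               # campaign spend
incrementality = 0.5           # from a holdout test: only half were caused by the ads

platform_cpa = spend / attributed_conversions
incremental_conversions = attributed_conversions * incrementality
true_cpa = spend / incremental_conversions

print(f"Platform CPA:    ${platform_cpa:.2f}")
print(f"Incremental CPA: ${true_cpa:.2f}")
```

With half the attributed conversions being non-incremental, the true cost per incremental conversion is double what the platform dashboard shows.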
How to Run Incrementality Tests
The gold standard for incrementality testing is a randomized holdout experiment: divide your audience into a treatment group that sees ads and a control group that does not, then measure the difference in conversion rates between the two groups.
The practical challenge is implementing this cleanly. Meta offers built-in Conversion Lift studies. Google offers Conversion Lift and Search Lift tools. For cross-channel testing or situations where platform tools have limitations, geo-based experiments are more reliable: identify comparable geographic markets, run advertising only in the test markets, and measure conversion volume differences against control markets.
Geo experiments require sufficient market pairs to be statistically valid, a clean enough measurement setup to detect the difference, and a long enough run time to capture the full conversion window. Four weeks is typically the minimum; eight weeks produces more reliable results for channels with longer consideration cycles.
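The core analysis for either a randomized holdout or a geo experiment is a comparison of conversion rates between treatment and control. A minimal sketch using a standard two-proportion z-test; the audience sizes and conversion counts are illustrative assumptions:

```python
import math

def lift_test(conv_t, n_t, conv_c, n_c):
    """Two-proportion z-test for a holdout or geo experiment.
    conv/n = conversions and audience size in treatment (ads on)
    and control (ads off)."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    lift = (p_t - p_c) / p_c                    # relative lift over control
    p_pool = (conv_t + conv_c) / (n_t + n_c)    # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return lift, z, p_value

# Illustrative: 2.4% conversion rate with ads vs 2.0% without.
lift, z, p = lift_test(conv_t=1200, n_t=50_000, conv_c=1000, n_c=50_000)
print(f"lift={lift:.1%}  z={z:.2f}  p={p:.4f}")
```

The lift here is the incremental contribution: conversions the ads caused, over and above what the control group produced on its own.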
The output of a properly run incrementality test is a true contribution number for a campaign or channel: not how many conversions it was present for, but how many conversions it actually caused. This number is almost always lower than what platform attribution reports.
Marketing Mix Modeling: When It Works and When It Does Not
Marketing Mix Modeling estimates each channel's contribution to revenue with regression on aggregate data rather than individual-level tracking. Because it operates on aggregates, it is not affected by cookie loss, app tracking restrictions, or consent mode limitations.
MMM works best when you have at least two years of weekly data across channels, meaningful variation in channel spend over time (which gives the model enough signal to estimate coefficients), and external data inputs for factors that influence sales independently of advertising, such as seasonality, pricing changes, and competitor activity.
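The mechanics can be sketched on simulated data. This is a toy, not a production MMM: two years of weekly spend with an assumed adstock (carryover) transform and a seasonality control, fit by ordinary least squares. Every number is fabricated for illustration.

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Carryover: this week's effect includes a decayed tail of past spend."""
    out = np.zeros(len(spend))
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

rng = np.random.default_rng(0)
weeks = 104                                  # two years of weekly data
search = rng.uniform(5, 15, weeks)           # spend in $k, with real variation
social = rng.uniform(2, 12, weeks)
season = 1 + 0.2 * np.sin(np.arange(weeks) * 2 * np.pi / 52)

# Simulated revenue with known "true" channel effects plus noise.
revenue = (50 * season + 3.0 * adstock(search)
           + 1.5 * adstock(social) + rng.normal(0, 5, weeks))

# Regress revenue on adstocked spend plus a seasonality control.
X = np.column_stack([np.ones(weeks), season, adstock(search), adstock(social)])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print("estimated channel coefficients:", coef[2:].round(2))
```

With enough spend variation the fit recovers coefficients near the true 3.0 and 1.5; with flat spend or omitted external factors, the same procedure returns confidently wrong numbers, which is the sensitivity to specification described below.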
MMM struggles with recency: models are typically run quarterly or annually, which makes them poor tools for near-term optimization decisions. They also have limited granularity, measuring channel-level contributions rather than campaign or creative-level performance. And they are sensitive to model specification choices that can produce quite different results from the same data.
The appropriate role for MMM in most mid-sized accounts is strategic budget allocation across channels, not tactical optimization. Use it to answer whether your split between Google, Meta, and TikTok is roughly right. Use incrementality testing to answer whether specific campaigns are working.
LTV vs CAC: The Optimization Horizon Problem
Most performance marketing is optimized against a conversion event that happens early in the customer relationship: a purchase, a sign-up, a lead form. The cost to acquire that conversion is measured and minimized. The lifetime value of the customer that conversion represents is often not factored in at all.
This creates a systematic bias toward the channels and campaigns that produce cheap first conversions, which are often not the channels that produce the most valuable customers. A channel that generates leads at twice the CPL of another might generate customers with three times the LTV. Optimizing purely on CPL will allocate budget away from the better channel.
The solution requires connecting your ad platform data to your backend CRM or revenue data. This is technically feasible for most businesses and strategically important for any business where customer LTV varies meaningfully by acquisition source. Passing LTV-weighted conversion values back to platforms as offline conversions changes the optimization signal significantly: campaigns that looked expensive by CPA suddenly look efficient by revenue contribution.
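Once CRM data is joined in, the ranking flip is straightforward arithmetic. A minimal sketch with two hypothetical channels; the lead counts, close rates, and LTV figures are illustrative assumptions:

```python
# Joining ad spend to backend outcomes per acquisition channel.
# All figures are illustrative assumptions.
channels = {
    #          (leads, spend $, close rate, avg LTV $)
    "search": (400, 20_000, 0.25, 2_400),
    "social": (800, 20_000, 0.20, 900),
}

results = {}
for name, (leads, spend, close_rate, ltv) in channels.items():
    cpl = spend / leads                 # what the ad platform optimizes
    cac = spend / (leads * close_rate)  # cost per actual customer
    results[name] = (cpl, ltv / cac)    # LTV:CAC is the number that matters
    print(f"{name}: CPL ${cpl:.0f}  CAC ${cac:.0f}  LTV:CAC {ltv / cac:.1f}x")
```

In this example search has double the CPL of social, yet a better LTV:CAC ratio. Optimizing on CPL alone would shift budget to the weaker channel.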
Blending Data Sources for Decision-Making
The measurement approach that produces the most reliable decisions is not choosing between attribution, incrementality, and MMM. It is using all three in combination, each at the decision timescale it is suited to.
Day-to-day optimization uses platform attribution data, with a clear understanding of its limitations and a calibrated adjustment factor from incrementality testing. Tactical decisions about which campaigns to scale or pause, which creative to invest in, and which bidding adjustments to make all happen at this timescale.
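One way to operationalize that adjustment factor is to discount each campaign's attributed results by the incrementality measured for its campaign type. The factors and campaign figures below are illustrative assumptions, not benchmarks:

```python
# Calibrated incrementality factors from lift tests, by campaign type.
# All numbers are illustrative assumptions.
factors = {"prospecting": 0.9, "retargeting": 0.5}

campaigns = [
    # (name, type, attributed revenue $, spend $)
    ("broad-1", "prospecting", 30_000, 12_000),
    ("remarket-1", "retargeting", 40_000, 10_000),
]

adjusted = {}
for name, kind, revenue, spend in campaigns:
    platform_roas = revenue / spend
    adjusted[name] = revenue * factors[kind] / spend  # incremental ROAS
    print(f"{name}: platform ROAS {platform_roas:.1f}x"
          f" -> incremental ROAS {adjusted[name]:.2f}x")
```

The retargeting campaign looks far stronger in platform attribution, but after the adjustment the prospecting campaign is actually the better investment, which changes the scale and pause decisions.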
Monthly or quarterly budget allocation decisions use a combination of incrementality test results and GA4 or backend data that provides a cross-platform view. The question at this timescale is not which ad performed best today; it is which channels are producing incremental revenue at an acceptable cost.
Annual strategy decisions use MMM outputs and LTV analysis to validate whether the overall channel mix is aligned with long-term business objectives, not just short-term conversion volume.