2026-06-01

Beyond Last Click: Incrementality, MMM, and LTV in a Post-Attribution World

Last-click attribution was always a simplification. In a fragmented, cross-device, multi-platform environment with significant data loss, it is actively misleading. Here is how to build a measurement approach that tells you what is actually working.

Every major ad platform over-reports its own contribution to your results. This is not a conspiracy; it is the inevitable outcome of attribution models that count what they can see and ignore what they cannot. The further you optimize based purely on platform attribution, the more budget you shift toward channels with aggressive attribution and away from channels doing real work that is harder to measure.

Attribution vs Incrementality: The Core Difference

Attribution answers the question: which touchpoints were present before a conversion? It gives credit to the channels and campaigns a user interacted with on their path to converting.

Incrementality answers a different question: which touchpoints caused the conversion? It measures whether the conversion would have happened without the advertising.

The gap between these two questions is where most budget waste lives. A user who was going to convert regardless of whether they saw your retargeting ad will still be attributed to that retargeting campaign when they do convert. The campaign looks efficient. It is not actually driving incremental sales.

Retargeting campaigns are the most common example. They routinely show strong CPA and ROAS in platform attribution because they target users who already have high purchase intent. Incrementality testing on retargeting campaigns consistently shows that 40 to 60 percent of attributed conversions would have occurred without the retargeting ads. The platform numbers look good; the actual incremental contribution is much smaller.

How to Run Incrementality Tests

The gold standard for incrementality testing is a randomized holdout experiment: divide your audience into a treatment group that sees ads and a control group that does not, then measure the difference in conversion rates between the two groups.
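To make the arithmetic concrete, here is a minimal readout sketch in Python with made-up treatment and control counts: it computes absolute and relative lift and a simple two-proportion z-test to check that the difference is not noise.

```python
import math

# Hypothetical counts from a randomized holdout test (numbers are illustrative).
treatment_users, treatment_conversions = 200_000, 4_800   # users who saw ads
control_users, control_conversions = 50_000, 1_050        # randomized holdout, no ads

cr_treatment = treatment_conversions / treatment_users
cr_control = control_conversions / control_users

absolute_lift = cr_treatment - cr_control        # incremental conversion rate
relative_lift = absolute_lift / cr_control       # lift vs the no-ads baseline
incremental_conversions = absolute_lift * treatment_users

# Two-proportion z-test: is the observed lift distinguishable from noise?
pooled = (treatment_conversions + control_conversions) / (treatment_users + control_users)
standard_error = math.sqrt(pooled * (1 - pooled) * (1 / treatment_users + 1 / control_users))
z_score = absolute_lift / standard_error

print(f"Conversion rate: treatment {cr_treatment:.2%}, control {cr_control:.2%}")
print(f"Relative lift: {relative_lift:.1%}; incremental conversions: {incremental_conversions:.0f}")
print(f"z-score: {z_score:.2f}")   # |z| above roughly 1.96 is significant at the 95% level
```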

The practical challenge is implementing this cleanly. Meta offers built-in Conversion Lift studies. Google offers Conversion Lift and Search Lift tools. For cross-channel testing or situations where platform tools have limitations, geo-based experiments are more reliable: identify comparable geographic markets, run advertising only in the test markets, and measure conversion volume differences against control markets.

Geo experiments require sufficient market pairs to be statistically valid, a clean enough measurement setup to detect the difference, and a long enough run time to capture the full conversion window. Four weeks is typically the minimum; eight weeks produces more reliable results for channels with longer consideration cycles.
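A minimal geo readout might look like the sketch below, assuming weekly conversion counts are available for matched test and control markets before and during the test. The counts are hypothetical, and the comparison is a simple difference-in-differences that uses the control markets to absorb shared seasonality.

```python
# Minimal geo-holdout readout; all numbers below are hypothetical.
pre_weeks = {
    "test":    [410, 395, 402, 418],   # test markets, before the change in spend
    "control": [388, 401, 396, 405],
}
exp_weeks = {
    "test":    [455, 470, 461, 480],   # test markets, ads running
    "control": [392, 399, 407, 400],   # control markets, no change
}

def weekly_mean(values):
    return sum(values) / len(values)

# Scale the test markets' baseline by the control markets' pre/during trend,
# then compare against what the test markets actually did: a simple
# difference-in-differences that absorbs shared seasonality.
control_trend = weekly_mean(exp_weeks["control"]) / weekly_mean(pre_weeks["control"])
expected_test = weekly_mean(pre_weeks["test"]) * control_trend
incremental_per_week = weekly_mean(exp_weeks["test"]) - expected_test

print(f"Expected weekly conversions without ads: {expected_test:.0f}")
print(f"Estimated incremental conversions per week: {incremental_per_week:.0f}")
```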

The output of a properly run incrementality test is a true contribution number for a campaign or channel: not how many conversions it was present for, but how many conversions it actually caused. This number is almost always lower than what platform attribution reports.

Marketing Mix Modeling: When It Works and When It Does Not

Marketing Mix Modeling uses statistical regression analysis to estimate the contribution of each marketing channel to revenue, using aggregate data rather than individual-level tracking. Because it operates on aggregates, it is not affected by cookie loss, app tracking restrictions, or consent mode limitations.

MMM works best when you have at least two years of weekly data across channels, meaningful variation in channel spend over time (which gives the model enough signal to estimate coefficients), and external data inputs for factors that influence sales independently of advertising, such as seasonality, pricing changes, and competitor activity.
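The mechanics are ordinary regression on weekly aggregates. The toy sketch below fits revenue against channel spend and a seasonality control using synthetic data; a production MMM would add adstock and saturation transforms, confidence intervals, and proper validation, but the shape of the model is the same.

```python
import numpy as np

# Toy MMM-style regression on synthetic weekly aggregates: revenue modeled as a
# function of channel spend plus a seasonality control. Illustrative only.
rng = np.random.default_rng(7)
weeks = 104                                          # two years of weekly data
search = rng.uniform(20, 60, weeks)                  # spend, in thousands
social = rng.uniform(10, 50, weeks)
video = rng.uniform(0, 30, weeks)
season = np.sin(2 * np.pi * np.arange(weeks) / 52)   # crude yearly seasonality

revenue = (120 + 2.1 * search + 1.4 * social + 0.6 * video
           + 25 * season + rng.normal(0, 10, weeks))

# Design matrix: intercept + channel spends + seasonality control.
X = np.column_stack([np.ones(weeks), search, social, video, season])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)

for name, beta in zip(["base", "search", "social", "video", "season"], coefs):
    print(f"{name:>7}: {beta:6.2f}")
# The channel coefficients are the model's estimate of incremental revenue per
# unit of spend, which is what feeds cross-channel budget-allocation decisions.
```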

MMM struggles with recency: models are typically run quarterly or annually, which makes them poor tools for near-term optimization decisions. They also have limited granularity, measuring channel-level contributions rather than campaign or creative-level performance. And they are sensitive to model specification choices that can produce quite different results from the same data.

The appropriate role for MMM in most mid-sized accounts is strategic budget allocation across channels, not tactical optimization. Use it to answer whether your split between Google, Meta, and TikTok is roughly right. Use incrementality testing to answer whether specific campaigns are working.

LTV vs CAC: The Optimization Horizon Problem

Most performance marketing is optimized against a conversion event that happens early in the customer relationship: a purchase, a sign-up, a lead form. The cost to acquire that conversion is measured and minimized. The lifetime value of the customer that conversion represents is often not factored in at all.

This creates a systematic bias toward the channels and campaigns that produce cheap first conversions, which are often not the channels that produce the most valuable customers. A channel that generates leads at twice the CPL of another might generate customers with three times the LTV. Optimizing purely on CPL will allocate budget away from the better channel.
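A quick worked comparison, with hypothetical numbers matching that scenario, shows how the ranking flips once LTV enters the calculation:

```python
# Hypothetical two-channel comparison: the "expensive" channel wins once LTV
# is in the denominator. Numbers are illustrative only.
channels = {
    "channel_a": {"cpl": 40.0, "lead_to_customer": 0.20, "ltv": 300.0},
    "channel_b": {"cpl": 80.0, "lead_to_customer": 0.20, "ltv": 900.0},  # 2x CPL, 3x LTV
}

for name, c in channels.items():
    cac = c["cpl"] / c["lead_to_customer"]     # cost to acquire a customer
    ltv_to_cac = c["ltv"] / cac
    print(f"{name}: CAC {cac:.0f}, LTV:CAC {ltv_to_cac:.1f}")
# channel_a: CAC 200, LTV:CAC 1.5
# channel_b: CAC 400, LTV:CAC 2.2  -- worse CPL, better economics
```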

The solution requires connecting your ad platform data to your backend CRM or revenue data. This is technically feasible for most businesses and strategically important for any business where customer LTV varies meaningfully by acquisition source. Passing LTV-weighted conversion values back to platforms as offline conversions changes the optimization signal significantly: campaigns that looked expensive by CPA suddenly look efficient by revenue contribution.
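A sketch of what that feedback loop can look like, assuming your CRM can join conversions to ad click IDs and assign a predicted LTV by acquisition segment. The field names and upload format below are illustrative; the exact schema depends on the platform you feed the values back into.

```python
import csv

# Prepare LTV-weighted conversion values for an offline conversion upload.
# Predicted LTVs and field names are placeholder assumptions.
predicted_ltv = {"brand_search": 800.0, "pmax": 300.0, "retargeting": 150.0}

crm_conversions = [
    {"click_id": "gclid_abc123", "segment": "brand_search"},
    {"click_id": "gclid_def456", "segment": "pmax"},
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["click_id", "conversion_value"])
    writer.writeheader()
    for row in crm_conversions:
        writer.writerow({
            "click_id": row["click_id"],
            # Send predicted LTV instead of the first-order purchase value so
            # the platform's bidding optimizes toward customer quality.
            "conversion_value": predicted_ltv[row["segment"]],
        })
```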

Blending Data Sources for Decision-Making

The measurement approach that produces the most reliable decisions is not choosing between attribution, incrementality, and MMM. It is using all three in combination, at the decision timescale each is suited for.

Day-to-day optimization uses platform attribution data, with a clear understanding of its limitations and a calibrated adjustment factor from incrementality testing. Tactical decisions about which campaigns to scale or pause, which creative to invest in, and which bidding adjustments to make all happen at this timescale.
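One way to operationalize that adjustment factor is to discount each campaign's attributed conversions by the incrementality rate your own tests produced for that campaign type. The factors below are hypothetical placeholders, roughly in line with the ranges discussed elsewhere in this article.

```python
# Apply incrementality-derived calibration factors to platform-attributed
# results; factors and campaign numbers are hypothetical.
calibration = {"retargeting": 0.5, "non_brand_search": 0.7, "brand_search": 0.4}

platform_report = [
    {"campaign": "retargeting", "attributed_conversions": 400, "spend": 6_000},
    {"campaign": "non_brand_search", "attributed_conversions": 250, "spend": 9_000},
]

for row in platform_report:
    incremental = row["attributed_conversions"] * calibration[row["campaign"]]
    print(f"{row['campaign']}: attributed CPA "
          f"{row['spend'] / row['attributed_conversions']:.0f}, "
          f"incremental CPA {row['spend'] / incremental:.0f}")
```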

Monthly or quarterly budget allocation decisions use a combination of incrementality test results and GA4 or backend data that provides a cross-platform view. The question at this timescale is not which ad performed best today; it is which channels are producing incremental revenue at an acceptable cost.

Annual strategy decisions use MMM outputs and LTV analysis to validate whether the overall channel mix is aligned with long-term business objectives, not just short-term conversion volume.

Common questions

What is the difference between incrementality testing and standard A/B testing in advertising?

Standard A/B testing compares two creative variants, audiences, or campaign structures to determine which drives more conversions. Incrementality testing asks a more fundamental question: would these conversions have happened without the advertising at all? The methodology involves a holdout group (a randomly selected set of users shown no ads) compared against an exposed group that sees the campaign normally. The difference in conversion rate between the two represents incremental lift: the conversions that only happened because of the advertising. This is meaningfully different from platform attribution, which measures correlation rather than causation. For retargeting campaigns in particular, the gap between attributed and incremental conversions can be very large because retargeted users already have high purchase intent regardless of ad exposure.

What is Media Mix Modelling and when is it worth investing in?

Media Mix Modelling is a statistical methodology that uses historical data to estimate the relationship between advertising spend across channels and business outcomes. Unlike attribution models that track individual user paths, MMM works at aggregate level and can measure channels that platform attribution cannot track accurately: TV, radio, offline, and upper-funnel digital spend where individual-level attribution breaks down. MMM is worth investing in when you spend across three or more channels and are making significant cross-channel budget allocation decisions, when your digital attribution is heavily skewed toward last-click models that cannot account for cross-channel influence, or when you have upper-funnel spend where performance is hard to measure directly. The minimum useful MMM requires at least 12 months of weekly data across channels.

How should customer lifetime value change how you measure campaign performance?

Most campaign performance reporting measures CPA or ROAS against the first conversion. This systematically undervalues campaigns that acquire high-LTV customers and overvalues campaigns that acquire low-LTV customers who churn quickly. The correction requires knowing average LTV by acquisition source and adjusting performance targets accordingly. If customers acquired through Brand Search have a two-year LTV of 800 USD and customers acquired through Performance Max have a two-year LTV of 300 USD, your CPA target for brand campaigns should be significantly higher, not equal. Connecting acquisition source to long-term customer value requires a CRM that tracks customer behavior post-acquisition and an attribution join between ad platform data and your CRM.
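As a small illustration using those figures, target CPAs can be derived from LTV at whatever LTV-to-CAC ratio the business requires; the 3:1 ratio below is an assumption, not a recommendation.

```python
# Derive CPA targets from two-year LTV at a fixed target LTV:CAC ratio.
target_ratio = 3.0   # assumed minimum acceptable LTV-to-CAC ratio
two_year_ltv = {"brand_search": 800.0, "performance_max": 300.0}

for source, ltv in two_year_ltv.items():
    print(f"{source}: max CPA {ltv / target_ratio:.0f}")
# brand_search: max CPA 267, performance_max: max CPA 100
```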

How do you run an incrementality test without a large budget or dedicated data team?

The most accessible incrementality test for a mid-market advertiser is a geo-based holdout test. Select two to four geographic markets that are similar in baseline conversion rate and performance. Hold out advertising spend in one set of markets for a defined period of two to four weeks (four weeks is safer, as noted above) while maintaining normal spend in matched markets. Compare conversion rates between the exposed and holdout markets. The difference, adjusted for baseline variance, is an estimate of your incrementality rate. This is less statistically precise than a cookie-based holdout test, but it requires no platform-level experiment setup and works even when cross-device tracking is limited. Whatever the duration, run the test during a period without significant promotional events.

What percentage of attributed conversions are typically incremental across common campaign types?

Incrementality rates vary significantly by campaign type. Brand search campaigns typically show 30 to 50 percent incrementality, because a significant share of brand searchers would have navigated directly to the site without the ad. Retargeting campaigns typically show 40 to 60 percent incrementality, because retargeted users already have high purchase intent. Non-brand search campaigns typically show 60 to 80 percent incrementality, because capturing high-intent non-brand queries represents genuine discovery. Upper-funnel display and video campaigns vary most widely, from 10 to 80 percent, depending on audience and creative quality. These figures should be treated as directional, not prescriptive. The value of running your own incrementality tests is replacing industry averages with actual measurements from your specific campaigns.