2026-06-22

AI Transparency, Brand Safety, and What Regulation Means for Advertisers

Automated ad placement has created a gap between where advertisers intend their ads to appear and where they actually appear. As EU regulation tightens and brand safety incidents accumulate, here is how to think about accountability in an AI-driven media environment.

The same automation that makes performance marketing more efficient has also made it less transparent. Advertisers who do not know where their ads are appearing, which signals are driving placement decisions, or how the algorithm defines a conversion cannot fully evaluate whether their campaigns are operating safely, legally, or effectively. This is a governance problem as much as a performance problem.

The Lack of Explainability in AI Ad Decisions

Modern ad platforms make billions of placement decisions per day, each based on a combination of user behavior, content context, competitive bids, audience signals, and model outputs that are not individually inspectable. When a platform serves your ad to a user, or alongside a piece of content, it cannot always explain why in terms a human can evaluate.

This lack of explainability has practical consequences. If your ads appear next to brand-unsafe content, you cannot audit the decision chain that led to that placement. If your campaign is underperforming, the platform can tell you that the algorithm is "learning" or that your asset quality score is below threshold, but it cannot tell you the specific signal pattern that is limiting delivery. And if bias is present in how the algorithm distributes your ads, there is often no mechanism to detect it from within the reporting interface.

The correct response is not to demand explainability that the platforms cannot currently provide. It is to build verification processes that operate independently of platform reporting: brand safety audits, placement exclusion lists, third-party measurement, and regular reconciliation against your own data.
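One of those verification processes, reconciliation against your own data, can be sketched as a simple comparison between a platform export and first-party records. This is an illustrative sketch only: the CSV column names and the 15% tolerance threshold are assumptions, not any platform's actual export schema or a recommended standard.

```python
# Illustrative reconciliation sketch: compare platform-reported daily
# conversions against internal first-party records and flag days that
# diverge beyond a tolerance. Column names and tolerance are assumptions.
import csv
from collections import defaultdict

def load_daily_conversions(path, date_col, value_col):
    """Sum conversions per day from a CSV export with the given columns."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row[date_col]] += float(row[value_col])
    return totals

def reconcile(platform, internal, tolerance=0.15):
    """Return (day, platform_value, internal_value) tuples for days where
    the platform figure diverges from internal records by more than the
    tolerance fraction of the internal figure."""
    flagged = []
    for day in sorted(set(platform) | set(internal)):
        p, i = platform.get(day, 0.0), internal.get(day, 0.0)
        baseline = max(i, 1.0)  # avoid division by zero on empty days
        if abs(p - i) / baseline > tolerance:
            flagged.append((day, p, i))
    return flagged
```

Run on a regular cadence, a diff like this surfaces tracking breakage and attribution inflation that platform-side reporting alone will never flag.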

Brand Safety Risks in Automated Placements

Brand safety in automated campaigns is a different problem from brand safety in manually placed media. In manual buying, you know what you are buying. In programmatic and platform-automated environments, placement is determined in real time by auction dynamics and algorithm decisions that happen faster than any human review process can intercept.

The most common brand safety failures in automated campaigns are: ads appearing alongside content that contradicts the brand's values or creates reputational association; ads served to audience segments or in geographic markets that were not intended; and creative assets being combined in ways that produce unintended or misleading messages when the AI mixes and matches across asset groups.

Google's automated asset combinations in Performance Max and Meta's automated creative optimization both involve the platform generating ad combinations from your asset inputs. The combinations the system tests are not always combinations you would have approved. An asset group review process that evaluates the actual combinations being served, not just the individual assets you uploaded, is a basic quality control step that many advertisers skip.

The EU AI Act and What It Actually Means for Advertisers

The EU AI Act, which is being phased in through 2025 and 2026, creates a risk-based regulatory framework for AI systems. Advertising AI systems that target individuals based on profiling are among the regulated categories.

The practical implications for advertisers are still developing as enforcement guidance emerges, but the directional requirements are clear: AI systems used for targeting must be explainable in terms of the data inputs they use, individuals must have meaningful rights to understand and challenge AI-driven decisions that affect them, and advertisers using platform AI must understand the AI they are using well enough to document their compliance.

This last point is more demanding than it sounds. An advertiser who relies entirely on a platform's black-box optimization and cannot describe what signals are being used to target their ads may face compliance exposure as enforcement matures. The documentation requirement is an incentive to demand more transparency from platform partners and to maintain internal records of the AI systems deployed in advertising.

Platform Responsibility vs Advertiser Responsibility

There is an ongoing and unresolved tension between platform responsibility and advertiser responsibility in automated advertising. Platforms make the placement decisions, but advertisers accept the terms of service, provide the creative assets, and benefit from the results. When brand safety incidents occur, the responsibility is shared in practice and contested in principle.

The regulatory trend is toward expanding advertiser accountability. The assumption that "the platform placed it, not us" is becoming less tenable as regulators develop frameworks that treat advertisers as having due diligence obligations for the AI systems they deploy. This is analogous to how food brands are responsible for supply chain safety even when they do not directly operate every step of the supply chain.

The practical implication is that advertisers need a documented approach to AI governance: not just using platform tools, but understanding what those tools do, having exclusion lists and brand safety settings configured and documented, and being able to demonstrate that reasonable safeguards were in place.
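What "documented" might look like in practice is an internal register of the AI systems in use. The sketch below is a hypothetical record structure, not a regulatory template; the field names are assumptions about what a reasonable-safeguards file could capture.

```python
# Hypothetical internal AI-governance record for an advertising system.
# Field names are illustrative assumptions, not a compliance standard.
from dataclasses import dataclass
from datetime import date

@dataclass
class AdAISystemRecord:
    system_name: str        # e.g. "Performance Max"
    vendor: str             # platform operating the AI
    purpose: str            # what the system optimizes for
    data_inputs: list[str]  # signal categories the system uses
    safeguards: list[str]   # exclusion lists, safety settings, reviews
    last_reviewed: date
    reviewer: str

    def is_stale(self, today: date, max_age_days: int = 90) -> bool:
        """Flag records not reviewed within the review window."""
        return (today - self.last_reviewed).days > max_age_days
```

The point of the structure is less the fields themselves than the discipline: each system in use gets a named owner, a review date, and a list of safeguards that can be produced on request.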

Auditing and Risk Management in Practice

The audit practices that provide meaningful brand safety coverage in automated campaigns are less complex than most teams assume.

Placement exclusion lists, maintained and updated regularly, prevent ads from appearing in the highest-risk content categories. Both Google and Meta allow category and placement exclusions that can significantly reduce exposure to problematic content contexts.

Regular creative combination reviews in asset-based campaigns identify combinations the algorithm is testing that you did not intend. Pull the actual ad combinations being served monthly, not just the individual assets in your library.
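That monthly pull can be reduced to a simple set difference: served combinations minus approved combinations. The tuple format below is an assumption for illustration; the real input would be whatever combination report your platform exports.

```python
# Illustrative combination review: compare asset combinations actually
# served against the pairings your team approved. The (headline,
# description, image) tuple format is an assumption for illustration.
def unapproved_combinations(served, approved):
    """Return served combinations never explicitly approved,
    deduplicated, in first-seen order."""
    approved_set = set(approved)
    seen = []
    for combo in served:
        if combo not in approved_set and combo not in seen:
            seen.append(combo)
    return seen
```

Anything this returns is a combination the algorithm assembled that no human signed off on, which is exactly the review gap the section above describes.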

Third-party brand safety verification through tools that independently log where your ads appear provides documentation independent of platform reporting. This matters both for internal accountability and for any future regulatory review.

Sensitive audience targeting reviews verify that your campaigns are not inadvertently targeting protected categories or vulnerable groups in ways that create regulatory exposure. Meta and Google both have restrictions on certain targeting combinations; reviewing your active audience setups against those restrictions is a compliance step worth building into your regular account management process.
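A first-pass version of that review can be automated as a screen of active audience definitions against an internal restricted list. The category names below are placeholders; the authoritative lists are the platforms' own personalized-advertising policies, which this sketch does not reproduce.

```python
# Illustrative audience screen: flag audience definitions that touch a
# restricted targeting dimension. Category names are placeholders, not
# any platform's actual policy taxonomy.
RESTRICTED_DIMENSIONS = {
    "health_condition", "religion", "sexual_orientation", "financial_hardship",
}

def audit_audiences(audiences):
    """audiences: dict mapping audience name -> set of targeting
    dimensions. Returns sorted names of audiences that intersect the
    restricted set."""
    return sorted(name for name, dims in audiences.items()
                  if dims & RESTRICTED_DIMENSIONS)
```

A screen like this does not replace a policy review, but it turns "check the audience setups" from an occasional manual task into a repeatable step in account maintenance.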

Ethical Data Use as a Competitive Consideration

Beyond regulatory compliance, there is an emerging competitive dimension to ethical data use. Consumers in most markets are increasingly aware of how their data is used in advertising. Brands that handle this well can build brand equity from it, while brands whose poor data practices come to light can face disproportionate reputational consequences.

The brands that will navigate the next phase of the regulatory environment best are not the ones that do the minimum required to comply. They are the ones that treat data privacy and AI transparency as genuine product and marketing commitments, build infrastructure that supports both compliance and consumer trust, and engage with these questions now rather than when they become enforcement priorities.

Common questions

What are the specific brand safety risks in automated ad placements and how do you mitigate them?

The primary brand safety risks in automated placements are: ads appearing alongside brand-unsafe content on open web placements, ads served in app environments with fraudulent or low-quality traffic, and ads appearing in competitive brand contexts creating negative association. For Google, the main mitigation is account-level content exclusion settings under Tools and Settings, where you can exclude content categories like violent content, mature themes, and parked domains. For Performance Max specifically, placement exclusions must be set at account level to apply to PMax. For Meta, brand safety controls are available at the ad account level allowing exclusion by content category. These controls are probabilistic, not absolute. A regular placement report review is the audit mechanism that catches placements that bypassed categorical exclusions.
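The placement report review mentioned above can be partly automated as a blocklist scan over the served placements. This is a deliberately simple sketch: the matching logic is illustrative, and in practice it complements, rather than replaces, manual spot checks.

```python
# Illustrative placement-report scan: flag served placements that match
# an internal blocklist of domains or name keywords. Matching logic is
# deliberately simple and assumed for illustration.
def flag_placements(placements, blocked_domains, blocked_keywords):
    """placements: iterable of domain/app identifiers from a placement
    report. Returns entries matching a blocked domain exactly or
    containing a blocked keyword (case-insensitive)."""
    flagged = []
    for p in placements:
        lowered = p.lower()
        if lowered in blocked_domains or any(k in lowered for k in blocked_keywords):
            flagged.append(p)
    return flagged
```

Because categorical exclusions are probabilistic, this kind of after-the-fact scan is what actually catches the placements that slipped through.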

What EU regulations should performance marketers understand in 2026?

The most practically relevant regulations for performance marketing in 2026 are the EU AI Act, the Digital Services Act (DSA), and ongoing GDPR enforcement. The AI Act classifies certain ad targeting uses as high-risk AI systems requiring documentation and risk assessments. The DSA requires large platforms to provide more transparency about how ad targeting works, including user opt-out rights for profiling-based targeting. For campaign setup, GDPR remains the most operationally relevant: you must have a legal basis for using personal data in advertising audiences. Customer match uploads require that email addresses were collected with appropriate consent. Retargeting audiences require valid consent under most EU interpretations. Consent Mode v2 is now effectively mandatory for EU traffic on Google properties.

How do you evaluate whether automated campaigns are meeting brand safety standards?

Brand safety evaluation requires an audit cadence, not just initial configuration. Review placement reports for Display and PMax campaigns monthly to identify off-brand URLs or app placements that slipped through categorical exclusions. Review automatically created assets in PMax and Search quarterly to ensure AI-generated copy is factually accurate and on-brand. For accounts with significant YouTube spend, review the video placement report to identify which YouTube channels are receiving the most impressions and whether they are brand-appropriate. The metric that signals a brand safety problem operationally is very low CTR alongside low CPM: off-brand or irrelevant placements typically show cheap inventory where the ad receives impressions but not relevant engagement.
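The low-CTR / low-CPM heuristic described above is easy to compute from a placement report. The thresholds below are placeholders; sensible values depend on your vertical and should come from your own account benchmarks.

```python
# Sketch of the low-CTR / low-CPM heuristic: flag cheap inventory that
# receives impressions but almost no engagement. Thresholds are
# placeholder assumptions, not recommended values.
def suspicious_placements(rows, ctr_floor=0.001, cpm_ceiling=1.0):
    """rows: list of dicts with 'placement', 'impressions', 'clicks',
    'cost'. Returns placement names with CTR below the floor AND CPM
    below the ceiling (in the report's currency)."""
    flagged = []
    for r in rows:
        imps = r["impressions"]
        if imps == 0:
            continue  # nothing served; nothing to evaluate
        ctr = r["clicks"] / imps
        cpm = r["cost"] / imps * 1000
        if ctr < ctr_floor and cpm < cpm_ceiling:
            flagged.append(r["placement"])
    return flagged
```

Requiring both conditions matters: low CTR alone can be a creative problem, but low CTR on unusually cheap inventory is the pattern that points at off-brand or irrelevant placements.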

What transparency information are ad platforms required to provide advertisers in 2026?

Under the DSA and evolving transparency requirements, major ad platforms are required to provide information about why specific ads were shown to specific users, access to ad archives for political and issue-based advertising, and clearer explanation of ad targeting criteria. In practice, Google's Ad Transparency Center and Meta's Ad Library provide public access to ad content. Within-platform, Google provides the Insights tab in Performance Max and placement-level reporting. Meta provides audience insights within Ads Manager. The transparency gap that still exists: precise signal weighting in algorithmic placement decisions, full visibility into which data sources are used for audience inference, and granular explanation of why individual impressions were served. These limitations are structural as of 2026 and cannot be resolved through standard reporting access.

How should you think about AI regulation as a performance marketing practitioner in 2026?

Practically, the most important regulatory implications for day-to-day performance marketing in 2026 are: consent infrastructure (a technically correct and legally compliant consent management system for EU traffic), ad content compliance (AI-generated ad copy must be reviewed for accuracy, especially in regulated categories like financial services and health), and documentation practices (large accounts may need to document targeting decisions and optimization logic for potential regulatory review). The strategic posture: treat regulatory requirements as a baseline, not a ceiling. Advertisers building robust consent infrastructure, accurate conversion tracking, and human oversight processes for AI-generated content are building practices that will be legally required in more markets over time.