AI Disclosure Requirement: What the EU AI Act Requires
- Christian Cansin Meyer
- Feb 25
- 5 min read
Updated: Feb 27
AI Disclosure Is Coming. What the EU AI Act Means for Companies, Marketing and Deepfakes from 2026 Onwards – and Why Transparency Becomes a Brand Issue
Artificial intelligence has long been part of creative processes. Images are produced faster, texts are written more efficiently, and campaigns scale with ease.
But with the new EU regulation on artificial intelligence, the EU AI Act, one thing becomes clear: anyone who uses AI assumes responsibility.
From 2 August 2026, a binding disclosure requirement will apply to certain AI-generated content. This is particularly relevant for brands, agencies and companies working with realistic images, videos or texts.
What exactly does this mean? And what are the implications for marketing, content and brand management?

What Does the EU AI Act Regulate Regarding AI Disclosure?
The EU AI Act establishes, for the first time, a binding and harmonized legal framework across Europe for the use of artificial intelligence. The aim of the regulation is to enable innovation while protecting fundamental rights, transparency and trust. A key element is the introduction of clear requirements for labeling AI-generated content.
Specifically, Article 50 defines the so-called transparency obligations. At its core lies one decisive question: may AI-generated content give the impression that it is real, unaltered or created by a human?
If such an impression can arise, the disclosure requirement generally applies. The legislator seeks to prevent users from being unknowingly confronted with artificially generated or manipulated content, especially when it appears deceptively authentic or could undermine trust in media and public communication.
This regulation is particularly relevant for so-called deepfakes. These include image, audio or video content that imitates real people, places or events while suggesting a high degree of authenticity. Whenever such content is capable of being confused with genuine recordings, disclosure becomes mandatory. At the latest upon first contact with the content, it must be clearly and unmistakably recognizable that it has been artificially generated or manipulated.
Importantly, the issue is not only whether disclosure is required, but also how it is implemented. The information must not be hidden or ambiguously phrased. It has to be clear, easily understandable and accessible. No concealed notices in fine print. No technical language that laypersons cannot comprehend. Transparency must be genuinely perceptible, not merely formally fulfilled.
The EU AI Act therefore sends a clear signal: artificial intelligence may be used creatively, efficiently and innovatively. But wherever it imitates or alters reality, openness is required. Trust is not built through perfection, but through transparency.
When Does the Disclosure Requirement Apply to AI Content?
Not every use of AI automatically triggers an obligation to disclose.
A closer look at the structure of the regulation reveals three key distinctions.
Primary AI Generation
If an image, video or text is entirely generated by AI and it is not recognizable to viewers or readers that it was machine-created, disclosure is required.
This primarily affects realistic visual worlds, AI avatars or synthetic voices.
Supportive Use of AI
If AI is used merely as a tool, for example for drafting assistance, translation or technical image enhancement, a disclosure obligation generally does not apply.
The decisive factor is that a human retains creative and editorial control and substantially shapes the final result.
Texts of Public Interest
Texts are subject to a specific nuance. A disclosure obligation primarily arises when AI-generated content is published to inform the public about matters of public interest.
Marketing copy or product descriptions typically do not fall into this category. Nevertheless, there remains a quality and reputational risk if content is inaccurate or misleading.
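The three distinctions above can be sketched as a simple classification helper. This is a minimal illustration of the article's rule of thumb, not legal advice: the category names and the decision logic are assumptions distilled from this text, not definitions taken from the regulation itself.

```python
from dataclasses import dataclass
from enum import Enum


class AIUse(Enum):
    """Rough categories of AI involvement, as distinguished in the article."""
    FULLY_GENERATED = "entirely generated by AI"
    AI_MANIPULATED = "real content altered by AI"
    AI_ASSISTED = "AI used merely as a tool under human control"


@dataclass
class Content:
    ai_use: AIUse
    looks_real: bool            # could be mistaken for a genuine recording
    public_interest_text: bool  # text informing the public on matters of public interest


def disclosure_required(c: Content) -> bool:
    """Map the article's three distinctions to a yes/no answer (heuristic only)."""
    if c.ai_use is AIUse.AI_ASSISTED:
        # Supportive use: a human retains creative and editorial control.
        return False
    if c.public_interest_text:
        # AI-generated text informing the public: disclose.
        return True
    # Deepfake-style content that could pass as a real recording: disclose.
    return c.looks_real
```

For example, a photorealistic AI-generated product image (`FULLY_GENERATED`, `looks_real=True`) would require disclosure, while an AI-assisted translation of human-written copy would not.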

Deepfakes in Marketing: Why the Issue Is Bigger Than Expected
The definition of deepfakes in the EU AI Act is deliberately broad.
It does not only concern manipulated political speeches or viral fake videos. Photorealistic product images, artificially generated models or realistic lifestyle scenes can also fall within this category if they convey the impression of a real recording.
In e-commerce and brand communication in particular, this represents a sensitive area.
Customers make purchasing decisions based on visual impressions. If those impressions are artificially generated without transparent disclosure, a risk to trust arises.
And trust is the true currency of every brand.
What Are the Risks of Failing to Disclose?
The EU AI Act provides for significant fines in cases of violations of the transparency obligations.
Penalties can amount to up to 15 million euros or up to 3 percent of global annual turnover, whichever is higher. For a company with an annual turnover of one billion euros, for example, the turnover-based cap alone would be 30 million euros.
However, the legal risk is only one side of the equation.
There are also potential risks under unfair competition law, for example in cases of misleading commercial practices. Even more serious may be the reputational damage if AI use is concealed and later becomes public.
Today, a lack of transparency weighs heavier than any technical flaw.
Why Transparency Becomes a Strategic Decision
Transparency is no longer merely a legal duty. It is a stance. And precisely for that reason, it becomes a strategic decision for brands.
Recent studies paint an ambivalent picture. Many consumers demand openness in the use of AI-generated content. They want to know whether an image, a video or a voice was artificially created. At the same time, another effect becomes visible. As soon as content is explicitly labeled as AI-generated, parts of the audience react more skeptically. Perceived authenticity may decline and trust is evaluated more critically.
The tension is obvious. Transparency creates clarity, yet it can also raise questions. This is where the strategic dimension begins.
For brands, this means that the decision to disclose is not only about legal compliance, but about positioning. What does the brand fundamentally stand for? Maximum efficiency, scalable content production and technological innovation? Or maximum credibility, closeness and a sense of authenticity? Perfectly staged visuals, or a deliberately visible human touch?
This decision shapes the entire brand image. Those who openly integrate AI and communicate transparently send a signal of modernity and responsibility. Those who rely on naturalness and documentary aesthetics strengthen the feeling of proximity and trust. Both approaches can be right. What matters is internal consistency.
In the long term, clarity pays off. Not only in regulatory terms, but emotionally, in terms of brand loyalty. People forgive technical imperfection. What they do not forgive is the feeling of being deceived.
Transparency is therefore not a risk. It is a strategic instrument. And at its best, an expression of attitude.

How Companies Should Prepare Now
The disclosure requirement will take effect in August 2026. But those who only react then are reacting too late.
It is advisable to establish an internal AI policy with clear definitions: When is content considered AI-generated? When is it AI-manipulated? When is it merely AI-supported?
Equally important is a transparent workflow: Who reviews content? Who bears editorial responsibility? Where is disclosure implemented, and how visible is it?
Technical solutions such as machine-readable metadata may be useful, but they do not replace visible disclosure within the user context.
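As one illustration of what machine-readable metadata could look like, a sidecar JSON file can record provenance alongside an asset. This is a minimal sketch: the function name and the field names are invented for illustration and are not a standard schema (established approaches such as C2PA content credentials define their own formats), and, as noted above, metadata never replaces a visible label in the user context.

```python
import json
from pathlib import Path


def write_ai_disclosure_sidecar(asset_path: str, generator: str, summary: str) -> Path:
    """Write a hypothetical machine-readable disclosure file next to the asset.

    The schema below is illustrative only; visible in-context labeling is
    still required in addition to any embedded or sidecar metadata.
    """
    sidecar = Path(asset_path).with_suffix(".ai-disclosure.json")
    record = {
        "asset": Path(asset_path).name,
        "ai_generated": True,
        "generator": generator,   # e.g. the model or tool used
        "summary": summary,       # brief description of how AI was involved
    }
    sidecar.write_text(json.dumps(record, indent=2), encoding="utf-8")
    return sidecar


# Example: write_ai_disclosure_sidecar("hero.png", "image-model-x", "studio product shot")
```

A workflow tool could then check for the sidecar file automatically before publication, flagging assets that lack a disclosure record.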
AI competence thus becomes a matter of compliance. And a matter of brand identity.
Conclusion: The AI Disclosure Requirement Is More Than a Law
The EU AI Act does not force companies to abandon AI.
It forces them to take a stance.
Those who use AI should do so consciously. Transparently. Strategically. In alignment with their brand identity.
The real challenge does not lie in the label. It lies in trust.


