EU AI Act in force

On February 2, 2025, the first binding provisions of the new EU AI Act take effect. For many companies, especially in marketing, this marks the beginning of a new era: artificial intelligence can no longer be used unchecked but must comply with specific legal requirements. The regulation aims to protect consumers, minimize risks, and promote innovation in a responsible manner. But what does this mean in practical terms for marketing teams that have already integrated AI tools into their strategies?
Why Was an AI Law Necessary?
Artificial intelligence is no longer a futuristic topic—it has become an integral part of our daily work. Whether it’s content creation, image generation, customer analytics, or chatbots, AI is increasingly taking over tasks that were once carried out by humans.
However, the more widespread it becomes, the greater the concerns about misuse. Systems like ChatGPT, Midjourney, or Meta's LLaMA 2 are powerful but largely opaque: many users don't know how their data is being used or whether algorithmic decisions are fair. AI systems that analyze or assess human behavior, for instance in credit approval or hiring, are particularly sensitive.
This is where the EU AI Act comes in: it establishes the first unified legal framework across the European Union, based on a risk-based approach.
How the EU Classifies AI Systems
The legislation defines four risk categories against which every AI application must be evaluated:
- Unacceptable Risk: Prohibited. This includes systems that assess people based on their social behavior (social scoring) or perform emotional manipulation.
- High Risk: Subject to approval. This includes AI used in healthcare, credit assessments, or human resources.
- Limited Risk: Transparency requirements. For example, chatbots must be clearly identifiable as such.
- Minimal Risk: Free use. This includes many creative tools, such as those for image or text generation.
Where AI Use Is (Still) Permitted
For marketers, this means: most tools remain usable—but new transparency and documentation obligations apply.
Many companies are currently wondering which AI tools they may continue using. There’s good news for the marketing field: most applications like content generation, chatbots, or image generation are considered low-risk. Tools like Jasper AI, Neuroflash, or ChatGPT can continue to be used, as long as they do not process personal data or manipulate users. AI-based analysis of website visitors is also permitted, provided it is transparent and no sensitive data is used without consent.
Chatbots like Drift or HubSpot can be deployed as long as they are clearly identifiable as automated systems. In the visual domain, programs like Canva AI or Midjourney still allow creative content to be produced quickly, as long as no misleading or discriminatory representations are created. AI therefore remains a valuable tool for effective, data-driven marketing, provided it is used responsibly.
How to Prepare Now
Preparation for the EU AI Act should begin well before February 2025; companies that start now will have a clear advantage. The first step is to review all AI tools currently in use and assign them to the relevant risk categories. Based on this inventory, company-specific guidelines can be developed that regulate AI usage, including labeling all AI-generated content, for example by adding disclaimers to text.
The new EU AI Act marks an important step toward responsible use of technology. For marketing, it primarily means greater transparency and clear boundaries—without having to forgo the benefits of modern AI tools. Those who engage with the new requirements early on build trust with customers, minimize legal risks, and continue to harness the power of intelligent automation effectively.