February 2025 Deadline: What You Need to Know About Prohibited AI Practices
The first major EU AI Act deadline is approaching. Here's a comprehensive breakdown of prohibited AI practices and how to ensure your organization is compliant before February 2, 2025.
The EU AI Act's first major deadline is fast approaching. On February 2, 2025, the Act's prohibitions on certain AI practices begin to apply. From that date, organizations must stop using—or never start using—AI systems that fall into the prohibited categories.
What's Prohibited?
The EU AI Act bans AI systems that pose unacceptable risks to fundamental rights. Here are the main categories:
1. Subliminal Manipulation
AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behavior in a way that causes or is reasonably likely to cause significant harm.
2. Exploitation of Vulnerabilities
AI systems that exploit vulnerabilities of specific groups of persons due to their age, disability, or social or economic situation.
3. Social Scoring
AI systems used for social scoring—evaluating or classifying people based on their social behavior or personal characteristics—where the resulting score leads to detrimental or unfavorable treatment that is unrelated to the context in which the data was collected, or that is disproportionate to the behavior. Note that the final text of the Act applies this prohibition to private actors as well as public authorities.
4. Biometric Categorization
AI systems that categorize individuals based on biometric data to infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
5. Real-Time Remote Biometric Identification
The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions (for example, the targeted search for victims of abduction or the prevention of a specific and imminent terrorist threat).
6. Emotion Inference in Workplace/Education
AI systems that infer emotions of people in the workplace or education institutions, except for medical or safety reasons.
7. Untargeted Facial Image Scraping
Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
8. Criminal Risk Profiling
AI systems that assess the risk of a natural person committing criminal offenses based solely on profiling or personality traits.
What Should You Do Now?
Step 1: Inventory Your AI Systems
Document all AI systems in use across your organization. You can't assess risk without knowing what you're using.
Step 2: Screen for Prohibited Practices
For each AI system, ask: does it fall into any of the prohibited categories above? If you're unsure, escalate to legal review.
Step 3: Document Your Analysis
Even if a system is clearly not prohibited, document your reasoning. This creates an audit trail and demonstrates due diligence.
Step 4: Stop or Modify
If you identify a prohibited practice, you must stop using that system or modify it to remove the prohibited functionality before February 2, 2025. The stakes are high: violations of the prohibitions carry the Act's steepest penalties—fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher.
Next Steps
Klarvo's Prohibited Practices Screening tool can help you quickly assess your AI systems against the prohibited categories. Start your free trial today and ensure you're compliant before the deadline.
This article is for informational purposes only and does not constitute legal advice. Consult with qualified legal counsel for specific guidance on your situation.
Get More Insights
Subscribe to receive the latest EU AI Act updates and compliance tips.