The Compliance Officer's EU AI Act Survival Guide
The Compliance Officer's New Reality
If you are a compliance officer, you have likely spent the last few years navigating GDPR, anti-money laundering regulations, sanctions compliance, and whatever sector-specific requirements your industry threw at you. Now the EU AI Act lands on your desk, with its own vocabulary, risk tiers, and documentation demands.
Here is the good news: you already know how to do this work. Risk assessment, documentation, audit prep, stakeholder wrangling. The AI Act asks for all of it. The learning curve is not the compliance methodology; it is the subject matter. AI as a domain is still unfamiliar to many compliance professionals, and that is okay.
This guide walks you through building an EU AI Act compliance program from scratch. No AI expertise required, just the systematic approach you already bring to regulatory work.
Phase 1: Discovery -- What AI Do We Actually Have?
Before you can comply, you need to know what you are dealing with. This is the inventory phase. And in our experience, it is almost always more revealing than organizations expect.
Step 1: Define "AI System" for Your Organization
The AI Act defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (Article 3(1)). That definition is broad. It covers machine learning models, expert systems, optimization algorithms, and plenty of tools that teams would never think of as "AI."
Step 2: Cast a Wide Net
Send questionnaires to every department. Interview team leads. Check procurement records for AI-related purchases. Review vendor contracts for AI capabilities. AI systems tend to hide in places you would not expect:
- HR: resume screening, candidate ranking, employee analytics
- Customer service: chatbots, sentiment analysis, ticket routing
- Finance: fraud detection, credit scoring, algorithmic trading
- Marketing: personalization engines, predictive analytics, content generation
- IT/Security: anomaly detection, threat analysis, automated patching
- Operations: demand forecasting, logistics optimization, quality inspection
- Product: recommendation engines, search algorithms, content moderation
Step 3: Create Your AI Register
For each AI system, record the system name, vendor/developer, intended purpose, deployment status, data types processed, affected populations, business owner, and current governance measures. This register is what the rest of your compliance program builds on.
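If you want the register to be machine-readable from day one, a lightweight record type works well. This sketch mirrors the fields listed above; the field names and example values are illustrative, not mandated by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the AI register; fields mirror the list above."""
    name: str
    vendor: str                      # vendor/developer
    intended_purpose: str
    deployment_status: str           # e.g. "production", "pilot", "retired"
    data_types: list[str] = field(default_factory=list)
    affected_populations: list[str] = field(default_factory=list)
    business_owner: str = ""
    governance_measures: list[str] = field(default_factory=list)

# Hypothetical example entry for an HR screening tool
resume_screener = AISystemRecord(
    name="CV Ranker",
    vendor="ExampleVendor Ltd",
    intended_purpose="Rank inbound job applications",
    deployment_status="production",
    data_types=["CVs", "application forms"],
    affected_populations=["job applicants"],
    business_owner="Head of HR",
)
```

Even a spreadsheet export of records like this is enough to start; the point is that every system carries the same fields, so classification and gap analysis can be run consistently later.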
Phase 2: Classification -- What Are Our Obligations?
Once you have an inventory, the next step is classifying each system under the AI Act's risk framework.
Step 1: Screen for Prohibited Practices
Run every system through the Article 5 prohibited practices checklist. This is your first filter, and the prohibitions have applied since 2 February 2025, so anything that falls into prohibited territory needs immediate attention regardless of where you are with everything else.
Step 2: Assess High-Risk Classification
For remaining systems, evaluate against both pathways to high-risk classification: Pathway A (safety component of regulated product, Article 6(1)) and Pathway B (Annex III use cases, Article 6(2)). Pay close attention to the Article 6(3) exception: it is narrow, and relying on it requires a documented assessment before the system goes to market, plus registration in the EU database.
Step 3: Identify Limited-Risk Transparency Obligations
Check whether any systems trigger Article 50 transparency obligations. This covers systems that interact directly with people (such as chatbots), emotion recognition and biometric categorisation systems, and generators of synthetic content, including deepfakes.
Step 4: Classify and Prioritize
Tag each system with its classification and build a prioritized compliance roadmap. High-risk systems get the most resources and the tightest timelines. If you want a structured way to work through this, our free risk classification tool can help.
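The four screening steps above reduce to an ordered decision. This sketch encodes only that ordering; the boolean inputs stand in for the legal analysis each step requires, which the code cannot automate:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5
    HIGH = "high"               # Article 6 / Annex III
    LIMITED = "limited"         # Article 50 transparency
    MINIMAL = "minimal"

def classify(is_prohibited: bool,
             is_safety_component: bool,
             matches_annex_iii: bool,
             art_6_3_exception_documented: bool,
             triggers_art_50: bool) -> RiskTier:
    """Apply the screening steps in order: Article 5 first, then the
    two high-risk pathways, then Article 50 transparency."""
    if is_prohibited:
        return RiskTier.PROHIBITED
    if is_safety_component:                                     # Pathway A
        return RiskTier.HIGH
    if matches_annex_iii and not art_6_3_exception_documented:  # Pathway B
        return RiskTier.HIGH
    if triggers_art_50:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL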
Phase 3: Gap Analysis -- Where Do We Stand?
Now comes the honest part. For each high-risk system, measure your current state against what the AI Act actually requires.
Requirement Checklist
- Risk management (Art. 9): Do you have a documented risk management system for this AI system? Is it continuous and iterative?
- Data governance (Art. 10): Are your training, validation, and testing data practices documented? Do you assess for bias?
- Technical documentation (Annex IV): Do you have full technical documentation covering all required sections?
- Logging (Art. 12): Does the system automatically log events? Can you produce records upon request?
- Transparency (Art. 13): Do deployers receive adequate instructions for use?
- Human oversight (Art. 14): Are human oversight measures designed into the system? Can operators monitor, intervene, and override?
- Accuracy and robustness (Art. 15): Have you documented and validated the system's accuracy, robustness, and cybersecurity measures?
- Quality management (Art. 17): Do you have a quality management system covering the AI system's lifecycle?
- Conformity assessment (Art. 43): Have you completed (or planned) the required conformity assessment?
- EU database registration (Art. 49): Is the system registered in the EU database?
- FRIA (Art. 27): For deployers, have you conducted a Fundamental Rights Impact Assessment?
- Post-market monitoring (Art. 72): Do you have a post-market monitoring system in place?
For each gap, document where you are today, where you need to be, what it takes to get there, who owns it, and when it needs to be done.
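A simple structure for tracking those gaps, with a helper that orders the roadmap by deadline; the requirements, owners, and dates here are made up for illustration:

```python
from datetime import date

# Each gap records where you are, where you need to be, what it takes,
# who owns it, and when it is due (values are illustrative)
gaps = [
    {"requirement": "Logging (Art. 12)", "owner": "Platform lead",
     "current": "Application logs only", "target": "Event-level audit trail",
     "actions": "Extend retention, add audit events", "due": date(2026, 6, 30)},
    {"requirement": "FRIA (Art. 27)", "owner": "DPO",
     "current": "Not started", "target": "Completed FRIA",
     "actions": "Run FRIA workshop with legal", "due": date(2026, 3, 31)},
]

def next_due(gaps: list[dict]) -> list[dict]:
    """Order open gaps by deadline so the roadmap surfaces the most urgent first."""
    return sorted(gaps, key=lambda g: g["due"])
```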
Phase 4: Implementation -- Building Your Compliance Program
Governance Structure
You need clear roles and responsibilities. Someone has to own each piece of this:
- AI Governance Lead: carries overall accountability for AI Act compliance (this might be you)
- System Owners: the business-side people responsible for each AI system's compliance
- Technical Leads: the engineers implementing the technical requirements
- Risk Committee: a cross-functional body that reviews risk assessments and classification decisions
- Legal/DPO: involved in classification decisions, FRIA, and regulatory submissions
Documentation Strategy
Do not try to create all documentation from scratch. Use templates and auto-generation tools to speed things up. Focus first on high-risk systems with the nearest compliance deadlines.
Training and AI Literacy
Article 4 requires AI literacy for all staff involved with AI systems. The training should be role-specific:
- Board and executives need the AI Act overview, governance obligations, and penalty exposure
- Your compliance team needs detailed requirements, classification methodology, and audit preparation
- Technical teams need guidance on technical documentation, risk management implementation, and monitoring
- Business users need training on safe use of AI systems, reporting obligations, and human oversight responsibilities
Vendor Management
If you deploy third-party AI systems, your vendor management program needs updating. Add AI Act compliance requirements to vendor assessments, contracts, and ongoing monitoring. Request Annex IV documentation and conformity assessment evidence from your AI system providers.
Phase 5: Monitoring and Continuous Compliance
With your program in place, the work shifts from building to maintaining.
Post-Market Monitoring
High-risk AI systems require ongoing monitoring throughout their lifecycle. You will need performance monitoring dashboards that track accuracy, fairness, and reliability metrics. Set up automated alerting when metrics breach predefined thresholds. Plan regular review cycles (quarterly at minimum) for each high-risk system. And establish incident reporting procedures for AI system failures or unexpected behaviors.
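Threshold-based alerting can start as simply as this sketch; the metric names and threshold values are placeholders you would replace with each system's validated performance targets:

```python
# Illustrative thresholds; set them per system from your validation results.
THRESHOLDS = {"accuracy": 0.90, "fairness_gap": 0.05}

def check_metrics(metrics: dict[str, float]) -> list[str]:
    """Return an alert message for each metric breaching its threshold.
    Higher is better for accuracy; lower is better for fairness_gap."""
    alerts = []
    if metrics.get("accuracy", 1.0) < THRESHOLDS["accuracy"]:
        alerts.append(f"accuracy below {THRESHOLDS['accuracy']}")
    if metrics.get("fairness_gap", 0.0) > THRESHOLDS["fairness_gap"]:
        alerts.append(f"fairness gap above {THRESHOLDS['fairness_gap']}")
    return alerts
```

Wire the output of a check like this into your incident reporting procedure, so a breached threshold produces a ticket rather than a dashboard nobody watches.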
Change Management
Any significant change to a high-risk AI system may require updated documentation, re-testing, and potentially a new conformity assessment. Build change management triggers into your governance process so nothing slips through.
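One way to make those triggers concrete is a checklist your deployment pipeline consults before any release. The trigger fields below are illustrative, and whether a given change amounts to a "substantial modification" under Article 3(23) remains a legal judgment:

```python
# Illustrative trigger fields; tune these to your own change taxonomy.
TRIGGER_FIELDS = {"model_version", "training_data",
                  "intended_purpose", "oversight_design"}

def requires_reassessment(changed_fields: set[str]) -> bool:
    """Flag a change for compliance review if it touches any trigger field."""
    return bool(changed_fields & TRIGGER_FIELDS)
```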
Audit Readiness
Keep your compliance documentation in an always-ready state. National market surveillance authorities can request information at any time. You should be able to produce:
- Complete AI system register
- Risk classifications with supporting evidence
- Annex IV technical documentation for each high-risk system
- Conformity assessment records
- FRIA documentation
- Quality management system documentation
- Post-market monitoring records
- Incident reports and remediation records
- Training records demonstrating AI literacy compliance
Common Questions from Compliance Officers
"How do I get engineering teams to cooperate?"
Frame it as risk management, not bureaucracy. Engineers respond well to concrete requirements with clear deliverables. Give them structured templates (not open-ended requests) and integrate documentation into their existing workflows. If they push back, the business case is straightforward: non-compliance carries fines of up to €15 million or 3% of global annual turnover for most violations, rising to €35 million or 7% for prohibited practices, plus potential market access restrictions.
"What if we can't classify a system clearly?"
When in doubt, classify conservatively (higher risk) and document your reasoning. You can always reclassify downward later with evidence. Classifying too low and being found non-compliant is far more costly.
"How do we handle legacy AI systems?"
High-risk systems already on the market before 2 August 2026 only fall under the new requirements if they undergo significant design changes after that date. Systems that are safety components of regulated products (Pathway A) benefit from a later application date of 2 August 2027. Either way, we recommend starting compliance work on all systems now; legacy systems tend to have the largest documentation gaps.
"What resources should I request from management?"
At minimum, you need dedicated compliance staff time, budget for tooling (something like Haffa.ai or equivalent), access to engineering teams for documentation, legal support for classification decisions, and executive sponsorship to make the cross-functional governance actually work.
Your 90-Day Quick-Start Plan
If you are starting from zero, here is a realistic 90-day plan:
Days 1-30: Complete your AI inventory and initial risk classification for all systems.
Days 31-60: Run the gap analysis for high-risk systems. Establish your governance structure and assign owners.
Days 61-90: Begin Annex IV documentation for your highest-priority systems. Roll out AI literacy training. Set up a compliance dashboard so you can track progress.
If you want tooling to support this, start with a free risk assessment to understand your obligations. You can see our plans for the full compliance toolkit, or check out our compliance officer solution for hands-on support.