The EU AI Act is no longer a future concern. The regulation entered into force in August 2024, and the enforcement timeline is already well underway. Prohibited AI practices became enforceable in February 2025. GPAI model obligations kicked in on 2 August 2025. And the big one — the full set of obligations for high-risk AI systems — becomes enforceable on 2 August 2026.

That is four months from now.

Most organisations I speak to are in one of two states. Either they have been following the Act closely and are deep into implementation, or they are still trying to figure out which AI systems they actually have. There is not much in between. If you are in the second camp, this article is a practical walkthrough of what the Act requires and what you should be doing right now.

I am not a lawyer, and this is not legal advice. What I am is someone who has spent two decades building governance systems for enterprises, and who designed HelixGate's AI Governance module specifically to help organisations meet these obligations. I understand the operational side of compliance — the gap between what a regulation says and what an organisation actually needs to do.

What is enforceable now versus what is coming

The EU AI Act rolls out in phases. Understanding the timeline is critical because it determines your priorities:

  • February 2025 (already enforceable): Prohibited AI practices. These include social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), manipulative techniques that exploit vulnerabilities linked to age, disability, or socio-economic situation, and AI that infers emotions in workplace or educational settings. If you operate any of these, you should have stopped already.
  • August 2025 (already enforceable): General-Purpose AI (GPAI) model obligations. If you provide foundation models or place them on the EU market, transparency and documentation requirements apply now.
  • August 2026: Full obligations for high-risk AI systems. This includes conformity assessments, risk management systems, human oversight requirements, data governance, transparency, and registration in the EU database. This is the phase that will affect the most organisations.
  • August 2027: Obligations for high-risk AI systems that are safety components of products already regulated under the EU product legislation listed in Annex I (machinery, medical devices, and similar), and the compliance deadline for GPAI models that were already on the market before August 2025.

If you place AI systems on the EU market, deploy them in the EU, or their outputs are used in the EU, the Act applies regardless of where your company is headquartered. The extraterritorial scope is similar to GDPR.

Risk classification: four tiers, one that matters most

The Act classifies AI systems into four risk tiers. The classification determines what obligations apply.

  • Unacceptable: AI systems that pose a clear threat to safety, livelihoods, or rights. Banned outright.
  • High: AI systems in critical areas such as employment, credit scoring, law enforcement, education, essential services, and migration. Full compliance regime: conformity assessment, risk management, documentation, human oversight, registration.
  • Limited: AI systems that interact with people (chatbots), generate content (deepfakes), or perform emotion recognition or biometric categorisation. Transparency obligations: users must be informed they are interacting with AI.
  • Minimal: Everything else, such as spam filters, AI-powered search, game AI, and inventory optimisation. No specific obligations; codes of conduct are encouraged.

The high-risk tier is where the complexity lives. And the classification is not always obvious. An AI system used for staff scheduling might fall under high-risk if it materially affects employment conditions. A chatbot used for customer service might be limited-risk — unless it processes credit applications, in which case it could be high-risk. Context matters enormously.

What you actually need to do

Here is the practical checklist. I have focused on actions, not theory.

1. Build an inventory of your AI systems

You cannot classify what you have not catalogued. This sounds obvious, but it is where most organisations stall. AI systems are not always clearly labelled. A recommendation engine buried in your e-commerce platform is an AI system. A fraud detection model running in your payments pipeline is an AI system. A chatbot on your support page is an AI system. A model that scores job applicants is an AI system.

Start by surveying every team that builds or buys technology. Ask: "Do you use any system that makes predictions, classifications, recommendations, or decisions based on data?" The answers will surprise you. Most organisations have significantly more AI systems than they realise, because many of them were deployed as features within larger products rather than standalone AI projects.

HelixGate's Service Catalogue is designed precisely for this kind of inventory work — cataloguing technology assets, their owners, their risk classifications, and their relationships to business processes.
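
To make the inventory concrete, here is a minimal sketch of what a single record might capture, in Python. The field names are my own illustration rather than terminology from the Act or from HelixGate; a spreadsheet with the same columns works just as well at the start.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in a first-pass AI system inventory (illustrative fields, not a prescribed schema)."""
    name: str                        # e.g. "Support chatbot"
    purpose: str                     # what the system does, in plain language
    owner: str                       # team or person accountable for it
    data_sources: list[str]          # datasets or feeds the system consumes
    deployment_location: str         # product, platform, or business process it sits in
    vendor: str = "in-house"         # third-party supplier, if any
    risk_tier: str = "unclassified"  # filled in during step 2

inventory = [
    AISystemRecord(
        name="Fraud detection model",
        purpose="Flags suspicious transactions in the payments pipeline",
        owner="Payments engineering",
        data_sources=["transaction history", "device fingerprints"],
        deployment_location="Payments platform",
    ),
]
```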

2. Classify each system by risk tier

Walk through the Act's Annex III, which lists the specific use cases that qualify as high-risk. Map your inventory against it. Be honest. If a system is borderline, classify it higher rather than lower. The cost of over-compliance is a bit of extra documentation. The cost of under-compliance is exposure to fines of up to 15 million euros or 3% of global turnover for breaching high-risk obligations, and up to 35 million euros or 7% for prohibited practices.
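
As a rough illustration of the mapping exercise, here is a first-pass triage sketch. The category names are a simplified paraphrase of Annex III, not the legal text, and borderline cases still need legal review.

```python
# Simplified paraphrase of Annex III use-case areas -- not the legal text.
ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",      # e.g. credit scoring, insurance, public benefits
    "law_enforcement",
    "migration_and_border",
    "justice_and_democracy",
}

def first_pass_tier(use_case_area: str, interacts_with_people: bool) -> str:
    """Very rough triage: when in doubt, classify higher."""
    if use_case_area in ANNEX_III_AREAS:
        return "high"
    if interacts_with_people:
        return "limited"   # transparency obligations likely apply
    return "minimal"

print(first_pass_tier("employment", interacts_with_people=True))  # -> high
```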

3. Conduct Fundamental Rights Impact Assessments (FRIAs)

For high-risk AI systems, Article 27 requires certain deployers, chiefly public bodies, private entities providing public services, and deployers of systems used for credit scoring or life and health insurance risk assessment, to conduct a fundamental rights impact assessment before putting the system into use. This is not a generic risk assessment. It specifically evaluates how the AI system could affect fundamental rights: non-discrimination, privacy, freedom of expression, human dignity, access to justice.

If you have done Data Protection Impact Assessments under GDPR, the format is similar, but the scope is broader. You are not just looking at personal data risks. You are looking at the full range of rights the system could affect.
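
A minimal sketch of how an FRIA outcome could be recorded follows, loosely modelled on the DPIA pattern. The fields are illustrative assumptions of mine, not a prescribed template.

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Illustrative record of a fundamental rights impact assessment outcome."""
    system_name: str
    affected_rights: list[str]     # e.g. non-discrimination, privacy, human dignity
    affected_groups: list[str]     # who is exposed to the system and how often
    identified_harms: list[str]    # concrete ways those rights could be affected
    mitigations: list[str]         # measures adopted before putting the system into use
    residual_risk_acceptable: bool
    assessed_by: str
    assessment_date: str           # ISO date, e.g. "2026-04-15"
```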

4. Implement risk management systems

Article 9 requires a risk management system that operates throughout the entire lifecycle of a high-risk AI system — not just at deployment, but through ongoing operation, monitoring, and eventual decommissioning. This means:

  • Identifying known and foreseeable risks
  • Estimating and evaluating risks that emerge during use
  • Adopting risk mitigation measures
  • Testing to confirm that residual risk is acceptable

This is a continuous process, not a one-off assessment. If you are used to writing a risk assessment at project kickoff and never revisiting it, that approach will not satisfy the Act.
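
One way to make "continuous" operational is to give every identified risk a scheduled review date and flag anything that slips. A minimal sketch, with structure and field names of my own choosing:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    description: str      # known or foreseeable risk
    mitigation: str       # measure adopted to reduce it
    residual_risk: str    # e.g. "low", "medium", "high" after mitigation
    next_review: date     # when this entry must be looked at again

def overdue_reviews(register: list[RiskEntry], today: date) -> list[RiskEntry]:
    """Return every risk entry whose scheduled review date has slipped."""
    return [entry for entry in register if entry.next_review < today]

register = [
    RiskEntry("Model drift degrades accuracy for older applicants",
              "Quarterly bias and performance testing", "medium", date(2026, 3, 1)),
]
print(overdue_reviews(register, today=date(2026, 4, 1)))  # the entry above is overdue
```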

5. Set up incident reporting

This is the one that catches people off guard. Article 73 requires providers of high-risk AI systems to report serious incidents to the relevant market surveillance authority immediately, and in any event within 15 days of becoming aware of them. The windows tighten for the worst cases: no later than 2 days for a widespread infringement or a serious incident involving serious and irreversible disruption of critical infrastructure, and no later than 10 days in the event of a person's death.

A "serious incident" is defined as an incident that directly or indirectly causes death, serious damage to health, serious and irreversible disruption to critical infrastructure, a serious breach of fundamental rights, or serious damage to property or the environment.

Fifteen days is not a long time. If you do not have an incident tracking and reporting mechanism in place, you will miss the window. And unlike GDPR breach notifications, which many organisations have now operationalised, AI incident reporting is new territory for most. The processes, templates, and escalation paths probably do not exist yet in your organisation.
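
To make the windows concrete, here is a small sketch that derives the latest reporting date from the incident type, using the deadlines described above. It is a simplification: the Act ties some of these clocks to establishing a causal link, so confirm the exact trigger with counsel.

```python
from datetime import date, timedelta

# Simplified reading of Article 73's reporting windows, counted from awareness.
REPORTING_WINDOW_DAYS = {
    "critical_infrastructure_disruption": 2,
    "widespread_infringement": 2,
    "death": 10,
    "other_serious_incident": 15,
}

def report_due_by(awareness_date: date, incident_type: str) -> date:
    """Latest date the initial report can be filed for a given incident type."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS[incident_type])

print(report_due_by(date(2026, 8, 10), "critical_infrastructure_disruption"))  # 2026-08-12
```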

6. Ensure human oversight

Article 14 requires that high-risk AI systems be designed to allow effective human oversight. This means a natural person must be able to understand the system's capabilities and limitations, monitor its operation, interpret its outputs, and decide when to override, intervene, or shut it down.

"Human in the loop" is the shorthand, but the Act goes further. The human must actually be capable of overriding the system. If your AI makes a decision and nobody reviews it until a customer complains three weeks later, that is not human oversight. The oversight must be real, timely, and documented.

7. Maintain technical documentation

Article 11 requires comprehensive technical documentation for high-risk AI systems. This includes the system's intended purpose, its design specifications, the data used for training and testing, the metrics used to evaluate performance, and the known limitations of the system. This documentation must be kept current throughout the system's lifecycle.
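
A lightweight way to keep that documentation current is to check each system against a required-sections list. The section names below paraphrase the themes listed above; they are not the formal Annex IV template.

```python
REQUIRED_SECTIONS = {
    "intended_purpose",
    "design_specifications",
    "training_and_test_data",
    "performance_metrics",
    "known_limitations",
}

def missing_sections(documented_sections: set[str]) -> set[str]:
    """Return the documentation sections that still need to be written or updated."""
    return REQUIRED_SECTIONS - documented_sections

print(missing_sections({"intended_purpose", "performance_metrics"}))
```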

GPAI model obligations

If your organisation uses general-purpose AI models — large language models, multimodal foundation models, or similar — either as a provider or a deployer, additional obligations apply. Since August 2025, GPAI providers must:

  • Maintain and make available technical documentation about the model
  • Provide information and documentation to downstream deployers who integrate the model into their systems
  • Implement a policy to comply with EU copyright law
  • Publish a sufficiently detailed summary of the training data

For GPAI models classified as posing "systemic risk" (currently defined as models trained with more than 10^25 FLOPs of compute), the obligations are significantly heavier: adversarial testing, incident monitoring, cybersecurity protections, and energy consumption reporting.
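
The compute-based presumption is easy to check if you know the training budget. A trivial sketch, bearing in mind that the Commission can also designate models as posing systemic risk on other grounds:

```python
SYSTEMIC_RISK_FLOPS = 1e25   # cumulative training compute threshold in the Act

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model meets the compute-based presumption of systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(3.2e25))   # True
print(presumed_systemic_risk(8.0e23))   # False
```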

If you are a deployer — meaning you use someone else's GPAI model within your own system — you are responsible for ensuring your use of that model complies with the Act. You cannot point at the model provider and say it is their problem. The obligations cascade through the supply chain.

How HelixGate maps to the Act

I want to be straightforward about this. HelixGate does not make you compliant with the EU AI Act by itself. No single tool does. But the AI Governance module was designed with these obligations in mind, and it covers several of the operational requirements that organisations struggle with most:

  • AI system inventory and classification: Register each AI system, assign a risk tier, record the purpose, the data sources, the responsible parties, and the deployment status. This is the foundation that everything else builds on.
  • Risk assessment tracking: Attach risk assessments and FRIAs to each AI system record. Track assessment status, review dates, and outcomes. Flag systems that are overdue for reassessment.
  • Incident logging: Record AI-related incidents with structured fields for severity, impact, timeline, and response actions. The immutable audit trail ensures incident records cannot be altered after the fact.
  • Supplier and model governance: If you use third-party AI models, track which suppliers provide which models, what documentation they have provided, and when it was last reviewed. This maps directly to the supply chain transparency requirements.
  • Human oversight records: Document who is responsible for oversight of each high-risk system, what override capabilities exist, and when oversight reviews were last conducted.

The goal is not to replace your legal counsel or your compliance team. The goal is to give them a structured, auditable system of record instead of a collection of spreadsheets and email threads.

Practical first steps

If you are reading this in April 2026 and have not started, you are behind schedule. But you are not out of time. Here is what I would do this month:

  1. Inventory your AI systems. Do not overthink this. A spreadsheet is fine to start. Name, purpose, owner, data sources, deployment location. You need a complete list before you can classify anything.
  2. Classify by risk tier. Read Annex III of the Act. It is specific enough to give you clear answers for most systems. For the borderline cases, get legal advice.
  3. Prioritise high-risk systems. These are the ones with the heaviest obligations and the highest penalties for non-compliance. Focus your compliance resources here first.
  4. Check your incident reporting capability. If a serious AI incident happened tomorrow, could you file the initial report within the applicable window, as tight as 2 days for the worst cases and 15 days at most? If not, build the process now. This is the obligation most likely to catch organisations unprepared.
  5. Review your GPAI supply chain. If you deploy foundation models from third parties, have you received the required technical documentation? Do you understand the model's limitations? Have you assessed how your use of the model affects the risk classification of your downstream system?
  6. Appoint someone responsible. The Act does not prescribe a specific role, but someone in your organisation needs to own AI governance. If nobody is responsible, nothing will happen.

The enforcement reality

The EU has established the AI Office to oversee GPAI models and coordinate enforcement at the Union level, and member states are designating national market surveillance authorities to enforce the rules for other AI systems locally. Fines are significant: up to 35 million euros or 7% of global annual turnover for prohibited practices, and up to 15 million euros or 3% for most other infringements.

Whether enforcement will be aggressive in the first year is an open question. GDPR enforcement started slowly and then accelerated. The pattern will likely be similar here. But the organisations that wait for the first enforcement actions before taking the Act seriously are making the same mistake that many made with GDPR — and some of those organisations paid very public prices for it.

My advice is simple. The Act is real, the deadlines are near, and the obligations are specific enough that you can start addressing them today. Do not wait for guidance from your industry body or your legal team's final opinion on every edge case. Start with the inventory. Start with the classification. Build the governance structure. The details will refine over time, but the organisations that are moving now will be in a fundamentally better position than those that are still waiting.

The best time to start was a year ago. The second best time is now.