Risk classification (Unacceptable / High / Limited / Minimal). FRIA workflow under Article 27. AI incident register tracking the 15-day serious-incident clock under Article 73. GPAI model register under Article 53. Post-market monitoring under Article 72. All connected to your service catalogue, supplier register, and contracts.
“The Act lands in 2026 and we still don’t have a coherent register of which systems even count as AI. This was the gap.”
Risk & Compliance Director

No duplicate registers. AI systems are services. Suppliers are suppliers. Risk maps onto the risk model already in use. The Act becomes a view, not a project.
Unacceptable, High, Limited, Minimal. Drives obligation set per system. Reclassification logged.
Fundamental Rights Impact Assessment template, structured to the Act. Required for High-risk systems in scope.
Serious-incident register with the 15-day reporting deadline tracked, escalated, and audit-logged.
Catalogue of general-purpose AI models in use. Provider, version, technical documentation links, copyright posture.
Ongoing monitoring obligations tracked per system. Drift, performance, complaints — structured.
Every classification, FRIA, incident, and monitoring entry immutable — defensible against the regulator.
No duplicate data entry. AI systems inherit from the records you already keep.
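The classification-drives-obligations idea above can be sketched in code. This is an illustrative model only: the four tier names come from the Act, but the obligation labels and the `obligations_for` helper are hypothetical simplifications, not HelixGate's actual schema.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices under the Act
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping from risk tier to the obligation set it triggers.
# Labels are simplified placeholders for the workflows described above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibit_deployment"],
    RiskTier.HIGH: ["fria", "incident_register", "post_market_monitoring"],
    RiskTier.LIMITED: ["transparency_notice"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the obligation set a system inherits from its risk tier."""
    return OBLIGATIONS[tier]
```

Reclassifying a system then amounts to changing one field, with the obligation set (and the audit log entry) following automatically.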
The EU AI Act’s obligations for high-risk AI systems come into force in August 2026. That deadline applies to any organisation that deploys high-risk AI — including financial services firms operating in the UK that serve EU customers, NHS organisations using AI-assisted diagnostics or triage, and any regulated entity that has embedded machine learning into a decision-making process affecting individuals.
At a minimum, high-risk compliance requires a Fundamental Rights Impact Assessment under Article 27, an incident register capable of meeting the 15-day reporting clock under Article 73, and an audit trail that demonstrates ongoing post-market monitoring under Article 72. GPAI models used internally also carry separate obligations under Article 53.
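Once the date of awareness is recorded, the 15-day clock is simple date arithmetic. A minimal sketch, assuming calendar days counted from awareness (Article 73 runs the clock from when the provider becomes aware of the serious incident); the function names are illustrative, not a real API:

```python
from datetime import date, timedelta

# Article 73: serious incidents must be reported no later than
# 15 days after the provider becomes aware of the incident.
REPORTING_WINDOW_DAYS = 15

def reporting_deadline(aware_on: date) -> date:
    """Latest date the serious-incident report is due."""
    return aware_on + timedelta(days=REPORTING_WINDOW_DAYS)

def days_remaining(aware_on: date, today: date) -> int:
    """Days left on the clock; negative means the deadline has passed."""
    return (reporting_deadline(aware_on) - today).days
```

An incident register built on this can escalate automatically as `days_remaining` shrinks, which is the behaviour described above: tracked, escalated, audit-logged.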
The challenge most organisations face is not understanding what the Act requires. It is that the required information — system descriptions, risk classifications, supplier relationships, deployment decisions — exists in half a dozen different places: an ADR in Confluence, a supplier record in a spreadsheet, a contract in SharePoint, a risk rating in a GRC tool. Building an AI Act register from scratch means duplicating data that already exists elsewhere.
HelixGate AI governance draws that data from the records you already maintain. The supplier who provides the AI system is already in your supplier register. The architecture decision to deploy it is already in the ADR register. The risk classification and FRIA workflow sit on top of those existing records — no parallel register, no duplicate maintenance.
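The reference-not-duplicate pattern can be sketched as follows. The record shapes and field names here are hypothetical, not HelixGate's data model; the point is that the AI system record stores identifiers into the registers you already keep, rather than copying their contents.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Supplier:
    supplier_id: str
    name: str

@dataclass(frozen=True)
class AISystem:
    system_id: str
    name: str
    risk_tier: str
    supplier_id: str  # reference into the existing supplier register
    adr_id: str       # reference into the existing ADR register

def supplier_for(system: AISystem, register: dict[str, Supplier]) -> Supplier:
    """Resolve the supplier by reference instead of duplicating the record."""
    return register[system.supplier_id]
```

Because the AI Act register resolves these references at read time, a correction in the supplier register is immediately reflected everywhere, with no parallel copy to fall out of date.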
Bring a list of AI systems in use — we’ll walk through risk classification, FRIA, and the incident clock in 30 minutes.