EU AI Act compliance
on the data you already have.

Risk classification (Unacceptable / High / Limited / Minimal). FRIA workflow under Article 27. AI incident register tracking the 15-day serious-incident clock under Article 73. GPAI model register under Article 53. Post-market monitoring under Article 72. All connected to your service catalogue, supplier register, and contracts.

“The Act lands in 2026 and we still don’t have a coherent register of which systems even count as AI. This was the gap.”

§ Risk & Compliance Director
AI register / EU AI Act · 36 systems
AIS-002 · HIGH · Credit decisioning · FRIA ✓
AIS-014 · LIMITED · Customer chatbot · disclosed
AIS-031 · MINIMAL · Spam filter · register only
GPAI-04 · GPAI · GPT-4o via OpenAI · Art. 53
INC-019 · 15D · Serious incident review · 9d left
II — CAPABILITIES

The only AI governance
that uses your existing data.

No duplicate registers. AI systems are services. Suppliers are suppliers. Risk maps onto the risk model already in use. The Act becomes a view, not a project.

Four-class risk register

Unacceptable, High, Limited, Minimal. Drives obligation set per system. Reclassification logged.
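In data terms, the register is a mapping from risk class to obligation set. A minimal sketch of that mapping — illustrative only, not HelixGate's actual data model; all names and obligation labels are hypothetical:

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices, Art. 5
    HIGH = "high"                   # e.g. Annex III systems
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # register entry only

# Hypothetical obligation sets keyed by risk class
OBLIGATIONS: dict[RiskClass, set[str]] = {
    RiskClass.UNACCEPTABLE: {"prohibited"},
    RiskClass.HIGH: {"fria", "incident_register", "post_market_monitoring"},
    RiskClass.LIMITED: {"transparency_disclosure"},
    RiskClass.MINIMAL: {"register_entry"},
}

def obligations_for(risk: RiskClass) -> set[str]:
    """Look up the obligation set a classification triggers."""
    return OBLIGATIONS[risk]
```

Reclassifying a system then means moving it to a different key — and the logged reclassification records which obligations were gained or dropped.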

FRIA workflow (Art. 27)

Fundamental Rights Impact Assessment template, structured to the Act. Required for High-risk systems in scope.

15-day incident clock (Art. 73)

Serious-incident register with the 15-day reporting deadline tracked, escalated, and audit-logged.
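The clock itself is plain date arithmetic. A sketch, assuming a window of 15 calendar days from the point of awareness (function names are illustrative, not product API):

```python
from datetime import date, timedelta

REPORTING_WINDOW = timedelta(days=15)  # Art. 73: 15 days from awareness

def report_deadline(aware_on: date) -> date:
    """Latest date a serious-incident report may be filed."""
    return aware_on + REPORTING_WINDOW

def days_left(aware_on: date, today: date) -> int:
    """Whole days remaining before the deadline (negative = overdue)."""
    return (report_deadline(aware_on) - today).days
```

An incident first noticed on 1 August checked on 7 August shows 9 days remaining — the number the register tracks, escalates on, and audit-logs.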

GPAI register (Art. 53)

Catalogue of general-purpose AI models in use. Provider, version, technical documentation links, copyright posture.

Post-market monitoring (Art. 72)

Ongoing monitoring obligations tracked per system. Drift, performance, complaints — structured.

Audit trail

Every classification, FRIA, incident, and monitoring entry immutable — defensible against the regulator.
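One common way to make such a trail tamper-evident is hash chaining: each entry's hash covers its predecessor's, so any edit to a past entry breaks every hash after it. A minimal sketch of the technique — not a claim about HelixGate's implementation:

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> dict:
    """Append an event, chaining its hash to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry = {
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
    }
    log.append(entry)
    return entry

def verify(log: list[dict]) -> bool:
    """Recompute the chain from the start; any retroactive edit fails."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```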

III — CONNECTIONS

How it connects.

No duplicate data entry. AI systems inherit from the records you already keep.

→ Service catalogue. AI systems registered as services. Risk classification flows from there.
→ Suppliers. AI providers map to the supplier register. Provider obligations cascaded.
→ Contracts. AI service contracts flagged for Article 53 / Article 27 obligations automatically.
→ ADRs. Architecture decisions adopting AI flagged at submission for AI Act applicability.
→ Audit trail. Every classification, FRIA, incident, monitoring entry immutably logged.
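In data-model terms, the idea is that an AI-system record holds references to existing records rather than copies of them. A sketch with hypothetical types (identifiers below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Service:
    service_id: str
    name: str

@dataclass(frozen=True)
class Supplier:
    supplier_id: str
    name: str

@dataclass(frozen=True)
class AISystem:
    """A view over existing records, plus the Act-specific field."""
    system_id: str
    service: Service    # reference into the service catalogue
    supplier: Supplier  # reference into the supplier register
    risk_class: str     # the only genuinely new data point

chatbot = AISystem(
    system_id="AIS-014",
    service=Service("SVC-210", "Customer chatbot"),
    supplier=Supplier("SUP-017", "OpenAI"),
    risk_class="limited",
)
```

Because the service and supplier are references, a correction in the catalogue or supplier register is immediately visible from the AI register — there is no second copy to drift out of date.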
II·b — CONTEXT

August 2026 — the high-risk AI compliance deadline.

The EU AI Act’s obligations for high-risk AI systems apply from 2 August 2026. That deadline applies to any organisation that deploys high-risk AI — including financial services firms operating in the UK that serve EU customers, NHS organisations using AI-assisted diagnostics or triage, and any regulated entity that has embedded machine learning into a decision-making process affecting individuals.

At a minimum, high-risk compliance requires a Fundamental Rights Impact Assessment under Article 27, an incident register capable of meeting the 15-day reporting clock under Article 73, and an audit trail that demonstrates ongoing post-market monitoring under Article 72. GPAI models used internally also carry separate obligations under Article 53.

The challenge most organisations face is not understanding what the Act requires. It is that the required information — system descriptions, risk classifications, supplier relationships, deployment decisions — exists in half a dozen different places: an ADR in Confluence, a supplier record in a spreadsheet, a contract in SharePoint, a risk rating in a GRC tool. Building an AI Act register from scratch means duplicating data that already exists elsewhere.

HelixGate AI governance draws that data from the records you already maintain. The supplier who provides the AI system is already in your supplier register. The architecture decision to deploy it is already in the ADR register. The risk classification and FRIA workflow sit on top of those existing records — no parallel register, no duplicate maintenance.

§ Closing statement

The Act doesn’t have to be a project.

Bring a list of AI systems in use — we’ll walk through risk classification, FRIA, and the incident clock in 30 minutes.