I have rolled out ADR governance three times. The first attempt lasted six weeks before the team quietly went back to Slack messages and verbal agreements. The second attempt survived four months before it became a box-ticking exercise that nobody took seriously. The third attempt worked, and it is still working three years later. The difference was not the template, the tooling, or the executive sponsor. The difference was understanding why the first two failed.

Architecture Decision Records are one of the most straightforward governance concepts in enterprise architecture. Capture what was decided, why it was decided, and what alternatives were considered. That is it. The idea has been around since Michael Nygard wrote about lightweight ADRs in 2011, and the format has barely changed since. It is simple, well-understood, and almost universally agreed to be a good idea.

And yet most organisations that attempt ADR governance abandon it within a year. Not because the concept is wrong, but because the implementation is.

Why ADR adoption fails

Having watched three of my own attempts and advised on half a dozen others, I can point to three consistent failure modes. If you are planning an ADR rollout and you do not address all three, you will be joining the graveyard of abandoned governance initiatives within six months.

Too much process, too early

The most common failure. Someone reads about ADR governance, gets enthusiastic, and designs a comprehensive process: mandatory templates with fifteen fields, a review board that meets fortnightly, a seven-stage lifecycle, integration with three existing tools, and a compliance checkpoint that blocks deployments without an approved ADR.

The architecture team takes one look at it, calculates that writing an ADR will take ninety minutes plus a two-week wait for review board approval, and decides that the Slack message approach was working fine, actually. They are not wrong. The process overhead was disproportionate to the value delivered, and rational people avoid overhead that does not pay for itself.

My first rollout failed exactly this way. I designed what I thought was a thorough, robust ADR governance process. It was. It was also so heavy that nobody used it voluntarily. The only ADRs that got written were the ones I personally chased, and the moment I stopped chasing, the process died.

No clear value proposition

"We should document our architecture decisions" is not a value proposition. It is a statement of obligation. And obligations that lack a clear payoff get deprioritised.

The value of ADRs is not documentation for its own sake. It is the ability to answer the question: "Why is our system built this way?" That question comes up constantly — in onboarding new team members, in incident post-mortems, in technology migration planning, in regulatory audits. Every time someone asks "why do we use Kafka instead of RabbitMQ?" and the answer is "I think Dave chose it two years ago but he has left," the cost of not having ADRs becomes concrete.

If you cannot articulate the value in terms of problems the team already has, adoption will be shallow. You need to connect ADRs to pain points that people recognise: wasted time re-debating settled decisions, inability to onboard new architects efficiently, audit findings about undocumented technology choices, or recurring arguments about technology standards.

No executive sponsorship

ADR governance requires a behaviour change. Architects need to write ADRs when they would previously have just made the decision and moved on. Reviewers need to actually review them. Teams need to reference them before making conflicting decisions.

Behaviour change does not happen through bottom-up enthusiasm alone. It requires someone with authority to say: "This is how we make architecture decisions now. Major decisions get an ADR. ADRs get reviewed. This is not optional." That does not have to be the CTO — it can be a Head of Architecture, a Principal Architect, or anyone with genuine authority over technical governance. But someone has to own it, and they have to keep owning it through the first six months when the temptation to revert is strongest.

Start small: one team, one category

The lesson from my failed first attempt is straightforward: do not try to roll out ADR governance to the entire organisation at once. Pick one team. Pick one category of decisions. Prove the value there, refine the process, and then expand.

At the organisation where ADR governance stuck, we started with the platform engineering team and focused on infrastructure technology choices. Not all architecture decisions — just decisions about which infrastructure components to adopt, replace, or deprecate. This was a narrow enough scope that the team could see immediate value. They were constantly being asked to justify infrastructure choices to security, finance, and the CTO. Having a structured record of each decision, with the rationale and alternatives documented, saved them time every week.

Within three months, the platform team was writing ADRs without being prompted. Not because of the process. Because the ADRs were solving a problem they actually had. The application architecture team noticed, asked to adopt the same approach, and the rollout expanded organically. That organic expansion is far more durable than a top-down mandate.

The minimum viable ADR process

If you are starting from zero, here is the lightest process that still delivers governance value. You can add complexity later. Do not start with complexity.

Capture: what, why, and what else

An ADR needs three things. First, what was decided: a clear, unambiguous statement of the decision. "We will use PostgreSQL as the primary datastore for the payments service." Not "we discussed database options" — the decision itself.

Second, why it was decided: the reasoning. What constraints, requirements, or principles drove this choice? "PostgreSQL was selected because we require ACID compliance for financial transactions, our team has deep PostgreSQL expertise, and the managed offering from our cloud provider meets our availability requirements."

Third, what alternatives were considered: the options that were evaluated and rejected, with brief reasons. "DynamoDB was considered but rejected due to the complexity of transaction support across partitions. CockroachDB was considered but rejected due to limited team experience and higher operational overhead." This section is what separates an ADR from a statement. It demonstrates that the decision was deliberate, not default.

That is the minimum. Three sections. If your ADR template has more than five fields, you have over-engineered it for a first iteration.
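Put together, a minimum template needs nothing more than a title and those three sections. The following sketch reuses the PostgreSQL example above; the section names are illustrative, not a standard:

```markdown
# ADR-012: Use PostgreSQL as the primary datastore for the payments service

## Decision
We will use PostgreSQL as the primary datastore for the payments service.

## Rationale
We require ACID compliance for financial transactions, our team has deep
PostgreSQL expertise, and the managed offering from our cloud provider
meets our availability requirements.

## Alternatives considered
- DynamoDB: rejected due to the complexity of transaction support across
  partitions.
- CockroachDB: rejected due to limited team experience and higher
  operational overhead.
```

Anything beyond this, including a status field or stakeholder lists, can be added once the basic habit is established.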

Review: at least one peer

Before an ADR is marked as accepted, at least one other architect or senior engineer should review it. Not a committee. Not a review board. One person who understands the domain and can challenge the reasoning.

The review should check three things. Is the decision clearly stated? Does the rationale actually support the decision? Were the alternatives genuinely considered or just listed for show? A good review takes fifteen minutes. If it is taking longer, either the ADR is poorly written or the reviewer is trying to relitigate the decision, both of which are signals worth paying attention to.

Connect: link it to what it affects

An ADR that is not connected to anything is an orphan document. It should be linked to the service or system it affects. If your organisation tracks capabilities, link it to the relevant capability. If a business case funded the work that triggered the decision, link to that too.

These connections are what make ADRs findable and useful over time. When someone is reviewing the payments service two years from now, they should be able to see every architecture decision that shaped it. When an EA principle is updated, you should be able to identify which existing ADRs might be affected.

An ADR with no connections to services, capabilities, or principles is just a diary entry. Useful to the author, invisible to everyone else.

Scaling up: when you are ready for more

Once ADR adoption is established in one team and the minimum process is working, you can start adding governance layers. The key word is "once" — do not add these on day one.

Review boards

A formal architecture review board becomes useful when you have multiple teams making architecture decisions that affect each other. The review board's job is not to approve every ADR — that creates a bottleneck that kills adoption. Its job is to review decisions that have cross-cutting impact: technology standardisation choices, shared platform decisions, security architecture patterns, and decisions that create precedent.

The threshold should be clear. "Decisions that affect more than one team" or "decisions that involve technology not already on the approved list" are reasonable triggers. "All architecture decisions" is not — that turns the review board into a bureaucratic chokepoint.
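A clear threshold has the useful property that it can be written down as a predicate. Here is a sketch in Python; the `Adr` fields and the approved-technology set are illustrative assumptions, not part of any real tool:

```python
from dataclasses import dataclass, field

# Illustrative approved list -- in practice this would come from your
# technology standards catalogue.
APPROVED_TECHNOLOGIES = {"postgresql", "kafka", "terraform"}

@dataclass
class Adr:
    title: str
    teams_affected: list = field(default_factory=list)
    technologies: list = field(default_factory=list)

def needs_board_review(adr: Adr) -> bool:
    """Escalate only cross-cutting decisions: more than one team affected,
    or technology not yet on the approved list."""
    crosses_teams = len(adr.teams_affected) > 1
    off_list = any(t.lower() not in APPROVED_TECHNOLOGIES
                   for t in adr.technologies)
    return crosses_teams or off_list

# A single-team decision on approved technology stays with peer review.
local = Adr("Use PostgreSQL for payments", ["payments"], ["postgresql"])
shared = Adr("Adopt a shared event bus", ["payments", "orders"], ["kafka"])
print(needs_board_review(local))   # False
print(needs_board_review(shared))  # True
```

The point of encoding the trigger is that nobody has to guess, or negotiate, whether a given decision goes to the board.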

Governance gates

For organisations with formal project governance, ADRs can be integrated into existing stage gates. Before a project moves from design to build, the relevant ADRs should be in accepted status. This is not about adding a new gate — it is about making architecture decisions a visible input to an existing one.

I have seen this work well at two financial services organisations. Their existing project governance required a "design complete" checkpoint. Adding "relevant ADRs accepted" as a criterion for that checkpoint was a natural fit. It did not slow anything down because the ADRs had already been written during the design phase. It just ensured they had been written.

The seven-status lifecycle

A full ADR lifecycle typically includes: Draft, Proposed, In Review, Accepted, Implemented, Superseded, and Deprecated. Each status has governance meaning. A Draft ADR is work in progress. A Proposed ADR is ready for review. An Accepted ADR represents a formal architectural commitment. An Implemented ADR has been realised in the codebase or infrastructure. A Superseded ADR has been replaced by a newer decision. A Deprecated ADR is no longer applicable.

This lifecycle is valuable for mature organisations that need to track the status of their architectural commitments. It is overkill for a team that is writing its first ADRs. Start with three statuses — Draft, Accepted, Superseded — and add the others when you have enough ADRs that the intermediate states become meaningful.
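The reduced three-status lifecycle is small enough to enforce with a transition table. This is one possible encoding, not a prescribed implementation:

```python
# Allowed transitions for the starter lifecycle:
# Draft -> Accepted -> Superseded. Extend this table with Proposed,
# In Review, Implemented and Deprecated once the intermediate states
# become meaningful for your volume of ADRs.
ALLOWED_TRANSITIONS = {
    "Draft": {"Accepted"},
    "Accepted": {"Superseded"},
    "Superseded": set(),  # terminal: write a new ADR instead of reviving one
}

def transition(current: str, new: str) -> str:
    """Validate a status change against the lifecycle table."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move an ADR from {current} to {new}")
    return new

status = transition("Draft", "Accepted")   # fine
# transition("Draft", "Superseded")        # raises: must be accepted first
```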

Getting architects on board

Here is something that might surprise you: most architects want ADR governance. They have been frustrated by the same problems it solves — re-debating settled decisions, explaining choices to people who were not in the room, watching technical debt accumulate because nobody recorded the constraints that led to a compromise.

The resistance is almost never to the idea. It is to the process. When architects push back on ADR adoption, they are usually pushing back on one of three things:

  • The template is too heavy. If writing an ADR takes longer than making the decision, the template needs simplifying. An ADR should take twenty to thirty minutes to write. If it regularly takes more than an hour, the bar is too high.
  • The review process is too slow. If ADRs sit in a review queue for two weeks, architects will stop writing them. Reviews should be completed within a few working days. If your review board meets monthly, that cadence is too slow for ADR governance.
  • The tooling is wrong. Architects will not write ADRs in a system they do not use for anything else. If your ADRs live in a procurement tool, a GRC platform, or a SharePoint library that nobody voluntarily opens, adoption will be nil. The tooling needs to be somewhere architects already work, or it needs to be lightweight enough that the additional friction is negligible.

Address these three objections honestly and most architects will adopt ADRs willingly. They understand the value. They just need the process to respect their time.

Measuring success

The wrong metric for ADR governance success is "how many ADRs have been written." A high count of low-quality ADRs is worse than a modest count of genuinely useful ones. I have seen organisations game this metric by writing ADRs for trivial decisions — "ADR-047: Use TypeScript for the frontend" — that add no governance value and dilute the signal.

The right metric is this: can your organisation explain why its systems are built the way they are?

Test it. Pick a significant system. Ask the team to walk you through the key architecture decisions that shaped it. If they can point to ADRs that document those decisions, with rationale and alternatives, your ADR governance is working. If they shrug and say "it was before my time" or "I think it was a Slack conversation," it is not.

Other useful indicators:

  • Onboarding time for new architects. Can a new team member understand the architectural landscape by reading the ADRs? If onboarding still requires extensive verbal history, the ADRs are not capturing enough context.
  • Decision re-litigation frequency. Are settled decisions being re-debated? If the same technology choice keeps coming up in meetings despite having been decided and documented, either the ADR is not being referenced or the decision was not well-communicated.
  • Audit evidence quality. When an auditor asks about technology governance, can you produce structured evidence of how decisions are made? If the answer involves reconstructing Confluence pages and email threads, your ADR process is not delivering its compliance value.
  • ADR currency. What percentage of your ADRs have a status other than Draft? A large backlog of Draft ADRs suggests the review process is broken. A healthy mix of Accepted and Superseded ADRs suggests decisions are being both captured and maintained over time.
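The currency indicator reduces to a one-pass count. A quick sketch in Python, assuming each ADR record carries a `status` field:

```python
from collections import Counter

def adr_currency(statuses: list[str]) -> float:
    """Percentage of ADRs that have progressed beyond Draft.
    A low figure suggests the review process is the bottleneck."""
    if not statuses:
        return 0.0
    counts = Counter(statuses)  # missing keys count as zero
    beyond_draft = len(statuses) - counts["Draft"]
    return 100.0 * beyond_draft / len(statuses)

statuses = ["Accepted", "Accepted", "Draft", "Superseded", "Draft"]
print(f"{adr_currency(statuses):.0f}% of ADRs are beyond Draft")  # 60%
```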

Ask that question regularly. When the only way to answer it is archaeology through chat logs, email threads, and the memories of people who have left, your governance is not working.

The connection to broader architecture governance

ADRs do not exist in a vacuum. They are one component of a broader architecture governance framework that includes EA principles, technology standards, service catalogues, and capability maps.

EA principles define the guardrails. ADRs document specific decisions made within those guardrails. When an ADR deviates from an established principle, that deviation should be explicit and justified — not hidden in a rationale that conveniently omits the relevant principle. Linking ADRs to principles creates accountability: either the decision aligns with the principle, or it represents a conscious, documented exception.

The service catalogue provides the connection to implementation. An ADR that says "use event-driven architecture for inter-service communication" should be linked to the services that adopted the pattern and the services that have not yet migrated. That mapping turns the ADR from an abstract statement into a trackable architectural commitment.

For organisations pursuing compliance certifications, the combination of ADRs, principles, and service mappings provides robust evidence of technology governance. SOC 2 auditors want to see that technology decisions follow a defined process. ISO 27001 requires documented information about information security decisions. ADR governance, done properly, satisfies both requirements as a byproduct of how the architecture team already works.

Practical first steps

If you are reading this and thinking about introducing ADR governance, here is what I would do, based on the mistakes I have made and watched others make.

  1. Pick one team. Not the whole organisation. One team that has recently been frustrated by undocumented decisions. They will be your most willing adopters and your best source of feedback.
  2. Use the lightest possible template. Title, decision, rationale, alternatives considered. Four fields. You can add more later.
  3. Require one reviewer, not a committee. The reviewer should respond within three working days. If they do not, the ADR is auto-accepted. This prevents the review queue from becoming a bottleneck.
  4. Connect ADRs to at least one other entity. The service they affect is the obvious starting point. Even a simple tag is better than nothing.
  5. Get a sponsor who will hold the line for six months. Someone with authority who will ask "where is the ADR for this?" in design reviews and architecture forums. Behaviour change requires consistent reinforcement.
  6. Celebrate the first win. When an ADR saves the team from re-debating a decision, or helps a new joiner get up to speed, or provides audit evidence without reconstruction — make that visible. Concrete value stories drive adoption far more than process mandates.
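The auto-accept rule in step 3 is easy to automate. A sketch, assuming each ADR records when it was submitted for review; working days are simplified to calendar days here:

```python
from datetime import date, timedelta

REVIEW_WINDOW_DAYS = 3  # simplification: calendar days, not working days

def review_status(submitted: date, reviewed: bool, today: date) -> str:
    """Auto-accept an ADR whose reviewer has not responded within the
    window, so the review queue can never become a bottleneck."""
    if reviewed:
        return "Accepted"
    if today - submitted > timedelta(days=REVIEW_WINDOW_DAYS):
        return "Accepted (auto: review window expired)"
    return "Awaiting review"

print(review_status(date(2024, 3, 1), False, date(2024, 3, 2)))  # Awaiting review
print(review_status(date(2024, 3, 1), False, date(2024, 3, 8)))  # auto-accepted
```

Whether auto-acceptance is acceptable in your organisation is a policy choice; the mechanism matters less than the guarantee that an unanswered review cannot block an ADR indefinitely.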

ADR governance is not complicated. The concept is simple, the format is well-established, and the value is clear. The challenge is entirely in the implementation: keeping it light enough to sustain, connected enough to be useful, and sponsored enough to survive the first six months. Get those three things right and the rest follows.