Chapter 10 of 14

Module 10: Designing a DORA Implementation Roadmap

Translate regulatory requirements into a structured implementation program, including gap analysis, prioritization, and integration with existing risk and compliance initiatives.

15 min read

Module 10 Overview: From DORA Text to Executable Roadmap

Where we are in the DORA journey

By today (mid-December 2025), Regulation (EU) 2022/2554 on Digital Operational Resilience for the Financial Sector (DORA) has already applied for almost a year (since 17 January 2025). Supervisors are moving from implementation guidance to evidence-based supervision.

This module focuses on how to design and run a DORA implementation roadmap that:

  • Translates legal articles and RTS/ITS into concrete workstreams
  • Integrates with existing cyber, operational risk, NIS2, and cloud programs
  • Supports continuous compliance, not a one-off project

We assume you already understand DORA’s five pillars (ICT risk management, incident reporting, resilience testing, third-party risk, information sharing) from previous modules.

Learning objectives

By the end of this 15-minute module, you should be able to:

  1. Construct a practical roadmap to achieve and maintain DORA compliance, starting from a structured gap assessment.
  2. Embed DORA into broader operational resilience and cyber programs, avoiding duplication with NIS2, cloud migration, and security transformation.
  3. Define governance structures, roles, and KPIs to track and steer DORA implementation.

We will walk through a 10-step, practitioner-style method that mirrors how large EU banks and critical ICT providers are managing DORA today.

Step 1 – Clarify Scope: What Exactly Is In Your DORA Universe?

Before any roadmap, you must define scope precisely. Under DORA, scope is not just the legal entity; it cuts across:

  1. Entities in scope (Regulation (EU) 2022/2554, Art. 2)
  • Banks, investment firms, insurers, payment institutions, crypto-asset service providers, etc.
  • ICT third-party service providers (ICT TPPs) are indirectly in scope via contractual and oversight requirements, and critical ICT TPPs are directly overseen at EU level.
  2. Business services and processes
  • Identify critical or important functions (CIFs) as defined in DORA and related RTS.
  • Map business services → supporting processes → ICT assets (applications, infrastructure, data, people, vendors).
  3. Regulatory perimeter and overlaps
  • Map DORA overlap with:
    • NIS2 (Directive (EU) 2022/2555), now transposed in Member States, especially for essential/important entities.
    • EBA/EIOPA/ESMA guidelines on ICT and security risk (some now effectively subsumed by DORA, but still relevant as interpretative background).
    • Existing operational resilience frameworks (e.g., UK PRA/FCA for UK entities, Basel operational risk frameworks).

Practical scoping checklist (condensed)

  • List all regulated entities in your group and mark: EU, EEA, third-country.
  • For each, list top 10–20 critical business services (payments, trading, policy administration, etc.).
  • For each service, identify supporting ICT assets and ICT third parties.
  • Tag each asset with whether it is already in scope of other regimes (NIS2, GDPR, etc.).

This scoping step is crucial because it defines the universe for your gap assessment and prevents under- or over-scoping.
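The checklist above is easier to maintain if the scoping inventory is captured as structured data from day one, since the same records later feed the gap assessment and third-party register. A minimal sketch in Python (the entity, asset, and provider names are invented, and the fields are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class CriticalService:
    name: str
    entity: str                      # legal entity operating the service
    jurisdictions: list[str]         # e.g. ["EU", "EEA", "third-country"]
    ict_assets: list[str]            # supporting applications / infrastructure
    ict_third_parties: list[str]     # external ICT providers
    other_regimes: list[str] = field(default_factory=list)  # e.g. ["NIS2", "GDPR"]

# One example entry for a payments service (illustrative values)
payments = CriticalService(
    name="SEPA payments processing",
    entity="Bank EU AG",
    jurisdictions=["EU"],
    ict_assets=["payments-engine", "core-banking-db"],
    ict_third_parties=["CloudProviderX"],
    other_regimes=["NIS2"],
)

# Flag services that also fall under other regulatory regimes
overlapping = [s.name for s in [payments] if s.other_regimes]
# overlapping -> ["SEPA payments processing"]
```

Even a flat list like this answers the scoping questions above: which services are in scope, which assets and vendors support them, and where regimes overlap.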

Interactive – Scoping Thought Exercise

Imagine you are the DORA implementation lead for a mid-sized EU bank that:

  • Operates in 3 EU Member States
  • Has a large cloud-based core banking platform provided by a hyperscaler
  • Is already classified as an essential entity under NIS2 in one Member State

Task (5 minutes, on paper or screen):

  1. List 3–5 critical business services that are clearly in scope for DORA.
  2. For each, identify at least 2 ICT components and 1 ICT third-party provider.
  3. Mark where you see regulatory overlap with NIS2 or other frameworks.

Reflect:

  • Which services are obviously in scope?
  • Which ones are borderline, and what criteria would you use to decide?

You do not need to write a full answer here, but make your own quick list before moving on. This mirrors the scoping workshops that real institutions ran in the months leading up to January 2025.

Step 2 – Structure Your DORA Gap Assessment Across All Pillars

A DORA gap assessment compares your current state to regulatory expectations. To be actionable, it should be:

  • Pillar-based (aligned to DORA structure)
  • Control-level (not just high-level statements)
  • Evidence-backed (policies, logs, reports, contracts)

2.1 Pillar-based structure

Organize your assessment at least along these axes:

  1. ICT risk management (Arts. 5–15)
  • Governance, roles, ICT risk taxonomy, policies, asset management, change management, backup & recovery, logging & monitoring, etc.
  2. ICT-related incident reporting (Arts. 17–23)
  • Detection thresholds, classification, reporting workflows, templates, timelines, EBA/ESMA/EIOPA technical standards.
  3. Digital operational resilience testing (Arts. 24–27)
  • Testing strategy, coverage of critical functions, TLPT (threat-led penetration testing) for significant entities, remediation tracking.
  4. ICT third-party risk management (Arts. 28–44)
  • Outsourcing framework, register of ICT TPPs, contractual clauses, concentration risk, exit strategies, oversight of critical ICT TPPs.
  5. Information and intelligence sharing (Art. 45)
  • Policies and participation in information sharing arrangements, safeguards, alignment with Module 8 content.

2.2 Scoring and evidence

For each requirement or control, assign:

  • Maturity rating (e.g., 1–5: Non-existent → Optimized)
  • Compliance rating (Compliant / Partially compliant / Non-compliant / Not applicable)
  • Evidence references (policy IDs, ticket IDs, log screenshots, contracts)

This allows you to later prioritize high-risk, low-maturity gaps.
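As a sketch, each assessed requirement can be held as a small record combining maturity, compliance, and evidence references, so the "high-risk, low-maturity" shortlist falls out of a simple filter (the control IDs, thresholds, and field names below are illustrative, not part of any standard):

```python
# Gap-assessment records: one per requirement/control (illustrative data)
gaps = [
    {"id": "ICT-RM-03", "pillar": "ict_risk_management",
     "maturity": 2, "compliance": "partially_compliant",
     "evidence": ["POL-112", "TICKET-4821"]},
    {"id": "IR-01", "pillar": "incident_reporting",
     "maturity": 1, "compliance": "non_compliant",
     "evidence": []},
    {"id": "TPR-05", "pillar": "third_party_risk",
     "maturity": 4, "compliance": "compliant",
     "evidence": ["CONTRACT-77"]},
]

# Shortlist controls that are not fully compliant AND low maturity (<= 2)
shortlist = [
    g["id"] for g in gaps
    if g["compliance"] != "compliant" and g["maturity"] <= 2
]
# shortlist -> ["ICT-RM-03", "IR-01"]
```

Note that IR-01 carries no evidence references at all; in a real assessment that alone would be a red flag, independent of the scores.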

2.3 Recurring vs. one-off assessments

DORA expects continuous and recurring assessments, not just a pre-2025 project. Many institutions now:

  • Run a comprehensive gap assessment annually, and
  • Use quarterly mini-assessments for high-risk areas (e.g., critical ICT TPPs, incident reporting).

Quiz – Designing a Gap Assessment

Check your understanding of how to structure a DORA gap assessment.

Which of the following BEST describes a robust DORA gap assessment approach?

  A) Review existing policies once, mark them as compliant if they mention ICT or cyber, and store the result in a spreadsheet.
  B) Map DORA requirements to specific controls across all pillars, score maturity and compliance, and back each judgment with concrete evidence.
  C) Focus only on ICT incident reporting and third-party contracts, because those are the most visible to supervisors.

Answer: B) Map DORA requirements to specific controls across all pillars, score maturity and compliance, and back each judgment with concrete evidence.

Option B is correct because a robust DORA gap assessment must be structured across all DORA pillars, operate at the control level, and be evidence-based. Option A is superficial and not evidence-based. Option C is too narrow, ignoring major DORA areas like ICT risk management and resilience testing.

Step 3 – From Gaps to Risks: A Mini Case Study

Case study: Incident reporting gaps in a payments institution

A pan-EU payments institution completed a DORA gap assessment in early 2025 and found:

  • Finding 1: Incident classification uses only low/medium/high without criteria aligned to DORA’s major ICT-related incident definition.
  • Finding 2: No central register of ICT incidents; each country logs incidents in different tools.
  • Finding 3: No tested procedure to meet DORA reporting timelines (initial, intermediate, final reports).

Translating gaps into risk statements

Instead of just listing gaps, they formulated risk statements:

  • R1: “There is a high risk of under-reporting or misclassifying major ICT-related incidents, leading to non-compliance with DORA reporting obligations and potential supervisory sanctions.”
  • R2: “Fragmented incident logs create a medium risk of incomplete or inconsistent data, undermining trend analysis and supervisory reporting quality.”

They then:

  1. Assessed inherent risk (likelihood × impact) for each.
  2. Evaluated current controls (incident playbooks, SOC monitoring, national reporting processes).
  3. Determined residual risk and urgency.

Outcome: Incident reporting upgrades became a top-priority workstream in the DORA roadmap, with Board visibility, because of the direct regulatory and reputational impact.

This illustrates how gap assessment → risk articulation → prioritization works in practice.

Step 4 – Prioritization Logic: Risk, Regulatory Expectations, and Constraints

You cannot fix everything at once. A rigorous prioritization framework is essential.

4.1 Core prioritization dimensions

  1. Risk criticality
  • Impact on availability, integrity, confidentiality of critical or important functions.
  • Potential for systemic impact (e.g., payment system outages, large-scale data breaches).
  2. Regulatory expectations and scrutiny
  • Direct linkage to explicit DORA obligations and RTS/ITS.
  • Areas of supervisory focus (e.g., incident reporting, ICT TPP register, TLPT for significant entities).
  • Past findings or sanctions from supervisors.
  3. Dependencies and enablers
  • Does this control unlock multiple other requirements? (e.g., asset inventory supporting risk management, testing, and third-party oversight.)
  4. Resource and time constraints
  • Availability of specialist skills (e.g., red-teamers for TLPT, contract lawyers for ICT TPP clauses).
  • Lead times for technology changes (SIEM, backup architecture, identity management).

4.2 Example of a simple prioritization schema

For each gap, assign:

  • Risk score: 1–5
  • Regulatory impact: 1–5 (5 = direct, time-bound obligation with high enforcement risk)
  • Dependency weight: 1–3 (3 = prerequisite for multiple other controls)

Compute a composite score, for example:

> Priority score = (Risk × 0.5) + (Regulatory impact × 0.3) + (Dependency × 0.2)

Rank gaps by this score to define high/medium/low priority workstreams.
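The composite score is deliberately simple; in code it is a one-liner, which makes it easy to recalculate as risk scores change during the program (the weights follow the formula above, and the example inputs are illustrative):

```python
def priority_score(risk: float, regulatory_impact: float, dependency: float) -> float:
    """Composite priority using the weights from the schema above:
    Risk 0.5, Regulatory impact 0.3, Dependency 0.2."""
    return risk * 0.5 + regulatory_impact * 0.3 + dependency * 0.2

# A gap with high risk (5), a direct obligation (4), and a strong enabler role (3):
score = priority_score(5, 4, 3)   # 2.5 + 1.2 + 0.6 = 4.3
```

In practice the weights themselves are a governance decision: the SteerCo should approve them and revisit them when supervisory focus shifts.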

4.3 Edge cases

  • High regulatory impact, low operational risk (e.g., detailed documentation requirements): still high priority because of enforcement risk.
  • High operational risk, ambiguous regulatory mapping: treat as high priority under your enterprise risk appetite, even if DORA references are indirect.

This kind of structured prioritization is what Boards and supervisors increasingly expect when they review DORA programs in 2025.

Interactive – Prioritization Exercise

You have identified three major gaps:

  1. G1 – No central ICT asset inventory for critical services.
  2. G2 – Incident playbooks not aligned with DORA reporting timelines.
  3. G3 – Some legacy outsourcing contracts lack DORA-compliant ICT clauses.

Assume the following rough scores (1–5):

| Gap | Risk | Regulatory impact | Dependency |
|-----|------|-------------------|------------|
| G1  | 5    | 4                 | 3          |
| G2  | 4    | 5                 | 2          |
| G3  | 3    | 5                 | 1          |

Using the formula:

> Priority = (Risk × 0.5) + (Regulatory impact × 0.3) + (Dependency × 0.2)

Task:

  1. Compute the priority score for each gap.
  2. Rank them from 1 (highest) to 3 (lowest) priority.
  3. Reflect: would you keep this ranking, or adjust based on qualitative factors (e.g., contract renegotiation lead time, upcoming supervisory review)?

Write down your reasoning. This is exactly the type of discussion that happens in DORA steering committees.
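Once you have worked through the exercise by hand, a few lines of Python applying the same weights let you check your arithmetic (the output gives away the scores and ranking, so compute them yourself first):

```python
# Weights from the priority formula in the text
weights = {"risk": 0.5, "regulatory": 0.3, "dependency": 0.2}

# Scores from the exercise table
gaps = {
    "G1": {"risk": 5, "regulatory": 4, "dependency": 3},
    "G2": {"risk": 4, "regulatory": 5, "dependency": 2},
    "G3": {"risk": 3, "regulatory": 5, "dependency": 1},
}

# Weighted sum per gap
scores = {
    gap: sum(values[k] * weights[k] for k in weights)
    for gap, values in gaps.items()
}

# Rank from highest to lowest priority
ranking = sorted(scores, key=scores.get, reverse=True)
# scores -> G1: 4.3, G2: 3.9, G3: 3.2; ranking -> ["G1", "G2", "G3"]
```

The quantitative ranking is only the starting point for the qualitative discussion in task 3: a long contract renegotiation lead time could justify starting G3 earlier than its score suggests.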

Step 5 – Program Governance: Committees, Workstreams, Accountability

DORA implementation is a multi-year change program that must eventually become business-as-usual (BAU). Governance is central.

5.1 Typical DORA program structure

  1. Board / Management Body
  • Approves ICT risk strategy, risk appetite, and DORA roadmap.
  • Receives regular DORA status and risk reports.
  2. DORA Steering Committee (SteerCo)
  • Senior executives from ICT, Risk, Compliance, Operations, Procurement, Business lines.
  • Responsibilities:
    • Approve prioritization and funding.
    • Resolve escalated dependencies and conflicts (e.g., between cloud migration and DORA deadlines).
    • Monitor KPIs and KRIs.
  3. Workstreams / Projects (examples):
  • WS1: ICT risk management & governance
  • WS2: Incident management & reporting
  • WS3: Digital operational resilience testing (including TLPT)
  • WS4: ICT third-party risk & contracts
  • WS5: Data & tooling (asset inventory, CMDB, monitoring, reporting)
  4. Control owners and process owners
  • Each DORA requirement/control has a named owner (e.g., Head of SOC, Head of Vendor Management) responsible for implementation and ongoing effectiveness.

5.2 Governance pitfalls

  • Treating DORA as a pure IT project, ignoring risk, business, and legal.
  • Lack of clear ownership for cross-cutting requirements (e.g., impact tolerances, resilience testing strategy).
  • No linkage to enterprise risk management (ERM), leading to parallel, inconsistent risk reporting.

A strong governance model is also crucial for supervisory credibility: when competent authorities review your program (onsite inspections, thematic reviews), they expect to see clear lines of responsibility and decision-making.

Step 6 – Integrating DORA with NIS2, Cloud, and Security Transformation

By late 2025, most EU financial institutions are juggling multiple regulatory and strategic initiatives. A good roadmap avoids duplication and conflict.

6.1 DORA and NIS2

  • Many financial entities are also essential or important entities under NIS2.
  • Overlaps include:
    • Risk management and governance
    • Incident reporting (though thresholds and channels differ)
    • Supply-chain and third-party risk management
  • Integration strategies:
    • Create a single ICT/cyber risk management framework that satisfies both, with mapping tables from DORA to NIS2 requirements.
    • Align incident classification so that one incident record can feed both DORA and NIS2 reporting with minimal extra work.

6.2 DORA and cloud migration

  • Cloud programs often run in parallel with DORA.
  • Tensions:
    • Cloud adoption can increase dependency on a small number of ICT TPPs, raising concentration risk (a DORA concern).
    • DORA requires exit strategies and data portability, which must be built into cloud architectures and contracts.
  • Integration strategies:
    • Make sure cloud projects adopt DORA-compliant contract templates and risk assessments.
    • Sequence migrations so that critical services move only when DORA controls (monitoring, incident reporting, exit plans) are in place.

6.3 DORA and security transformation

  • Many institutions have multi-year security transformation programs (e.g., zero trust, SOC modernization).
  • DORA can be used as a justification and structuring framework for these investments.
  • Avoid creating two parallel control catalogs; instead, maintain one integrated control framework mapped to DORA, NIS2, ISO 27001, etc.

In your roadmap, always mark for each initiative:

  • Which other programs it depends on or enables.
  • Whether there are conflicting timelines or resource needs.

This is vital to avoid “DORA-compliance theater” that looks good on paper but is not aligned with how the organization is actually changing.

Step 7 – Representing a DORA Roadmap in a Machine-Readable Way

You can model a DORA roadmap in a structured format (e.g., JSON) to feed dashboards or GRC tools.

Below is a simplified example of how a DORA implementation backlog might be represented in JSON. This is not a standard, but it illustrates how to structure work items, owners, dependencies, and metrics.

```json
{
  "dora_program": {
    "version": "2025.4",
    "workstreams": [
      {
        "id": "WS2",
        "name": "Incident Management & Reporting",
        "owner": "Head of SOC",
        "items": [
          {
            "id": "WS2-01",
            "title": "Align incident classification with DORA major incident criteria",
            "pillar": "incident_reporting",
            "priority": "high",
            "risk_score": 4.5,
            "regulatory_refs": ["Art. 17", "Art. 19"],
            "dependencies": ["WS5-02"],
            "planned_start": "2025-09-01",
            "planned_end": "2025-11-30",
            "status": "in_progress",
            "kpis": {
              "playbooks_updated_pct": 60,
              "staff_trained_pct": 30
            }
          },
          {
            "id": "WS2-02",
            "title": "Implement DORA/NIS2 incident reporting workflow integration",
            "pillar": "incident_reporting",
            "priority": "medium",
            "risk_score": 3.8,
            "regulatory_refs": ["Art. 19", "NIS2-23"],
            "dependencies": ["WS2-01", "WS5-01"],
            "planned_start": "2025-12-01",
            "planned_end": "2026-03-31",
            "status": "planned",
            "kpis": {
              "automation_level": "partial",
              "avg_reporting_time_hours": 36
            }
          }
        ]
      }
    ]
  }
}
```

In practice, a bank or insurer might store similar structures in a GRC platform or a project portfolio tool, then generate dashboards for the DORA SteerCo.
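A structure like this is also easy to query programmatically. The sketch below parses a trimmed-down fragment of such a backlog and computes one typical SteerCo figure, the number of open high-priority items (the fragment, field values, and metric are illustrative):

```python
import json

# `backlog` would normally be loaded from a file or a GRC export;
# here we inline a minimal fragment matching the structure above.
backlog = json.loads("""
{
  "dora_program": {
    "workstreams": [
      {"id": "WS2", "items": [
        {"id": "WS2-01", "priority": "high", "status": "in_progress"},
        {"id": "WS2-02", "priority": "medium", "status": "planned"}
      ]}
    ]
  }
}
""")

# Count high-priority items that are not yet done, across all workstreams
open_high = sum(
    1
    for ws in backlog["dora_program"]["workstreams"]
    for item in ws["items"]
    if item["priority"] == "high" and item["status"] != "done"
)
# open_high -> 1
```

The same traversal pattern extends naturally to other dashboard figures, such as items past their `planned_end` date or items blocked by unresolved dependencies.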

Step 8 – Defining Milestones, KPIs, and KRIs for DORA

A roadmap is only useful if you can measure progress and risk.

8.1 Milestones

Milestones should be:

  • Outcome-based, not just activity-based.
  • Linked to specific DORA requirements.

Examples:

  • By Q1 2026: 100% of critical or important functions mapped to ICT assets with owners and RTO/RPO defined.
  • By Q2 2026: DORA-compliant major incident reporting process tested end-to-end with at least one simulation.
  • By Q4 2026: First full TLPT exercise completed for at least one critical business service (for entities in scope of TLPT).

8.2 KPIs (Key Performance Indicators)

KPIs track implementation and operational performance. Examples:

  • Coverage KPIs
    • % of critical services with up-to-date business impact analysis and ICT asset mapping.
    • % of ICT TPP contracts updated with DORA clauses.
  • Process KPIs
    • Average time from incident detection to internal escalation.
    • % of incidents classified within defined time thresholds.
  • Testing KPIs
    • % of critical services covered by resilience testing in the last 12 months.
    • % of high-severity test findings remediated within agreed timelines.

8.3 KRIs (Key Risk Indicators)

KRIs track residual risk and early warning signals.

Examples:

  • Number of critical ICT incidents per quarter affecting CIFs.
  • % of critical services exceeding impact tolerances during incidents or tests.
  • Concentration metrics, e.g., % of critical services relying on a single cloud provider.

Supervisors increasingly expect quantitative evidence that your DORA program is not only implemented, but also effective in reducing operational risk.
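Concentration KRIs like the last example are straightforward to compute once the service-to-provider mapping from Step 1 exists. A minimal sketch (the service and provider names are invented):

```python
from collections import Counter

# Each critical service mapped to its primary cloud/ICT provider (illustrative)
service_providers = {
    "payments": "CloudProviderX",
    "trading": "CloudProviderX",
    "policy_admin": "CloudProviderY",
    "customer_portal": "CloudProviderX",
}

# Find the provider supporting the most critical services
counts = Counter(service_providers.values())
provider, n = counts.most_common(1)[0]
concentration_pct = 100 * n / len(service_providers)
# CloudProviderX supports 3 of 4 critical services -> 75.0% concentration
```

A SteerCo would track this percentage over time and against a threshold in the risk appetite statement, since a rising value signals growing dependence on a single ICT TPP.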

Quiz – Milestones and Metrics

Test your understanding of DORA metrics.

Which of the following is the BEST example of a DORA-aligned KPI rather than a KRI?

  A) "Number of critical ICT incidents per quarter affecting critical services."
  B) "Percentage of ICT third-party contracts for critical services that include DORA-compliant clauses."
  C) "Probability of a systemic outage due to cloud provider failure in the next 12 months."

Answer: B) "Percentage of ICT third-party contracts for critical services that include DORA-compliant clauses."

Option B is a KPI because it measures implementation performance (contract updates). Option A is more of a KRI reflecting realized risk. Option C is a risk assessment estimate, not an operational performance metric.

Step 9 – Flashcard Review of Key Roadmap Concepts

Flip these cards (mentally or on paper) to reinforce key terms and ideas from this module.

DORA gap assessment
A structured, evidence-based comparison of an institution's current ICT risk management, incident reporting, testing, third-party risk, and information-sharing practices against DORA requirements, typically scored by maturity and compliance.
Critical or important function (CIF)
A function whose disruption would materially impair the financial institution's services, financial performance, or regulatory obligations; used by DORA to scope ICT risk, testing, and third-party oversight.
Prioritization dimensions under DORA
Risk criticality, regulatory impact, dependency/enabler role, and resource/time constraints; used together to rank remediation actions into high/medium/low priority.
DORA Steering Committee (SteerCo)
A senior cross-functional body (ICT, Risk, Compliance, Operations, Business) that oversees DORA implementation, approves priorities, allocates resources, and monitors KPIs/KRIs.
Integration with NIS2
Aligning ICT/cyber risk management, incident classification, and reporting so that one set of controls and records can satisfy both DORA and NIS2, minimizing duplication while respecting differing thresholds and channels.
Milestone vs KPI vs KRI
A milestone is a time-bound outcome (e.g., 100% of critical services mapped by Q2). A KPI measures implementation or process performance (e.g., % of updated contracts). A KRI signals residual risk (e.g., number of critical incidents per quarter).

Step 10 – Putting It All Together: Your Mini DORA Roadmap

To consolidate this module, sketch a mini DORA roadmap for a hypothetical institution (or one you know) using these steps:

  1. Scope: Define 3–5 critical services and their key ICT dependencies.
  2. Assess: Identify at least 5 concrete gaps across different DORA pillars.
  3. Prioritize: Score each gap on risk, regulatory impact, and dependency; rank them.
  4. Govern: Propose a simple governance model: who is the SteerCo, what workstreams exist, who owns which controls.
  5. Integrate: Note at least 2 dependencies with other initiatives (NIS2, cloud, security transformation).
  6. Measure: Define 3 milestones, 3 KPIs, and 2 KRIs to track progress and residual risk.

If you can do this coherently, you are already operating at the level of a junior DORA program manager or analyst. In a real organization, the same logic scales up to hundreds of controls and dozens of projects, but the core thinking process is identical.

You can reuse this framework for exam answers, case studies, or internship projects related to DORA and digital operational resilience.

Key Terms

NIS2
Directive (EU) 2022/2555 on measures for a high common level of cybersecurity across the Union, applying to essential and important entities in various sectors, including many financial entities.
Gap assessment
A structured analysis comparing current practices and controls against regulatory or standard requirements to identify deficiencies, overlaps, and improvement opportunities.
Impact tolerance
A defined level of disruption (in terms of duration, data loss, or service degradation) that an organization is willing to accept for a given service before it threatens its objectives or regulatory obligations.
KRI (Key Risk Indicator)
A metric providing early warning about the level of risk exposure, such as the number of critical incidents or the proportion of services exceeding impact tolerances.
Steering committee (SteerCo)
A governance body composed of senior stakeholders that directs and oversees a program, makes prioritization and funding decisions, and monitors progress and risks.
KPI (Key Performance Indicator)
A metric used to measure performance or progress of processes or implementation activities, such as the percentage of updated contracts or average incident response time.
ICT third-party provider (ICT TPP)
An external provider delivering ICT services (e.g., cloud, data centers, software) to financial entities; subject to specific DORA requirements for risk management and contractual clauses.
Critical or important function (CIF)
A function whose disruption would materially impair the financial institution’s services, financial performance, or regulatory compliance; central to scoping under DORA.
Threat-led penetration testing (TLPT)
Advanced red-teaming tests simulating real-life threat actors against live production systems supporting critical services, required under DORA for certain significant entities.
DORA (Digital Operational Resilience Act)
Regulation (EU) 2022/2554 on digital operational resilience for the financial sector, applicable since 17 January 2025, setting harmonized requirements for ICT risk management, incident reporting, resilience testing, third-party risk, and information sharing.