Chapter 7 of 12
Module 7: Incident and Vulnerability Reporting to ENISA and CSIRTs
Explains the early reporting obligations, three-stage reporting framework, and strict timelines for notifying ENISA and national CSIRTs about exploited vulnerabilities and severe incidents.
1. Why reporting to ENISA and CSIRTs matters (CRA + NIS2 context)
In the EU, cyber incident and vulnerability reporting is now shaped mainly by two instruments:
- Cyber Resilience Act (CRA), Regulation (EU) 2024/2847 – focuses on products with digital elements (PDEs) and their manufacturers, importers, and distributors.
- NIS2 Directive (EU) 2022/2555 – focuses on essential and important entities in key sectors (energy, health, digital infrastructure, etc.).
ENISA (the EU Agency for Cybersecurity) and national CSIRTs (Computer Security Incident Response Teams) play a central role in:
- Receiving early warnings and incident notifications.
- Coordinating technical response and information sharing.
- Supporting cross‑border incident management.
This module focuses on the CRA-style three‑stage reporting framework (24h, 72h, 14 days / final report), and how it aligns with NIS2 incident reporting and your internal incident response process.
> Think of it as three layers that must fit together:
> 1. Internal IR (your SOC / IR team)
> 2. CRA reporting (for exploited vulnerabilities and severe incidents in your products)
> 3. NIS2 reporting (for your organisation as an operator in a critical sector, if applicable)
2. Key definitions: actively exploited vulnerability & severe incident
Before you can report, you must know what triggers reporting.
Actively exploited vulnerability
A vulnerability in a product with digital elements (PDE) is actively exploited when:
- There is evidence of real-world exploitation, not just a theoretical weakness.
- Exploitation is performed by an attacker (human or automated) against deployed products.
- Evidence may come from:
  - Intrusion detection / logs (e.g., repeated payloads against a known CVE).
  - Customer incident reports.
  - Threat intelligence feeds.
> Example (actively exploited): Your web gateway appliance has a remote code execution flaw. You observe multiple successful exploitation attempts in customer logs, with attackers deploying web shells.
Severe incident (in the CRA sense)
A severe incident related to a PDE typically means an event that:
- Compromises confidentiality, integrity, or availability of the product or its data, and
- Has or is likely to have significant impact on:
  - Safety of persons, or
  - Provision of critical services, or
  - Large numbers of users or high-value data.
> Example (severe incident): A vulnerability in a connected medical device is exploited, causing loss of monitoring in multiple hospitals. Patient safety is at risk and clinical workflows are disrupted.
In practice, organisations often use a severity matrix (impact × likelihood) to decide if an incident is severe and thus reportable.
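Such a matrix can be encoded in a few lines. A minimal sketch, assuming illustrative three-level scales and a reporting threshold of 6 (these values are examples, not drawn from any regulation; tune them to your own risk policy):

```python
# Illustrative impact x likelihood severity matrix.
# Scales and threshold are assumptions, not regulatory values.
IMPACT = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}

def severity_score(impact: str, likelihood: str) -> int:
    """Multiply the two ratings to get a 1-9 severity score."""
    return IMPACT[impact] * LIKELIHOOD[likelihood]

def is_reportable(impact: str, likelihood: str, threshold: int = 6) -> bool:
    """Treat the incident as 'severe' (a reporting candidate)
    when the score meets the threshold."""
    return severity_score(impact, likelihood) >= threshold

print(is_reportable("high", "likely"))   # 3 * 3 = 9 -> True
print(is_reportable("low", "possible"))  # 1 * 2 = 2 -> False
```

The point of encoding it is consistency: two analysts looking at the same incident should reach the same "reportable or not" decision.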
3. The three‑stage reporting framework: 24h, 72h, 14 days/final
Under the CRA-aligned approach (and consistent with NIS2), reporting follows three main stages:
- Early Warning – within 24 hours
  - Goal: Alert authorities quickly that something serious is happening.
  - You do not need all details.
  - Focus: "We have a suspected severe incident / actively exploited vulnerability".
- Incident / Vulnerability Notification – within 72 hours
  - Goal: Provide an initial structured report.
  - Includes: basic technical details, scope, first containment actions.
  - Allows ENISA and national CSIRTs to assess cross‑border impact.
- Final / Follow‑up Report – within 14 days (or as soon as investigations allow)
  - Goal: Provide full analysis and lessons learned.
  - Includes: root cause, detailed timeline, impact assessment, remediation and patches, communication to customers.
> Important: Exact timing and formats are set in implementing acts and national transposition of NIS2. Always check your national CSIRT guidance and ENISA’s latest templates.
In many Member States, the same national reporting portal or CSIRT platform is used for both NIS2 and CRA-related notifications, with ENISA receiving aggregated/forwarded information.
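The three deadlines above can be sketched as a simple calculator. This is a simplified model that starts every clock at the moment of awareness, as this module does; the exact clocks and formats come from implementing acts and national guidance:

```python
from datetime import datetime, timedelta

def reporting_deadlines(awareness: datetime) -> dict:
    """Compute the three CRA-style reporting deadlines from the moment
    of becoming aware of a severe incident or actively exploited
    vulnerability. Simplified sketch: real deadlines depend on
    implementing acts and national transposition."""
    return {
        "early_warning": awareness + timedelta(hours=24),
        "initial_notification": awareness + timedelta(hours=72),
        "final_report": awareness + timedelta(days=14),
    }

# Example: detection on Monday 2025-03-03 at 10:00.
detected = datetime(2025, 3, 3, 10, 0)
for stage, due in reporting_deadlines(detected).items():
    print(f"{stage}: due {due:%Y-%m-%d %H:%M}")
```

In practice you would wire this into your incident ticketing so that each stage raises an internal reminder well before the legal deadline.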
4. Walk‑through example: from detection to final report
Imagine you are a manufacturer of an industrial router used in energy and transport.
#### T0 – Detection (Monday 09:00)
Your monitoring team notices:
- Unusual outbound connections from multiple customer routers.
- A new process running with elevated privileges.
They suspect exploitation of an unknown vulnerability (possible 0‑day).
#### Within 24 hours – Early Warning
By Tuesday 09:00 you must:
- Confirm that this may be a severe incident or actively exploited vulnerability.
- Submit an early warning via the national CSIRT / ENISA-linked portal.
Typical content:
- Short description: "Suspected 0-day RCE in industrial router model X. Observed exploitation in at least 3 EU Member States."
- Affected product models and firmware versions.
- Very rough estimate of affected installations.
- First containment measures (e.g., recommended firewall rules).
#### Within 72 hours – Initial Notification
By Thursday 09:00 you submit a more complete report:
- Provisional technical description of the vulnerability (e.g., auth bypass in web management interface).
- Known indicators of compromise (IoCs).
- Updated scope: number of customers / countries.
- Status of investigation and patch development.
#### Within 14 days – Final / Detailed Report
By the following Monday (two weeks after detection), you provide a final report:
- Root cause analysis (e.g., input validation bug in legacy module).
- CVSS score and assigned CVE ID (if available).
- Detailed timeline of detection, escalation, and response.
- Final patch / mitigation details and deployment status.
- Communication measures: customer advisories, documentation updates.
ENISA and CSIRTs may use this information to:
- Warn other operators in critical sectors.
- Update EU‑wide threat landscape.
- Coordinate if multiple vendors are affected (e.g., shared open‑source component).
5. Map the three stages to your internal incident response
Use this as a thought exercise. Imagine your organisation already has an internal Incident Response (IR) playbook.
Task
For each CRA reporting stage, decide which internal step must trigger it:
- Early Warning (24h)
- At what internal point do you say: "This could be a severe incident / actively exploited vulnerability"?
- Who has authority to approve sending the early warning? (e.g., CISO, Incident Manager)
- Initial Notification (72h)
- Which IR milestone corresponds to "We have enough structured data to file a formal notification"?
- Which team gathers the required information? (SOC, product security, legal, PR?)
- Final Report (14 days)
- At what point is your post‑incident review or lessons‑learned meeting held?
- How do you ensure the final report to ENISA/CSIRTs matches your internal RCA (root cause analysis) and remediation plan?
Write it down (short outline)
Create a quick mapping table (mentally or on paper):
| CRA Stage | Internal IR Milestone | Responsible Role(s) |
|---------------------|----------------------------------------------|----------------------------|
| 24h Early Warning | e.g., Incident classified as High/Critical | Incident Manager, CISO |
| 72h Notification | e.g., Triage complete, initial RCA drafted | SOC Lead, Product Security |
| 14d Final Report | e.g., Post‑incident review completed | IR Lead, Engineering Lead |
This mapping is what you would later formalise in policy.
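One way to formalise it is to encode the mapping as data, so it can later be embedded in IR tooling or checked against your playbook. A sketch using the example milestones and roles from the table above (all illustrative):

```python
# CRA-stage -> internal-milestone mapping, encoded as data.
# Milestones and roles are the example values from the table,
# not prescriptions.
STAGE_MAPPING = {
    "24h_early_warning": {
        "internal_milestone": "Incident classified as High/Critical",
        "responsible": ["Incident Manager", "CISO"],
    },
    "72h_notification": {
        "internal_milestone": "Triage complete, initial RCA drafted",
        "responsible": ["SOC Lead", "Product Security"],
    },
    "14d_final_report": {
        "internal_milestone": "Post-incident review completed",
        "responsible": ["IR Lead", "Engineering Lead"],
    },
}

def responsible_for(stage: str) -> list:
    """Look up who must approve/prepare a given reporting stage."""
    return STAGE_MAPPING[stage]["responsible"]

print(responsible_for("72h_notification"))
```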
6. What to include in each report: practical checklists
When you actually file reports, you’ll typically use online forms provided by your national CSIRT or a central EU platform that forwards to ENISA.
Below is a practical checklist for each stage (adapt to national templates):
24h Early Warning – Minimum content
- Contact details of reporting entity and incident lead.
- Product identification: name, model, version, affected component.
- Nature of event: suspected exploited vulnerability or incident.
- Preliminary impact: sectors, Member States, estimated number of affected systems.
- Initial actions: temporary mitigations, internal escalation.
- Need for urgent coordination? (e.g., risk to life, critical infrastructure)
72h Initial Notification – Structured technical and impact data
- All early warning data, plus:
- Technical description of vulnerability/incident:
- Attack vector, preconditions, privileges required.
- Known IoCs (malicious IPs, file hashes, URLs, etc.).
- Impact analysis:
- Data affected (personal data, operational data, safety-critical data).
- Service disruption (duration, geographic spread).
- Mitigation status:
- Temporary workarounds.
- Expected timeline for patch or permanent fix.
- Links to public advisories (if already published or planned).
14‑day Final / Follow‑up Report – Full picture
- Updated and corrected information from previous reports.
- Root cause analysis:
- How the vulnerability was introduced (e.g., coding error, third‑party component).
- Why it was not detected earlier (gaps in testing / monitoring).
- Detailed timeline of events (discovery, escalation, containment, recovery).
- Final impact:
- Confirmed number of affected customers/systems.
- Actual service downtime and safety consequences.
- Permanent remediation:
- Released patches, configuration changes.
- Long‑term security improvements (e.g., new tests, better SBOM process).
- Communication actions:
- Notifications to customers, regulators, possibly to the public.
> Tip: Create internal templates that mirror your national CSIRT / ENISA forms so your teams can prepare reports quickly under time pressure.
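A template is most useful when it can be checked automatically. A minimal completeness check for a 24h early-warning draft might look like this; the field names are illustrative assumptions, not an official schema:

```python
# Required fields mirror the 24h early-warning checklist above.
# Field names are illustrative, not an official ENISA/CSIRT schema.
EARLY_WARNING_REQUIRED = [
    "reporter_contact",
    "product_identification",
    "event_nature",          # suspected exploited vulnerability or incident
    "preliminary_impact",
    "initial_actions",
]

def missing_fields(report: dict) -> list:
    """Return required fields that are absent or empty in the draft."""
    return [f for f in EARLY_WARNING_REQUIRED if not report.get(f)]

draft = {
    "reporter_contact": "soc@example.com",
    "product_identification": "Industrial router model X, firmware 2.1",
    "event_nature": "suspected exploited vulnerability",
}
print(missing_fields(draft))  # -> ['preliminary_impact', 'initial_actions']
```

Running such a check before submission catches the gaps that are easy to miss under the 24-hour clock.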
7. Quick check: timelines and content
Test your understanding of the three‑stage reporting framework.
You discover an actively exploited vulnerability in your product on Monday at 10:00. By when should the **initial structured notification** (not just the early warning) be submitted, and what is its main purpose?
- A) Within 24 hours; to provide full root cause analysis and final impact.
- B) Within 72 hours; to provide structured technical and impact information so ENISA/CSIRTs can assess and coordinate response.
- C) Within 14 days; to inform ENISA only if customers complain.
Answer: B) Within 72 hours; to provide structured technical and impact information so ENISA/CSIRTs can assess and coordinate response.
The **initial notification** is due within **72 hours** of becoming aware of a severe incident or actively exploited vulnerability. Its main purpose is to share **structured technical and impact information** so ENISA and national CSIRTs can evaluate cross‑border impact and coordinate response. The 24h stage is an early warning (high‑level), and the 14‑day stage is for detailed, final reporting.
8. Example: internal reporting checklist (pseudo‑template)
Here is a simplified YAML-style template you could adapt for internal use, aligned with the three‑stage framework.
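A minimal sketch follows; the field names and values are illustrative assumptions, not an official ENISA/CSIRT schema:

```yaml
# Internal three-stage reporting checklist (illustrative sketch).
incident:
  id: IR-2025-001
  detected_at: "2025-03-03T10:00:00Z"
  classification: suspected-exploited-vulnerability

early_warning:            # due within 24h of awareness
  due: "2025-03-04T10:00:00Z"
  submitted: false
  approver: CISO
  content:
    summary: ""
    affected_products: []
    estimated_scope: ""
    initial_mitigations: []

initial_notification:     # due within 72h
  due: "2025-03-06T10:00:00Z"
  submitted: false
  content:
    technical_description: ""
    iocs: []
    impact_assessment: ""
    patch_status: ""

final_report:             # due within 14 days
  due: "2025-03-17T10:00:00Z"
  submitted: false
  content:
    root_cause: ""
    timeline: []
    final_impact: ""
    remediation: ""
    customer_communication: ""
```

Keeping the due dates and `submitted` flags in the same file makes it easy for the incident manager to see at a glance which reporting obligations are still open.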
9. Scenario quiz: deciding when to report
Apply what you’ve learned to a short scenario.
Your monitoring team detects scanning activity against your product using a known exploit for a vulnerability that **you have already patched** 6 months ago. Logs show **no successful exploitation** on supported versions. What is the best interpretation regarding CRA/ENISA reporting?
- A) You must file a 24h early warning because any scanning means an actively exploited vulnerability.
- B) You should treat this as an actively exploited vulnerability in your product and file all three reports.
- C) You may log and monitor the activity internally, but it does **not** automatically trigger CRA early reporting because there is no evidence of successful exploitation of a current product version.
Answer: C) You may log and monitor the activity internally, but it does **not** automatically trigger CRA early reporting because there is no evidence of successful exploitation of a current product version.
Under CRA-style logic, an **actively exploited vulnerability** requires evidence of **real exploitation** in the field. Mere scanning or attempts against already‑patched versions, with no successful compromise, typically does **not** trigger mandatory early reporting. You should still monitor, harden, and possibly inform customers to stay patched, but it is not automatically a 24h/72h/14d case.
10. Flashcards: key terms and concepts
Use these flashcards to review core concepts from this module.
- Actively exploited vulnerability
- A vulnerability in a product with digital elements for which there is **evidence of real-world exploitation** (successful attacks) against deployed systems, not just theoretical or lab-based proof-of-concept.
- Severe incident (CRA context)
- An incident affecting a product with digital elements that **significantly impacts** confidentiality, integrity, availability, or safety, especially when it affects critical services, large user bases, or safety of persons.
- 24h early warning
- The **first stage** of reporting: a rapid alert (within 24 hours of becoming aware) to ENISA/national CSIRTs that a severe incident or actively exploited vulnerability is suspected, even if details are incomplete.
- 72h initial notification
- The **second stage** of reporting: a structured notification (within 72 hours) containing technical details, initial impact assessment, and early mitigation steps, enabling authorities to coordinate response.
- 14‑day final report
- The **third stage** of reporting: a comprehensive follow‑up (typically within 14 days) including root cause analysis, final impact, remediation actions, and lessons learned.
- CSIRT
- Computer Security Incident Response Team – a national or organisational team that receives incident reports, coordinates technical response, and supports mitigation and information sharing.
- Alignment with NIS2
- CRA reporting timelines and concepts are designed to be **compatible with NIS2** incident reporting, so organisations can integrate both into a single internal incident response and reporting process.
Key Terms
- CSIRT
- Computer Security Incident Response Team; a specialised team that handles incident reporting, analysis, and coordination at national or organisational level.
- ENISA
- The European Union Agency for Cybersecurity, which supports EU-wide cyber policy, incident coordination, and capacity building.
- NIS2 Directive
- EU Directive (EU) 2022/2555 that sets cybersecurity and incident reporting obligations for essential and important entities in critical sectors.
- Severe incident
- An incident that has or is likely to have significant negative impact on confidentiality, integrity, availability, or safety, particularly in critical sectors or for large user populations.
- Early warning (24h)
- The first, rapid alert to authorities that a serious cyber issue may be underway, sent within 24 hours of awareness.
- Final report (14 days)
- A comprehensive follow-up report, usually within 14 days, summarising root cause, full impact, and remediation.
- Incident response (IR)
- The organised approach an organisation uses to detect, analyse, contain, eradicate, and recover from cybersecurity incidents.
- CRA (Cyber Resilience Act)
- An EU regulation focusing on cybersecurity requirements for products with digital elements throughout their lifecycle, including vulnerability and incident reporting.
- Initial notification (72h)
- A more detailed report, within 72 hours, containing structured technical and impact information.
- Actively exploited vulnerability
- A vulnerability for which there is concrete evidence that attackers are successfully exploiting it in real systems, not just in theory.