Webinar

How to meet DORA compliance with trusted technology data

Overview

This webinar shows how financial institutions and ICT providers can meet DORA requirements by improving asset visibility, data quality and cross-team collaboration. You’ll see how definitive asset identification, service mapping and trusted technology data support compliance, reduce operational risk and make regulatory reporting more reliable.

Key takeaways for security leaders, IT asset managers, IT operations teams, and risk and compliance leaders

  • How DORA defines ICT asset identification and mapping requirements
  • Why continuous monitoring is critical for maintaining compliance
  • How poor technology data increases vulnerability and audit risk
  • How to connect vulnerability and lifecycle data to business services
  • How trusted asset identification improves cross-team collaboration at scale

Speakers

Adam Jeffries
Principal Solutions Architect, Flexera

DORA compliance data challenges, outcomes and business impact

Why DORA compliance is difficult at scale

DORA requires definitive identification of ICT assets across complex environments. Many organizations rely on fragmented discovery data, spreadsheets and disconnected systems. As data moves between tools and teams, errors quickly multiply.

Outcome: Inaccurate asset data increases regulatory risk and slows incident response.

Why technology data breaks down across teams

Discovery tools often capture incomplete or noisy data, including duplicate vendors, missing fields and irrelevant components. As this data is exported, transformed and reused, inconsistencies spread across security, IT asset management, operations and finance workflows.

Outcome: Teams lack a shared view of risk and can’t confidently support DORA reporting.

Why asset context matters for resilience

DORA requires assets to be mapped to business functions, services and ownership. Without service and organizational context, teams can’t prioritize vulnerabilities or understand which systems support critical customer-facing services.

Outcome: Critical services may run on unsupported or vulnerable technology longer than acceptable.

Why lifecycle and vulnerability data must be trusted

Lifecycle status and vulnerability severity depend on accurate patch-level and release-level data. Guessing support dates or relying on incomplete sources introduces both regulatory and operational risk, especially when AI systems consume low-quality data.

Outcome: Decisions based on unreliable data undermine resilience and regulatory confidence.

Why DORA data quality matters

  • In one large financial organization, 98 percent of software assets were identified on the first pass using enriched technology intelligence
  • Roughly one-third of technology assets lack published lifecycle dates, making vendor-backed research essential for compliance

If your team needs definitive ICT asset identification, service mapping and trusted lifecycle and vulnerability intelligence, Flexera One IT Visibility supports the DORA outcomes covered in this webinar.

Contact us to see how Flexera helps operationalize DORA compliance.

Frequently asked questions

What does DORA require for ICT asset identification?
DORA requires definitive identification and mapping of ICT assets to business services. This webinar shows how enriched technology data and normalization support accurate, auditable asset inventories.

How does DORA treat third-party ICT providers?
DORA requires visibility into all ICT providers, including those with limited deployments. Trusted identification helps teams assess exposure and manage contractual risk consistently.

What happens when technology data is incomplete or incorrect?
Incomplete or incorrect data leads to missed vulnerabilities, unsupported software and unreliable reporting. DORA expects organizations to explain how data is gathered and validated.

How does service mapping support DORA compliance?
Service mapping links assets to critical business functions, allowing teams to prioritize remediation and report risk in business terms.

Transcript

[00:08] Introduction: Why DORA matters

[00:08] Adam Jeffries:

Hello, my name is Adam Jeffries. I’m an architect at Flexera in the data and platform team. Welcome to this session on DORA Unlocked: Building resilient and compliant ICT operations through data-driven collaboration.
This topic is critical for organizations responding to growing regulatory pressure. Over many years, I’ve worked with customers to strengthen regulatory responses by improving the quality, trust, and use of technology data—resulting in stronger operational resilience.

[00:38] DORA is now live and enforceable

[00:38] Adam Jeffries:

Bottom line up front: DORA is now live. It came into force in January and applies to financial entities such as banks, insurers, fintechs, and the ICT providers that support them.

DORA introduces meaningful penalties for noncompliance. While it is a European Union regulation, organizations outside the EU—including those in the UK and the US—may still fall within scope depending on the services they provide.

The regulation is designed to:

  • Reduce systemic ICT risk
  • Protect critical infrastructure from disruption or illegal access
  • Ensure rapid recovery from cyber incidents

DORA raises expectations for ICT resilience reporting to a level comparable with financial reporting rigor.

[02:23] Compliance under DORA is a continuous obligation

[02:23] Adam Jeffries:

DORA compliance is not a one-time exercise. It is an ongoing obligation with expectations of continuous improvement.

Organizations must submit annual reports, respond to periodic requests, and be prepared to demonstrate compliance at any time. This requires continuous monitoring, enforcement, and confidence in the accuracy and completeness of reported data.

[03:03] What DORA covers

[03:03] Adam Jeffries:

DORA, the Digital Operational Resilience Act, focuses on five core areas:

  • ICT risk management
  • ICT-related incident management
  • Digital operational resilience testing
  • Third-party ICT risk management
  • Information sharing arrangements

Testing and third-party risk management are particularly important. Organizations must understand who their ICT providers are, what contracts exist, and what risks those relationships introduce.

[03:58] Visibility across the IT estate is the foundation

[03:58] Adam Jeffries:

The starting point for DORA compliance is visibility across the IT estate.
Organizations must:

  • Identify and classify ICT assets
  • Determine which assets support critical business functions
  • Document dependencies
  • Perform continuous risk assessments
  • Define mitigation strategies

Without this foundational visibility, meaningful resilience and compliance are not possible.

[04:36] Asset identification and mapping requirements under DORA

[04:36] Adam Jeffries:

A core DORA requirement is definitive identification and mapping of ICT assets.

This starts at the business function level and flows through operating models, people, services, applications, and infrastructure. Applications and services provide a practical way to group the technology that supports business capabilities.

Organizations must document these assets thoroughly and assess associated threats and vulnerabilities. Mapping must be accurate, repeatable, and defensible to regulators.
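
To make the mapping chain concrete, here is a minimal Python sketch of that hierarchy. The class and field names are illustrative assumptions, not a Flexera or DORA schema; it surfaces assets that support critical business functions but have not yet been definitively identified:

    from dataclasses import dataclass, field

    @dataclass
    class Asset:
        asset_id: str      # stable identifier for the ICT asset
        identified: bool   # True once definitively identified, not guessed

    @dataclass
    class Application:
        name: str
        assets: list[Asset] = field(default_factory=list)

    @dataclass
    class BusinessFunction:
        name: str
        critical: bool
        applications: list[Application] = field(default_factory=list)

    def unidentified_critical_assets(functions: list[BusinessFunction]) -> list[str]:
        """List asset IDs behind critical functions that lack definitive identification."""
        return [
            asset.asset_id
            for fn in functions if fn.critical
            for app in fn.applications
            for asset in app.assets if not asset.identified
        ]

A report like this gives a defensible starting point: every critical function either maps to identified assets or shows exactly where the gaps are.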

[05:44] Why DORA is a cross-functional challenge

[05:44] Adam Jeffries:

This is a multi-team, multidisciplinary regulation. It may sit in cybersecurity, but its implications for the wider IT organization are extensive and challenging.

We have all felt at times like we are jumping out of a plane while trying to coordinate multiple people at once. That is what it can feel like when you start looking at cross-functional operating models and trying to make sure there are no gaps or errors.

Many practitioners are used to working within one part of the organization and doing their job well. But in reality, we are part of a supply chain with dependencies across the business and IT.

[06:47] A vulnerability management example

[06:47] Adam Jeffries:

If you take a simple example such as mapping ICT systems and understanding their risks and vulnerabilities, the complexity becomes clear very quickly.

Imagine a vulnerability is identified by the vulnerability management team. They have their own discovery capabilities scanning the estate, collecting telemetry and other data to understand where vulnerabilities exist and whether they are exploitable.

That data may then be transferred into the CMDB to create a more consistent view of the estate. It may also trigger workflows that raise incidents or change requests so the issue can be remediated.

The CMDB may contain other discovery data as well. Some of it may overlap and some may not. Managing that complexity while maintaining complete visibility is a major challenge.

Once the issue is raised in the CMDB, workflows begin to involve other teams. Infrastructure and operations become critical because they may need to schedule when patching can occur. They also have their own discovery tools and databases.

[08:10] The role of ITAM, finance and procurement

[08:10] Adam Jeffries:

ITAM plays a critical role in DORA compliance by linking licensing and support entitlements to remediation decisions. Organizations must know:

  • Whether software is licensed
  • Whether it is supported
  • Whether patches are permitted under current entitlements

This requires strong integration between ITAM, finance, procurement, and asset repositories to answer these questions accurately and consistently.
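
As a sketch of what that integration needs to answer, the following Python function gates a remediation request on those three questions. The record layout and field names are hypothetical, standing in for data merged from ITAM, finance, procurement and the asset repository:

    def patch_is_permitted(asset: dict) -> tuple[bool, str]:
        """Check licensing and support entitlements before patching.

        Field names are illustrative; in practice they would be
        populated from ITAM, finance and procurement systems.
        """
        if not asset.get("licensed"):
            return False, "software is not licensed"
        if not asset.get("under_support"):
            return False, "no active support contract"
        if not asset.get("patch_entitled"):
            return False, "patches not covered by current entitlement"
        return True, "remediation may proceed"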

[09:34] Business context and ownership

[09:34] Adam Jeffries:

Then there is the organizational context. Who owns the server? Which cost centers will fund remediation? Who owns the application? What business function does the software support?

DORA requires organizations to understand applications and services so they can prioritize critical systems and services. Much of that information often comes from enterprise architecture teams. It may also come from application performance management tools or incident workflows in the CMDB.

Many organizations, especially large and complex ones, still struggle with clear ownership and executive accountability for these decisions.
In some cases, data is also being aggregated into a corporate data lake, which creates yet another point of entry for decision-makers responsible for ICT assets.

Once approvals are in place, once the application owner agrees the system supports a critical function and once operations confirms an acceptable maintenance window, the issue can finally reach the patch management team.

That is just one small example, but even in that scenario there may be 11 teams, eight databases and numerous integrations involved in remediating a single vulnerability.

[11:55] Why a single repository is not always the answer

[11:55] Adam Jeffries:

A common response is to centralize everything in the CMDB. However, relying on a single repository introduces resilience risk.

If critical processes depend on one system and that system fails, those processes fail as well. DORA requires resilience, which often means using multiple tools for action while ensuring data flowing between them is trusted, standardized, and well governed.

[12:46] Understanding the technology data supply chain

[12:46] Adam Jeffries:

A lot of my recent work with customers around DORA has focused on understanding the data supply chain.

That concept is incredibly important. Different teams undertake different processes, provide data to others and consume data from elsewhere. There is a constant supply chain of data across the organization.

  • Data is exported from one system
  • Something is added in a spreadsheet
  • The data is merged with something else
  • Errors are discovered
  • Feedback is sent back upstream

Then the output is passed to someone else.

It is vital to understand who your customers are, what they need and how you know you are meeting those needs. Errors can multiply as data moves through spreadsheets, systems and handoffs.

This is similar to the idea of shift left. If you want to reduce downstream issues, you need to address them earlier in the process. The same applies to becoming data-driven.
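
A shift-left approach can be as simple as validating every discovery record at the point of ingestion, before it enters the supply chain. Here is a minimal Python sketch; the required fields are assumptions, not a specific discovery tool's schema:

    REQUIRED_FIELDS = {"vendor", "product", "version", "host"}

    def validate_record(record: dict) -> list[str]:
        """Return data-quality issues for one discovery record."""
        issues = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
        if not record.get("vendor", "").strip():
            issues.append("empty vendor name")
        return issues

    feed = [
        {"vendor": "Microsoft", "product": "SQL Server", "version": "15.0", "host": "db-01"},
        {"vendor": "", "product": "unknown agent"},
    ]
    clean = [r for r in feed if not validate_record(r)]
    rejected = [(r, validate_record(r)) for r in feed if validate_record(r)]

Rejected records are fed back upstream immediately instead of multiplying as errors in downstream spreadsheets and systems.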

[14:46] Why data quality matters even more in the age of AI

[14:46] Adam Jeffries:

In the age of AI, data quality becomes even more critical. Poor data leads to poor decisions—garbage in, garbage out.

AI systems should not make resilience or compliance decisions based on incomplete or incorrect data. Understanding how data is shared and governed is essential to safe and effective AI use.

[15:08] Why technology data is inherently messy

[15:08] Adam Jeffries:

Technology data is messy due to:

  • Missing or inconsistent discovery fields
  • Vendor name variations
  • Mergers and acquisitions
  • Legacy technologies
  • Ambiguity between hardware, software, SaaS and components

Discovery tools often collect large volumes of irrelevant noise. Accurate identification requires multiple evidence points and careful normalization before data is suitable for operational or regulatory use.
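
One small part of that normalization can be sketched in Python: resolving a canonical vendor name from several raw evidence strings. The alias map here is a toy example; real identification relies on curated rules at far larger scale:

    # Toy alias map from raw vendor strings seen in evidence to one
    # canonical vendor name.
    VENDOR_ALIASES = {
        "microsoft corp.": "Microsoft",
        "microsoft corporation": "Microsoft",
        "msft": "Microsoft",
    }

    def normalize_vendor(evidence: list[str]) -> str | None:
        """Resolve a canonical vendor from multiple evidence points.

        Checking several fields (registry entry, file publisher,
        install path) reduces the chance that one noisy field
        misleads the identification.
        """
        for raw in evidence:
            canonical = VENDOR_ALIASES.get(raw.strip().lower())
            if canonical:
                return canonical
        return None  # unresolved: route to research, do not guess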

[17:12] Granularity, vulnerabilities and lifecycle data

[17:12] Adam Jeffries:

Different use cases require different levels of granularity.

Licensing may only require major versions, but DORA and vulnerability management require patch-level detail, lifecycle intelligence, and vulnerability context. This includes understanding:

  • Known vulnerabilities
  • Malware exposure
  • Zero-day risks
  • Practical exploitability in context

Risk prioritization must consider both severity and exposure.
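
The difference in granularity is easy to see in code. A minimal Python sketch, assuming simple dotted version strings, compares an installed patch level against the release in which a vulnerability was fixed:

    def parse_version(v: str) -> tuple[int, ...]:
        """Turn '14.2.4' into (14, 2, 4) for patch-level comparison."""
        return tuple(int(part) for part in v.split("."))

    def is_affected(installed: str, fixed_in: str) -> bool:
        """True if the installed patch level predates the fixing release."""
        return parse_version(installed) < parse_version(fixed_in)

    # Licensing may only care that major version 14 is installed;
    # vulnerability work needs the full patch level.
    assert is_affected("14.2.4", "14.2.5")
    assert not is_affected("14.3.0", "14.2.5")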

[19:03] Supportability and asset categorization

[19:03] Adam Jeffries:

Organizations must understand whether assets are supported, how they are supported, and what type of assets they are.

Support may be contract-based, extended, open-source, or limited. Asset categorization—database, middleware, utility, or component—provides essential context for assessing risk and prioritizing remediation under DORA.

[21:05] Scale and third-party risk

[21:05] Adam Jeffries:

At scale, technology data management becomes extremely challenging.
Large organizations may process hundreds of millions of discovery records, yet still need definitive identification for thousands of vendors—each representing potential third-party risk under DORA.

This level of analysis is only feasible with a robust technology intelligence platform.

[22:18] Definitive asset identification with Flexera

[22:18] Adam Jeffries:

DORA is very clear that organizations need to identify ICT assets definitively.
This is not just about cleaning or normalizing data with machine learning or algorithms that guess patterns and heuristics. Regulators want defined assets and definitive identification.

That is where Flexera’s Technology Intelligence Platform, powered by Technopedia®, can help establish a trusted source of insight. It can ingest data from multiple sources across an organization. Flexera also has discovery capabilities and can work at lower levels of granularity.

For example, beyond software installs, we can process software bills of materials (SBOMs), which is particularly useful in containerized and cloud environments. That gives visibility into the underlying components of software and enables linkage to vulnerability data.
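
As a rough illustration, extracting components from a CycloneDX-style JSON SBOM takes only a few lines of Python. The field access assumes the standard CycloneDX layout; unrecognized components would be routed to research rather than guessed:

    import json

    def list_components(sbom_path: str) -> list[tuple[str, str]]:
        """Extract (name, version) pairs from a CycloneDX-style JSON SBOM."""
        with open(sbom_path) as f:
            sbom = json.load(f)
        return [
            (c.get("name", "unknown"), c.get("version", "unknown"))
            for c in sbom.get("components", [])
        ]

    # Each component can then be matched against vulnerability intelligence.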

The data is normalized, aggregated and deduplicated. Duplicates and irrelevant data are removed. Flexera uses millions of identification rules and a dedicated data intelligence team supported by AI and expert processes to deliver effective mapping and identification.

[23:49] Coverage, catalog and enrichment

[23:49] Adam Jeffries:

When this is done well, we are talking about very high levels of coverage.
In one large financial organization, we processed millions of rows of data and identified 98% of software across the estate on the first run. Gap-fill processes are then used to minimize what remains unknown and keep data up to date.
We also maintain Technopedia, which includes more than 5 million products. It is curated, researched and updated to reflect market activity, acquisitions and other changes.

In addition to discoverable data, it includes non-discoverable intelligence such as:

  • Lifecycle and support information
  • Vulnerability intelligence
  • Open-source data
  • Sustainability data

We can then add business context, such as organizational and service context, and make that information available to the CMDB and other operational processes.

The result is an open technology intelligence platform that supports decisions around technology spend and risk while maintaining data transparency, governance and lineage. That helps organizations explain to regulators how data was captured, transformed and used to reach decisions.

[25:41] Why normalization needs real research

[25:41] Adam Jeffries:

Normalization is not just about applying naming conventions. Unless you are doing the research and checking with vendors or manufacturers, you can still end up with misleading information.

For hardware, the motherboard or discovered component may suggest a different product than the one the vendor would actually identify. Flexera’s content team works directly with major manufacturers and vendors and applies those rules within Technopedia.

For software, one example might be a product where the manufacturer is not present in the raw evidence. Flexera adds that missing manufacturer context during identification and aligns the result in the database. For open-source software imported through an SBOM, unrecognized elements can be routed to the content team for research and mapping.

Much of the raw evidence is also classified as irrelevant. That data is not deleted. It can still be retrieved for reporting or used to drive workflows where appropriate. But for trusted operational processes, you want the cleanest and most meaningful representation of the asset.

Very importantly, once assets are identified, Flexera assigns unique IDs for products, models, software, releases and manufacturers. Organizations use those stable IDs in CMDBs and other systems to track workflows and actions consistently.
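
The value of those stable IDs shows up whenever two systems need to agree on the same asset. A small Python sketch (the record layouts and the "REL-" ID format are hypothetical) joins a scanner finding to a CMDB configuration item by ID rather than by free-text product name:

    # Hypothetical records from two systems sharing a stable release ID.
    cmdb_ci  = {"release_id": "REL-1093", "ci": "srv-fin-042", "service": "Payments"}
    scan_hit = {"release_id": "REL-1093", "cve": "CVE-2024-0001", "severity": "high"}

    # Joining on the stable ID keeps the same finding consistent across
    # CMDB, scanner and workflow tools, regardless of naming variations.
    if cmdb_ci["release_id"] == scan_hit["release_id"]:
        ticket = {
            "ci": cmdb_ci["ci"],
            "service": cmdb_ci["service"],
            "cve": scan_hit["cve"],
            "severity": scan_hit["severity"],
        }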

[28:04] A unified technology data model

[28:04] Adam Jeffries:

One financial organization shared a slide with me that showed how a cleaned and enriched technology data model helped connect teams, processes and data across the estate.

I really liked it because it showed how they were communicating the value of trusted data across the organization. Instead of thinking only in terms of individual systems, they were showing how the data connected the dots.
That helped them work in a more data-driven way and accelerated better management information, stronger governance and lower exposure to risk.

[29:07] KPIs and key risk indicators for DORA

[29:07] Adam Jeffries:

Once assets are definitively identified, organizations can define meaningful KPIs and KRIs, such as:

  • Asset coverage for critical services
  • Unsupported or obsolete components
  • Risk distribution across technology layers

These metrics support continuous improvement and defensible DORA reporting.
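
Once the inventory is trusted, such metrics reduce to straightforward aggregation. A minimal Python sketch follows; the asset record fields are illustrative assumptions:

    from collections import Counter

    def dora_kpis(assets: list[dict]) -> dict:
        """Compute illustrative KPIs from an asset inventory.

        Each asset record is assumed to carry 'service', 'critical',
        'identified' and 'supported' fields.
        """
        critical = [a for a in assets if a["critical"]]
        coverage = (
            sum(a["identified"] for a in critical) / len(critical)
            if critical else 1.0
        )
        return {
            "critical_asset_coverage": coverage,
            "unsupported_assets": sum(not a["supported"] for a in assets),
            "assets_per_service": Counter(a["service"] for a in assets),
        }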

[30:53] Lifecycle data and the limits of AI-generated answers

[30:53] Adam Jeffries:

We often hear organizations say they need 100% lifecycle coverage. It is important to be realistic.

Not every asset has published lifecycle information. In our research within Technopedia, roughly a third of assets may not have clear end-of-life or obsolete dates because:

  • the vendor does not publish them
  • support is contract-based only
  • support is open-source based only

That matters because even though critical infrastructure often has formal support policies, many other assets do not.

We are also seeing organizations use generative AI to invent support dates. That is risky. These dates must come from the vendor or the party responsible for providing support and patches. That is why vendor-backed research remains essential.
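
In code, the honest treatment of missing lifecycle data is to return "unknown" and queue the asset for research, never to fill the gap. A minimal Python sketch:

    from datetime import date

    def lifecycle_status(eol: date | None, today: date) -> str:
        """Classify lifecycle state without inventing missing dates.

        When no vendor-published end-of-life date exists, the answer
        is 'unknown' and the asset is flagged for vendor-backed
        research -- never filled in by a generative model.
        """
        if eol is None:
            return "unknown"
        return "obsolete" if eol <= today else "supported"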

[32:29] Service mapping and business context

[32:29] Adam Jeffries:

Mapping assets to the right business context is critical under DORA.
Discovery tools often collect a huge number of interactions between systems on the network, but that alone is not always actionable. Flexera has service discovery capabilities that can group applications based on their communication patterns and show dependencies between them.

That gives organizations visibility into service relationships and the internal components of application services. If they prefer, they can also import service information from other systems of record.

Ultimately, organizations want standardized KPIs across multiple measures and the ability to align those KPIs to the services that matter most. That way they can see vulnerability and lifecycle exposure in a business-relevant way.
Obsolete, in this context, means the product is no longer supported by the vendor. That kind of information gives teams something actionable they can use to collaborate across the organization.
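
The grouping step can be sketched in plain Python: treat observed flows as an undirected graph and take connected components as candidate service boundaries. The host names and flows below are invented for illustration; real service discovery uses richer signals than connectivity alone:

    from collections import defaultdict

    # Invented network flows between hosts: (source, destination).
    flows = [("web-01", "app-01"), ("app-01", "db-01"), ("batch-01", "db-02")]

    graph = defaultdict(set)
    for src, dst in flows:
        graph[src].add(dst)
        graph[dst].add(src)

    def candidate_services(graph: dict) -> list[set[str]]:
        """Group hosts into candidate services via connected components."""
        seen, groups = set(), []
        for node in list(graph):
            if node in seen:
                continue
            stack, group = [node], set()
            while stack:
                n = stack.pop()
                if n not in group:
                    group.add(n)
                    stack.extend(graph[n] - group)
            seen |= group
            groups.append(group)
        return groups

    # candidate_services(graph)
    # -> [{'web-01', 'app-01', 'db-01'}, {'batch-01', 'db-02'}]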

[34:18] Board-level visibility and executive oversight

[34:18] Adam Jeffries:

As we saw earlier, many people are involved in these cross-functional processes. We need to engage the practitioners doing the work and assessing the risks, but we also need to engage asset owners and responsible leaders.
Under DORA, organizations also need board-level reporting so executives can track progress and maintain the required oversight of the technology estate from a regulatory perspective.

[34:55] How Flexera supports DORA outcomes

[34:55] Adam Jeffries:

At Flexera, we help organizations address the challenge of technology data by using the Technology Intelligence Platform, powered by Technopedia, to provide standardized identification of assets and support collaboration across the tools already in use.

That may include security tools, operations platforms, the service desk, enterprise architecture systems or a corporate technology data lake.

The goal is not necessarily to import every single piece of data into one place.

The goal is to create a common language and trusted identification so that teams can collaborate using the tools they rely on every day.

That reduces errors in foundational data and supports the identification and mapping requirements that sit at the heart of DORA.

[36:14] Final recommendations

[36:14] Adam Jeffries:

To improve DORA readiness, organizations should ask:

  • Who are our data consumers?
  • What do they need?
  • How do we know we are meeting those needs?

Understanding the technology data supply chain and treating data as a business asset is essential. Trust enables collaboration—and collaboration enables resilience.

Let’s get started

Our team is standing by to discuss your requirements and deliver a demo of our industry-leading platform.