A Novel Approach to Data Protection


Cybersecurity attacks are all about data: personal, corporate, healthcare, financial, or intellectual property.

Data-motivated attacks are becoming more frequent, more aggressive, and ultimately more costly. The attack style varies: ransomware, espionage, unauthorized disclosure, or destruction and denial of availability. Whatever the method, cyber adversaries are fixated on acquiring your critical information, and with good reason. Sensitive information, such as personally identifiable information (PII), credit card numbers, corporate financials, healthcare records, and intellectual property, is the organization’s lifeblood and, in many cases, its secret sauce. Once seized by an attacker, these assets quickly become the organization’s greatest threat.

According to the IBM Cost of a Data Breach Report, Canadian organizations paid an average of CA$6.32 million per data breach in 2024. The financial sector averaged $9.28 million per breach, the technology sector $7.84 million, and the industrial sector $7.81 million. Interestingly, in most cases, breached organizations had security tools, controls, and policies in place, yet their critical data still ended up in the wrong hands and was used against them. The financial damage caused by a single breach reaches startling heights, underscoring the reality that conventional approaches may falter. Despite deploying robust edge and network security, threat intelligence tools, and security programs, organizations across industries remain vulnerable. The facts suggest that our traditional, and often dogmatic, defence strategies might be failing to offer real, long-term protection.

It’s time to ask ourselves: are our efforts to prevent data breaches truly keeping pace with the escalating threat?

 

We need to approach the problem with fresh eyes, ready to challenge existing assumptions and explore creative solutions. Ask a cybersecurity expert today about their top priorities, and you will hear them discuss areas such as zero-trust frameworks, employee education, and incident response. However, the idea of fortifying the data itself is noticeably absent from many “top five” lists. This raises a critical question: in a world where every system is eventually breachable, why aren’t more organizations placing the protection of the data itself at the forefront of their strategy? Perhaps it’s time to expand our perspective and approach defence in depth from both ends: perimeter-in and from the core outwards.

 

Most organizations still work perimeter-in, focusing their investments on network security, endpoint security, identity and access management (IAM), and so on.

 

In parallel, we have witnessed a heightened recognition of the need to improve security posture and protect information assets. There has been an explosion of technological innovation to help safeguard those assets, especially in today’s work-anywhere, always-connected, agile, and hybrid computing world. However, the effort remains focused mostly on network security, endpoint security, and identity and access management.

 

For decades, information assets were primarily stored as structured data in database systems physically located inside secure data centres. The medieval castle served as an excellent analogy for data protection: establish a strong perimeter defence, employ competent guards to interrogate and validate the identity and permissions of those who wish to enter, and ensure they are not already compromised. Today, we are dealing with both structured and unstructured data (documents, spreadsheets, emails, etc.) containing sensitive information subject to various regulatory compliance requirements, scattered across the enterprise’s global computing ecosystem: on-premises in corporate offices and employee home offices, on end-user work and personal devices (both managed and unmanaged), on cloud platforms (sanctioned and unsanctioned), and in SaaS applications.

 

The rapid adoption of a cloud-first strategy for enterprise applications further complicates this. Given the complexity of software and cloud technologies, developers may unintentionally create gaps in the enterprise data protection framework. No, I’m not talking about the quest for secure software development; I’m referring to the separation of production data from test data, the sanitization and protection of that test data, and extending protection to data the cyber team might not know about (e.g., cloud storage under unofficial cloud tenants).
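
To make the test-data point concrete, here is a minimal Python sketch of pseudonymizing sensitive fields before production records are copied into a test environment. The field names, sample record, and key handling are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch: sanitize production records before they reach a test
# environment. Field names and key handling are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # assumption: kept in a secrets manager

def pseudonymize(value: str) -> str:
    """Deterministically masks a sensitive value so joins still work in test."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

def sanitize_record(record: dict, sensitive_fields: set[str]) -> dict:
    """Returns a copy of the record that is safe for the test environment."""
    return {
        field: pseudonymize(str(value)) if field in sensitive_fields else value
        for field, value in record.items()
    }

production_row = {"customer_id": "C-1001", "email": "jane@example.com", "plan": "gold"}
print(sanitize_record(production_row, sensitive_fields={"email"}))
```

A keyed HMAC keeps the masking deterministic, so referential joins across sanitized tables still line up while the real values never leave production.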

 

Hence the rise of Data Security Posture Management (DSPM) and Attack Surface Management (ASM) products that discover and expose data and data assets. However, a holistic approach is needed to address unstructured and shadow data across the entire organization’s technology ecosystem (i.e., beyond the top three cloud providers and the most common SaaS applications) and to add context (data owner, intended use, data classification, etc.) to the data it discovers and protects. Leveraging AI/ML capabilities to automatically classify discovered data helps automate aspects of data discovery, and an iterative interview process with cross-functional stakeholders to contextualize the results will ensure the best outcomes.
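
As a rough illustration of that automated classification step, here is a toy Python sketch that tags discovered text with candidate labels. Real DSPM tooling uses trained models; the regex patterns and label names below are simplifications of my own, not any product’s logic.

```python
# Toy stand-in for automated data classification: raw content in,
# candidate labels out for stakeholders to confirm during interviews.
import re

PATTERNS = {
    "PII/email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PCI/card-number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "PII/phone": re.compile(r"\b\d{3}[ -]\d{3}[ -]\d{4}\b"),
}

def classify(text: str) -> set[str]:
    """Returns candidate classification labels for a blob of discovered text."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

sample = "Contact jane@example.com or 416-555-0199 re: card 4111 1111 1111 1111"
print(classify(sample))  # e.g. {'PII/email', 'PII/phone', 'PCI/card-number'}
```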

 

In the same report, IBM found that the global average cost of a data breach jumped to USD 4.88 million in 2024, a 10% spike from 2023 and the largest increase since the pandemic. The rising cost of lost business, including operational downtime, lost customers, and post-breach response, totalled USD 2.8 million, the highest in the past six years. It’s clear that the typical approach, despite increased investment and technological innovation, hasn’t yielded the desired results.

 

An inside-out approach designed to incorporate reasonable and contextual security controls across the data lifecycle.

 

Let’s consider a data-centric approach that applies security controls to how the information assets are collected, where they are stored, and how they are used and managed.

 

  • Discover: Establish a comprehensive inventory of data assets, identifying ownership and classification.
  • Analyze: Evaluate existing controls and their effectiveness against the classification of data.
  • Protect: Implement robust security measures such as access control, data encryption, and tokenization.
  • Validate: Regularly assess the adequacy of controls and make necessary adjustments.
  • Monitor and Manage: Continuously oversee the security landscape, ensuring rapid incident response.

 

This systematic approach enhances security and aligns with regulatory compliance and risk management objectives, thereby safeguarding the organization’s reputation.

 

 

DISCOVER

You can’t protect what you don’t know. The first step is to establish an inventory of data assets that your organization collects, processes, and stores and answer the following questions:

 

Who owns the data?

What types of data do we have?

• Structured data in a database, files, emails, paper files, etc.

Where does this data come from?

• End-users, employees, HR, financial systems, R&D, 3rd-party systems, etc.

Where is this data stored?

• Personal devices, in the cloud, or on servers controlled by the organization. Answering this will help identify potential vulnerabilities and areas for improvement in data security.

How can we classify this data?

• PII, healthcare, financial, PCI, etc.

How can we see what happens to this data over its lifecycle?

• Who uses it and how, intended retention, destruction, etc.

 

As we inventory the information assets, we should also classify the data according to its sensitivity, criticality, value, and regulatory compliance ratings. This ensures that controls can be applied uniformly and consistently downstream. In addition to the data inventory, we should examine what controls have been implemented for each asset. This concludes the Discover phase of the approach.
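
To tie the Discover questions together, here is a minimal sketch of what a single entry in such an inventory might capture. The schema and sample values are hypothetical illustrations, not a mandated format.

```python
# Minimal sketch of one data-asset inventory entry; the fields mirror
# the Discover questions above. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    owner: str                  # who owns the data?
    data_type: str              # structured DB, files, email, paper, ...
    source: str                 # end-users, HR, financial systems, ...
    location: str               # cloud, on-prem server, endpoint, ...
    classification: list[str]   # PII, PCI, healthcare, financial, ...
    controls: list[str] = field(default_factory=list)  # controls already in place

inventory = [
    DataAsset(
        name="customer-orders-db",
        owner="VP Sales",
        data_type="structured/database",
        source="e-commerce platform",
        location="cloud/SaaS",
        classification=["PII", "PCI"],
        controls=["access control", "encryption at rest"],
    ),
]
print(inventory[0].classification)
```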

 

ANALYZE

The next step is to analyze the controls against the data classification to determine whether they are sufficient and robust. There are quite a few frameworks we can leverage for the analysis: CIS Critical Security Controls (CSC), the NIST Cybersecurity Framework (CSF), etc. Personally, I prefer a holistic approach that overlays the Capability Maturity Model (CMM) on top of the prescriptive, technical nature of CIS CSC and the breadth of NIST CSF. NIST CSF is the most widely adopted industry standard for cybersecurity and maps very well to most (if not all) compliance and risk management frameworks. The results of the analysis can be fed into the organization’s Enterprise Risk Management (ERM) or Enterprise Security Risk Management (ESRM) process for alignment and senior leadership team (SLT) support.
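
A simplified sketch of that overlay, assuming a CMM-style 1-5 maturity score per control and classification-driven targets. The specific targets and scores below are my own illustrations, not values taken from CIS CSC or NIST CSF.

```python
# Simplified maturity-gap analysis: compare each control's current
# CMM-style score to a target driven by data classification.
# Assumed policy: more sensitive data requires higher maturity.
TARGET_MATURITY = {"PII": 4, "PCI": 5, "internal": 2}

def control_gaps(current: dict[str, int], classifications: list[str]) -> dict[str, int]:
    """Returns each control's shortfall against the strictest applicable target."""
    required = max(TARGET_MATURITY.get(c, 1) for c in classifications)
    return {ctrl: required - score for ctrl, score in current.items() if score < required}

current_scores = {"access control": 3, "encryption": 5, "logging": 2}
print(control_gaps(current_scores, ["PII", "PCI"]))
# -> {'access control': 2, 'logging': 3}  (encryption already meets the target)
```

Output like this translates naturally into the ERM/ESRM register: each gap is a named risk with a measurable distance to the target state.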

 

PROTECT

Common data protection controls include access control (access should only be granted to specific job roles, on specific types of data, from specific endpoints, and at specific times), data encryption (to prevent unauthorized users from reading the data even if they can reach the infrastructure hosting it, such as the file directory, database, or email store), and data tokenization (making the data usable for some business processes without exposing it). Technology solutions, such as IBM’s Guardium, can implement these controls in a manner that works with most existing business processes and minimizes the burden on end-users (especially if data classification/labelling is done). In addition, it is vitally important to establish key data management processes, such as data retention and destruction policies. Above all, ensure the investment level for data protection aligns with the organization’s ERM/ESRM program. Sometimes it can be simpler and more economical to transfer the bulk of the risk to a third party (e.g., a cyber insurance underwriter or business process outsourcer) and focus on the residual risk internally.
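
To illustrate two of these controls, here is a brief Python sketch of encryption and tokenization using the open-source cryptography package (pip install cryptography). The in-memory token vault is a stand-in assumption; a real deployment would use a hardened tokenization service.

```python
# Sketch of encryption and tokenization under stated assumptions.
import secrets
from cryptography.fernet import Fernet

# Encryption: unreadable without the key, even with access to storage.
key = Fernet.generate_key()          # in practice: managed by a KMS/HSM
fernet = Fernet(key)
ciphertext = fernet.encrypt(b"4111 1111 1111 1111")
assert fernet.decrypt(ciphertext) == b"4111 1111 1111 1111"

# Tokenization: a surrogate value keeps business processes working
# without exposing the real data. In-memory vault is illustrative only.
vault: dict[str, str] = {}

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(8)
    vault[token] = value             # only the vault maps token -> value
    return token

token = tokenize("4111 1111 1111 1111")
print(token)                         # safe to pass to downstream systems
print(vault[token])                  # detokenization, tightly access-controlled
```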

 

VALIDATE

Now that we have completed data discovery, assessed existing controls for adequacy, and implemented additional controls to bring the safeguards to an acceptable level, we need a cadence for repeated evaluation. This ensures that the controls remain effective despite technological, organizational, and threat landscape changes, and that new data assets (especially shadow data not stored in the centralized data management solution) are incorporated into the established data protection program. In addition, there needs to be a process for regularly reviewing and auditing access to the data.
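
As a small example of what that recurring access review can look like, the sketch below flags grants unused beyond a threshold so the data owner can recertify or revoke them. The 90-day threshold and record shape are assumptions for illustration.

```python
# Sketch of a recurring access review: surface stale grants for
# recertification. Threshold and record shape are illustrative.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)

grants = [
    {"user": "alice", "asset": "customer-orders-db", "last_used": datetime(2024, 1, 5)},
    {"user": "bob",   "asset": "customer-orders-db", "last_used": datetime(2024, 6, 20)},
]

def stale_grants(grants: list[dict], now: datetime) -> list[dict]:
    """Returns grants that should go back to the data owner for recertification."""
    return [g for g in grants if now - g["last_used"] > STALE_AFTER]

for g in stale_grants(grants, now=datetime(2024, 7, 1)):
    print(f"review: {g['user']} on {g['asset']} (last used {g['last_used']:%Y-%m-%d})")
```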

 

MONITOR AND MANAGE

Lastly, data security incidents (e.g., policy violation, suspected misuse, data breach, etc.) must be monitored and responded to 24/7, 365 days a year. Incorporating artificial intelligence (AI) and machine learning (ML) to detect abnormal data usage would greatly improve detection efficacy while reducing unnecessary burdens on the team due to false positives. It makes sense to integrate this critical function with the organization’s existing security incident-handling program to take advantage of its around-the-clock coverage model and mature incident management and incident response process.
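
As a toy illustration of abnormal-usage detection, the sketch below flags a day’s access count that sits far outside a user’s historical baseline. A simple z-score stands in for the AI/ML models a production tool would employ, and the numbers are invented.

```python
# Toy anomaly detector: flag access volumes far outside the baseline,
# the kind of signal to feed into the incident-handling program.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flags today's access count if it sits far outside the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

daily_record_reads = [120, 135, 118, 142, 130, 127, 138]  # per-user baseline
print(is_anomalous(daily_record_reads, today=131))    # False: normal usage
print(is_anomalous(daily_record_reads, today=5400))   # True: possible exfiltration
```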

 

Reduce the impact of data breaches with a process that aligns with business processes, regulatory compliance and enterprise risk management.

 

The urgency of enhancing data protection measures is more evident than ever. The combination of escalating cyber threats and the significant financial ramifications of data breaches necessitates immediate action. Organizations can mitigate risks, reduce impact, and protect their most valuable assets by adopting a proactive, data-centric strategy. What is that proactive data-centric approach, and how can you get started, you ask?

 

Lean into an “inside-out” approach to your program; a good first step is to execute a Data Security Assessment that focuses on discovering your critical data, or at least a subset thereof. Take an “agile,” iterative approach to Discovery so that you do not get bogged down trying to discover all of your data, or even all of your critical data. Discover a tranche of your critical data, then follow the process outlined above to Analyze, Protect, Validate, and Monitor and Manage. Learn from the first tranche, then rinse and repeat. Once started, the contextual insights gained and the alignment with industry frameworks make it far easier to gain senior leadership buy-in and support to sustain the program for years to come.
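
For the sake of illustration, that tranche-by-tranche loop could be pictured like this, with each phase reduced to a placeholder function. The tranche names and function signatures are assumptions, not a prescribed API.

```python
# Sketch of the iterative, tranche-by-tranche lifecycle described above.
def discover(tranche):  print(f"[{tranche}] inventory + classify")
def analyze(tranche):   print(f"[{tranche}] score controls vs. classification")
def protect(tranche):   print(f"[{tranche}] close the gaps (access, crypto, tokens)")
def validate(tranche):  print(f"[{tranche}] recheck controls, review access")
def monitor(tranche):   print(f"[{tranche}] watch for misuse, feed incident response")

PHASES = [discover, analyze, protect, validate, monitor]

# Start small, learn, then rinse and repeat with the next tranche.
for tranche in ["payment data", "customer PII", "R&D documents"]:
    for phase in PHASES:
        phase(tranche)
```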

 

Finally, the ability to convey data security in the language of business processes, regulatory compliance, and risk management will surely deepen the engagement of business executives.
