April 6, 2022

How to create a concept for GDPR-compliant test data management

Author: Michael Schwenk

Data protection and the GDPR: even in non-production systems, this topic is becoming increasingly relevant with regard to test data and test data management. May 2022 marks the fourth anniversary of the GDPR's entry into force. Like any other law, the GDPR is subject to ongoing changes and adjustments. GDPR-compliant test data management has been a MUST for every company since the GDPR came into force. However, the data protection requirements are still regularly not met to the required extent. The big dilemma: how can you work realistically and in a GDPR-compliant way within test environments? We already looked at this specific question in more detail in the first part of the blog post series on the topic of "Test data management in line with the EU-GDPR".

The second part now deals specifically with the topics of the security concept, the duty to inform, and the process definition for GDPR-compliant test data management. In addition, we clarify the question: "What do companies face in the event of violations of the GDPR?"

Creation of a security concept for test data management

Test data is subject to the restrictive requirements for technical as well as organizational data protection, which also affect the testing of software and systems, and therefore a company's entire test data management. Among other things, companies must ensure so-called access control: to ensure data protection, personal data must not be read, copied, modified or deleted without authorization during processing, use and storage. Furthermore, companies must ensure availability control, which means that personal data must be protected against accidental destruction or loss.

In addition, the GDPR requires a separation of production and test systems. As a result, testing is not allowed within live operations. The use of real personal data in test operations, which must also be assessed from a data protection perspective, is fundamentally unacceptable, as it violates the principle of purpose limitation. In addition, both the integrity and confidentiality of the data are at risk.

It may happen that a system is so multi-layered and complex that no meaningful tests can be performed without real data. In such cases, exceptions to the rule described above may apply. However, Article 32 GDPR, like Article 25 GDPR, requires appropriate technical measures, which is why the use of a test data management tool for anonymization or pseudonymization, such as Libelle DataMasking, must be considered in this context.

"Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:
(a) the pseudonymization and encryption of personal data;
(b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services."

(Source: GDPR Article 32(1))
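Article 32's distinction between pseudonymization and anonymization can be sketched in a few lines. The following Python example is purely illustrative (the key, field names and masking rules are assumptions, not part of any Libelle product): a keyed hash produces a stable pseudonym that is only linkable back to the original with the key, while anonymization discards the identifier entirely.

```python
import hmac
import hashlib

# Hypothetical key; in practice it would be stored outside the test system
# and rotated, so test users cannot reverse the pseudonymization.
SECRET_KEY = b"rotate-and-store-me-securely"

def pseudonymize(value: str) -> str:
    """Replace a personal identifier with a keyed hash.

    The mapping is reproducible for anyone holding SECRET_KEY
    (pseudonymization); without the key, the original value cannot
    be recovered from the token.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def anonymize(record: dict) -> dict:
    """Strip direct identifiers entirely, so no re-identification key exists."""
    masked = dict(record)
    masked["name"] = "ANONYMIZED"
    masked["iban"] = "DE00XXXXXXXXXXXXXXXXXX"
    return masked

customer = {"name": "Erika Mustermann", "iban": "DE89370400440532013000"}
print(pseudonymize(customer["name"]))  # stable token: same input, same token
print(anonymize(customer))
```

The design point: pseudonymized data remains "personal data" under the GDPR because the key re-establishes the link, whereas properly anonymized data falls outside its scope.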

Mandatory information requirements and impact assessment

Depending on the environment and, in particular, the test scenario, there is a risk within test data management that procedural errors will result in productive, and thus personal and personally identifiable, data being transferred to unauthorized third parties. Before the GDPR came into force, the Federal Data Protection Act (BDSG), which was in force until then, stipulated that the responsible supervisory authority and the data subjects themselves had to be informed of a loss of data. However, this only applied if the lost data was "particularly sensitive", such as bank data. Under the GDPR, any data loss must now be reported to the competent supervisory authority within 72 hours, i.e. regardless of the sensitivity of the data. However, if companies have a security concept in place to protect data in test data management, and if this concept provides for masking the data through encryption, anonymization or pseudonymization, the risk of losing real data can be drastically reduced.

If a company uses new technologies that pose risks to data subjects, it must conduct a so-called data protection impact assessment in accordance with Article 35 GDPR. If particularly sensitive data is to be processed in the IT system, a technical risk analysis must be carried out when planning the tests of this system. Test data management is thus closely scrutinized, and it is analyzed whether data protection is still guaranteed. The aim of the analysis is to minimize or even exclude the risk of data loss by using software at least for pseudonymization, or better still for anonymization, of the test data. Our test data management tool Libelle DataMasking is ideally suited for this purpose.

What do companies face if they violate the GDPR?

When the EU GDPR came into force, the penalties for violations of the regulation were significantly increased. Companies can face fines of up to 20 million euros or up to four percent of global annual turnover, whichever is higher. In addition, the competent supervisory authority must ensure that the fines imposed are effective, proportionate and dissuasive.

"Each supervisory authority shall ensure that the imposition of administrative fines pursuant to this Article in respect of infringements of this Regulation referred to in paragraphs 4, 5 and 6 shall in each individual case be effective, proportionate and dissuasive."

(Source: GDPR Article 83(1))
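The "whichever is higher" rule for the maximum fine is simple arithmetic; a minimal sketch (the function name and figures are illustrative, the thresholds are those stated above):

```python
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for the most serious infringements:
    EUR 20 million or 4% of global annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# For a turnover of EUR 300 million, 4% is EUR 12 million, so the
# EUR 20 million floor applies; at EUR 1 billion, 4% (EUR 40 million) wins.
print(max_gdpr_fine(300_000_000))    # → 20000000.0
print(max_gdpr_fine(1_000_000_000))  # → 40000000.0
```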

Supervisory authorities and data subjects must also be informed after the risk assessment if sensitive data is lost due to inadequately tested security systems and procedures. Reputational damage is, of course, the biggest loss for a company. But it should be noted that data loss can also be punishable by law, because it can lead to the violation of, for example, official or professional secrets or the betrayal of trade secrets.

Process definition for compliance with the GDPR

Data protection regulations have become considerably stricter in various areas under the GDPR. At the same time, the technical and specialist requirements for companies have increased, as has the importance of data sovereignty and the protection of data from external sources. This often means that processes in the operational workflow have to be redefined, with companies also having to define the necessary competencies, tools and resources. The result is clearly defined processes that guarantee compliance with the legal framework.

For those responsible within the company, it is important to understand the architecture of the data flow end to end, in order to analyze both source and target systems and to recognize the interfaces between different services. In this way, the requirements for test data can be derived very quickly and the systems involved can be identified and integrated. This allows the responsible person to know the test process and procedures in detail. A visual representation of the operational processing steps makes it easier not only for the person responsible, but for the entire company to understand its own processes. Once the individual process steps are understood, the mass storage of data can be eliminated, because test data can then be provided specifically for individual test cases.

Once an overview and understanding of the individual processing steps has been gained, the selection of the required test data management tools can be narrowed down in order to process test data in a legally compliant manner. To do this, the requirements for each individual step must be documented. In this way, a matrix can be developed for selecting the tool(s) required for GDPR-compliant test data management.
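Such a requirements matrix can be modeled in a few lines. The following sketch is purely illustrative: the step names, requirement keys and tool names are invented examples, not real products or any prescribed methodology. Each processing step lists the requirements it must satisfy, and a candidate tool covers a step when its features include all of that step's requirements.

```python
# Hypothetical requirements per processing step (rows of the matrix).
steps = {
    "extract from production": {"access_control", "logging"},
    "mask personal data":      {"anonymization", "pseudonymization"},
    "load into test system":   {"separation_of_environments", "logging"},
}

# Hypothetical candidate tools and the requirement keys they fulfill.
tools = {
    "MaskingToolA": {"anonymization", "pseudonymization", "logging"},
    "CopyToolB":    {"access_control", "separation_of_environments", "logging"},
}

def coverage(tool_features: set) -> set:
    """Return the processing steps whose requirements are a subset
    of the tool's feature set, i.e. the steps this tool fully covers."""
    return {step for step, required in steps.items() if required <= tool_features}

for name, features in tools.items():
    print(name, "covers", sorted(coverage(features)))
# MaskingToolA covers ['mask personal data']
# CopyToolB covers ['extract from production', 'load into test system']
```

Reading the matrix this way makes gaps explicit: here, no single hypothetical tool covers every step, which argues for a combination of tools.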

Allocation of tasks and roles

As is often the case, it can be difficult for one person within an organization to keep track of all operational processes. The recommendation is therefore to define different roles in order to provide clarity in the management of test data. Specific roles include project owner, test data manager, and test data analyst. Project responsibility also includes technical responsibility for the test project: this is where the technical and regulatory requirements are defined, and compliance with them is ensured by providing the corresponding resources.

The test data manager is responsible for the functional requirements of the project and their technical implementation. Together with the test data analyst, he also tracks delivery and operational capability at runtime. The analyst, in turn, is responsible for the technical implementation and ongoing monitoring of the (test data management) tools and environments used.

Without exception, all areas of a company face changes in data processing and increased awareness as a result of the GDPR. By taking the requirements of the GDPR into account, companies not only create legal certainty; they can ultimately also save time and money, because it often becomes clear that testing causes significant overhead, especially with regard to data processing and use.

Test data management: Leverage your data

Test data management is not just about data protection, but also about the automated provisioning of test data, as offered by our dream team of Libelle SystemCopy and Libelle DataMasking. Resetting data after use and logging the validity, age, and consumption status of test data are also important parts of test data management.
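Tracking validity, age and consumption status can be as simple as a small record per provisioned data set. The sketch below is an assumption-laden illustration (the class, field names and 90-day limit are invented, not part of Libelle SystemCopy or Libelle DataMasking): a set is considered valid while it is within its age limit and has not yet been consumed.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TestDataSet:
    """Hypothetical log entry for one provisioned set of masked test data."""
    name: str
    provisioned_on: date
    max_age_days: int = 90   # illustrative validity window
    consumed: bool = False   # flipped once a test run has used the data

    def age_days(self, today: date) -> int:
        return (today - self.provisioned_on).days

    def is_valid(self, today: date) -> bool:
        """Valid while within the age limit and not yet consumed."""
        return not self.consumed and self.age_days(today) <= self.max_age_days

ds = TestDataSet("masked_customers_q2", date(2022, 3, 1))
print(ds.is_valid(date(2022, 4, 6)))  # → True (36 days old, unused)
ds.consumed = True
print(ds.is_valid(date(2022, 4, 6)))  # → False (consumed, must be reset)
```

A reset after use would then simply re-provision the set, giving it a fresh `provisioned_on` date and clearing the `consumed` flag.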

Read more about this in our blog post on "What is test data management (TDM) actually?" or take the Libelle Data Protection Quick Check for your test data management.

