Topics such as ransomware, viruses, and unwanted data outflow are more relevant than ever. They pose a serious threat and challenge for companies. Developing a cybersecurity solution is a must to protect business units, data, processes, and technologies.
Imagine a company whose processes are largely digitized. Operations are efficient and highly productive, and management is happy. One morning, however, a cyberattack hits the company, which is shocked to discover that it lacks plans and measures for quickly resuming regular operations. In the worst-case scenario, operations come to a standstill.
The "WannaCry" ransomware attack caused quite a stir a few years ago. Starting on May 12, 2017, the attack affected over 300,000 devices in over 150 countries, among them some well-known companies.
WannaCry exploited a vulnerability in unpatched Windows operating systems. Now, there are many use cases in which patches on production environments can, by definition, only be applied at long intervals. In such situations it does not even take highly sophisticated zero-day exploits or fast-moving attackers for malware to seize these systems.
An IT operation running 24/7 that, for example, schedules only two regular maintenance windows per year and does not consider the current security patches critical enough to interrupt operations is, and remains, vulnerable to such cyberattacks.
In practice, there is a simple and, above all, more or less universally applicable option: architecture- and infrastructure-independent mirroring at the database and application level.
This technology, often dismissed as obsolete in times of virtual machines, storage mirroring, and a wide variety of cluster variants, can play to one of its enduring strengths here: the logical independence of the underlying system environments.
The Libelle BusinessShadow business continuity solution works completely independently of the productive environment: no shared servers, no shared storage, in short, shared nothing. Mirroring means that the current data is always physically present on the mirror environment, while the mirror's systems can be maintained and kept up to date with the latest patches independently of production.
If a ransomware attack on the productive environment succeeds because of its low patch level, operations are simply switched over to the fully patched mirror system and continue there within a few minutes.
The result: "The cyberattack was not averted, but it went nowhere."
The data mirroring described above is asynchronous, which has several advantages over the synchronous mirroring commonly used in storage mirroring and clustering. First, relaxed maintenance windows on the mirror become possible at all, because unlike synchronous mirroring, no two-site commit is required.
Second, the company escapes the synchronous trap: if a logical error has corrupted the productive dataset, a synchronous mirror corrupts its copy of the dataset at the same moment.
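The difference between the two commit models can be sketched in a few lines of Python. This is a simplified illustration, not Libelle's implementation; the class and method names are invented for this example:

```python
import queue
import threading
import time

class AsyncMirror:
    """Toy asynchronous mirror: the primary commits locally at once and
    ships each change to the mirror in the background. A synchronous
    mirror would instead block every commit until the mirror
    acknowledges it (the two-site commit mentioned above)."""

    def __init__(self):
        self.primary = []           # committed state on the primary
        self.mirror = []            # state on the mirror, lagging slightly
        self._log = queue.Queue()   # change log in transit to the mirror
        threading.Thread(target=self._apply_loop, daemon=True).start()

    def commit(self, change):
        self.primary.append(change)  # local commit returns immediately...
        self._log.put(change)        # ...the mirror is updated afterwards

    def _apply_loop(self):
        while True:
            self.mirror.append(self._log.get())

m = AsyncMirror()
m.commit("order #1001")
m.commit("order #1002")              # neither commit waits for the mirror
time.sleep(0.1)                      # give the background thread a moment
print(m.mirror)                      # the mirror has caught up
```

Because `commit` never waits for the mirror, the latency and distance of the network link do not affect the primary, which is exactly the property the asynchronous approach exploits.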
In the worst case, such logical errors bring all business processes to a halt; in the even worse case, work continues with faulty data, generating additional economic cost or even damage to the company's public image.
With this asynchronous data/application mirroring, any time offset between the production and mirror systems can be defined: the current production data is already physically available on the mirror system, but it is artificially held in a "time funnel" and only logically activated once the defined time offset expires. Logically, the mirror system therefore permanently lags behind production by exactly this offset, yet it already holds the data delta physically on its own storage and can apply it ad hoc if required.
If a logical error of any kind occurs in the production environment, the organizationally responsible party decides on the switchover: the application owner, the DR officer, or IT management, depending on the company's structure and processes. Technically, the best possible point in time for the dataset is then determined and activated on the mirror system.
The database or application on the mirror system can thus be made productively available at any point in time within the time funnel, with transactional accuracy and data consistency. Users and other accessing applications log on again and continue working with correct data.
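The time-funnel mechanism described above can be sketched roughly as follows. Again, this is a simplified model under assumed names, not the actual product code:

```python
import time
from dataclasses import dataclass, field

@dataclass
class TimeFunnelMirror:
    """Toy model of a time-delayed mirror: every change arrives physically
    right away, but is only activated logically once it is older than the
    configured offset. On failover, any point inside the funnel can be
    chosen as the recovery point."""
    delay_s: float = 3600.0                     # configured time offset

    # (timestamp, change) pairs received but not yet logically active
    funnel: list = field(default_factory=list)
    # logically applied changes, i.e. the visible state of the mirror
    active: list = field(default_factory=list)

    def receive(self, change, ts=None):
        # The change is now physically present on the mirror's own storage.
        self.funnel.append((time.time() if ts is None else ts, change))

    def tick(self, now=None):
        # Regular operation: activate everything older than the offset.
        now = time.time() if now is None else now
        self.activate_up_to(now - self.delay_s)

    def activate_up_to(self, cutoff_ts):
        # Failover: the responsible owner picks the best point in time,
        # e.g. just before a logical error corrupted the data.
        remaining = []
        for ts, change in self.funnel:
            if ts <= cutoff_ts:
                self.active.append(change)
            else:
                remaining.append((ts, change))
        self.funnel = remaining

m = TimeFunnelMirror(delay_s=3600)
m.receive("booking A", ts=1000)
m.receive("booking B", ts=2000)
m.receive("corrupt update", ts=3000)   # a logical error on the primary
m.activate_up_to(2500)                 # switch to the state just before it
print(m.active)                        # ['booking A', 'booking B']
```

The key property is that `activate_up_to` only moves data that is already on the mirror's storage, so the recovery point can be chosen and activated ad hoc, without re-transferring anything from the damaged production system.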
Another advantage of this asynchronous data and application mirroring is that latency is not an issue, because the production system does not have to wait for the mirror system's commit. This also enables practical and economically attractive disaster recovery concepts with long distances and low bandwidth and QoS requirements on the network lines between the systems. The failover systems can be operated not only in the company's own data centers, but also, for example, as a service at a "friendly company" or at a service provider located any distance away, which is particularly common among medium-sized companies. The distance between the productive site and the failover site is thus no longer limited by the reach of dark-fiber, campus, or metro-cluster technologies, which usually amounts to only a few kilometers. Asynchronous mirroring can be extended as far as business requirements and corporate structure demand, even onto different tectonic plates. This makes disaster recovery concepts possible that also take effect in large-scale disasters and keep IT operations running across countries, regions, or even worldwide.
In addition, architecture-independent data/application mirroring frees users from the single-point-of-failure dilemma: beyond the shared-nothing architecture already recommended, different hardware architectures and infrastructures are also supported across the environments involved. Here, economic interests must be weighed alongside technological ones.
Homogeneous architectures require less maintenance, but faulty drivers, firmware patches, or controller software then affect not just individual environments but all of them. Commercial considerations also play a role in the requirements for productive and emergency environments: it is often sufficient for only the productive system to be designed for permanent high-performance operation. The failover system can be sized smaller; it just has to be "good enough" for a scenario that will hopefully never occur and, if it does, is only temporary.
In practice, these considerations often mean that, as part of the usual hardware cycle, the "old" productive system continues to operate as the new failover system. Many companies therefore opt for a middle ground between homogeneous and heterogeneous architecture, defining at least two hardware standards, often with components from different manufacturers.
Would you like to read more about various IT topics, for example what exactly high availability and business continuity mean? Then feel free to visit our Libelle IT Blog or follow us on LinkedIn.