Data virtualisation
The switch to a new system, without high-cost data migration
Last updated: 19 December 2018

What you’ll learn in 4.5 minutes:

  • The complexity of data migration when transitioning to a new system
  • Why data virtualisation takes away this complexity, for the most part
  • How to take the first step towards a more flexible data architecture

 

For most companies, high-cost data migration is the biggest hurdle when transitioning to a new ERP, CRM, financial or order system. After all, unless all the information migrates, the new application cannot function, and your data warehouse with all the accompanying reports and dashboards will stop working. Right? Actually, no. Many companies labour under the misconception that they must migrate all data from the existing system to the new one, even though the historical data is no longer relevant to the operational process. Of course, you’ll still need that information for analysis and reporting purposes, but why solve that with a complex migration between systems rather than at the very point where the data is used: close to the reports, analyses and dashboards?

A time-consuming solution, in the wrong place

Let me give you a quick recap of the traditional approach to data migration, which many companies seem to choose automatically. When transitioning to a new system, companies usually consider two options. The first is migrating all data from the existing system to the new system. This is a very complex operation, because data must be translated between two very complicated data models (in some cases involving thousands of tables). Then, ETL processes must be developed to give all (new and migrated) data a place in the data warehouse and data marts. With a bit of bad luck, you’ll also have to adjust the data warehouse’s data model and, as a result, the reports and dashboards as well. A massive operation that can take months or even years!


The other option is to let the old and new systems exist alongside each other. Admittedly, this means you don’t have to migrate data between the two systems, but all you’re doing is shifting the complexity. After all, you’ll have to combine two data streams into one information model in the data warehouse. This usually coincides with adjustments to the data model, data marts and reports. You don’t have to be an IT expert to understand how difficult this is. Sometimes, it’s nearly impossible.


An added complication is that important systems often can’t be migrated in one fell swoop. Instead, the migration must be phased across departments, regions, user groups or functionalities. In other words, the data required for business intelligence applications must be retrieved from two systems over a long period of time. So it isn’t a one-off conversion, but a gradual process during which the underlying data sources and (parts of the) reports and dashboards change step by step.

The problem with both options is that we’re trying to solve a problem at the end of the data landscape (how do we retrieve the old and new data for BI solutions?) with data migration at the landscape’s origin. Not only is this cumbersome, but companies also tend to underestimate the time and cost of getting the information service back up and running. During the transition to a new system, all kinds of problems pop up: reports, dashboards and analyses become incomplete and unreliable. Fixing this can take months or even years.

 

Fast and smooth

Wouldn’t it be much easier if you only had to migrate the master data the new system needs to function (such as client, product or pricing data) from your old system to your new one? No translating transaction or process data from one complex data model to another, and no adjusting your data warehouse and ETL processes? It might sound too good to be true, but it’s entirely possible in practice: stop physically duplicating data and use data virtualisation technology to integrate data from the various sources virtually. This approach lets you create a ‘data hub’ that offers data from the data warehouse, the old system and the new system as one logical whole, without physically moving the data. Where possible, you can virtualise existing data marts straightaway, simplifying your architecture in the process. Of course, you’ll still have to apply transformation rules to integrate the data, but because everything is virtual, the process is much simpler and much faster. You end up with one central location where all data integration takes place via virtual database tables, instead of numerous physical data streams and step-by-step data integration.
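To make the idea of a virtual table concrete, here is a minimal sketch in Python. Two SQLite in-memory databases stand in for the old and new systems (the schema names, table and column names are assumptions purely for illustration); a temporary view presents both as one logical whole, and no data is copied between them:

```python
import sqlite3

# Hypothetical setup: two attached in-memory databases stand in for
# the old and new operational systems.
hub = sqlite3.connect(":memory:")

# "Old system" with its own orders table.
hub.execute("ATTACH DATABASE ':memory:' AS old_sys")
hub.execute("CREATE TABLE old_sys.orders (id INTEGER, amount REAL)")
hub.execute("INSERT INTO old_sys.orders VALUES (1, 100.0), (2, 250.0)")

# "New system" with the same logical data in its own schema.
hub.execute("ATTACH DATABASE ':memory:' AS new_sys")
hub.execute("CREATE TABLE new_sys.orders (id INTEGER, amount REAL)")
hub.execute("INSERT INTO new_sys.orders VALUES (3, 75.0)")

# The virtual table: a temporary view that unions both sources and tags
# each row with its origin. Queries run against the sources at request
# time; nothing is physically moved or duplicated.
hub.execute("""
    CREATE TEMP VIEW all_orders AS
    SELECT 'old' AS source, id, amount FROM old_sys.orders
    UNION ALL
    SELECT 'new' AS source, id, amount FROM new_sys.orders
""")

rows = hub.execute(
    "SELECT source, id, amount FROM all_orders ORDER BY id"
).fetchall()
print(rows)
```

Reports and dashboards would query `all_orders` only; which physical system a row lives in becomes an implementation detail that can change as the migration progresses.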


It should be clear that a gradual migration from the existing to the new system offers major advantages here. After all, you must change your data streams each time you bring a new part of the new system into use. Doing this via the data virtualisation platform’s virtual tables, instead of via numerous physical data models and ETL (mapping) processes, makes a substantial difference. Apart from an enormous gain in implementation speed, you also save a great deal of time, each time, on testing and commissioning the changes.

Another useful feature is that a data virtualisation platform makes it easy to provide insight into the data’s origin. Users can see which data sources their information comes from and which transformations were applied along the way.
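The lineage idea can be illustrated with a toy sketch (this is not any specific platform’s API; the names, source labels and transformation rule are invented for the example). The point is simply that a virtual result set can carry its own provenance metadata alongside the rows:

```python
from dataclasses import dataclass, field

@dataclass
class LineageResult:
    """A result set that carries its own provenance metadata."""
    rows: list
    sources: list = field(default_factory=list)          # where the data came from
    transformations: list = field(default_factory=list)  # which rules were applied

def virtual_customers(customers_old, customers_new):
    """Combine two customer lists virtually and record the lineage."""
    combined = customers_old + customers_new
    # Example transformation rule: normalise names to title case.
    normalised = [{**c, "name": c["name"].title()} for c in combined]
    return LineageResult(
        rows=normalised,
        sources=["legacy CRM", "new CRM"],
        transformations=["UNION ALL", "normalise name casing"],
    )

result = virtual_customers(
    [{"id": 1, "name": "alice jones"}],   # from the old system
    [{"id": 2, "name": "BOB SMITH"}],     # from the new system
)
print(result.sources)          # which systems fed this report
print(result.transformations)  # which rules were applied
```

A user looking at a dashboard built on `result` could thus see that the figures combine the legacy and new CRM, and that a name-normalisation rule was applied on the way.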

 

A long-term solution

Data virtualisation not only makes implementing a new system less complex and less time-consuming; this approach is also the first step towards a data architecture in which data from operational systems is no longer combined physically, but virtually. Each time you implement a new system or link a new data source, you extend the virtual data hub and gradually part with your traditional data warehouse. The result is a much simpler architecture in which data sources are combined in one virtual layer. This gives you flexibility, lets you respond to changes faster, makes it possible to offer data in real time and substantially simplifies data management. Moreover, data virtualisation uses the computing power of all systems in the landscape to guarantee the best possible performance. In short, data virtualisation lets you implement your new system in a much simpler way, without complicated data migration.

 

How much time would your organisation save if you could say goodbye to data migration?
