Organisations today hold a substantial amount of data; for many it has built up over a very long period, well over 20 years in some cases.
Typically, long-standing organisations will have implemented a series of ‘new’ repositories over this time to handle their future needs, but inevitably these have come and gone, or remain in use only by small pockets of the organisation.
Ultimately, they have information everywhere, in different silos, of varying ages, and of varying quality.
Just about every organisation has a desire to take its data to the cloud, whether driven by a strategic directive, a push to reduce on-premises infrastructure, or the appeal of cloud-based collaboration platforms such as Office365.
However, the legacy data challenge often leads to information paralysis, where organisations simply don’t know where, or how, to start addressing the problem.
This can lead to a ‘just move it all’ lift-and-shift approach, but bitter experience and many migration horror stories confirm that this is not the optimal way.
We believe there are two stages to any cloud migration, and it’s vital that organisations approach them in a structured way, not least because doing so significantly improves both the data set to be migrated and the success of the overall project.
The first stage is understanding what data you have, regardless of whether you are thinking of going to the cloud next month or next year. This lets you gauge the size and shape of the challenge, and plan both what data to migrate and how to migrate it.
The second stage is the migration itself: identifying and classifying the data and moving it into the chosen destination.
The first hurdle to overcome, therefore, is the question of why bother with the first stage at all. Why not simply lift the data you currently have into the cloud? For us, there are four main reasons.
- COST: It’s the old adage, ‘Junk in, Junk out’. If you have lots of data in disparate sources and migrate all of it to your new cloud environment, you are going to be paying for a lot of storage you don’t actually need. On average, we find that 70% of data is what we call DROT: duplicate, redundant, obsolete, and trivial data (one way to get a rough feel for the duplicate portion is sketched after this list). Why migrate almost three-quarters of data you don’t require as part of your transformation, when the money could be spent elsewhere?
- RISK: If you don’t know what you have, then you don’t know what it contains. By migrating all your information you could be introducing new risk to the organisation by bringing the chaos with you, as well as perpetuating the historical risk your data already carries. Perhaps it’s information that should have been deleted, or data that requires restricted access.
- GDPR: Closely tied to risk is GDPR. Just last week we saw the first GDPR fines imposed (£282m across just two organisations). If you transfer your existing data without understanding the GDPR risk associated with it, you are potentially exposing the organisation to huge penalties and reputational damage.
- EFFICIENCY: The final reason is organisational and user productivity. If you migrate the 70% of your data that is essentially ‘rubbish’, then users have to sift through a lot of chaos to do their day-to-day work. All you are doing is moving the problem from one place to another, without improving what you have.
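To make the first stage a little more concrete, here is a minimal, illustrative sketch in Python (not a description of our tooling) of how a first-pass scan of a legacy file share might estimate the duplicate slice of DROT by hashing file contents. The script name and share path are hypothetical, and a real assessment would also use metadata, ownership and retention rules to find redundant, obsolete and trivial content, which a simple hash cannot capture.

```python
import hashlib
import os
import sys
from collections import defaultdict


def scan_for_duplicates(root):
    """Group every readable file under `root` by a hash of its contents."""
    by_hash = defaultdict(list)  # content hash -> list of (path, size)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                digest = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        digest.update(chunk)
                by_hash[digest.hexdigest()].append((path, os.path.getsize(path)))
            except OSError:
                # Unreadable files are skipped in this sketch; a real
                # assessment would log them for follow-up instead.
                continue
    return by_hash


def summarise(by_hash):
    """Report how much storage is consumed by duplicate copies."""
    total = sum(size for group in by_hash.values() for _path, size in group)
    duplicated = sum(
        group[0][1] * (len(group) - 1)  # every copy beyond the first
        for group in by_hash.values()
        if len(group) > 1
    )
    if total:
        print(f"Scanned {total / 1e9:.1f} GB in total")
        print(f"Duplicate copies account for {duplicated / 1e9:.1f} GB "
              f"({100 * duplicated / total:.0f}% of the scanned data)")


if __name__ == "__main__":
    # Hypothetical usage: python drot_scan.py /mnt/legacy-share
    summarise(scan_for_duplicates(sys.argv[1] if len(sys.argv) > 1 else "."))
```

Even a rough figure like this helps size the storage saving on offer before any migration tooling is chosen.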
When an organisation decides to adopt the cloud, the migration from legacy systems must be as high quality as possible. This means the project must form part of an overall process of gaining control of the data.
A migration carried out without forethought will potentially compound an organisation’s data issues rather than ease them, and will not deliver the anticipated benefits of the digital transformation.
For more information, contact us on info@automated-intelligence.com
(This blog first appeared as part of techUK’s 2019 Cloud Week)