Authored by: Marcus Loh, CTO, APJ, Cohesity
Wasting valuable energy on ineffective activities keeps us from doing the meaningful work that delivers the most value. In the business world, this equates to being bogged down by low-value tasks and confusing busy work with productivity. Beyond smart time management, organisations need to be smart in the way they back up and recover their data. They should shift focus away from low-value tasks and manual work to high-value tasks that contribute to serving their customers better. Despite this seemingly common knowledge, organisations still waste time on low-value tasks, and IT is one of the worst offenders.
One area of IT where this is most prevalent is data backup and recovery, where manual tasks have long dominated the process. A clear example is a highly trained IT employee who is expected to drive 15-20 miles during a workday to an offsite tape repository.
That is a misuse of time and tape. IT teams can do better. Technological advancements have given organisations better options for backing up, storing and securing their data, yet many businesses remain stuck in outdated approaches.
Backup and recovery are critical business processes that have evolved to deliver quantifiable business value, but only if allowed to. Currently, backup and recovery are largely under-resourced and poorly managed. At best they are tacked on at the end of a project, at worst ignored altogether, leading to outdated and siloed product offerings with varying levels of efficacy, and no guarantee of a minimum level of resilience. This must change if businesses are to reap the benefits.
The need to be educated
Data loss can happen in innumerable ways, from an accidental coffee spill to hardware failure. Some of the more common causes include accidental errors, employee or competitor theft, and computer malfunctions. To prevent data loss and the ripple effect that stems from it, businesses need to approach backup and recovery consistently as part of their overall data management strategy. This requires knowing what needs protection and where it is located.
These days, applications and their data can increasingly be found almost anywhere – on desktops, servers, Storage Area Networks (SAN), distributed Network-Attached Storage (NAS) appliances and multiple cloud platforms. Moreover, without effective copy controls, data can be massively duplicated, resulting in both silos and multiple copies.
One solution would be to back up everything, but that is rarely practical, which leads to rationing and prioritising backup resources. That prioritisation needs to be business-led rather than technology-led, with priorities set by asking questions including:
What is the business impact of losing data and these applications?
Can the business operate without them in the long term?
How can we mitigate the impact and/or downtime?
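Answers to those questions can be turned into simple backup tiers. As a minimal sketch, the tier names, impact scale and thresholds below are illustrative assumptions, not a standard scheme:

```python
# Hypothetical sketch: mapping business-impact answers to a backup tier.
# The tiers, impact scale (1-5) and downtime thresholds are assumptions
# for illustration, not drawn from any particular product or standard.

def backup_tier(impact: int, max_tolerable_downtime_hours: float) -> str:
    """impact: 1 (low) to 5 (business-critical)."""
    if impact >= 4 or max_tolerable_downtime_hours <= 1:
        return "tier-1: continuous replication"
    if impact >= 2 or max_tolerable_downtime_hours <= 24:
        return "tier-2: daily snapshot"
    return "tier-3: weekly backup"

print(backup_tier(5, 0.5))  # tier-1: continuous replication
print(backup_tier(1, 72))   # tier-3: weekly backup
```

However the tiers are defined, the point is that the inputs (impact, tolerable downtime) come from the business, and the technology choice follows from them.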
A process, not a job
A major shift in mindset is also needed to accommodate fresh solutions. Leave the concepts of backup “jobs” and “windows”, and of taking systems offline for backups to run, in the past. Technology has progressed to enable live snapshots, for example, to be taken in real time without impacting availability.
With a wealth of customisable alternatives available for use on-premises, at remote sites or in the cloud, you can forget about tapes. Moreover, it is feasible to continuously replicate business-critical applications and simply fail over whole systems in the event of a problem. This can even be purchased as a service.
However, IT teams should never forget the basics, such as the 3-2-1 rule of backup: keep three complete copies of data, with two stored locally but on different media, plus at least one off-site, which could include the cloud if you use WORM (write once, read many) style protection.
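The 3-2-1 rule is easy to state and easy to drift away from, so it is worth checking mechanically. A minimal sketch of such a check, assuming a simple illustrative record of each copy (the Copy structure and media names are not a real product API):

```python
# Hypothetical sketch: checking a dataset's copies against the 3-2-1 rule.
# The Copy structure and media labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Copy:
    media: str      # e.g. "disk", "tape", "cloud-worm"
    offsite: bool

def meets_3_2_1(copies: list[Copy]) -> bool:
    """Three copies, two local copies on different media, one off-site."""
    local = [c for c in copies if not c.offsite]
    offsite = [c for c in copies if c.offsite]
    return (
        len(copies) >= 3
        and len({c.media for c in local}) >= 2
        and len(offsite) >= 1
    )

copies = [Copy("disk", False), Copy("tape", False), Copy("cloud-worm", True)]
print(meets_3_2_1(copies))  # True: 3 copies, 2 local media, 1 off-site
```

A check like this can run as part of routine backup verification rather than being rediscovered during an incident.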
Neither should they forget the need to verify backups for consistency, data compliance and potential malware. This is all the more important given the increasingly insidious nature of ransomware attacks, which routinely infect the very backups organisations see as their main line of defence, as Cohesity saw with the recent attack on an ST Engineering US subsidiary.
Do not make assumptions. A common misconception, for example, is that service providers will automatically back up anything stored in the cloud. Unless you pay extra, there is no guarantee that they will, or that you will be able to recover lost data to your timescales. That applies as much to SaaS products (such as Microsoft 365 and Google G Suite) as it does to wider infrastructure platforms like Azure and AWS.
Moving forward: Introducing Automation
Lastly, but arguably most importantly, time can be saved by automating backup and recovery. This means automating as many backup and recovery processes as possible, no matter how simple or complex they may appear.
The most common recovery processes are not what might be thought of as disaster recoveries. They are more likely to involve single items, such as a file or an application. These can be automated using self-service tools that empower the requester to complete the recovery themselves.
Full disaster recoveries, however, are rarer and more difficult to automate. This is primarily because multiple applications, services and data sources must be recovered, and these often depend on one another, possibly across different platforms. Additionally, most recoveries require systems to be brought back online in a precise order.
Few organisations can rehearse such large-scale disaster recoveries against an accurate depiction of their latest environment, so the processes involved ought to be rigorously documented. This rarely happens with under-resourced IT teams, and whatever documentation they do produce is unlikely to be kept up to date. The answer is so-called runbook automation: an increasingly common tool that can specify recovery order and kick off the required recovery processes with a single click.
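At its core, a runbook of this kind is a dependency graph of services plus a recovery step for each. A minimal sketch of the idea, where the service names and the recover() stub are illustrative assumptions rather than any vendor's API:

```python
# Hypothetical runbook sketch: recover services in dependency order.
# Service names and the recover() stub are illustrative assumptions.
from graphlib import TopologicalSorter

# Each service maps to the services it depends on (which must come up first).
dependencies = {
    "database": set(),
    "auth": {"database"},
    "app-server": {"database", "auth"},
    "web-frontend": {"app-server"},
}

def recover(service: str) -> None:
    print(f"recovering {service}")  # placeholder for the real restore step

def run_recovery(deps: dict[str, set[str]]) -> list[str]:
    # Topological sort guarantees every dependency is recovered before
    # the services that rely on it.
    order = list(TopologicalSorter(deps).static_order())
    for service in order:
        recover(service)
    return order

run_recovery(dependencies)
```

Encoding the order in code rather than in a stale document means the "single click" always replays the current, tested sequence.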
If you do not want your future to be bogged down by your past, you need a different approach.
It is 2020, and data is the new oil. Organisations must get to know their data better and understand how departments should use it, which builds a clearer picture of the requirements. Instead of concentrating on fractional improvements to existing processes, IT teams could make a larger impact and achieve more for the organisation by adopting recent technologies that deliver better results.
It is time for organisations to modernise their backup and recovery.