Cloud or Ground Control: The High Stakes Game of Data Storage Decisions
Written by Beni Sia, APJ Leader, Veeam
When the global pandemic hit, offices emptied and IT had a clear mandate: get employees online as soon as possible. We made great strides in doing so, but our region is now in flux. Businesses are debating whether workforces need to come back into the office, policies have changed, and IT requirements are becoming less clear. Careful data management was key during the pandemic; today, data mobility is the crucial piece that ensures businesses can respond swiftly. In this environment, businesses have myriad options to consider, including repatriating their data from the public cloud to a private cloud, re-evaluating their cloud strategies, or considering alternative data storage providers.
While long-term planning has its merits, the pandemic has taught us the importance of flexibility in optimizing a company’s financial, technical, and security requirements based on its current needs. By carefully evaluating the advantages and disadvantages of both cloud and ground control, the significance of data mobility becomes evident. It goes beyond facilitating smoother migrations; it has the potential to transform your business.
The Cloud Advantage Versus Ground Control
In the post-“Great Relocation” era, in which the Asia Pacific public cloud services market experienced a remarkable 36.3% growth in 2021, the demand for a continuous flow of services and data has exploded. The cloud, with its unparalleled scalability, allows businesses to dynamically expand or reduce their storage capacity based on demand. This flexibility not only reduces capital expenditure but also eliminates the constraints associated with physical infrastructure.
For companies with a remote workforce or a decentralized structure, leveraging the public cloud becomes increasingly advantageous. If a business has already transitioned away from physical infrastructure during the pandemic, the prospect of repurchasing and maintaining it may not be worth the investment. In such cases, companies still seek to optimize costs, but prefer to do so within a cloud environment. This optimization can be achieved by re-architecting systems into more cloud-native solutions such as Platform-as-a-Service (PaaS) or a managed database service. These solutions alleviate concerns about managing underlying hardware, operating systems, and patches, allowing businesses to focus on their core operations.
However, there are certain pitfalls that companies should take note of, including the risks of cloud “lock-in” and “lock-out.” Cloud lock-in occurs when integration with proprietary services and Application Programming Interfaces (APIs) becomes difficult to replicate elsewhere. Relying on vendor-specific skills and knowledge can limit a team’s ability to work with alternative cloud providers. Another consideration is “data gravity”: when a company relies heavily on a single cloud, it becomes arduous to migrate workloads in bulk to another platform.
Additionally, IT teams may inadvertently lock themselves out of other environments and clouds by building architectures that are incompatible or non-translatable elsewhere. While it may be possible to remove a workload from its current cloud, it may not seamlessly fit into any other environment or platform.
Other organizations are increasingly recognizing the value of migrating applications back on-premises as their employees transition back to working from the office. The escalating costs associated with cloud services may no longer appear justified, particularly when idle physical servers are readily available. In such instances, it becomes a logical choice for these organizations to repatriate their workloads and data back on-site, leveraging their existing hardware investment. These organizations may require greater control and security over their data, or have their own security measures, encryption protocols, and data management practices and policies to implement.
The flexibility of on-premises solutions allows for customized hardware configurations and network setups that optimize performance and scalability while minimizing latency. This level of customization enables organizations to tailor their infrastructure to meet their specific needs and ensure optimal operation.
The Transformative Power of Data Mobility
When it comes to choosing the best data storage configuration, there is no one-size-fits-all approach. Organizations that adopt hybrid or multi-cloud strategies have the flexibility to choose the most suitable environment for each workload on a case-by-case basis. However, this is not necessarily an easy task. Many businesses have faced significant challenges when migrating to the cloud for the first time, even with a basic “lift and shift” approach. Finding the right balance between on-premises and cloud environments is crucial, and that is where data mobility plays a vital role: it ensures that organizations have the option to move their workloads when necessary. Think about it this way: while you may not move houses frequently, it is beneficial to be able to move furniture out quickly when you want to renovate or upgrade.
Beyond facilitating easier migrations, data mobility is transformative. For instance, the ability to replicate and host workloads and applications enables teams to set up separate environments for activities like testing and analytics without impacting day-to-day operations. It empowers businesses to leverage and unlock the value of their vast amounts of data more effectively. By embracing data mobility, organizations can enhance their operational agility, optimize resource utilization, and extract valuable insights from their data assets.
Recoverability is the Lynchpin of Data Mobility
As organizations reassess and realign their cloud strategies, ensuring safe and seamless data movement and recovery between environments is crucial to avoid any potential loss or temporary unavailability of critical workloads.
Cyber incidents can range from small-scale issues like a deleted virtual machine to large-scale catastrophes such as site-wide failures, natural disasters, or ransomware attacks. Regardless of the scale of the incident, the key question is “Where will we recover to?” According to Veeam’s 2023 Ransomware Trends Report, 74% of APJ organizations plan to recover to cloud-hosted infrastructure or Disaster-Recovery-as-a-Service (DRaaS), while 73% plan to recover to servers within a data center. These two figures sum to well over 100%, which points to the heartening fact that most organizations’ disaster recovery and cyber resiliency strategies include both location types, depending on the crisis.
To ensure comprehensive protection, organizations must also ensure that they do not re-infect their systems during recovery. Thoroughly scanning data at every step of the recovery process is imperative. Organizations that adopt a combined approach of data verification and staged recovery can significantly reduce the risk of data compromise during the recovery stage. Fortunately, the same report found that close to half (44%) of organizations in APJ first restore to an isolated test area or “sandbox” before reintroducing workloads to production, as a preventive measure against re-infection.
Ultimately, it is business critical to have a prepared, seamless, and efficient plan for data movement to minimize downtime. The minutes and hours after a critical outage are not the ideal time to learn new lessons. Being prepared for any eventuality is the contingency that protects your customers, buffers damage to your brand’s reputation, and keeps your business running.