Databases are coming under strain as businesses deal with increased demand, with more services being digitised and remote work continuing. This is prompting IT professionals to rethink how they manage their databases and whether they should change course – especially with data volumes increasing dramatically as businesses become more customer-centric.
In an interview with DSA, Marc Linster, Chief Technology Officer at EDB, said that data flows into enterprises fast and furiously, and in a variety of shapes and formats: personalisation data, data about consumption and preferences, available stock in various geographies, package tracking, the interaction of retail buying behaviour with weather patterns, and more.
“An agile company, focused on digital transformation, wants to use this data immediately to react to changing market conditions, identify opportunities and act quickly,” said Marc.
And databases are vital to ingesting, managing, tracking, and querying all that data. However, how can companies decide what kind of database they need to manage it all while being cost-effective, flexible and innovative? As the interview went on, Marc explored these points further:
Freeing up costs and eliminating technical and contractual constraints are key to enabling agility. Budgets need to be reallocated from maintenance activities that merely preserve the status quo to projects that drive innovation.
“Also, as digital interaction becomes a key part of the value chain, the cost of delivering the digital service is increasingly important as a component of gross margin. As the digital part of the business grows, there is an increasing focus on cost-effectiveness and cost of transaction,” said Marc.
Hence, there is all the more reason to consider an open-source database that is supported by a vendor over legacy databases such as Oracle, IBM Db2 and Microsoft SQL Server – a move that can reduce database costs by as much as 90%.
Traditional database licensing agreements are notorious for being extremely restrictive and costly. Some agreements even prohibit the licensee from running the software on the cloud of their choice.
In the age of rapid digital transformation and agile development, solutions that constrain the customer are unacceptable. Businesses yearn for the ability to switch platforms, move to the cloud of their choice, and adopt microservices and containers when they think it’s right.
So, look for a database that can run virtually anywhere - in every cloud, in containers, and on all major operating systems and hardware platforms - whether supported by a vendor or self-managed.
In the past, enterprises stored tabular and numeric data in their databases. Digital transformation and customer intimacy have added geospatial data, document data, IoT telemetry, click streams and a variety of new requirements. Hence the importance of choosing a database that can manage multi-model data within one coherent transactional environment.
According to Marc, this is why Postgres continues to see a rise in adoption, outstripping even MongoDB. The reason is that, unlike most relational databases, Postgres is built on an object-relational architecture.
“It is the leader in combining relational with non-relational data: geo-spatial data, key-value pairs, document data, and custom data types. And because it’s true open-source, it costs less than proprietary relational and speciality databases where licensing is controlled by the vendor,” said Marc. “Postgres is also the most versatile, innovative, reliable, powerful, and agile database on the market today, easily scaling and adapting to modern IT environments. It’s the #1 database in containers, and a key component of all the cloud providers’ database services.”
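As a concrete illustration of this multi-model capability, a single Postgres table can mix classic relational columns with key-value, document and geospatial data, and update them all in one transaction. The table and column names below are invented for the example, and the `hstore` and PostGIS extensions are assumed to be installed:

```sql
-- Hypothetical schema mixing relational, key-value, document
-- and geospatial data in a single table.
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE store_inventory (
    id         bigserial PRIMARY KEY,
    sku        text    NOT NULL,            -- relational data
    stock      integer NOT NULL,
    attributes hstore,                      -- key-value pairs
    details    jsonb,                       -- document data
    location   geography(Point, 4326)       -- geospatial data (PostGIS)
);

-- One transaction updates the relational and document models coherently.
BEGIN;
UPDATE store_inventory
   SET stock   = stock - 1,
       details = jsonb_set(details, '{last_sale}', to_jsonb(now()::text))
 WHERE sku = 'ABC-123';
COMMIT;

-- Index the document column for fast containment queries.
CREATE INDEX idx_inventory_details ON store_inventory USING gin (details);
```

Because every model lives in the same transactional engine, there is no need to reconcile a separate document store with the system of record.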
He added that these advantages can be accelerated even further by using cloud-native databases.
The Critical Role of Cloud-Native Databases
Cloud-native databases are designed to be scalable and flexible, and to be deployed in dynamic environments such as public or private clouds and containers. Marc further explained that they are resilient, manageable and observable and, through automation, designed to run at large scale and to be easy to change.
“Practically speaking, cloud-native deployments improve speed, simplicity, and flexibility,” he said.
Cloud-Native Postgres enables developers and DevOps teams to expedite their software delivery projects across testing, staging and production. Compatibility with Kubernetes makes deploying databases in the cloud less labour-intensive for DevOps teams, and enables IT decision-makers to reallocate IT staff to more strategic tasks and to focus on going to market faster with the right architecture.
Unlike traditional databases that require external tooling to work in Kubernetes, Cloud-Native Postgres enables Postgres to work seamlessly in modern IT infrastructures. Processes, such as installations and updates, are automated, which greatly reduces the dependency on infrastructure management teams.
Cloud-Native Postgres enables organisations to deploy and manage Postgres databases wherever the business requires them, whether in a multi-cloud or a hybrid environment.
Moreover, with the rise of cyber threats, organisations can rest assured that Kubernetes has been designed with security in mind through a multi-layered approach known as the “4Cs”, inspired by the Defence in Depth (DiD) model. The 4Cs security model for cloud-native computing with containers organises security into four layers: Cloud, Cluster, Container and Code.
Marc elaborated that, “This complements the inherently secure development model of Postgres, which relies on a public code inspection process. Thousands of open-source community participants review and inspect the Postgres code and alert the community about any bugs or issues that may be present.”
Adding to that, security also lies in the hands of whoever deploys and manages the database. Marc advises IT teams to follow the stringent guidelines of a security technical implementation guide (STIG) to ensure that the Postgres database is configured securely.
The shift to cloud-native databases greatly reduces manual infrastructure work, at least when using advanced Kubernetes operators such as the one found in EDB Cloud-Native Postgres. While that saves costs, the biggest impact is on speed and agility. It used to take months to provision a new database server - now it takes minutes.
So, for clients who want to embark on the cloud-native database journey, EDB has developed a Cloud-Native Postgres operator for Kubernetes: a second-generation operator that automates many of the infrastructure tasks that traditionally had to be done manually by an infrastructure DBA, such as installing databases, creating clusters and running backups.
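To sketch what this declarative approach looks like in practice, an operator of this kind typically accepts a short manifest describing the desired cluster and then handles installation, replication and failover itself. The example below follows the conventions of the open-source CloudNativePG project; the exact `apiVersion`, `kind` and field names for EDB's own operator may differ by version, so treat it as illustrative rather than as EDB's current API:

```yaml
# Illustrative Cluster manifest in the style of a CloudNativePG-type operator.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-cluster        # hypothetical cluster name
spec:
  instances: 3                 # one primary plus two replicas
  storage:
    size: 10Gi                 # persistent volume per instance
```

Applying a manifest like this with `kubectl apply -f cluster.yaml` is, in principle, all that is needed: the operator creates the pods, initialises the database and configures streaming replication, replacing what used to be days of manual DBA work.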
“EDB’s Cloud-Native Postgres is an ideal entry point for somebody who wants to engage with cloud-native databases, get into DevOps, or work with microservices,” Marc said.