Managing Data in the Cloud: The Challenge of Complex Environments for Real-Time Applications
Hybrid and multicloud strategies bring flexibility to your organization's data management. However, they also introduce new challenges and can create inefficiencies that should be addressed.
- By Karen Krivaa
- March 29, 2021
Cloud computing is one of the key enablers of digital transformation. Cloud use is becoming mainstream, and in turn the cloud market continues to grow at a rapid rate as enterprises expand their use of one or more cloud vendors.
Even as more enterprises leverage the cloud, they still retain much of their data on premises to meet business requirements and maintain regulatory compliance. To manage the complexity of these hybrid deployments, it is important to have a carefully thought-out data strategy and a flexible infrastructure that optimizes network resources while maintaining the speed and scale necessary for digital applications and analytics.
Growing Cloud Complexity
Many organizations store their data across clouds or adopt a hybrid approach -- storing data on premises and in the cloud. There are several advantages to using more than one cloud vendor. For example, a multicloud strategy prevents vendor lock-in and lets enterprises negotiate better terms, including payment flexibility, adjustable contracts, and customizable capacity. Adding a cloud vendor with a presence close to where the data originates also improves application performance, and it is sometimes necessary for complying with regulations that require data to be stored and processed locally.
A hybrid strategy, storing data both on premises and in the cloud, is often required. The cloud is leveraged for some applications and services, and it can also absorb sudden surges in resource demand, a practice known as cloud bursting. A hybrid strategy also enables enterprises to gradually migrate applications and services from a legacy on-premises infrastructure to the cloud, reducing risk and the potential for service disruptions.
Hybrid and multicloud strategies bring flexibility to an organization's data management. However, they also introduce new challenges and can create inefficiencies that an enterprise should address, including:
- Maintaining processing speeds needed for real-time and near-real-time applications
- Optimizing network efficiency to reduce costs, including the high egress fees charged for moving data out of the cloud
- Ensuring data consistency when replicating data across environments and regions
- Complying with regulations that vary across industries and countries
- Providing scalability, high availability, and resilience at the platform level
A Successful Data Management Strategy
One strategy to simplify data management is to implement a cloud-native platform that aggregates data from existing on-premises legacy systems of record and data stores into a low-latency, scalable storage and compute data fabric. This type of architecture can be viewed as a next-generation operational data store or, as Gartner refers to it, a digital integration hub. By decoupling APIs from the legacy systems, this infrastructure supports the rapid development of microservices that require always-on, real-time response. Co-locating the data with the services (business logic) in memory reduces data movement and significantly lowers latency.
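To make the pattern concrete, here is a minimal sketch, in Python, of the read path such an architecture implies: the API tier serves requests from an in-memory fabric hydrated from the legacy system of record, so no request touches the legacy system directly. All class and function names are hypothetical illustrations, not a specific product's API.

```python
# Illustrative sketch of a digital integration hub read path (hypothetical
# names throughout): reads are served from an in-memory tier kept in sync
# with the legacy system of record, never from the legacy system itself.

class InMemoryFabric:
    """Simulates the low-latency storage/compute tier."""
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

class LegacySystemOfRecord:
    """Simulates a slow on-premises system of record."""
    def __init__(self, records):
        self._records = records

    def fetch(self, key):
        # In reality this would be a slow JDBC/mainframe call.
        return self._records.get(key)

def hydrate(fabric, legacy, keys):
    """One-time (or change-data-capture driven) load into the fabric."""
    for key in keys:
        fabric.put(key, legacy.fetch(key))

def customer_api(fabric, customer_id):
    """The microservice endpoint reads only from the in-memory tier."""
    record = fabric.get(customer_id)
    return record if record is not None else {"error": "not found"}

if __name__ == "__main__":
    legacy = LegacySystemOfRecord({"c-42": {"name": "Acme", "tier": "gold"}})
    fabric = InMemoryFabric()
    hydrate(fabric, legacy, ["c-42"])
    print(customer_api(fabric, "c-42"))  # served in memory, no legacy call
```

Because the API layer depends only on the fabric, new microservices can be built and scaled without touching the legacy systems beneath them.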
The importance of a cloud-native platform and Kubernetes is clear: they allow organizations to develop once and deploy anywhere. A data platform deployed and managed with a Kubernetes operator, which uses the Kubernetes API to create, configure, and manage the application, supports auto-scaling with high availability and redundancy. Leveraging artificial intelligence for IT operations (AIOps) to autonomously scale up (for operational workloads) or out (for analytics) at just the right time ensures the highest levels of service at lower cost.
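As a rough illustration of the scale-up-versus-scale-out decision described above, the following sketch encodes it as simple threshold logic. The metric names and thresholds are assumptions for the example; a real AIOps layer would learn these signals and act through a Kubernetes operator or autoscaler rather than in application code.

```python
# Minimal sketch of the scaling decision described above; thresholds, metric
# names, and the scale_up/scale_out actions are hypothetical.

def plan_scaling(metrics, max_cpu=0.75, max_query_latency_ms=50):
    """Return scaling actions based on observed workload metrics.

    Scale *up* (bigger pods) when operational, transactional load is
    CPU-bound; scale *out* (more replicas/partitions) when analytical
    query latency degrades.
    """
    actions = []
    if metrics["transactional_cpu"] > max_cpu:
        actions.append(("scale_up", "increase pod CPU/memory limits"))
    if metrics["analytics_p95_latency_ms"] > max_query_latency_ms:
        actions.append(("scale_out", "add replicas/partitions"))
    return actions or [("hold", "no change needed")]

print(plan_scaling({"transactional_cpu": 0.82,
                    "analytics_p95_latency_ms": 64}))
```

The point of the distinction is cost: adding replicas helps parallel analytical queries, while a single hot transactional partition needs more resources per pod, and acting on each signal separately avoids over-provisioning both.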
Another important factor when implementing a multicloud strategy is the ability to replicate data between on-premises data centers and the cloud, among different cloud vendors, or across multiple regions efficiently and in real time. This allows applications and users (such as analysts) to access current, consistent data from every environment. To minimize data transfer delays and network costs, you can compress data and transmit only the changes made rather than uploading the entire data store. If workloads are duplicated across two or more clouds to increase reliability, each copy can work independently between synchronizations, minimizing the data transfer delays that can slow down response times. For privacy and regulatory compliance, organizations should be able to designate data fields, on premises and in the cloud, that must be anonymized or encrypted during storage and replication.
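The sketch below illustrates these three replication ideas together: computing a delta of only the changed records, compressing the payload, and anonymizing designated sensitive fields before data leaves the source environment. The field names and the hash-based anonymization are assumptions for illustration, not any particular vendor's mechanism.

```python
# Illustrative sketch of delta-based replication: send only changed records,
# compress the payload, and anonymize sensitive fields before they leave the
# source environment. Field names and SHA-256 anonymization are assumptions.
import hashlib
import json
import zlib

SENSITIVE_FIELDS = {"ssn", "email"}  # fields to anonymize before replication

def anonymize(record):
    """Replace sensitive field values with a one-way hash."""
    out = dict(record)
    for field in SENSITIVE_FIELDS & out.keys():
        out[field] = hashlib.sha256(out[field].encode()).hexdigest()[:16]
    return out

def build_delta(previous, current):
    """Keep only records that are new or changed since the last sync."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

def replication_payload(previous, current):
    """Anonymize the delta and compress it for transfer."""
    delta = {k: anonymize(v) for k, v in build_delta(previous, current).items()}
    return zlib.compress(json.dumps(delta).encode())

prev = {"c-1": {"email": "a@x.com", "tier": "gold"}}
curr = {"c-1": {"email": "a@x.com", "tier": "gold"},
        "c-2": {"email": "b@y.com", "tier": "silver"}}
payload = replication_payload(prev, curr)
print(len(payload), "bytes sent")  # only c-2 crosses the network, compressed
```

Shipping only the compressed delta keeps egress fees and synchronization lag proportional to what actually changed, and anonymizing at the source means sensitive values never cross environment boundaries in the clear.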
Final Words
As enterprises accelerate their digital transformation projects, they are migrating to hybrid cloud and multicloud architectures. As a result, they face the challenge of rapidly accessing their data and gleaning real-time insights from large data sets stored in disparate sources across multiple geographies. By implementing a cloud-native, in-memory data fabric, they can support growing volumes of data and adapt to fast-changing regulations while achieving the speed and scale they need to innovate rapidly.
About the Author
Karen Krivaa is the vice president of marketing for GigaSpaces, where she is responsible for all things marketing, including product marketing, marketing strategy, digital marketing, and communicating the value of GigaSpaces platforms. You can reach the author via email.