In-Memory Computing for Turbo-Powered Analytics

Embracing in-memory computing and data virtualization can move your business from laggard to leader.

Speed-to-market is a key characteristic that separates leading businesses from the laggards. Successful organizations are quicker to adopt innovative technologies and business practices, injecting speed into their operations and beating competitors in delivering goods and services to customers. Need proof? Look no further than Amazon.

One technology that promises to deliver such speed is in-memory computing, which Techopedia defines as a developing technology for processing data stored in an in-memory database. In-memory computing stores information in the main random access memory (RAM) of dedicated servers rather than in complicated relational databases operating on comparatively slow disk drives.
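
To make the speed difference concrete, here is a minimal Python sketch (not from the article; the file name and data are invented) that holds the same data set both on disk, in SQLite, and in RAM, in a plain dictionary, then times point lookups against each. A dictionary is a crude stand-in for a real in-memory platform, but the latency gap is the point:

```python
import sqlite3
import time

# Illustrative only: build a small on-disk table and an equivalent
# in-memory dictionary holding the same 100,000 rows.
conn = sqlite3.connect("readings.db")  # hypothetical on-disk database
conn.execute("CREATE TABLE IF NOT EXISTS readings (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany(
    "INSERT OR REPLACE INTO readings VALUES (?, ?)",
    [(i, i * 0.5) for i in range(100_000)],
)
conn.commit()

in_memory = {i: i * 0.5 for i in range(100_000)}  # the same data held in RAM

start = time.perf_counter()
for i in range(0, 100_000, 10):  # 10,000 point lookups against the disk store
    conn.execute("SELECT value FROM readings WHERE id = ?", (i,)).fetchone()
t_disk = time.perf_counter() - start

start = time.perf_counter()
for i in range(0, 100_000, 10):  # the same lookups against RAM
    _ = in_memory[i]
t_ram = time.perf_counter() - start

print(f"disk-backed: {t_disk:.4f}s  in-memory: {t_ram:.4f}s")
```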

Touted as the antidote to slow disk-access technologies, in-memory computing is expected to grow from almost $7 billion in 2016 to a whopping $34 billion by 2022 -- a compound annual growth rate of more than 30 percent, according to market research firm Mordor Intelligence. Many technologies, including databases and data virtualization, are jumping on the bandwagon to cure slow BI. By moving processing into memory, these technologies can support large volumes of data at high velocity.

Because Speed Is Not Negotiable

Let's take a large pharma company as a case in point. It manufactures large quantities of extremely costly drugs and closely monitors every machine to ensure that the manufacturing facility as a whole is operating at the required performance level. To accomplish this feat, the company analyzes data feeds from each machine in real time. This extremely large data set takes time to process, and any delay can compromise the accuracy of the performance analysis, rendering it obsolete by the time it arrives.
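
As a rough illustration of the kind of real-time check involved, the sketch below keeps a rolling window of recent readings per machine in RAM and flags any machine whose average drifts below a target. The machine IDs, window size, and threshold are all assumptions for illustration:

```python
from collections import defaultdict, deque

WINDOW = 60          # readings retained per machine (assumed: one per second)
THRESHOLD = 0.80     # assumed minimum acceptable efficiency

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(machine_id: str, efficiency: float) -> None:
    """Record a reading and alert if the rolling average falls below target."""
    w = windows[machine_id]
    w.append(efficiency)
    avg = sum(w) / len(w)
    if avg < THRESHOLD:
        print(f"ALERT: {machine_id} rolling efficiency {avg:.2%} below target")

# Example feed: the third reading drags the average under 80 percent.
ingest("press-01", 0.92)
ingest("press-01", 0.71)
ingest("press-01", 0.70)
```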

In this situation, in-memory processing can play an important role as it leverages clusters of additional processors to relieve a company's core systems. However, as promising as this technology may be, the underlying architecture is equally important. Without the proper architecture, in-memory processing cannot deliver maximum benefits.

The Optimal In-Memory Architecture

Modern architectures employ data virtualization at the core, working in tandem with in-memory processing. Normally, data virtualization aggregates views of data across multiple heterogeneous sources and delivers the results to business users via BI and reporting tools. As the overarching access point to those sources, however, a data virtualization layer can also orchestrate computations across the infrastructure, making it well suited to work with in-memory processing for maximum efficiency.
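
The sketch below shows the aggregated-view idea in miniature, assuming two invented sources (a relational table and a CSV feed): one access function presents a unified view without physically consolidating the data, which is roughly what a data virtualization layer does at enterprise scale:

```python
import csv
import io
import sqlite3

# -- Source 1: a relational database (hypothetical schema) --
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE telemetry (machine_id TEXT, efficiency REAL)")
db.executemany("INSERT INTO telemetry VALUES (?, ?)",
               [("press-01", 0.92), ("press-02", 0.78)])

# -- Source 2: a CSV export from another system (invented data) --
csv_feed = io.StringIO("machine_id,efficiency\nmixer-07,0.85\nmixer-08,0.69\n")

def virtual_machine_view():
    """Return a unified view over both sources, tagged by origin."""
    rows = [{"machine_id": m, "efficiency": e, "source": "plant_db"}
            for m, e in db.execute("SELECT machine_id, efficiency FROM telemetry")]
    csv_feed.seek(0)
    rows += [{"machine_id": r["machine_id"],
              "efficiency": float(r["efficiency"]),
              "source": "plant_csv"} for r in csv.DictReader(csv_feed)]
    return rows

# A BI tool queries the single view rather than each source directly.
for row in virtual_machine_view():
    print(row)
```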

For example, at runtime the data virtualization layer can determine that a given processing job is too large for a specific source to handle. If it is, the layer can hand off the job to an in-memory system on the fly. This has the potential to accelerate processing a hundredfold, enabling stakeholders to analyze data in real time as it streams in from the source.
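
A minimal sketch of that runtime decision might look like the following, where the row threshold and engine names are hypothetical:

```python
# Hypothetical cost-based handoff: estimate a job's size at runtime and
# route it either to the originating source or to an in-memory cluster.
ROW_LIMIT = 5_000_000  # assumed threshold beyond which the source struggles

def route_query(query: str, estimated_rows: int) -> str:
    if estimated_rows > ROW_LIMIT:
        return f"in-memory cluster <- {query}"   # offload the heavy job
    return f"source database <- {query}"         # small enough for the source

print(route_query("SELECT AVG(efficiency) FROM telemetry", estimated_rows=80_000_000))
print(route_query("SELECT * FROM machines WHERE id = 7", estimated_rows=1))
```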

Why In-Memory Computing Is the New Shiny Object

In-memory computing promises to deliver results at much greater speed. This is particularly important as data has grown in both variety and volume. As companies begin to monitor operational systems for peak efficiency -- as in the case of the pharmaceutical manufacturing plant -- older systems cannot handle the large volumes of data these machines generate in real time. Technologies such as in-memory computing, which accelerate the delivery of results while keeping pace with immense data volumes, become imperative for taking advantage of this data and gaining a competitive edge.

Data virtualization, with its capability to push down query processing to limit data movement over the network, has already accelerated the delivery of enterprise data to business users. Now, in combination with in-memory computing, it further fast-tracks the delivery of data, even in huge volumes. Such a blend of volume and speed has never been possible before. Think of it as real-time data delivery on steroids.
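
Push-down is easiest to see in miniature. In the sketch below (table and data invented), the aggregation is shipped to the source so only a single summary row crosses the network, rather than pulling every raw row and aggregating at the consumer:

```python
import sqlite3

# Illustrative source with 10,000 telemetry rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (machine_id TEXT, efficiency REAL)")
conn.executemany("INSERT INTO telemetry VALUES (?, ?)",
                 [(f"m-{i}", 0.75 + (i % 20) / 100) for i in range(10_000)])

# Without push-down: 10,000 rows move; the consumer aggregates.
rows = conn.execute("SELECT efficiency FROM telemetry").fetchall()
avg_local = sum(r[0] for r in rows) / len(rows)

# With push-down: the source computes AVG(); a single row moves.
(avg_pushed,) = conn.execute("SELECT AVG(efficiency) FROM telemetry").fetchone()

assert abs(avg_local - avg_pushed) < 1e-9
print(f"average efficiency: {avg_pushed:.4f}")
```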

For the pharma scenario above, these two technologies can enable stakeholders to analyze machine data with virtually no lag for optimal plant efficiency and, as a result, maximum profitability. Imagine this scenario: the company needs to produce a million units of the drug every week to meet demand. With its plants operating at an average of 80 percent efficiency, the company can't produce all the drugs needed, so it must either take a profitability hit by not making (and therefore not selling) the remaining 20 percent of the drugs, or build additional plants to produce that remaining 20 percent, which increases costs significantly.
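
Working those numbers through (they are the article's hypothetical figures, not real data):

```python
# Back-of-the-envelope math for the scenario above (all figures illustrative).
weekly_demand = 1_000_000            # units the company must ship per week
efficiency = 0.80                    # average plant efficiency

produced = int(weekly_demand * efficiency)   # 800,000 units actually made
shortfall = weekly_demand - produced         # 200,000 units: the 20 percent gap

print(f"produced {produced:,}, short by {shortfall:,} units per week")
```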

Turbo-Powered Business

The pharmaceutical scenario above is only one of myriad possible use cases. Organizations that embrace data virtualization with in-memory computing stand to gain an edge over their competition in the race to grab market share. Speed-to-market does not have to be a pipe dream. By leveraging these technologies, businesses can supercharge their BI efforts and become leaders instead of laggards.

About the Author

Ravi Shankar is senior vice president and chief marketing officer at Denodo, a provider of data virtualization software. He is responsible for Denodo’s global marketing efforts, including product marketing, demand generation, communications, and partner marketing. Ravi brings to his role more than 25 years of marketing leadership from enterprise software leaders such as Oracle and Informatica. Ravi holds an MBA from the Haas School of Business at the University of California, Berkeley. You can contact the author at rshankar@denodo.com.

