From the earliest days of commercial computing, the value of spending on (or, more appropriately, investment in) technology has been measured by return on investment (ROI).
Today, this measure is both meaningless and inappropriate for information infrastructure.
All information infrastructure can be virtualized. What virtualization brings is the ability to run multiple tasks (applications) on demand. Some tasks are persistent and last throughout the life of the infrastructure; others are transient. Furthermore, virtualization technologies continually stretch the limits of existing infrastructure to do more and more.
This uncertainty makes ROI an inappropriate and meaningless measure of the return or value of information infrastructure.
Another example is the mobile phone. A mobile phone is more than just a phone. A specific return can be measured in terms of the utility of being able to make phone calls anywhere; but once we add functions such as email, banking, and other productivity capabilities that are loaded and unloaded on demand, it becomes impossible to properly measure the value of a mobile phone for any given purpose.
Commoditization of information infrastructure further reduces the applicability of any form of return measurement.
However, in many cases software can still be measured with ROI. This too will change, and ROI will no longer be applicable. Software solutions are now multi-purpose; rarely is software purchased for a single task. For example, Microsoft SQL Server can be the underlying database for multiple functions, from CRM to DNS to ERP.
This idea extends to the enterprise as well. As enterprises move (as they all should) towards converged and hyperconverged infrastructure, such as that from Gridstore and others, the idea of ROI simply does not apply.
Instead, enterprises should measure the total cost of ownership (TCO) against a business metric such as revenue or profit. Investment in technology will always increase as an enterprise grows – this is no different from the need to hire more people as the enterprise grows. Where technology differs from human resources is that the average cost of technology (TCO divided by some business metric, including TCO per employee) should drop.
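As a quick sketch of this argument, the figures below are entirely invented for illustration: absolute TCO rises as the enterprise grows, yet TCO per employee (the average cost of technology) falls.

```python
# Hypothetical illustration: total TCO grows with the enterprise,
# but TCO per employee declines. All figures are invented.
years = [2013, 2014, 2015]
tco = [1_000_000, 1_300_000, 1_600_000]  # total cost of ownership (USD)
employees = [200, 300, 450]              # headcount

for year, cost, staff in zip(years, tco, employees):
    print(f"{year}: TCO ${cost:,} / {staff} employees = ${cost / staff:,.0f} per employee")
```

The same calculation works with revenue or profit as the denominator; the point is that the ratio, not the absolute spend, is the number to watch.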
Driving down TCO can sometimes be a function of buying less expensive technology, but a 10-20% saving on marginal hardware and software is relatively insignificant. This is especially relevant to spending on data storage. Neuralytix research shows that the amount of net new data stored continues to grow at roughly 65%, but by using data reduction technologies, the cost of net new data storage can be cut by 50%.
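The interplay of these two figures can be sketched as follows. The 65% growth and 50% reduction figures come from the text; the baseline volume and per-terabyte cost are invented for illustration, and per-unit cost is held constant for simplicity.

```python
# Sketch of the storage arithmetic: net new data grows ~65%,
# but data reduction halves the cost of storing it.
net_new_data_growth = 0.65    # from the text
data_reduction_saving = 0.50  # from the text

last_year_new_tb = 100.0      # invented baseline volume
cost_per_tb = 300.0           # invented $/TB, held constant

this_year_new_tb = last_year_new_tb * (1 + net_new_data_growth)
raw_cost = this_year_new_tb * cost_per_tb
reduced_cost = raw_cost * (1 - data_reduction_saving)

print(f"Raw cost of net new storage:  ${raw_cost:,.0f}")
print(f"With data reduction applied:  ${reduced_cost:,.0f}")
```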
The same applies on the computing side. Hyperconverged infrastructure is helping enterprises consolidate underutilized, highly distributed server farms into an optimally utilized, smaller number of densely packed servers. This not only reduces the number of servers, but has many follow-on benefits for an enterprise’s bottom line. For example, with fewer servers, less power is consumed, less cooling is required, and less real estate is likely necessary. The savings expected from these three factors alone can often surpass the initial capital investment in hyperconverged infrastructure.
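A simple payback calculation illustrates the point. Every number below is hypothetical; the structure (power, cooling, and real estate savings weighed against the initial capital outlay) follows the factors named above.

```python
# Hypothetical payback sketch for a server consolidation project.
# All figures are invented for illustration only.
capex = 250_000.0  # initial investment in hyperconverged infrastructure

annual_savings = {
    "power": 60_000.0,        # fewer servers consume less power
    "cooling": 40_000.0,      # less heat to remove
    "real_estate": 50_000.0,  # smaller datacenter footprint
}

total_annual_savings = sum(annual_savings.values())
payback_years = capex / total_annual_savings
print(f"Annual savings: ${total_annual_savings:,.0f}; payback in {payback_years:.1f} years")
```

With these invented numbers the environmental savings alone recoup the capital investment in under two years, after which they flow straight to TCO reduction.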
In most cases, the consolidation of traditional datacenters onto hyperconverged infrastructure not only reduces TCO, but improves the output and utility of the datacenter. This improvement can be observed as more CPU power per application, or more applications per CPU. Either way, the benefits of hyperconverged (or conventional converged) infrastructure, combined with the environmental savings within a datacenter, are a compelling reason to move towards the hyperconverged (or at least highly converged), virtualized, next-generation datacenter. Gridstore, an up-and-comer in this segment, is just one example.
This monthly series is sponsored by Gridstore. All opinions expressed are those of Neuralytix and our analysts.