With storage making up as much as 70% of enterprise capital IT investment budgets, a lot of attention is being paid to what that storage capacity is actually storing, and to the value of the data sitting on the storage system.
Additional attention is paid to what is known as copy data – replicas, snapshots, backups, and the like. These copies, often redundant to primary and active data, are used for everything from simple single-file restores to whole-system restoration (particularly in the case of virtual servers). They can also be used for testing and development, in the case of writable snapshots. This is particularly useful when testing next-generation software on a realistic dataset without impacting the production copy of the data.
So the questions here are: what data is being stored, what is the value of that data, how many copies of it exist, and is it necessary to keep x versions of the same data?
This is why copy data management is so critical.
The big problem is that most enterprises are still using 20th century processes and procedures, and do not fully understand the answers to the questions posed above.
What enterprises actually need is a way of cataloging, automating, managing, orchestrating and analyzing copy data.
The simplicity and effectiveness of NetApp’s snapshot and archiving technologies are well documented. However, just as there can be (virtual) server sprawl, the ease with which snapshots can be created often results in copy data sprawl.
ECX runs out of band to the NetApp storage system and VMware environment. It catalogs actions performed by NetApp’s Data ONTAP, and it can then help administrators analyze and examine the effectiveness of their copy data management.
The software aims to answer questions such as whether there are orphaned virtual machines and how many copies of data exist, and it orchestrates the actions required to meet regulatory and corporate governance standards. Additionally, the software can help administrators measure the effectiveness of their copy data management against service level agreements (SLAs) such as recovery point objectives (RPOs) and recovery time objectives (RTOs).
The importance of these functions cannot be overstated. The data collected by the software can help enterprises generate a workflow that is effective and compliant. It can also help administrators assist the business in evaluating the criticality and cost of storing data relative to the value of the data being stored.
The insights provided by the software, and its impact, are highly measurable – allowing IT leaders to quickly justify its implementation. The key to all this is allowing the business to better control and manage its digital assets, while using those insights to find new opportunities to leverage the data being stored.
The latter – new opportunities from data already collected – can be fed into Big Data analytics systems, or other forms of business intelligence, to generate significant enterprise value and competitive advantage.
At the end of the day, copy data management is a foundation for growth and leadership in the enterprise. Although a “technology” spend, copy data management is really an investment in creating greater return on data.
This Research Brief is sponsored by Catalogic Software. All opinions expressed are those of Neuralytix and our analysts.