Despite what most analysts and vendors tell you, data growth is not some new phenomenon. The phenomenon is that we still think it is a phenomenon! Anyway, for as long as there has been commercial computing, there has been exponential data growth.

In the last 15 years or so, we have introduced the idea of data de-duplication to help us manage that growth. First it was for our backups and archives, and now it is for our primary data. While data de-duplication helps to optimize data capacity, it does nothing for data redundancy. After all, if there is just one copy of the data, it becomes a single point of failure (SPOF).
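To make that concrete, here is a minimal sketch, in Python, of hash-based block de-duplication keeping a single physical copy of repeated content. It is a simplified illustration assuming fixed-size blocks, not any vendor's implementation, and the names are hypothetical.

```python
import hashlib

def dedupe_store(blocks):
    """Keep one physical copy per unique block, addressed by its SHA-256 hash."""
    store = {}  # content hash -> block (the single physical copy)
    refs = []   # logical view: one hash reference per incoming block
    for block in blocks:
        key = hashlib.sha256(block).hexdigest()
        store.setdefault(key, block)  # write only if this content is new
        refs.append(key)
    return refs, store

blocks = [b"alpha", b"beta", b"alpha", b"alpha"]
refs, store = dedupe_store(blocks)
print(f"{len(blocks)} logical blocks stored as {len(store)} physical blocks")
```

Note that losing the one stored copy of b"alpha" would lose three logical blocks at once; that is the SPOF a de-duplicated copy creates.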

For redundancy and performance reasons, enterprises have begun to distribute copies of their de-duplicated data. This raises the question: how many duplicates of de-duplicated data are enough? In more contemporary vernacular, what is the appropriate copy data usage? And how do we manage it?
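The trade-off can be put in rough numbers. The sketch below uses hypothetical figures (100 TB of logical data at an assumed 5:1 dedupe ratio) to show how the physical footprint grows with each additional copy of the de-duplicated data; even three copies (60 TB) still undercut a single un-deduplicated copy (100 TB).

```python
def physical_footprint(logical_tb, dedupe_ratio, copies):
    """Physical TB needed to keep `copies` replicas of a de-duplicated dataset."""
    unique_tb = logical_tb / dedupe_ratio  # capacity after de-duplication
    return unique_tb * copies

# Hypothetical example: 100 TB logical data at a 5:1 dedupe ratio
for copies in range(1, 5):
    print(f"{copies} x de-duped copy: {physical_footprint(100, 5.0, copies):.0f} TB physical")
```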

It’s all about business agility

While dedupe, snapshots, replication and the like are distinct technologies, each is ultimately used to achieve a different business purpose. Enterprises are looking to protect their valuable proprietary digital assets while mitigating the rising cost of technology. Mountable snapshots allow new versions of software to be tested against the most recent dataset rather than stale data, driving faster time to market. Replication can deliver the right data closer to the user to maximize the performance of business applications.

Going back to the question of appropriate copy data usage and applying it to business agility, what we need is visibility into the business process as well as the technology.

From there, it is possible to garner insight, including the ability to audit the processes in play, and ultimately to put some form of control and automation around those processes so that the outcome for both the technology and the business is highly predictable, repeatable and scalable.

Predictable, Repeatable & Scalable

The concepts of predictable, repeatable and scalable (PRS) will be the driving factors in IT as we move towards 2020, much as Reliability, Availability and Serviceability (RAS) were in the previous two decades.

The only way to achieve PRS is through visibility, insight and control. In other words, we must understand a given environment well enough that we can fully instrument and granularly monitor it. Then we can construct models that are essentially workflows, and finally control or automate those workflows with deterministic outcomes.
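As a rough illustration of that idea, the Python sketch below treats a workflow as an ordered list of steps run over a shared context, with each step logged (visibility) and the same inputs always producing the same outcome (determinism). The step names are hypothetical and are not any vendor's actual API.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("workflow")

def run_workflow(name, steps, context):
    """Run ordered steps over a shared context, logging each for visibility.

    The same steps plus the same starting context yield the same outcome.
    """
    for step in steps:
        log.info("%s: %s starting", name, step.__name__)
        context = step(context)
        log.info("%s: %s done", name, step.__name__)
    return context

# Hypothetical steps for a copy-data workflow (illustrative names only)
def snapshot(ctx):
    ctx["snapshot"] = f"snap-{ctx['volume']}"
    return ctx

def mount(ctx):
    ctx["mount_point"] = f"/mnt/{ctx['snapshot']}"
    return ctx

def verify(ctx):
    ctx["verified"] = True
    return ctx

result = run_workflow("refresh-test-db", [snapshot, mount, verify], {"volume": "vol01"})
print(result)
```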

So who’s doing this?

I recently sat down with the new management at Catalogic Software, and they seem to have a good handle on these concepts. They have gone so far as to build a catalog of the data that allows them to construct intelligent workflows. These workflows can be automated, helping IT save time and money.

The ability to model workflows and automate them achieves the PRS principle and enables deterministic outcomes.

This monthly series is sponsored by Catalogic Software. All opinions expressed are those of Neuralytix and our analysts.
