Neuralytix approaches our consulting practice by applying our proprietary principle called Creating Value through Contextualizing Data, or CV[D]. CV[D] asserts that information technology (IT) has evolved from simple automation and productivity improvements to an essential success factor for organizations to create value and competitive advantage. Additionally, Neuralytix believes that the traditional silos into which technology is categorized, such as hardware, software, and services, or servers, storage, and networking, are too simple and not representative of the contemporary IT environment.

By moving away from the traditional silos of hardware (server, storage, networking) and software, IT vendors and end-users can finally take a converged and holistic view of their entire IT operations. The result is a movement away from concentrating on technology for technology's sake to technology for business purposes.

The CV[D] principle characterizes six factors against which organizations need to measure their people, processes and technologies (and in the case of IT vendors – their products, solutions and services). Through these six factors, organizations will be able to maximize the value that is created and derived from available data and information. These six factors all relate to how data is:

  • Created and captured;
  • Curated;
  • Computed; and
  • Consumed.

These first four factors are then measured against how data is managed within the context of:

  • Community; and
  • Compliance.

This contemporary view of IT aligns the goals of the CEO and CFO with those of the CIO/CTO. Furthermore, this approach moves IT away from being a cost center to being a corporate resource that can be measured with traditional business key performance indicators (KPIs).

The CV[D] principle is all about putting data in context, and then leveraging that data. The principle aligns with the Neuralytix definition of Big Data – a set of technologies that creates strategic organizational value by leveraging contextualized complete data sets.

At Neuralytix, we believe that classical technology performance metrics, in particular return on investment (ROI), are no longer relevant. Our CV[D] principle relies on return on technology investment (ROTI), which measures technology collectively over time. We also believe that organizations need to consider other metrics such as time-to-value (TTV) and risk. These latter metrics deliberately parallel the metrics used for financial investment portfolios, as converged technologies and a holistic approach to technology will yield the optimal value outcome.
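
To illustrate, below is a minimal, hypothetical sketch of how such portfolio-style metrics might be computed over time. The definitions used here (ROTI as cumulative value delivered over cumulative investment, and TTV as the first period in which cumulative net value turns positive) are illustrative assumptions, not formal Neuralytix definitions, and the figures are invented.

    # Hypothetical sketch of portfolio-style technology metrics measured over time.
    # The ROTI and TTV definitions below are illustrative assumptions only.

    def roti(values_by_period, investments_by_period):
        """Return on technology investment: cumulative value delivered
        divided by cumulative investment, measured collectively over time."""
        total_investment = sum(investments_by_period)
        return sum(values_by_period) / total_investment if total_investment else 0.0

    def time_to_value(values_by_period, investments_by_period):
        """Time-to-value: the first period in which cumulative net value turns positive."""
        cumulative = 0.0
        for period, (value, cost) in enumerate(
                zip(values_by_period, investments_by_period), start=1):
            cumulative += value - cost
            if cumulative > 0:
                return period
        return None  # value never exceeded investment in the observed window

    # Invented quarterly figures (in $ thousands) for a converged-infrastructure project.
    values = [0, 40, 90, 150]
    investments = [120, 30, 20, 20]
    print(roti(values, investments))           # ~1.47
    print(time_to_value(values, investments))  # 4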

A Current Overview

Throughout the evolution of IT, there has been a necessity to develop areas of specialization in:

  • Hardware infrastructure (namely, around server, storage and network);
  • Core system software, such as database administration and operating system administration; and
  • Server and client application development.

Under this paradigm, it made sense to view the IT world as hardware and software markets, with discrete submarkets within these two über markets, where advancement can be measured in terms of the skill set of the people employed, the development of the processes involved, and the technologies deployed.

With the concurrent evolution of virtualization technologies and the economic benefits of consolidation, the concept and attraction of a converged infrastructure (CI), wherein the hardware (compute, storage and network) and the software (operating system, database and application) functions are “converged” within a single physical unit, has, in the last several years, changed the way IT is deployed, managed and consumed. This unit is typically a single industry-standard (x86) server, or a cluster of industry-standard servers in an individual rack or “pod.”

CI has dramatically changed the way IT is architected. Today’s IT involves a highly distributed and multifaceted set of interdependencies and dynamics that transcends simple day-to-day transaction processing to complex data analytics.

The complexity of contemporary IT is a result of many factors. We list only some of the more obvious ones below:

  • Cloud;
  • Regulatory and legislative compliance;
  • Changing end-point clients from terminals to PCs to smart mobile devices;
  • Availability and accessibility to external data sources; and
  • Evolving communications media – from analog (e.g. traditional fixed-line telephone) to digital (email, texts, social networks, etc.)

The six factors of Creating Value through Contextualizing Data

The six factors are divided into two major groups:

  • Data Continuum – Capture, Curation, Computation and Consumption; and
  • Data Context – Community and Compliance.

Data Continuum

The first four factors look at the universe of data. They look at how data is:

  • Captured (with an obvious presumption of data creation);
  • Curated (understanding what data is relevant to a given context);
  • Computed (what actions are taken on the curated data); and
  • Consumed (how the output is ultimately used).

These factors do not define any specific technology or technologies. Instead, CV[D] suggests a more contemporary (and arguably, more appropriate) approach: that a convergence of technologies is assumed. Convergence is assumed because, since the pervasive adoption of Intel’s x86 platform, most contemporary solutions take advantage of this platform, converged with a real-time or open-source operating system, with proprietary applications sitting atop these hardware/software infrastructures.

(Creation and) Capture

New data is constantly being created. Data is created and generated from a multitude of sources. Additionally, existing data can be captured. The CV[D] principle considers all forms of data creation and capture.

Examples of data creation include:

  • The composition of an email;
  • The entry of a new commercial transaction into a database;
  • The creation of a new rich media file from original audio or video sources; and
  • The generation of smart meter data.

Examples of data capture include:

  • The reading of data from radio frequency identification (RFID) tags; and
  • Importation of data from external sources (including, but not limited to, free and for-fee data feeds).

Technologies that are associated with the creation and capture of data include (but are not limited to) desktop PCs, laptops, tablets, smartphones, servers, storage, and networking. Many of these technologies are likely to be common with technologies associated with data consumption.
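
As an illustration of the capture factor, below is a minimal sketch of importing data from an external feed. The feed URL and field layout are hypothetical placeholders; any real feed would have its own format and access terms.

    # Minimal sketch of data capture from an external feed.
    # The feed URL and columns are hypothetical placeholders.
    import csv
    import io
    import urllib.request

    FEED_URL = "https://example.com/transactions.csv"  # hypothetical free or for-fee data feed

    def capture_feed(url):
        """Download an external data feed and capture each row as a record."""
        with urllib.request.urlopen(url) as response:
            text = response.read().decode("utf-8")
        return list(csv.DictReader(io.StringIO(text)))

    captured = capture_feed(FEED_URL)
    print(f"Captured {len(captured)} records")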

Curation

Given the volume of data available (either through creation or capture), it is ultimately necessary to curate data, an operation that includes the organization, management, search and discovery of data.

Data curation includes the organization and management of data through a database. These databases have different characteristics:

  • Some databases have a prescribed schema and relationships (represented by relational databases, such as Oracle); and
  • Other databases can create dynamic relationships (such as NoSQL, columnar, key-value and graph databases, including, but not exclusively, the Hadoop framework).

Additionally, the organization and management of data may persist outside of the concept of a database. It may also include the organization of data laid out on storage systems and, by extension, network file systems (such as CIFS, NFS and object-based storage). Servers running operating systems that have an embedded local file system must also be considered under the data curation factor.

Finally, technologies that assist the search, discovery and analytics of data fall into the data curation factor as these technologies help users to distinguish useful data from less useful (or even useless) data.
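
As an illustration of the curation factor, below is a minimal sketch that organizes captured records in a relational store and selects the data relevant to a given context. The table and field names are hypothetical, and SQLite stands in here for any of the relational databases mentioned above.

    # Minimal sketch of data curation: organizing captured records in a relational
    # store and selecting the data relevant to a given context. Fields are hypothetical.
    import sqlite3

    captured = [
        {"id": 1, "region": "EMEA", "amount": 1200.0},
        {"id": 2, "region": "APAC", "amount": 80.0},
        {"id": 3, "region": "EMEA", "amount": 45.0},
    ]

    conn = sqlite3.connect(":memory:")  # stand-in for any relational database
    conn.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
    conn.executemany("INSERT INTO transactions VALUES (:id, :region, :amount)", captured)

    # Curation: in this context, only EMEA transactions above a threshold are relevant.
    relevant = conn.execute(
        "SELECT id, amount FROM transactions WHERE region = ? AND amount > ?",
        ("EMEA", 100.0),
    ).fetchall()
    print(relevant)  # [(1, 1200.0)]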

Computation

Perhaps the simplest of the factors to consider is that of computation. Data computation is simply the processing of a set of instructions on a curated set of data. This process essentially transforms data into valuable information.

There is no quantitative or even qualitative assessment of the actual value of the output at this point. This will occur at the consumption point.

Technologies associated with data computation include application software, servers, storage and networking.
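
As an illustration of the computation factor, below is a minimal sketch that runs a simple set of instructions (an aggregation) over curated records, transforming data into information. The records and field names are hypothetical.

    # Minimal sketch of data computation: running a set of instructions over curated
    # data to transform it into information. Records and fields are hypothetical.
    from collections import defaultdict

    curated = [
        {"customer": "acme", "amount": 1200.0},
        {"customer": "acme", "amount": 300.0},
        {"customer": "globex", "amount": 450.0},
    ]

    def compute_totals(records):
        """Aggregate curated transactions into per-customer totals (the information)."""
        totals = defaultdict(float)
        for record in records:
            totals[record["customer"]] += record["amount"]
        return dict(totals)

    information = compute_totals(curated)
    print(information)  # {'acme': 1500.0, 'globex': 450.0}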

Consumption

The consumption of data (actually information) brings the data continuum to an end. However, it must be noted that this does not mean that data is dead. CV[D] does not support the notion of a data lifecycle. The principle asserts that data never dies; instead, it becomes passive or inactive.

Many client devices can be used to consume data and information. These devices parallel many of those that are used for data creation, including smartphones, laptops, desktops, and tablets. Networks are used to deliver the information for consumption, and storage on client devices is used to store the consumed information.

One oft-forgotten “consumer” of information is the multitude of software applications. The output of computations is equally consumed by a new software process, connecting the “end” of the data continuum with the beginning again.

Combining the first four factors

While there are exceptions where data curation or data computation could result in the discarding of data and an early return to the capture factor of the data continuum, for the most part data will travel in the specific series of creation, capture, curation, computation and consumption, and then return the computed data into capture for subsequent processes.
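
Below is a minimal sketch of this continuum as a loop, with the computed output re-entering the continuum at capture. The function names and data shapes are illustrative assumptions, not prescribed interfaces.

    # Minimal sketch of the data continuum as a loop: capture -> curate -> compute ->
    # consume, with the computed output fed back into capture. All names are illustrative.

    def capture(source):
        return list(source)

    def curate(records, is_relevant):
        return [r for r in records if is_relevant(r)]

    def compute(records):
        return {"count": len(records), "total": sum(r["amount"] for r in records)}

    def consume(information):
        print(f"{information['count']} relevant records, total {information['total']:.2f}")
        return information

    raw = [{"amount": 120.0}, {"amount": 3.5}, {"amount": 890.0}]
    information = consume(compute(curate(capture(raw), lambda r: r["amount"] > 10)))

    # The computed output becomes new data, re-entering the continuum at capture.
    next_cycle = capture([information])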

The first four factors generate value from data. However, that value is generated without the constraints of corporate governance, regulatory compliance, and communally accepted guidelines around which data and information can be used.

This is where data context comes into play.

Data Context

The concept of data context is to ensure the proper usage of data. Data context defines IT security policies and business and corporate controls. Neuralytix believes that networks are inherently insecure. As opposed to traditional security approaches that either attempt to keep unauthorized users (and data) out, or to keep only authorized users and data inside the security perimeter, a more appropriate and contemporary consideration would be to assume that the perimeter is permeable. Data will travel in and out, whether the data is authorized or not.

The increasing popularity of bring-your-own-device (BYOD) policies in many organizations reinforces this notion of insecure networks.

Community

The data community is one in which the relevant users have access to the relevant data. The community could be a defined internal user group, or an external community of customers. The social nature of both humans and data sharing is such that clear definitions of the authorized members of a community are essential.

The community factor in the CV[D] principle extends across most of the other data continuum factors. For example, understanding who has the ability to create and capture the relevant data involves the definition of the community; conversely, understanding who has the ability to consume the output is critical not only to generating competitive advantage, but also to time-to-value.

Less obvious may be how the data community impacts the curation and computation factors. Again, this is founded on the social nature of IT today. In many cases, the curation of data may no longer be a function of a single person; it may involve a community. In social networking, a community of similarly minded or concerned participants will collaborate, individually and collectively, to recognize data that is relevant. For example, the number of retweets of a Twitter post depends on the followers of the original poster (OP) and the followers of those followers. These followers all have some relationship (no matter how distant) to the OP, but ultimately they will influence how quickly, and how virally, the tweet is distributed.
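
As an illustration of how a community shapes distribution, below is a minimal sketch that estimates the potential reach of a post through followers and followers of followers. The follower graph is a hypothetical example.

    # Minimal sketch of community reach: how far a post can spread through followers
    # and followers of followers. The follower graph is a hypothetical example.
    from collections import deque

    followers = {
        "op": ["alice", "bob"],
        "alice": ["carol", "dave"],
        "bob": ["dave", "erin"],
        "dave": ["frank"],
    }

    def potential_reach(graph, original_poster):
        """Breadth-first traversal of the follower graph starting from the OP."""
        seen = {original_poster}
        queue = deque([original_poster])
        while queue:
            user = queue.popleft()
            for follower in graph.get(user, []):
                if follower not in seen:
                    seen.add(follower)
                    queue.append(follower)
        return len(seen) - 1  # exclude the OP

    print(potential_reach(followers, "op"))  # 6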

In terms of computation, the relationship to the community factor is somewhat obscure. The growth of infrastructure-, platform-, software-, and data-as-a-service (IaaS, PaaS, SaaS, DaaS) has given rise to the opportunity for a single set of data to persist in the cloud and be made available to any number of computation engines. In some cases, these engines operate independently; in others, they are more coordinated. Either way, the community concept applies.
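
Below is a minimal sketch of a single data set being made available to several independent computation engines. The data set and engine functions are hypothetical.

    # Minimal sketch of a single data set persisting in one place and being made
    # available to several independent computation engines. All names are hypothetical.

    shared_dataset = [
        {"region": "EMEA", "amount": 1200.0},
        {"region": "APAC", "amount": 80.0},
        {"region": "EMEA", "amount": 45.0},
    ]

    def revenue_by_region(records):
        totals = {}
        for r in records:
            totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
        return totals

    def record_count(records):
        return len(records)

    # Independent, uncoordinated engines consuming the same persistent data set.
    engines = [revenue_by_region, record_count]
    results = [engine(shared_dataset) for engine in engines]
    print(results)  # [{'EMEA': 1245.0, 'APAC': 80.0}, 3]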

Compliance

The last of the factors is compliance. We live in a highly regulated world. Governments and regulatory bodies do not have the necessary understanding of the inner workings of technologies, yet they make laws and regulations that have a significant impact on data.

Ultimately, compliance is not just about external compliance. It is also about internal compliance. Do the data continuum and community operate within the internal and external controls of the organization? This is the single question that has to be asked as it relates to compliance. A more granular approach may involve breaking the question down across the five other factors within the CV[D] principle.
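
As an illustration of that more granular approach, below is a minimal sketch that breaks the compliance question down across the five other factors, with internal and external controls represented as hypothetical boolean checks.

    # Minimal sketch of the granular compliance question: do the other five CV[D]
    # factors operate within internal and external controls? Checks are hypothetical.

    controls = {
        "capture":     {"internal": True,  "external": True},
        "curation":    {"internal": True,  "external": True},
        "computation": {"internal": True,  "external": False},  # e.g. processed in a non-approved region
        "consumption": {"internal": True,  "external": True},
        "community":   {"internal": False, "external": True},   # e.g. access granted outside policy
    }

    def compliant(factor_controls):
        """The organization complies only if every factor passes every control."""
        return all(all(checks.values()) for checks in factor_controls.values())

    for factor, checks in controls.items():
        print(factor, "OK" if all(checks.values()) else "out of compliance")
    print("Overall:", "compliant" if compliant(controls) else "not compliant")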

Prospective Perspective

Technology can no longer be left to the technically minded. We live in a data- and information-centric world. Understanding the way technology helps us make decisions is critical to success.

Neuralytix, through our proprietary Creating Value through Contextualizing Data (CV[D]) principle, has defined the six factors that both IT vendors and IT users need to consider for IT solutions.

Not all solutions will need to emphasize all six factors. As a guide, below is Neuralytix’s view on the role each of several major technology segments plays in relation to the six CV[D] factors:

Table 1: Suggested role of selected technologies as they relate to the six CV[D] factors

                  Capture      Curate       Compute      Consume      Community    Compliance
Servers           Supporting   Supporting   Critical     Supporting   Supporting   Critical
Storage           Critical     Supporting   Supporting   Supporting   Supporting   Critical
Networking        Critical     Supporting   Supporting   Critical     Critical     Minor
Database          Supporting   Critical     Supporting   Minor        Minor        Supporting
Desktop/Laptops   Critical     Minor        Minor        Critical     Supporting   Supporting
Mobile            Critical     Critical     Minor        Critical     Critical     Supporting

Source: Neuralytix, 2012

The above table is not meant to be a definitive positioning of each technology. Submarkets will result in varying positioning and relevance.

What about the Cloud?

The Cloud, whether it is private or public, is not a technology. It is a collection of technologies, and as such is not listed separately.
