Neuralytix 10th Anniversary

It is with deep humility that we enter our second decade in business. I am grateful to our Clients, our team, our alumni, our collaborators, our competitors, our industry, and even our detractors. We look back on the last 10 years, and forward to the next 10 years.

What a Difference a Decade Makes … 

On June 12, 2022, Neuralytix began our 11th year of operations. I am humbled by the trust and respect afforded to Neuralytix and me especially from our Clients, but also from the IT industry overall.

On our anniversary, I wrote the following note of thanks on LinkedIn.

To summarize, the success of Neuralytix is due to our unwavering transparency, our conviction, and our duty to be responsible not only to our Clients, but also to the IT industry.

To our Clients, Neuralytix and I are humbled by the honor of the trust you have placed in us and the privilege of your business and partnership. For this, we are sincerely and deeply grateful.

I also want to thank:

  • Our Team – who put up with me every day while delivering superlative outcomes to help our Clients succeed;
  • Our Alumni – who help expand our scope and grow our geographies;
  • Our Collaborators – who help ensure that the deliverables that go to our Clients are the best;
  • Our Competitors – who inspire us every day and whose friendships I cherish;
  • Our Detractors – whose scrutiny motivates us to reaffirm or refine our methodologies and analyses; and
  • Our Industry – for the community of innovators, leaders, and developers who make it possible for me – and no doubt many of you – to love what we do and the people with whom we do it.

I am fortunate to have a career in our industry, and I look forward to the next 10 years with anticipation and excitement.

At the end of the day, without the trust and investments from our Clients we would never be as successful as we are today. I thank them sincerely and wholeheartedly, and I fully appreciate their continued support and faith in Neuralytix.

In 2012, 10 Years Ago

In the 10 years since Neuralytix was founded, there has been a lot of change in our industry: new technologies, new innovations, mergers and acquisitions, and the regrettable demise of some. Below, I present a highly selective list of events in the IT industry that had not yet occurred in 2012, the year Neuralytix was founded.

  • 5G had not yet been deployed worldwide.
  • Amazon ranked 206th in the Global 500 and had not made its acquisition of Ring.
  • Arguably, cloud computing was in its infancy – Amazon Web Services (AWS) Elastic Compute Cloud (Amazon EC2) only came out of beta 4 years earlier, while Microsoft Azure was launched in 2010. Detractors of cloud computing pushed the idea that cloud computing was insecure.
  • Google had neither released nor withdrawn Google Glass, and it had not yet acquired Fitbit or Nest.
  • TikTok did not exist.
  • HP was one company, not two.
  • IPv6 had not yet been ratified as a full Internet Standard (that came with RFC 8200 in 2017).
  • Kubernetes had not been launched.
  • Microsoft had not yet acquired GitHub or LinkedIn, nor had it re-released my all-time favorite game – Flight Simulator.
  • The Raspberry Pi was not yet available.
  • The majority of on-premises hardware still ran proprietary firmware, rather than what we now call “software-defined” infrastructure – most often built on some distribution or variation of Linux.
  • The volume of data breached from 2004-2012 was roughly 670M records, which represented just under half of the volume of data breached in 2013 alone. (Based on data from Information is Beautiful.)

What About The Next 10 Years?

The number one question I get asked as an industry analyst and consultant is “so, what’s the next big thing?” It’s very unfortunate, but my answer is the standard answer provided by lawyers – “it depends!”

There is no other answer to give. The world of IT and technology in general is so dynamic. When we look at what I irreverently call the “technology graveyard”, we see some pretty solid ideas. In some, if not many, cases, I can make at least a reasonable to strong argument that technologies and innovations that ended up in the technology graveyard are better than the technology we ended up adopting.

With that preface, here is my personal 10-year outlook on the changes, innovations, and challenges in technologies that will impact enterprises1.

Data-as-a-Service

  • I first proposed this concept back in 2015 in Asia, where I delivered a high-level keynote on the principles of Data-as-a-Service. The concept is simple. The implementation is simple. Acceptance and adoption will be the biggest challenge, and I do not anticipate broad-based acceptance of Data-as-a-Service to begin for at least 10 years. The principle of Data-as-a-Service is the creation of a formal market or exchange in which data owners offer to license (or even sell) raw datasets. Data integrators, acting as middlemen, will integrate various datasets together and resell these value-added datasets on the same market or exchange, so data consumers can license either the raw or value-added datasets.
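The three roles described above – data owner, data integrator, and data consumer – can be sketched as a toy exchange. This is a minimal illustration only; every class, company, and dataset name below is hypothetical and not part of any proposed standard.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    """A listing on the exchange: raw if sources is empty, value-added otherwise."""
    name: str
    owner: str
    license_fee: float
    sources: tuple = ()

class Exchange:
    """A toy market where owners list raw datasets and integrators
    relist value-added combinations for consumers to license."""
    def __init__(self):
        self.listings = {}

    def list_dataset(self, ds: Dataset):
        # A data owner offers a raw dataset for license.
        self.listings[ds.name] = ds

    def integrate(self, integrator: str, name: str, fee: float, *source_names):
        # A data integrator combines existing listings into a value-added dataset.
        combined = Dataset(name, integrator, fee, tuple(source_names))
        self.listings[name] = combined
        return combined

    def license(self, consumer: str, name: str):
        # A data consumer licenses either a raw or a value-added dataset.
        ds = self.listings[name]
        return {"consumer": consumer, "dataset": ds.name, "fee": ds.license_fee}

exchange = Exchange()
exchange.list_dataset(Dataset("retail_footfall", "MallCo", 500.0))
exchange.list_dataset(Dataset("weather_history", "MetCorp", 200.0))
enriched = exchange.integrate("DataBlend Inc.", "footfall_vs_weather", 1200.0,
                              "retail_footfall", "weather_history")
deal = exchange.license("RetailChain", "footfall_vs_weather")
```

The middleman's value-add is visible in the listing itself: the integrated dataset records its raw sources and commands a higher license fee than either input alone.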

A peak in cloud computing spend

  • The global pandemic forced enterprises to adopt cloud computing, whether they were ready, willing, or able. With expectations of further forced isolation and quarantining subsiding, enterprises will spend 2023 reviewing and reevaluating their short- and medium-term data processing, data governance, and data security needs. I expect that many enterprises, especially large to very large enterprises, will conclude that the long-term costs of cloud computing (adjusted for risk) may exceed the cost of continuing with, or in some cases returning to, on-premises computing. The case for remaining with, or returning to, on-premises computing is strengthened by the improved reliability of data infrastructure (extending the useful life of IT capital investments, and thereby reducing the cost of on-premises computing); the expected tightening of data governance, privacy, and retention rules by governments and regulatory bodies; and the development of proprietary, next-generation intelligence and innovations (AI/ML) for competitive advantage. As a result, I expect cloud computing spend to peak before 2030.

A measurable amount of data repatriation

  • As a consequence of my previous prediction, I predict that just as enterprises slow their investments in cloud computing, they will also repatriate data stored in the cloud. The seemingly endless supply of new applications and services available from hyperscalers and their partners is alluring. But these new applications and services need to command a greater percentage of enterprise data to reach their full potential. Applying the Pareto rule – 80% of data is not used by any immediate, business-critical process, but can still offer some potential value through these new applications and services – the major cost to an enterprise is not the applications or services themselves, but the endless spend on persisting and protecting data in the cloud for the benefits these new applications or services are expected to bring. The problem is that in order for these new applications and services to deliver on their promises, the 80% of non-active data that could have been stored in the cloud using lower-cost options must now be stored using more performant, and thus higher-cost, options to facilitate the processing and performance these applications and services require. When evaluating storing less-active or inactive data in the cloud using higher-cost options to run these new applications and services, versus storing these data on-premises with similar (or the same) applications and services running locally, I predict the medium- to long-term cost-benefit analyses will always come out in favor of local processing.
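The cost-benefit argument above can be made concrete with back-of-the-envelope arithmetic. The per-TB prices below are purely hypothetical placeholders (not quotes from any provider); the point is the relative ordering once the cold 80% of data must sit on a performant cloud tier instead of a low-cost archive tier.

```python
# Illustrative only: all prices are hypothetical placeholders.
DATA_TB = 1000                 # 1 PB of total enterprise data
COLD_FRACTION = 0.80           # Pareto share not used by business-critical processes
MONTHS = 36                    # 3-year horizon

cold_tb = DATA_TB * COLD_FRACTION

# Hypothetical $/TB/month rates
CLOUD_ARCHIVE = 1.0            # low-cost cloud tier: cheap, but too slow for analytics
CLOUD_PERFORMANT = 23.0        # tier fast enough for the new applications/services
ON_PREM = 8.0                  # amortized hardware + power + administration

archive_cost = cold_tb * CLOUD_ARCHIVE * MONTHS
performant_cost = cold_tb * CLOUD_PERFORMANT * MONTHS
on_prem_cost = cold_tb * ON_PREM * MONTHS

print(f"Cloud archive tier:    ${archive_cost:,.0f}")     # what enterprises planned to pay
print(f"Cloud performant tier: ${performant_cost:,.0f}")  # what the new services demand
print(f"On-premises:           ${on_prem_cost:,.0f}")     # the repatriation alternative
```

Under these assumed rates, the archive tier is cheapest but cannot feed the new applications; once the cold data is forced onto the performant tier, local processing wins the comparison, which is the crux of the repatriation argument.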

The next “mainframe” will emerge

  • This prediction is less profound than the title suggests. Simply put, I expect, within the next 10 years, a return to mainframe-like infrastructure. Hyperscalers are essentially doing this already, albeit in a highly proprietary fashion. Currently, we call this composable infrastructure, or what I introduced in the early 2010s as “Datacenter 4.0” (a moniker that regrettably never caught on!). Where composable infrastructure falls short of my prediction is that it takes a universe of compute, networking, and storage, and virtualizes it at multiple levels to present a “custom” server to run the desired hypervisor or application. My prediction does not begin with the infrastructure, but with the software. The “mainframe” software will aggregate and virtualize all available compute, networking, and storage, and present a super-server (cum mainframe). This super-server can integrate or retire any number of additional infrastructure components. But unlike hyperconverged or composable infrastructure, where the focus is on presenting virtual servers, the new super-server presents itself as a single, well, super-server! It will not be carved into virtual servers. Instead, it will present an abstraction layer that provides a platform for extensible serverless computing.

Commoditization of almost everything

  • Infrastructure (compute, networking, and storage) is already commoditized. CPUs, GPUs, and xPUs will provide greater capabilities through improvements in the number of cores, clock speeds, multilayer integrated caching, etc. This will result in a relatively linear decrease in the cost of each unit of compute. Networking speeds will increase organically, as already laid out in the specifications. Like compute, the cost per unit of network bandwidth will decrease in a relatively linear fashion. Finally, improvements in areal density for hard disk drives (HDDs), and in density as well as write endurance for solid state drives (SSDs), will also result in a relatively linear decrease in the cost of each unit of storage. Software will also commoditize. When you look at the universe of software in each segment of the market, there are tens, and sometimes hundreds, of competitors all making (at least) similar claims about features, performance, simplicity, and cost. Just as Data-as-a-Service will have “middlemen” in the form of data integrators, software will also have “middlemen” in the form of software aggregators. Many already exist today; an example is the number of Kubernetes GUI or management platforms available. These “aggregators (of software)”, as I consider them, bundle core software together – Kubernetes itself; observability tools such as Grafana, Prometheus, and Zabbix; and tools for container and containerized application image management, deployment, and optimization – and add value by presenting everything through one single GUI. Ultimately, the number of software developers, applications, and Software-as-a-Service (SaaS) providers in any given segment will continue to increase, their claims will continue to homogenize, and the price per unit of value will continue to drop, though not necessarily as predictably as infrastructure will.

  1. A Neuralytix Insight will be available in the coming months on each of these topics.