
 

Neuraspective™

Cognitive computing is the culmination of 40 years of computer science research that is finally reaching commercialization. Two trends have converged to make commercialization a reality. First, there is now a need. Knowledge workers are overwhelmed with enormous amounts of data and no clear way to make use of it. Current computing techniques are reaching the limits of what we can do to make all this data useful for the average knowledge worker.

Second, we now have hardware and software platforms that are capable of dealing with the requirements of cognitive computing. In the past, it would take so long to run the complex algorithms and process the massive amounts of data that AI was rendered impractical for commercial applications. Outside of the government or a university, there were few organizations with the computing power to make an AI application useful even for narrow applications. Hardware and software systems have finally caught up with the needs of cognitive computing, rendering it practical.

Key Findings

  • Cognitive computing is still nascent;
  • Cognitive computing will fill a niche in industries where highly complex decision making is de rigueur; and
  • Eventually, cognitive computing will become pervasive in augmenting human decision making.

Overview

What is Cognitive Computing?

Cognitive computing refers to a way of processing data that is neither linear nor deterministic. It is artificial intelligence (AI), but not what most people think of as AI. Thanks to science fiction, many people think of artificial intelligence as a computer or robot thinking like a person, including self-awareness and independent will. Instead, what we call cognitive computing uses the ideas behind neuroscience and psychology to augment human reasoning with better pattern matching while determining the optimal information a person needs to make decisions.

Limitations of Traditional Software

Traditional software relies on predetermined pathways to process data. At the heart of most software is a series of static instructions, if-then-else pathways, and algebraic equations represented as code that a processor can interpret. A programmer has to know beforehand what they want to do with the data and write code to make that happen. Traditional software is deterministic[1], relying on knowledge of the outcome as well as the inputs. As a developer, it is impossible to know every possible outcome at the start, which limits the problem domain of the software or leads to exceptions that need human intervention.
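To make that determinism concrete, consider the minimal sketch below. The rules, categories, and thresholds are invented for illustration; the point is that every pathway is written down in advance, and anything the programmer did not anticipate falls through to human intervention.

```python
# A minimal sketch of deterministic, rule-based processing: every pathway
# is coded in advance, and anything unanticipated is escalated to a person.
# The rules and thresholds here are illustrative, not from any real system.

def route_claim(amount: float, category: str) -> str:
    if category == "travel" and amount <= 500:
        return "auto-approve"            # known input, known outcome
    elif category == "travel" and amount > 500:
        return "manager-review"          # known input, known outcome
    elif category == "equipment":
        return "procurement-review"      # known input, known outcome
    else:
        return "human-intervention"      # a case the programmer never planned for

print(route_claim(120.0, "travel"))      # auto-approve
print(route_claim(120.0, "wellness"))    # human-intervention
```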

For most situations, especially transaction processing, this is fine. A transaction is a process that has a defined beginning, a middle, and an end with a desired outcome. Other types of software are similar. We issue commands that direct the computer to do something with a predetermined outcome. For example, we direct software to move an icon from one place to another, record keystrokes as letters in a document, or change the size of a picture to fit a layout, and we know what the result will be.

What about those instances where there isn’t a pre-determined outcome? What if the organization needs to make a decision based on a variety of choices or is trying to understand the meaning in a complex data set? Computer software simply is not designed to help with the fuzzy side of life. At present we have to bring in a human being, probably one with special training and education – a rare and expensive expert.

Analytics software is also deterministic, requiring a model of what the data is and how the data is meant to be consumed. In other words, the consumer of the data has to know ahead of time how the data relates to other data and what is important to them. If they don’t know this, they won’t know which analysis to run, and even then a human being is required to interpret the results.
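A simple way to see this dependence on a predefined model is an ordinary aggregation: the analyst has already decided which dimension matters before the query ever runs. The data and column names below are invented for illustration.

```python
# A sketch of conventional analytics: the question (group by region,
# sum revenue) is fixed in advance by the analyst. Data and column
# names are invented for illustration.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "West", "East", "South"],
    "product": ["A", "A", "B", "B"],
    "revenue": [1200.0, 900.0, 400.0, 650.0],
})

# The model is baked in: we decided beforehand that region is the lens
# that matters. If the real pattern lives elsewhere, this query never finds it.
print(sales.groupby("region")["revenue"].sum())
```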

What does one do if there is no known model? What if it’s not even clear where to start? Traditional software once again hits a wall. Analytical software, as we commonly know it, assumes that the problems are known and that what we want to know is also well understood. That’s often not the case and, once again, organizations must rely on expensive experts to help synthesize data.

Cognitive Computing is Fundamentally Different

Cognitive computing is different from other forms of software. Instead of shepherding data through pre-determined pathways, it finds previously unknown paths and patterns in the data. This is ultimately a more scalable model than relying on experts to synthesize data, since there are too few experts of any sort available at any one time. Cognitive computing doesn’t try to fit data into an existing model; it looks at the data and figures out what the model is first.

The technology for cognitive computing is not all that new. The research that underpins cognitive computing – natural language processing, machine learning, neural networks, and even machine vision – reaches back at least 40 years. This is the same technology that is being used in a limited fashion for most big data applications, including social analytics and predictive analytics. Even newer technologies, such as neurosynaptic systems that mimic the human brain both in how it organizes information and in its plasticity, are based on concepts going back decades. For the first time, however, this basket of technologies is capable of dealing with real-world problems. They have migrated out of the lab and into the commercial space.

The most important aspect of cognitive computing is that the software is built on algorithms that adapt on their own to new data and new situations. The software rewrites models and pathways as the information grows and changes. Given this adaptability, cognitive computing works best in the most difficult situations, where the user is wandering in unknown territory such as:

  • Finding needles in the haystack. Cognitive computing can identify unknown patterns in data as well as relationships that no one thought of before. Most software, including sophisticated analysis software, assumes that there is an existing, well-known, and human-designed schema. The term schema (plural schemata) comes from psychology and refers to a way our brains organize data – in essence, an internal model. Cognitive computing addresses use cases where a schema can’t be discerned ahead of time. When faced with unknown information, humans build new schemata naturally, while most software needs to have them spelled out ahead of time. Cognitive computing changes that (a minimal sketch of this kind of model discovery follows this list).
  • Helping make decisions based on the patterns. Finding the patterns in the data is only one aspect of what makes this type of computing cognitive. Finding ways to apply those patterns to a problem is as important. Cognitive computing helps to augment human decision making by surfacing relevant patterns and information and walking the user through them – in effect helping them to reason through the scenarios that software has uncovered.
  • Managing huge amounts of ever-changing data. Cognitive computing attempts to do in software what humans do naturally, such as recognizing patterns in data. It can, however, do something that humans struggle with – sorting through and organizing massive amounts of changing data as a precursor to human analysis. Humans can only take in and organize limited amounts of data at a time. As the sheer amount of data available increases, it is becoming harder to stay current in one field, let alone several. Cognitive computing can process massive amounts of data even when that data is constantly being added to. It combines the capabilities of big data with the learning capabilities of a human being.
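As a minimal proxy for the model discovery referenced in the first bullet, the sketch below applies ordinary unsupervised clustering (scikit-learn's KMeans) to synthetic data: no labels or predefined schema are supplied, only the number of groups to look for, and the grouping itself emerges from the data. It is a simplified illustration of the idea, not a cognitive computing system.

```python
# A minimal proxy for "finding the model first": unsupervised clustering
# groups records without labels or a predefined schema. Synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Three latent groups of synthetic "customers" (spend, purchase frequency);
# no group labels are ever shown to the algorithm.
data = np.vstack([
    rng.normal(loc=[20, 1],  scale=0.8, size=(50, 2)),   # low spend, low frequency
    rng.normal(loc=[60, 5],  scale=1.5, size=(50, 2)),   # mid spend, mid frequency
    rng.normal(loc=[95, 12], scale=2.0, size=(50, 2)),   # high spend, high frequency
])

# KMeans is told only how many clusters to seek, not which record belongs
# where or what the groups mean; the structure is discovered from the data.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print("Discovered cluster centers:")
print(model.cluster_centers_)
```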

These represent only a small percentage of computing problems, but they are significant ones. The problems that cognitive computing addresses range from marketing decisions, such as individualized buying experiences, to matters of life and death.

Challenges for Cognitive Computing

Cognitive computing does have some challenges. Chief among them is a lack of skills in IT organizations capable of supporting this type of technology. At present, cognitive computing is the domain of Ph.D.-level computer scientists, neuroscientists, and social scientists, who are in short supply. Companies are already struggling with a skills gap in big data and analytics. The gap for cognitive computing is much wider.

The skills gap is a reflection of another problem – math. The software for cognitive computing uses more complex math than most programmers are familiar with. For many developers, this type of math was last used in college and quickly forgotten. That’s because the majority of programming is simple math – first-year algebra with a smattering of geometry – and logic such as sets and Boolean logic. Big data and analytics introduced probability and statistics to the programmer’s skill set. Cognitive computing is even more difficult.

Much of the complexity is hidden within the vendor’s software and will only be accessed via APIs. This is not unlike the sophisticated routing algorithms in a router that the network engineer never touches. Still, it means that vendors will struggle to meet their own skill requirements and customers will have to implicitly trust the vendor.

It also means that the only viable model for cognitive computing for most organizations will be a cloud computing model. With cloud computing, the complexities of the software and the infrastructure to support it can be hidden from developers, who can then concentrate on leveraging its capabilities.
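In practice, that cloud model would look something like the sketch below: the developer sends data to a hosted service and consumes the results, while the algorithms and infrastructure stay behind the vendor's API. The endpoint, task name, and response shape here are entirely hypothetical placeholders; real vendor APIs will differ.

```python
# A sketch of consuming a cloud-hosted cognitive service. The endpoint,
# authentication scheme, task name, and response fields are hypothetical
# placeholders, not any particular vendor's API.
import requests

API_URL = "https://api.example-cognitive-vendor.com/v1/analyze"  # hypothetical
API_KEY = "YOUR_API_KEY"                                          # hypothetical

payload = {
    "documents": [
        "Customer reports intermittent failures after the latest firmware update.",
        "Second customer describes the same symptom on a different hardware revision.",
    ],
    "task": "surface-related-patterns",   # hypothetical task name
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()

# The heavy lifting (the complex math) stays on the vendor's side;
# the developer only works with the returned findings.
print(response.json())
```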

What Can You Do With It?

A technology without a commercial use is nothing more than a hobby or research project and cognitive computing is no different. If anything, this is what has changed the most over the past few years; the applications for cognitive computing have finally emerged.

While there are undoubtedly many use cases for cognitive computing, a few have risen to prominence.

  • Outcome optimization, i.e. making optimal choices. In very complex systems such as traffic flow and financial markets, finding an optimal outcome is a complex task. This is especially true when the data model may be changing rapidly. Traditional programs can’t adapt fast enough.
  • Decision support. Decision support applications have been around for as long as computing systems have been around. Over time, however, the decisions that people make have become more and more complicated. This not only makes it hard to find experts capable of making good decisions, it makes it hard to train these experts. Cognitive computing shows promise in helping to walk average people through complex decisions where there is too much data for an average human to process.
  • Finding relevant patterns in large amounts of non-uniform data that is constantly being updated. Current data analytics is fine for looking at pre-determined patterns in situations where the data doesn’t change much, such as transactional data. There may be more of the data, but the data itself is the same. Even social media data is relatively uniform compared to other types of unstructured data. More sophisticated analysis, such as content clustering, relies on training the system once to find a pattern to detect in other documents. Cognitive computing does what humans can’t, such as sifting through huge amounts of ever-growing information, while doing what humans do best: finding patterns that shift as new data is introduced.
  • Highly sophisticated search within a dynamic domain. For the most part, searching through unstructured data such as web sites is a fairly simple affair. Text is examined for key words, and techniques such as stemming are used to ensure the search is not too literal (a sketch of this traditional keyword approach follows this list). Cognitive computing opens up possibilities for learning patterns in the data that will help when searching for it. In some cases, searching will happen by traversing an ontology that is constantly changing as new information is discovered.
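For contrast with the learned, ontology-driven search described above, the sketch below shows the traditional keyword-plus-stemming approach: stem the query and the documents, then match terms literally. The documents and query are invented, and NLTK's Porter stemmer is used only as one common example of stemming.

```python
# A sketch of traditional keyword-plus-stemming search: stem everything,
# then match terms literally. Documents and query are invented examples.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

documents = {
    "doc1": "Optimizing traffic flows across congested intersections",
    "doc2": "Quarterly financial market optimization strategies",
    "doc3": "Patient outcomes in complex treatment decisions",
}

def stems(text: str) -> set:
    # Reduce each word to its stem so "optimize" matches "optimizing".
    return {stemmer.stem(word) for word in text.lower().split()}

query = "optimize traffic"
query_stems = stems(query)

# Literal overlap of stemmed terms; no learned patterns, no evolving ontology.
for name, text in documents.items():
    overlap = query_stems & stems(text)
    if overlap:
        print(name, "matches on", overlap)
```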

The real power of cognitive computing will come when it is embedded in more traditional applications. The number of pure cognitive computing applications is expected to be small – the number of use cases is just not that large – while advanced search, optimization, and decision support will be used in conjunction with systems of record to help typical end-users make better decisions. For example, a cognitive computing support tool that helps customer service representatives arrive at better solutions will have more value when it is integrated into common CRM systems. Not only will the application be available where and when the representative needs it, it will also benefit from the context of the current action the representative is working on. Cognitive computing will be most successful when it is part of the overall IT ecosystem.

Vendors

At present, there are very few vendors in the field. While IBM has recently announced the creation of the Watson Group to commercialize cognitive computing and Google has acquired AI startup DeepMind, there are few companies in the space. Much of the work is still happening at the university level or within research organizations such as SRI. On the corporate front, cognitive computing work is still mainly contained within research organizations such as Microsoft Research.

However, with the intense interest in Big Data and the recent investments from major software companies, it is expected that the number of vendors will increase over the next 5 years.

Conclusion

With the advent of cloud computing, a vendor such as IBM or Google can build giant compute clusters capable of handling the needs of cognitive computing and sell it as a service to a wide variety of companies. Many organizations will only deploy a limited number of cognitive computing applications, making investments in data centers to run them difficult to justify. Cloud computing opens cognitive computing up to a wider audience.

Cognitive computing is still early from a commercialization perspective. It is likely another three to five years before even IBM Watson has an impact on a wide range of companies. For a while at least, cognitive computing will fill a niche in industries where highly complex decision making is the norm, such as healthcare and financial markets. Eventually, it will become a normal tool in every corporate toolbox, helping to augment human decision making.

 


 

 

[1] Something is considered deterministic if it has a predictable outcome based on well-known inputs. Cognitive computing assumes that the outcome of the processing may be unknown at the time the data is acquired and is hence non-deterministic.