Why Cognitive Computing Is A Big Deal For Big Data

When IBM’s Watson won against humans playing "Jeopardy," most of the world considered it just another man-versus-machine novelty act, in the tradition of Deep Blue’s defeat of chess champion Garry Kasparov in 1997. But it’s much more than that.

As Josh Dreller reminded us a few Search Insider Summits ago, when Watson trounced Ken Jennings and Brad Rutter in 2011, it ushered in the era of cognitive computing. Unlike chess, where solutions can be determined solely with massive amounts of number crunching, winning "Jeopardy" requires a very nuanced understanding of the English language as well as an encyclopedic span of knowledge.

Computers are naturally suited to chess. They’re also very good at storing knowledge. In both cases, it’s not surprising that they would eventually best humans. But parsing language is another matter. For a machine to best a human here requires something quite extraordinary. It requires a machine that can learn.

The most remarkable thing about Watson is that no human programmer wrote the program that made it a "Jeopardy" champion. Watson learned as it went along. It evolved the winning strategy. And this marks a watershed development in the history of artificial intelligence. Now, computers have mastered some of the key rudiments of human cognition. Cognition is the ability to gather information, judge it, make decisions and problem-solve. These are all things that Watson can do.

Peter Pirolli, one of the senior researchers at Xerox’s PARC campus in Palo Alto, has been doing a lot of work in this area. One of the things that has been difficult for machines is to “make sense” of situations and adapt accordingly. Remember a few columns ago when I talked about narratives and Big Data? This is where Monitor360 uses a combination of humans and computers: computers to do the data crunching and humans to make sense of the results.

But as Watson showed us, computers do have the potential to make sense as well. True, computers have not yet matched humans in the ability to make sense of an unlimited variety of environmental contexts. We humans excel at quick and dirty sense-making, no matter what the situation. We’re not always correct in our conclusions, but we’re far more flexible than machines. But computers are constantly narrowing the gap -- and as Watson showed, when a computer can grasp a cognitive context, it will usually outperform a human.

Part of the problem machines face when making sense of a new context is that the contextual information needs to be in a format that can be parsed by the computer. Again, this is an area where humans have a natural advantage. We’ve evolved to be very flexible in parsing environmental information to act as inputs for our sense-making.

But this flexibility has required a trade-off. We humans can go broad with our environmental parsing, but we can’t go very deep. We do a surface scan of our environment to pick up cues and then quickly pattern-match against past experiences to make sense of our options. We don’t have the bandwidth to either gather more information or to compute it exhaustively. This is what Herbert Simon called bounded rationality.
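This kind of quick, shallow sense-making can be illustrated with a toy sketch (all cue names and experiences here are invented for illustration): scan a handful of surface cues, then pick whichever past experience overlaps them best, rather than exhaustively computing over all available information.

```python
# Toy sketch of bounded-rationality-style sense-making (hypothetical data):
# match a few observed surface cues against a small store of past
# experiences and return the best overlap -- fast, shallow, and fallible.

past_experiences = {
    "kitchen fire": {"smoke", "heat", "alarm"},
    "burnt toast":  {"smoke", "toaster"},
    "rainstorm":    {"wet", "dark", "thunder"},
}

def make_sense(observed_cues):
    """Return the label of the past experience sharing the most cues."""
    scores = {label: len(cues & observed_cues)
              for label, cues in past_experiences.items()}
    return max(scores, key=scores.get)

print(make_sense({"smoke", "toaster"}))  # -> "burnt toast"
```

Note that the matcher can easily be wrong when cues are ambiguous -- exactly the "not always correct, but flexible" trade-off described above.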

But this is where Big Data comes in. Data is already native to computers, so parsing is not an issue. That handles the breadth issue. But the nature of data is also changing. The Internet of Things will generate a mind-numbing amount of environmental data. This “ambient” data has no schema or context to aid in sense-making, especially when several different data sources are combined. It requires an evolutionary cognitive approach to separate potential signal from noise. Given the sheer volume of data involved, humans won’t be a match for this task. We can’t go deep into the data. And traditional computing lacks the flexibility required. But cognitive computing may be able to both handle the volume of environmental Big Data and make sense of it. 
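One minimal way to picture "separating potential signal from noise" in a schemaless stream of ambient readings is simple baseline deviation: flag any reading that departs sharply from a rolling window of recent values. This sketch (with invented data, and far simpler than what a cognitive system would actually do) shows the idea:

```python
# Minimal sketch (hypothetical data): flag readings in an ambient data
# stream that deviate sharply from a rolling baseline -- a crude stand-in
# for "separating signal from noise" without any predefined schema.
from statistics import mean, stdev

def flag_anomalies(stream, window=5, threshold=3.0):
    """Yield (index, value) for readings more than `threshold` standard
    deviations away from the mean of the preceding `window` readings."""
    for i in range(window, len(stream)):
        baseline = stream[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(stream[i] - mu) / sigma > threshold:
            yield i, stream[i]

readings = [20.1, 20.3, 19.9, 20.2, 20.0, 20.1, 35.7, 20.2]
print(list(flag_anomalies(readings)))  # -> [(6, 35.7)]
```

A fixed statistical rule like this is exactly the kind of inflexible, traditional computing the article contrasts with cognitive approaches, which would instead learn what counts as signal from the data itself.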

If artificial intelligence can crack the code on going both broad and deep into the coming storm of data, amazing things will certainly result from it.

1 comment about "Why Cognitive Computing Is A Big Deal For Big Data ".
  1. H M from LexisNexis, August 15, 2014 at 3:59 p.m.

    Gord, very informative article. Many uses of big data have a measurable positive impact on outcomes and productivity. Areas such as record linkage, graph analytics, deep learning and machine learning have proven critical in helping fight crime, reduce fraud, waste and abuse in the tax and healthcare systems, combat identity theft and fraud, and many other aspects that help society as a whole. It is worth mentioning the HPCC Systems open source offering, which provides a single platform that is easy to install, manage and code. Its built-in analytics libraries for machine learning and integration tools with Pentaho for BI capabilities make it easy for users to analyze Big Data. Its free online introductory courses allow students, academia and other developers to quickly get started. For more info visit:
