I am, admittedly, a bit of a technology geek. I’m no longer quite the early adopter that I used to be (which has resulted in a slightly healthier bank balance, and a little less useless technology and software filling storage areas in my basement). But I confess to being chronically on the lookout for ways to make life easier: solving the annoyances and problems that waste time, and finding efficiencies wherever they can be had. Technology usually plays a significant role in that.
So, I was intrigued to get invited to a lunch last week to learn about recent insights that were presented at a technology conference at the beginning of the month. The conference was World of Watson. The company was IBM. And the story they had to share was pretty cool.
I do try to stay up to date on what’s emerging and exciting in the technical world (see aforementioned aspiring-early-adopter disclaimer). And I’ll be the first person to admit that I don’t typically put IBM on the list of organizations producing really cool technology. Software, services and hardware for large corporate empires? Sure. Leading edge innovation that could change your very personal world? Not so much.
Not that IBM hasn’t done some interesting and intriguing things. They built Deep Blue, the first computer to beat a world chess champion (Garry Kasparov) under standard chess tournament time controls, in 1997. And they built Watson, who famously beat the two greatest Jeopardy winners of all time, Ken Jennings and Brad Rutter.
Beating a chess grandmaster is certainly a huge accomplishment. But winning at Jeopardy is a non-trivial task as well. A computer winning at Trivial Pursuit would be a little easier: load up the computer with as much trivia as possible, and let it go to work. Jeopardy is a different challenge in that you are given the answer and have to work out the question. That requires a great deal of inference and reasoning, not to mention the ability to simply understand the clue in the first place.
Both of these competitions required what were, at the time, amongst the most powerful computers of their day. The original Watson ran on ten racks housing 90 Power 750 servers integrated together, a huge amount of horsepower. Each of these computers, Deep Blue and Watson, was developed for a very specific and focussed purpose, and required a great deal of research, development and programming to meet the challenge. This is not exactly general-purpose technology available to the average human.
What I learned last week is that all of that is changing. An article had crossed my radar a little while ago noting that IBM was making Watson APIs available to the broader developer community. I didn’t pay a lot of attention at the time, but it’s significant. It means that the ability to leverage the cognitive computing capabilities of Watson is being opened up in more general and universal ways.
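To make that concrete, here is a minimal sketch of what calling a cloud-hosted cognitive service over REST tends to look like. The endpoint URL, credentials and response fields below are placeholders of my own invention, not IBM’s actual Watson API; the point is simply that the heavy lifting happens on someone else’s servers, and any developer’s application can send it questions.

```python
# Rough sketch only: the URL, credentials and response shape below are
# placeholders, not IBM's actual Watson endpoints.
import requests

SERVICE_URL = "https://example.com/cognitive/v1/analyze"  # hypothetical endpoint
API_KEY = "your-api-key-here"

def analyze_text(text):
    """Send a snippet of text to a (hypothetical) cognitive service and
    return its interpretation: intent, entities, urgency, and so on."""
    response = requests.post(
        SERVICE_URL,
        auth=("apikey", API_KEY),
        json={"text": text, "features": ["intent", "entities", "urgency"]},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyze_text("Can you send me last week's steering committee deck before 2pm?")
    print(result)  # e.g. {"intent": "request_document", "urgency": "high", ...}
```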
How that might affect an average person like you and me is pretty impressive. We’ve been seeing more and more about how artificial intelligence, and particularly machine learning, has the potential to transform our devices and our lives. Science fiction imagines the day when the computers take over. In reality, though, this isn’t about the computers taking over and thinking for themselves. But it is about augmenting our ability to organize, function and decide.
You might wonder: isn’t this just like my phone telling me that once my meeting is done, I should pick up my dry cleaning from right next door? Not really. While that feels smart (your phone looks at your calendar, looks up the GPS location of the meeting and does a geographic search for anything else you need to do nearby), it is only your phone doing what it was told. That “look for things around you” rule is hard-coded in your phone. Someone wrote an algorithm that told it to do that for you. Cognitive computing is something different, in that it’s trying to answer questions that can’t necessarily be predicted (or programmed) in advance.
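To illustrate what “hard-coded” means here, below is a toy sketch of that errands-near-your-meeting rule. Everything in it (the function names, the radius, the sample data) is my own invention for illustration; the point is that every step was decided by a programmer in advance, and the phone can never suggest anything the rule doesn’t spell out.

```python
# Purely illustrative: a hard-coded "errands near your next meeting" rule.
# Every step below was decided in advance by a programmer; nothing is inferred.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Errand:
    name: str
    lat: float
    lon: float

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def errands_near_meeting(meeting_lat, meeting_lon, todo_list, radius_km=0.5):
    """Return the to-do items within radius_km of the meeting location."""
    return [e for e in todo_list
            if km_between(meeting_lat, meeting_lon, e.lat, e.lon) <= radius_km]

# The phone "knows" to suggest the dry cleaner only because this rule exists.
suggestions = errands_near_meeting(
    43.6532, -79.3832,
    [Errand("Pick up dry cleaning", 43.6535, -79.3830),
     Errand("Buy birthday card", 43.7000, -79.4000)],
)
print([e.name for e in suggestions])  # ['Pick up dry cleaning']
```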
IBM’s take on cognitive computing is pretty interesting. What gets deemed “cognitive” comes down to four abilities: the ability of computers to understand imagery, language and unstructured data; to reason, infer and form hypotheses; to learn, develop and adapt on a continuous basis; and to interact, communicating in a relatable way. Doing all of that is a tall order. Doing it well is something altogether more impressive.
One of the concepts I saw demonstrated was the idea of a virtual executive assistant. Imagine, on your phone, an avatar that helps you to manage your day. You’ve got five minutes before your next meeting. Your assistant knows this, reminds you of it, and offers you the five or six most important things you need to pay attention to.
How does it know what’s important? That’s a product of the relationship you have with the people sending you emails and messages, the nature of each request being made and the urgency attached to it. Let’s take a simple example: your boss has sent you an email asking you for the steering committee presentation from last week in preparation for a conversation this afternoon.
Think about how you would deal with this in real life. You’d have to see the email, possibly on your phone. The file might be on your laptop, or your desktop. Or it might be on the network. Regardless, you’d have to compose a response to your boss’s email, find the file, confirm it was the right one, attach it and hit ‘Send.’ On a good day, you MIGHT get that all done in the five minutes you had available. Often, it might have to wait until after your meeting.
So how does a Watson-enabled executive assistant approach the same task? It sees the email, recognizes its importance, and puts it at the top of your ‘to deal with’ queue. And let’s be honest, that prioritization alone is worth the price of admission. But then it gets more interesting: your executive assistant recognizes the nature of the request, finds the most likely file, and asks if you want it sent. You can preview the file, see a list of other likely alternatives or simply confirm. Your assistant composes an email, attaches the file and sends it off. Total time to do this? Maybe 10 or 20 seconds. More than enough time to deal with the rest of the priorities.
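As a rough illustration of the prioritization piece, here is a toy scoring function that weighs exactly the signals described above: who the sender is to you, what kind of request it is, and how urgent it is. The categories and weights are invented for this sketch; in a genuinely cognitive system they would be learned from your behaviour over time rather than typed in by a programmer.

```python
# Toy illustration of prioritizing messages by sender relationship,
# request type and urgency. The weights are invented, not Watson's.
from dataclasses import dataclass

RELATIONSHIP_WEIGHT = {"boss": 3.0, "client": 2.5, "peer": 1.5, "newsletter": 0.1}
REQUEST_WEIGHT = {"send_document": 2.0, "schedule_meeting": 1.5, "fyi": 0.5}

@dataclass
class Message:
    sender_relationship: str   # e.g. "boss"
    request_type: str          # e.g. "send_document"
    hours_until_deadline: float

def priority(msg: Message) -> float:
    """Higher score = deal with it sooner."""
    urgency = 1.0 / max(msg.hours_until_deadline, 0.25)  # closer deadline, higher urgency
    return (RELATIONSHIP_WEIGHT.get(msg.sender_relationship, 1.0)
            * REQUEST_WEIGHT.get(msg.request_type, 1.0)
            * urgency)

inbox = [
    Message("boss", "send_document", hours_until_deadline=3),
    Message("newsletter", "fyi", hours_until_deadline=48),
    Message("client", "schedule_meeting", hours_until_deadline=24),
]
for msg in sorted(inbox, key=priority, reverse=True):
    print(round(priority(msg), 2), msg.sender_relationship, msg.request_type)
```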
Or imagine that you’re a project manager, and your sponsor is asking for a status on the project. Imagine the power of your assistant being able to draft a status report. When we have to write our own, we have to manually assemble the data from multiple systems, draft the report, proofread it and then distribute the information. It’s repetitive work, but it still takes time and effort to do. Why shouldn’t your assistant do the leg work for you, pulling together the data, compiling it into a coherent form, flagging the priority items that require attention and focus, and presenting you with a draft for your review and editing?
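Here is a sketch of that leg-work idea, assuming the assistant can query a scheduling tool, a finance system and an issue tracker. The data sources, fields and thresholds are stand-ins I made up; what matters is the shape of the task: gather, compile, flag and hand back a draft for a human to review.

```python
# Illustrative only: assembling a draft status report from a few data sources.
# The sources and fields are stand-ins for whatever systems a real project uses.
from datetime import date

def fetch_schedule():   # stand-in for a query to your scheduling tool
    return {"percent_complete": 62, "milestones_at_risk": ["UAT sign-off"]}

def fetch_budget():     # stand-in for a query to your finance system
    return {"budget": 500_000, "spent": 342_000}

def fetch_issues():     # stand-in for a query to your issue tracker
    return [{"id": 14, "title": "Vendor contract pending", "severity": "high"}]

def draft_status_report() -> str:
    schedule, budget, issues = fetch_schedule(), fetch_budget(), fetch_issues()
    flagged = [i for i in issues if i["severity"] == "high"]
    lines = [
        f"Project status draft, {date.today():%Y-%m-%d}",
        f"Schedule: {schedule['percent_complete']}% complete; "
        f"at risk: {', '.join(schedule['milestones_at_risk']) or 'none'}",
        f"Budget: ${budget['spent']:,} spent of ${budget['budget']:,}",
        "Items needing your attention:",
        *[f"  - #{i['id']} {i['title']}" for i in flagged],
    ]
    return "\n".join(lines)

print(draft_status_report())  # a draft for the human to review and edit
```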
The promise of cognitive computing isn’t that it replaces people with computers, but that it enhances our ability as individuals to function. We can focus on the complex, ambiguous, difficult and truly value-added parts of our workload, and get assistance with the necessary but more tedious aspects of our jobs.
For me, the more help I can get prioritizing and focussing my work, the happier, more engaged and more productive I’ll be. The promise of cognitive computing is that it starts leveraging the same kind of power that can play a mean game of Jeopardy—or deliver a surprisingly accurate cancer diagnosis—and literally putting it in my back pocket. The astonishing thing is that this isn’t science fiction, and it’s not even a projection of where technology will be in coming decades; it will probably be out there and available for us to use within the next year.
Given my clear enthusiasm, it would be easy to read this as nothing more than a glowing advertisement for IBM. It’s not. Full disclosure: I don’t have a formal relationship with the company, although they did buy me a nice lunch. More importantly, I appreciated the opportunity to hear about what they are doing, where they are going and what the near term for technology looks like. What they are building, and their vision for cognitive computing in particular, is impressive. It certainly changed my perspective on IBM as a source of innovation. They are not just building solutions for large corporate organizations with budgets in the hundreds of millions. They’re also building technology that can better empower me as a person. That’s pretty damned exciting to think about.
Rollie Cole says
Note that Watson plus humans beat Watson alone in a variety of activities. Just as “wearables” constitute experiments around various forms of integrating computer hardware with human bodies, I suspect the Watson API and other equivalent projects will constitute experiments around various forms of integrating computer software with human minds.