By Rob High
AUSTIN, TX – It’s been 7 years since Watson made its debut on the Jeopardy! game show in the United States. Since then, the world has changed dramatically.
We see evidence of AI everywhere – in the way search is performed; in chatbots; in voice assistants; in our cars, homes and offices; in medical practice; in financial services. Over 88,000 papers on AI were published in China, the USA, Japan and the UK between 2011 and 2015. Stanford saw more than 100,000 students enroll in its open online class in August of 2011.
What’s more, global investment in AI-based solutions is soaring. According to IDC, 75% of commercial enterprise apps will use AI by 2020. And none too soon. AI is helping doctors find better treatments and the very latest advances in healthcare for their patients. It is helping businesses combat financial crimes. It is helping engineering companies do a better job of constructing equipment. It is helping CPAs do a better job of filing tax forms for their clients. It is helping businesses adhere to contractual obligations. It is helping consumers get better product support.
This drive to adopt AI is fueled by the benefits it brings to our everyday lives – from getting us help faster to helping us buy new dog food. More importantly, we’re facing an avalanche of data – more than we can possibly ever hope to keep up with. Some 2.5 million English-language, peer-reviewed scientific papers were published in 2015. Approximately 2.2 million books are published every year. Experts are predicting a 4,300 percent increase in annual data production by 2020. It is virtually impossible to sift through all that. In other words, most of your decisions are being made using a very narrow slice of the available, relevant information. AI is essential to helping us manage and leverage that data when making those decisions.
AI is a tremendous tool for empowering human talent
Even though AI is all around us, and its utility is becoming clearer by the day, AI is still poorly understood by the vast majority of us. Many people naturally assume that AI – artificial intelligence – is about replicating the human mind, or creating an artificial form of our human intelligence. There is a historical precedent for this intuition. John von Neumann’s wife recalled that he “wanted to build a fast, electronic, completely automatic, all-purpose computing machine which could answer as many questions as there were people who could think of asking them.” As far back as 70 years ago, people imagined that a machine could think like a human.
“We’re only 7 years into the current era of AI computing. We’re likely to see another several decades of improvement.”
However, there are several flaws with this assumption. First, we really don’t understand the human brain that well. It’s hard to replicate something of which you have only a surface-level understanding. But the more important reason is economics. Consider for a moment the last 10,000 years of human civilization. Virtually every tool we have ever created that has had lasting economic value – hammers, shovels, pulleys, hydraulics, bicycles, cars, microscopes, telescopes, the computer itself – has retained its utility chiefly because it amplifies and augments our human strength or reach. These tools have made it possible to do things that we could not do by ourselves. They have enabled us to create homes, schools and cities. They have enabled us to go further, and faster. They have made it possible for us to understand and tackle harder problems.
AI is a tool as well, enabling us to sift through tremendous amounts of information to help us make a better decision. Perhaps it is better to call it augmented intelligence – a tool for augmenting and amplifying our human intelligence. It helps us see other perspectives that we might not have considered; to see through our biases; to think of the questions we are not thinking to ask; to explore new alternatives; to free our mind to exercise our creativity.
Call it augmented – not artificial
More than science, economics will shape the nature of this tool. Scientists may open new paths for the potential evolution of this technology. However, unless it delivers utility, people won’t continue to use it. If it’s not creating value, businesses will stop investing in it. If it’s not gaining attention, scientists will exert their efforts in more fruitful areas of exploration. AI will continue to evolve, but it won’t be in the direction of replicating the human mind.
There is a lot of discussion about the impact that AI will have on jobs. The most cited example is of autonomous cars eliminating the need for taxi drivers, or robots eliminating the need for factory workers (a concern which actually predates the advent of AI as we now know it). There is likely going to be some dislocation within the workforce – there always is and always has been with the introduction of new technologies. The introduction of the steam shovel in the 1830s eliminated the need for thousands of workers digging ditches with hand shovels. However, there are now over half a million heavy equipment operators in the United States, and demand for more is growing at 12% a year.
On the other hand, one can make the case that AI is not eliminating jobs, but rather eliminating tasks within a job. Significantly, these tend to be the more mundane and tedious tasks that would otherwise sap our morale and enthusiasm. For example, conversational agents (also referred to as chatbots) are handling a growing number of the most frequently asked questions from clients. In doing so, they are helping clients resolve their most basic concerns quickly and accurately.
Autodesk has off-loaded 100,000 call-center conversations a month with Watson Assistant, and has reduced resolution time by 99% — from an average of 1.5 days to 5.4 minutes for most of their inquiries. However, rather than eliminating jobs, this has freed up the call-center staff to focus their attention on more important and more challenging tasks. Consequently, AI has gotten their customers back on their feet faster, improved customer satisfaction, increased product uptake, and just as importantly, improved job satisfaction for the Autodesk support team.
Beyond the limits of imagination
AI is still in its infancy. It’s been 7 decades since Von Neumann began work on the blueprint for modern computing. We’ve seen tremendous advances in the underlying technologies of programmable computing in the last 70 years. We’ve transitioned from using patch-panels and punch cards of those early days to radio-buttons and sliders on modern smartphones and tablets. We’re only 7 years into the current era of AI computing. We’re likely to see another several decades of improvement.
It is as hard for us to imagine what computer systems will look like in the decades ahead as it was for von Neumann and his counterparts to imagine what they would look like now. I doubt that he ever contemplated the idea that we would hold in the palm of our hand a computer a million times more powerful than anything available then – let alone that such computers would power everything from our washing machines to our airplanes. What we can be certain of, no matter what they look like or what questions they are able to answer, is that AI will enable us to do things we could never do and could never have imagined doing – before or without it.
Rob High is an IBM Fellow, Vice President, and Chief Technology Officer at IBM Watson. He has overall responsibility to drive Watson technical strategy and thought leadership. As a key member of the Watson Leadership team, Rob works collaboratively with the Watson engineering, research, and development teams across IBM. Catch his keynote at The AI Summit London, June 13-14.