AI Business is part of the Informa Tech Division of Informa PLC
The Economist recently published an irreverent piece about one of the latest advancements in AI technology: assembling IKEA furniture. “Now that machines have mastered one of the most baffling ways of spending a Saturday afternoon,” the magazine joked, “can it be long before AIs rise up and enslave human beings in the silicon mines?”
Kidding aside, the article goes on to note that not only did the Singapore-based researchers behind the project train a robot to take care of a task that most of us would more than happily hand off, but, when it came down to brass tacks, the robot didn’t actually perform the task all that well. “It took a pair of [IKEAbots], pre-programmed by humans, more than 20 minutes to assemble a chair that a person could knock together in a fraction of the time.”
Personally, I’d happily let the robot work at its own pace if it meant I didn’t have to bother with a bunch of Allen keys and Swedish instructions. But the larger point is clear: for all the fear surrounding AI and its potential to dramatically disrupt the workforce and make human beings redundant, we should remember that our machines aren’t necessarily good at the same things we are. For all of their intricate circuitry and polymer sheen, the IKEAbots simply lack the manual dexterity that, after a hundred-thousand-odd years of human evolution, most of us more or less take for granted.
A computer might be able to humiliate you at chess, but this has less to do with your lack of skill than with the fact that chess requires exactly the sort of pattern recognition, probability analysis, and algorithmic processing that computers are built to excel at. Do you remember back in 2011 when Watson, a computer developed by IBM’s DeepQA project, won on Jeopardy? I, for one, was not at all surprised. After all, what hope did those measly humans have going up against such a machine? Not even a fair contest!
However, if you dig a little deeper, you come to realize what a remarkable accomplishment Watson’s victory truly was, because computers don’t “think” in the same way we do. Computer cognition is rooted in a database of elements and the links forged between those elements. A computer can only answer a question if the right information has been programmed into its database. In that regard, Watson is not so different from its human counterparts.
This becomes even trickier when you start to consider the complexities of the language in which Jeopardy’s questions are asked. Like many languages, English is rife with subtlety and slang. Often words make sense only because of the context in which they are placed. If I asked you whether such and such a person was born in the fifties, you would likely infer that I meant the 1950s, as in the decade. But why should Watson be able to recognize that kind of shorthand? In fact, Watson didn’t - at least, not until its programmers taught it to.
Part of the fear surrounding AI stems, I would say, from our tendency to confuse human and machine capacity. We live in an age rife with incredible gadgets designed to maximize our comfort and convenience, from the smartphone to the coffee maker. This has bred a misconception that machines can do anything we can, if not better; and if there isn’t a machine that can do a given thing now, well, it’s only a matter of time. Add to this the way we tend to think of ourselves in machine-based metaphors. We talk about our brains like they’re flesh-and-blood computers, about our “circuitry” and “storing data.” As with our physical bodies, however, human and machine cognition are very different animals. We may not be able to process every possible upcoming move in a chess match (something that is still pretty tricky for computers to pull off), but no currently existing computer can associate memory and anticipation with a sensory repertoire while navigating three-dimensional space. We shouldn’t take it as a given that computers will ever be able to think or move as dynamically as we do.
As we forge full steam ahead into the AI era, we need to keep in mind that, in all likelihood, the greatest results will stem from collaboration between people and computers. The best partnerships will take advantage of, and strike a balance between, our distinct abilities: AI efficiency and programming paired with human problem solving and agility. A camera’s AI might be able to take care of a lot of grunt work on our behalf – focus on a subject, judge the lighting conditions, and generate the optimum settings all in under a second – but it can’t make that subject smile.
Nav Dhunay is a Canadian tech entrepreneur and investor. He is the Co-founder & CEO of Imaginea Ai, a platform that democratizes artificial intelligence (AI) and puts it in the hands of every organization across the globe.