Paul Allen has been waiting for the emergence of intelligent machines for a very long time. As a young boy, Allen spent much of his time in the library reading science-fiction novels in which robots manage our homes, perform surgery and fly around saving lives like superheroes. In his imagination, these beings would live among us, serving as our advisers, companions and friends.
Now 62 and worth an estimated $17.7 billion, the Microsoft co-founder is using his wealth to back two separate philanthropic research efforts at the intersection of neuroscience and artificial intelligence that he hopes will hasten that future.
The first project is to build an artificial brain from scratch that can pass a high school science test. It sounds simple enough, but trying to teach a machine not only to respond but also to reason is one of the hardest software-engineering endeavors attempted — far more complex than building his former company’s breakthrough Windows operating system, said to have 50 million lines of code.
The second project aims to understand intelligence by coming at it from the opposite direction — by starting with nature and deconstructing and analyzing the pieces. It’s an attempt to reverse-engineer the human brain by slicing it up — literally — modeling it and running simulations.
“Imagine being able to take a clean sheet of paper and replicate all the amazing things the human brain does,” Allen said in an interview.
He persuaded University of Washington AI researcher Oren Etzioni to lead the brain-building team and Caltech neuroscientist Christof Koch to lead the brain-deconstruction team. For them and the small army of other PhD scientists working for Allen, the quest to understand the brain and human intelligence has parallels in the early 1900s when men first began to ponder how to build a machine that could fly.
Some believed the best way was to imitate birds; others, like the Wright brothers, built machines that looked nothing like any creature that flew in nature. And it wasn’t clear back then which approach would get humanity into the skies first.
Whether they create something reflected in nature or invent something entirely novel, the mission is the same: conquering the final frontier of the human body — the brain — to enable people to live longer, better lives and answer fundamental questions about humans’ place in the universe.
“We are starting with biology. But first you have to figure out how you represent that knowledge in a software database,” Allen said. “I wish I could say our understanding of the brain could inform that, but we’re probably a decade away from that. Our understanding of the brain is so elemental at this point that we don’t know how language works in the brain.”
Hollywood vs. reality
In the Hollywood version of the approaching era of artificial intelligence, the machines will be so sleek and sophisticated and alluring that humans will fall in love with them. The 21st century reality is a little more boring.
At its most basic level, artificial intelligence is an area of computer science in which coders design programs to enable machines to act intelligently, in the ways that humans do. Today’s AI programs can adjust the temperature in your home or your driving route to work based on your patterns and traffic conditions. They can tell you someone stole your credit card to make a charge in a strange city or who has the best odds of winning tonight’s soccer match.
Already in use
In medicine, artificial intelligence algorithms are already being used to do things such as predicting manic episodes in people with mental illness; pinpointing dangerous hot spots of asthma on maps; guessing which cancer treatments might give you a better chance at living longer based on your genetic makeup and medical history; and finding connections between things such as weather, traffic and your health.
But when it comes to general knowledge, scientists have struggled to create a technology that can do as well as a 4-year-old child on a standard IQ test. Although today’s computers are great at storing knowledge, retrieving it and finding patterns, they are often still stumped by a simple question: “Why?”
Do we really want it?
So while Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana — despite their maddening quirks — do a pretty good job of reminding you what’s on your calendar, you’d probably fire them in less than a week if you put them up against a real person.
That will almost certainly change in the coming years as billions of dollars in Silicon Valley investments lead to the development of more sophisticated algorithms and upgrades in memory storage and processing power.
The most exciting — and disconcerting — developments in the field may be in predictive analytics, which aims to make an informed guess about the future. Although it’s currently mostly being used in retail to figure out who is more likely to buy, say, a certain sweater, there are also test programs that attempt to figure out who might be more likely to get a certain disease or even commit a crime.
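At its core, a predictive model like the retail example above boils down to combining a shopper’s known traits into a single likelihood score. The sketch below is a deliberately simplified illustration, not any company’s actual system: the features and hand-set weights are hypothetical stand-ins for what a retailer might learn from past sales data.

```python
import math

def purchase_likelihood(weights, bias, features):
    """Logistic model: a weighted sum of features squashed to a 0-1 probability."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [viewed the sweater, bought similar items before,
# days since last visit]. Weights and bias are illustrative, not fitted.
weights = [1.2, 2.0, -0.05]
bias = -2.5

likely_buyer = purchase_likelihood(weights, bias, [1, 1, 3])     # engaged shopper
casual_browser = purchase_likelihood(weights, bias, [0, 0, 60])  # lapsed visitor
```

In a real deployment the weights would be fitted to millions of past transactions rather than set by hand, but the mechanism — turning patterns in historical data into a probability about future behavior — is the same whether the prediction concerns a sweater purchase, a disease risk or a crime.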
Google, which acquired AI company DeepMind in 2014 for an estimated $400 million, has been secretive about its plans in the field, but the company has said its goal is to “solve intelligence.” One of its first real-world applications could be to help self-driving cars become better aware of their environments. Facebook chief executive Mark Zuckerberg says his social network, which has opened three different AI labs, plans to build machines “that are better than humans at our primary senses: vision, listening, etc.”
All of this may one day be possible. But is it a good idea?
Advances in science often have made people uneasy, even angry, going back to Copernicus, who placed the sun — not the Earth — at the center of the universe. Artificial intelligence is particularly sensitive, because the brain and its ability to reason are what make us human.
In May 2014, cosmologist Stephen Hawking caused a stir when he warned that intelligent computers could be the downfall of humanity and “potentially our worst mistake in history.”