Despite what some critics have claimed, artificial intelligence will not destroy mankind anytime soon, according to AI2's CEO, computer scientist Oren Etzioni.

At a recent AI conference in New York, Scientific American interviewed Etzioni about his thoughts on artificial intelligence and why it will not destroy the human race in the near future.

The interview is extensive and very insightful, addressing everything from "human-level AI" to the legal and ethical issues of artificial intelligence.

Below is a short excerpt from the interview:

You’ve mentioned that human-level AI is at least 25 years away. What do you mean by human-level AI, and why that time frame?

“The true understanding of natural language, the breadth and generality of human intelligence, our ability to both play Go and cross the street and make a decent omelet—that variety is the hallmark of human intelligence and all we’ve done today is develop narrow savants that can do one little thing super well. To get that time frame I asked the fellows of the Association for the Advancement of AI when we will achieve a computer system that’s as smart as people are in the broad sense. Nobody said this was happening in the next 10 years, 67 percent said the next 25 years and beyond, and 25 percent said “never.” Could they be wrong? Yes. But who are you going to trust, the people with their hands on the pulse or Hollywood?”

Why do so many well-respected scientists and engineers warn that AI is out to get us?

“It’s hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I’d have to guess that talking about black holes gets boring after awhile—it’s a slowly developing topic. The one thing that I would say is that when they and Bill Gates—someone I respect enormously—talk about AI turning evil or potential cataclysmic consequences, they always insert a qualifier that says “eventually” or this “could” happen. And I agree with that. If we talk about a thousand-year horizon or the indefinite future, is it possible that AI could spell out doom for the human race? Absolutely it’s possible, but I don’t think this long-term discussion should distract us from the real issues like AI and jobs and AI and weapons systems. And that qualifier about “eventually” or “conceptually” is what gets lost in translation…”

How do you ensure that an AI program will behave legally and ethically?

“If you’re a bank and you have a software program that’s processing loans, for example, you can’t hide behind it. Saying that my computer did it is not an excuse. A computer program could be engaged in discriminatory behavior even if it doesn’t use race or gender as an explicit variable. Because a program has access to a lot of variables and a lot of statistics it may find correlations between zip codes and other variables that come to constitute a surrogate race or gender variable. If it’s using the surrogate variable to affect decisions, that’s really problematic and would be very, very hard for a person to detect or track. So the approach that we suggest is this idea of AI guardians—AI systems that monitor and analyze the behavior of, say, an AI-based loan-processing program to make sure that it’s obeying the law and to make sure it’s being ethical as it evolves over time.”
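The surrogate-variable problem Etzioni describes can be sketched in a few lines of code. The data, column names, and metric below are hypothetical illustrations, not part of the interview; the point is simply that a feature like zip code can encode a protected attribute even when that attribute is never given to the model:

```python
from collections import defaultdict

# Hypothetical loan applications: the model never sees "group",
# but "zip_code" happens to be strongly correlated with it.
applications = [
    {"zip_code": "98101", "group": "A", "approved": True},
    {"zip_code": "98101", "group": "A", "approved": True},
    {"zip_code": "98101", "group": "A", "approved": False},
    {"zip_code": "60629", "group": "B", "approved": False},
    {"zip_code": "60629", "group": "B", "approved": False},
    {"zip_code": "60629", "group": "B", "approved": True},
]

def surrogate_strength(records, feature, protected):
    """How well does `feature` predict `protected`?
    Returns the fraction of records whose protected value matches
    the majority protected value among records sharing their
    feature value. 1.0 means the feature is a perfect proxy."""
    by_value = defaultdict(list)
    for r in records:
        by_value[r[feature]].append(r[protected])
    matches = 0
    for groups in by_value.values():
        majority = max(set(groups), key=groups.count)
        matches += sum(1 for g in groups if g == majority)
    return matches / len(records)

def approval_rate(records, protected, value):
    """Approval rate among records with the given protected value."""
    subset = [r for r in records if r[protected] == value]
    return sum(r["approved"] for r in subset) / len(subset)

strength = surrogate_strength(applications, "zip_code", "group")
gap = (approval_rate(applications, "group", "A")
       - approval_rate(applications, "group", "B"))

# A guardian in the spirit Etzioni describes would flag the feature
# when proxy strength is high AND outcomes differ between groups.
print(f"zip_code predicts group for {strength:.0%} of applicants")
print(f"approval gap between groups: {gap:.0%}")
```

In this toy data, zip code identifies the group perfectly and the approval rates differ, which is exactly the combination a monitoring system would need to surface, since no single variable in the model is labeled "race" or "gender."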

Do AI guardians exist today?

“We issued a call to the community to start researching and building these things. I think there might be some trivial ones out there but this is very much a vision at this point. We want the idea of AI guardians out there to counter the pervasive image of AI—promulgated in Hollywood movies like The Terminator—that the technology is an evil and monolithic force.”

The full interview can be found on Scientific American's website.

For the latest news and conversations about AI in business, follow us on Twitter, join our community on LinkedIn, and like us on Facebook.