What if you could get a few of the top minds in artificial intelligence together and pick their brains?
Fun idea, right?
I’m lucky enough to explore the bigger AI questions over dinner with friends and with my colleagues at work, but those views don’t necessarily get shared – and there’s a lot of noise when it comes to this exciting technology.
So, I decided to tap my peers for their expert views on the top questions around AI, to help provide a more balanced perspective from the AI community:
In Part I of this two-part article, we’ll dive into some personal a-ha moments from AI research, the surprises and discoveries around the technology, and some of the most unusual use cases for AI.
What has been your biggest “eureka” moment in your artificial intelligence research?
Cheyer: The closest thing I’ve had to a “eureka” moment in AI happened while working at Siri, Inc. (before Apple acquired the company to build their voice assistant). In the early days, we had a prototype of Siri that had been developed over a few years as part of academic research; it had interesting technical ideas and seemed to be working quite well for many tasks. We then received our first data dump of 20 million business names, which we loaded into the system as vocabulary. I typed in our most basic natural language command, “start over” (which is supposed to reset the system to a clean contextual state), and the system responded, “Looking for businesses named ‘Over’ in Start Louisiana”! At that moment, I realized that just about every word in the English language doubles as a business name or geographic location, and that the combinatorial explosion of possible ambiguities was much larger than I had realized. There was a significant difference between an academic prototype and completely solving the problem “for real,” with actual data and users providing the requirements. Coaxing the system to climb the hill back to high accuracy under those constraints was one of the most interesting and important projects I have contributed to in my career. The importance of real data is somewhat of an obvious epiphany, but looking back, it was a lesson I had to experience first-hand to fully understand and embrace.
Ackley: I seem to recognize big insights only in hindsight. For example, I pursued artificial intelligence as a way to understand myself and people and the world, but found myself turning towards artificial life — which sounds similar, but AI and alife have separate goals, techniques, and research communities. In retrospect, it had become obvious that although intelligence is hugely important, it also tends to overrate itself. To understand what somebody will do in the world, any minute of any day, there’s tremendous leverage in the single fact that they are a living creature, and must somehow do what living things do, rather than specifically in their being an intelligent one. (Now, alife and AI both can involve lots of computer programming, and my clearer-if-smaller “eureka” moments typically involve nerdy design issues resolved, or, still too often, bugs suddenly diagnosed.)
George: It’s hard to pick one of my own, so instead I’ll highlight some of the outstanding neuroscience research that I think can guide the development of intelligent systems: Hubel and Wiesel, Mountcastle, Rüdiger von der Heydt, Tai Sing Lee, Joaquin Fuster, and Jim DiCarlo are examples of scientists who have shed light on the computational principles behind cortical circuitry. Cognitive scientists like George Lakoff and Mark Johnson have plausible theories on how high-level concepts are created from embodied experience. Putting all this knowledge into computational frameworks of the kind researched by Judea Pearl, Geoff Hinton, and others is extremely exciting.
AI seems to have a natural tendency to occasionally delight and surprise. Have you had such an experience with it, and if so, can you describe it?
Cheyer: I have had many experiences where the AI system I was working on surprised me in a delightful way; that’s one of the most fun reasons to work on AI. Here’s one memorable story that happened when I was working on a project called CALO (Cognitive Assistant that Learns and Organizes), one of the largest government-sponsored AI and machine learning projects in U.S. history. The goal of CALO was to build an intelligent automated assistant that could enable an information worker (like you and me) to be more efficient in their work lives, by helping manage their tasks, calendars, files, projects, communications, and so forth. I was running a version of the system, and as I would work with emails, files, etc., CALO would automatically build a “semantic map” across all of my information, linking who worked on which projects, determining what role they played and what tasks they worked on, and so forth. Since the CALO project was one of the main things I was working on, it was represented in my projects list, along with all of its sub-projects and tasks. One day, I was interacting with the system using natural language, and (talking to CALO), I used the word CALO as a ProjectName. CALO responded in a way consistent with CALO being used as a PersonName (I wish I remembered the exact query/response) and I was completely shocked. I remember thinking, “Has CALO just become sentient and started thinking of itself as a person now?” I later unraveled the cause of this unexpected behavior and it was something less sensational than that, but for just that moment, I had chills of surprise and delight running up and down my spine…
Ackley: One surprise occurred in early work Michael Littman and I did on the evolution of altruism — a thorny problem for supposedly “selfish” evolution. We programmed evolvable creatures to have neural network “brains,” and made an environment where an individual could benefit only by accepting serious risks. The twist was we tested groups of creatures together, and gave them evolvable abilities to emit initially meaningless “sound” and to hear the sounds of the group. We found that non-communicating individualists always emerged first. In some evolutionary conditions, however, we observed later generation creatures achieving much higher scores than any loner could, by signaling each other about environmental opportunities and dangers — even though such “truthful speech” provided no immediate benefit to the speaker. Evolution is about competition but also cooperation; circumstances and details matter. What was surprising at first, if not exactly delightful, was that in some experiments, even after the rise of cooperating communicators, some low-scoring individualists still survived and spread. We found they had evolved into creatures that not only behaved optimally as individuals, but they were also completely deaf, and they constantly shouted nonsense to mess up anybody who wasn’t. So it goes!
George: One of the characteristics of the systems we build at Vicarious is their ability to imagine different scenarios and possibilities. Imagination can be used in unpredictable ways, and so we can produce strange combinations like shapes that are half-dog and half-car. Another way this shows up is in hallucinating things that are not there, like when we humans see shapes in the clouds.
What are some of the more unusual uses of artificial intelligence you’ve seen so far?
Cheyer: AI is being applied to all sorts of tangibly useful tasks, but I particularly like it when AI is applied to creative or artistic domains, which to me, gets to the heart of what makes humans human. Some of my favorite examples include:
• David Cope’s work on EMI (“Experiments in Musical Intelligence”), where a computer program has created beautiful works of music in a variety of styles, from classical, to jazz, to Navajo. This video is an example of his program masquerading as artist “Emily Howell.” Beautiful, no?
• Kim Binsted’s JAPE (“Joke Analysis and Production Engine”), a program that creates puns and other humor (e.g. “What do you call a Martian who drinks beer? An ale-ien!”)
• Story generation by companies like Automated Insights and Narrative Science can write “prose” that summarizes certain events or situations. For example, “Twenty seven Colonials came to the plate and the Virginia pitcher vanquished them all, pitching a perfect game. He struck out 10 batters while recording his momentous feat.”
• Harold Cohen’s AARON, a robot that creates art, not with pixels but with real paint. It composes the work, chooses colors, mixes them, and then actually does the painting, from start to finish, all without photos or other input as reference. Below is an example painted by AARON in 1992. Others have expanded on Harold Cohen’s seminal work, such as Benjamin Grosser’s Interactive Robotic Painting Machine and the e-David robot painting machine by Oliver Deussen and Thomas Lindemeier.
Ackley: I’m pretty ignorant of the latest applications, although the recent dreamy images generated by Google engineers from trained artificial neural networks are certainly striking.
This question presumes there are usual uses of AI, and I think that’s noteworthy in itself. Technology in society is by turns new, familiar, expected, boring, and finally invisible — and many AI innovations are now well down that path, from speech recognition for aggravating voice menus to image recognition of zip codes and license plates to fuzzy logic for driving trains and cooking rice.
George: Many years ago, a big energy company approached us with a video analysis project. The company was planning to build a wind turbine farm, and needed an automated way to count the goats moving along different migration routes by analyzing surveillance video footage.
I hope you enjoyed the discussion and viewpoints from these experts.
Stay tuned for Part II, in which we’ll explore:
• The biggest myths around this exciting technology
• How the current fear around AI may evolve in the future
• What the future may bring for AI research