Developing a system for users to program by demonstration, not code

Deborah Yao, Editor

September 15, 2022



Intel is doing cutting-edge research in the field of AI. This week, the chipmaker said it has partnered with Mila, an AI research institute in Canada, to develop advanced AI techniques to tackle the global challenges of climate change, new materials discovery and digital biology.

AI Business recently sat down with Lama Nachman, an Intel Fellow and director of the Intelligent Systems Research Lab at Intel Labs, to talk about Intel’s latest innovations in the area of Responsible AI. Nachman’s team specifically focuses on optimizing human-AI collaboration.

The following is an edited transcript of that conversation.

AI Business: As a hardware company, how do you implement Responsible AI?

Lama Nachman: We actually operate at all levels of the (technology) stack, and in each one of those levels, there is definitely something you can do from a responsibility perspective. … From a platform perspective, we think about the areas of responsible AI, whether it is privacy, security, bias, human oversight − all of these different things really require different types of algorithmic innovation and hardware support. Many of those problems tend to be computationally intractable today.

Think about homomorphic encryption. Think about how you get much more security into the hardware to do a lot of this attestation. Many of those problems actually have roots in hardware. If you're able to change your platforms, you can make those algorithmic innovations possible, and you can also increase security by creating secure spaces within your hardware to do that − like SGX (Software Guard Extensions), for example.
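To give a flavor of what computing on encrypted data means, the toy Python sketch below implements Paillier, a simple additively homomorphic scheme. It is illustrative only, with tiny insecure parameters and no relation to Intel's products, but it shows how two ciphertexts can be combined so that the result decrypts to the sum of the underlying numbers.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic encryption.
# Illustrative only -- tiny primes, no hardening, not a production scheme.

def keygen(p=61, q=53):
    n = p * q
    lam = math.lcm(p - 1, q - 1)      # Carmichael function of n = p*q
    g = n + 1                         # standard simple choice of generator
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)     # inverse of L(g^lam mod n^2), L(x) = (x-1)//n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be coprime with n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
c_sum = (c1 * c2) % (pub[0] ** 2)     # multiplying ciphertexts adds the plaintexts
print(decrypt(pub, priv, c_sum))      # prints 42 without ever decrypting 17 or 25
```

Doing this at realistic key sizes and workloads is computationally heavy today, which is why hardware support matters.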

… What we've been doing internally (since 2017) has been asking, ‘what is the framework for Responsible AI that makes sense for Intel?’ And essentially, it has multiple pillars within it. There's the governance piece, which is focused on how we ensure that all the AI we're developing internally, across that stack, is developed responsibly.

What is our stance in terms of principles? How do we put those into action − what processes need to happen within the company as AI is developed? And what governance structures are needed to ensure that those are being adhered to? So through the principles of ethical impact assessment and a corporate-level Advisory Council, (learnings are shared) with our customers and whoever else can benefit from it.

"So now you're programming by demonstration. … You're now training the user as a developer."

           - Lama Nachman, director of the Intelligent Systems Research Lab, Intel Labs

The second pillar is to invest heavily in research, both internally and by funding academia, in specific areas where we think we can move the needle. So privacy and security, human-AI collaboration, trusted media − these are areas where we've invested with others to try to actually move that space. And all of that learning goes into our products so that we're developing the right products, the right tools, etc.

The third part is learning and working with the industry to advance the needle for everybody. That means we need to be transparent about what it is that we're doing. And if you look back, the best analogy that I can think of is our diversity representation. We went out and said, ‘these are our diversity numbers’ when people were not publishing those numbers internally or externally. And we said, ‘this is the problem. This is where we want to get to, and this is how we're holding ourselves accountable to get it right.’

AI Business: What are you doing to advance responsible AI going forward?

Nachman: We're doing a lot of work on human-AI collaboration specifically. This is kind of a nascent area of research.

(Our work revolves around) how do you change the narrative of human-AI competition? … If you assume that AI isn't coming to replace humans, how can an AI system help, support and amplify human capability? (Conversely,) how can the human make the AI system more robust? How do we provide feedback in the loop while we're operating with AI to help AI learn from us as we move forward in solving these problems together?

… Let me give you an example in robotics (Embodied AI). One of the areas we've been looking at is this notion of human-robot collaboration. Programming robots is very hard; you have to have robotics experts to go program robots. Those are the developers.

Now imagine you have people whom you want to empower with AI and robotics who happen to be working on tasks: 'I'm a validation engineer. I want to go validate that system. I need a robot to help me with that process.' How am I going to get robotics experts to program this for me when I don't have the scale of 100,000s of units (to justify doing the project)?

Now if you empower people (non-developers) to program these robots by demonstration, you are demonstrating to the robot that ‘as the subject matter expert, I know how to do this thing and I'll show you how to do it and then you help me do it.’ So now you're programming by demonstration. … You're now training the user as a developer.

But the problem is, these (users) need to be able to debug that system and modify it. Well, they can’t go to the code, which is what a developer would do. So we've been inventing new ways of interacting with the output of these systems in ways that people can understand.

It's like motion primitives (pre-computed motions robots can take). How do you put things back-to-back so that it says, ‘Oh, you grasp this thing, and you move this thing, and then you put it here,’ and you can look at it as a sequence of events that you can understand. Or you could say, ‘no, I want to change this and make it be that.’
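To make that concrete, the hypothetical Python sketch below shows what such an inspectable program might look like: a demonstrated task stored as an ordered list of named primitives that the subject-matter expert can read, step through and edit. The primitive names and fields are illustrative assumptions, not Intel's actual representation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a demonstrated task stored as an editable sequence of
# motion primitives. Names and fields are illustrative, not Intel's system.

@dataclass
class Primitive:
    name: str                      # e.g. "grasp", "move_to", "place"
    target: str                    # the object or location it acts on
    params: dict = field(default_factory=dict)

    def describe(self) -> str:
        return f"{self.name} {self.target} {self.params or ''}".strip()

# A task captured from one demonstration: a readable list, not opaque weights.
sort_task = [
    Primitive("grasp",   "part_bin_A"),
    Primitive("move_to", "tray_slot_3", {"speed": "slow"}),
    Primitive("place",   "tray_slot_3"),
]

# The subject-matter expert can inspect every step...
for i, step in enumerate(sort_task):
    print(i, step.describe())

# ...and "debug" the program by editing a step rather than writing code.
sort_task[1] = Primitive("move_to", "tray_slot_5", {"speed": "slow"})
```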

The reason I bring that up as an example is that explainability in AI, which is a major issue in this field, is one of the hardest problems (to solve) in my opinion. We think of it as enabling hardcore developers − that's how people think about it. And there are a lot of solutions that say, ‘I'm going to highlight this part of your network because you understand the deep neural network.’ That doesn't work when you actually start to move toward the human-AI system, where the end users of that technology are the ones who need to program the AI … because they understand the domain.

AI Business: Does that mean using computer vision and reinforcement learning?

Nachman: Reinforcement learning is essentially your typical approach and in this specific case, that's really not what we're doing. We're doing computer vision, we're doing advanced motion primitives. The issue with reinforcement learning is … it’s a bit opaque in some sense for this application. (What we’re working on is for the user to) demonstrate the whole thing, (and the robot) will figure it out. We're trying to make it … almost like steps, so people can modify it, interrogate it and understand it.

We do rely on computer vision, but we don't try to learn a task from start to finish. We try to map that task as a set of steps where each of these steps has a motion primitive that's associated with it.
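One simplified way to picture that mapping, with assumed inputs rather than the lab's actual pipeline: a recorded demonstration is cut into steps at gripper events, and each step is labeled with a coarse motion primitive.

```python
# Simplified sketch: turn a recorded demonstration into a coarse primitive
# sequence. The frame format, event rules and primitive names are assumptions.

def demo_to_primitives(frames):
    """frames: list of dicts like {"gripper_closed": bool, "pos": (x, y, z)}."""
    prims = []
    for prev, frame in zip(frames, frames[1:]):
        if frame["gripper_closed"] and not prev["gripper_closed"]:
            prims.append(("grasp", frame["pos"]))          # gripper just closed
        elif not frame["gripper_closed"] and prev["gripper_closed"]:
            prims.append(("place", frame["pos"]))          # gripper just opened
        elif frame["pos"] != prev["pos"]:
            prims.append(("move_to", frame["pos"]))        # free or carrying motion
    return prims

demo = [
    {"gripper_closed": False, "pos": (0, 0, 0)},
    {"gripper_closed": False, "pos": (1, 0, 0)},   # approach the part
    {"gripper_closed": True,  "pos": (1, 0, 0)},   # close gripper
    {"gripper_closed": True,  "pos": (1, 2, 0)},   # carry it over
    {"gripper_closed": False, "pos": (1, 2, 0)},   # release
]
print(demo_to_primitives(demo))
# [('move_to', (1, 0, 0)), ('grasp', (1, 0, 0)), ('move_to', (1, 2, 0)), ('place', (1, 2, 0))]
```

Each entry in the output is a step a user can interrogate or swap out, rather than a monolithic end-to-end policy.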

The higher-level point that I'm trying to get across is that when we talk about human-AI collaboration, it means two things: The AI system is helping the human perform a task, so it needs to understand the task and understand what the human needs in order to support them in doing it. But it also means that the human needs to help the AI system continue to advance, and that's the bigger problem that we have today with AI systems.

"It's essential for the robot to pick up on a lot of different cues from the human. That's really the key piece."

           - Lama Nachman, director of the Intelligent Systems Research Lab, Intel Labs

What happens today is you go and train these systems with tons of data, and you have to label all of this data, then you put it in a real environment. If the environment changes, which it does all the time, or the task changes, you just repeat that process over and over again.

This is not sensible. What we should do instead is this: since you have a human there, the human can aid the AI system as things change, in the same way that the AI system can aid the human in doing the task. So that's the premise underneath all of that work.
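As a loose illustration of that premise (hypothetical, not the lab's system), the Python sketch below lets the model act, lets the human correct it when it is wrong, and queues the corrections for the next model update instead of relabeling an entire dataset offline.

```python
# Hypothetical human-in-the-loop sketch: collect the human's corrections at
# run time so the model can be updated later, rather than retraining from
# scratch every time the environment or task changes.

correction_buffer = []   # (observation, corrected_label) pairs for later fine-tuning

def run_step(model, observation, ask_human):
    prediction = model(observation)
    feedback = ask_human(observation, prediction)    # None means "looks right"
    if feedback is not None:
        correction_buffer.append((observation, feedback))
        return feedback                              # act on the human's answer now
    return prediction

# Toy stand-ins so the sketch runs end to end.
model = lambda obs: "bolt" if obs.get("shiny") else "washer"
ask_human = lambda obs, pred: "screw" if obs.get("threaded") and pred != "screw" else None

print(run_step(model, {"shiny": True, "threaded": True}, ask_human))   # corrected to "screw"
print(run_step(model, {"shiny": False}, ask_human))                    # prediction accepted
print(len(correction_buffer), "correction(s) queued for the next update")
```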

AI Business: I've seen some research on robotics and how AI can be dangerous if it's a black box. As such, it could actually lead to a robot physically harming people. Do you think there's a danger of that happening?

Nachman: There is danger in anything that is autonomous. There is no question about that. So the question becomes, how do you mitigate that danger? There are a lot of methods in place − you'll see people separate the space where robots operate from the space where humans operate. That tends to be a very safe but very conservative approach, because I cannot then collaborate with a robot to do the things that I want it to do.

… (For good collaboration), it's essential for the robot to pick up on a lot of different cues from the human. That's really the key piece. … You still fall back on safety measures − collision avoidance and all of these traditional methods − but then you can actually be less conservative by trying to anticipate what a human would do. And that's actually part of our work. We're working on (systems) predicting the physical intent of the user before they move.

Think about how people work together. You can take cues from how somebody is moving and where they're looking and all of that if you're trying to work with them in a shared space. We're leveraging exactly the same type of techniques to make the robot anticipate human actions. This is actually my team’s work, specifically.
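A toy illustration of that kind of anticipation, using assumed cues rather than the team's actual model: score each candidate target by how well the person's hand motion and gaze line up with it, and treat the best-scoring target as the likely intent so the robot can yield that space early.

```python
import math

# Toy intent prediction: guess which target a person is reaching for from the
# direction of their hand motion and their gaze. Purely illustrative -- real
# systems learn this from much richer cues.

def unit(v):
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n) if n else (0.0, 0.0)

def predict_target(hand_pos, hand_vel, gaze_dir, targets, w_motion=0.6, w_gaze=0.4):
    """Return (most likely target, all scores) given 2D cues."""
    scores = {}
    for name, pos in targets.items():
        to_target = unit((pos[0] - hand_pos[0], pos[1] - hand_pos[1]))
        motion_score = sum(a * b for a, b in zip(unit(hand_vel), to_target))
        gaze_score = sum(a * b for a, b in zip(unit(gaze_dir), to_target))
        scores[name] = w_motion * motion_score + w_gaze * gaze_score
    return max(scores, key=scores.get), scores

targets = {"bin_left": (-1.0, 1.0), "bin_right": (1.0, 1.0)}
likely, scores = predict_target(
    hand_pos=(0.0, 0.0),
    hand_vel=(0.4, 0.5),      # hand drifting up and to the right
    gaze_dir=(0.7, 0.7),      # looking toward the right bin
    targets=targets,
)
print(likely)                  # "bin_right": the robot would keep clear of that bin
```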

AI Business: What's the practical application of this research?

Nachman: Imagine a shared task between a robot and a human. I'm trying to physically work with the robot to move things, to pick things up, to sort things. Anticipating what a person might do can be helpful (to ensure the robot is not) colliding with you; you could actually work together in that space without running into each other, which is what humans naturally do.

AI Business: Robots have worked well by themselves in manufacturing. Why is there a need for a human to work with robots?

Nachman: If you look at the examples of where robotics is today, they work because they work at large scale. Robots work by themselves, and it's absolutely fine for them to work by themselves.

The places where you don't see robotics today are tasks that are low-volume and very flexible, where the cost of bringing robots into the solution (is high), either because programming them is very hard or (there are hazards in people) being too close to them. There is a huge number of places where we could utilize robotics but can't today.

About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.

