DeepMind Co-founder: The Next Stage of Gen AI Is a Personal AI

Your personal AI can buy goods online for you and even enter contracts as your legal proxy

Deborah Yao, Editor

October 17, 2023

5 Min Read

At a Glance

  • DeepMind co-founder Mustafa Suleyman sees having your own personal AI as the next stage of generative AI.
  • This personal AI will know you deeply; it can manage your life and relationships, give you advice and act as your legal proxy.
  • Suleyman sees personal AIs emerging in as soon as one to two years.

The co-founder of DeepMind said the next stage in generative AI chatbots is a personal AI that can be your digital representative – even for personal tasks such as buying goods online or entering contracts on your behalf.

Mustafa Suleyman, now CEO and co-founder of Inflection AI, said his startup has focused on imbuing its AI chatbot, Pi, with a high emotional quotient (EQ), to minimize toxic responses, and a high IQ, for greater accuracy and factuality.

“Now the focus is on giving it high AQ,” or action quotient, Suleyman said at The Wall Street Journal’s Tech Live conference.

That means the AI chatbot will be able to automatically call on APIs to do such things as access databases, make bookings and check supplies. It will act as a “great assistant or an amazing professor would, or an amazing coach, or confidante or chief of staff,” Suleyman added. In contrast, AI chatbots like ChatGPT are a “one-stop question-answering machine.”
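The "action quotient" Suleyman describes boils down to the assistant translating a user request into a call against some external API. Pi's actual tool-calling design has not been published; the sketch below is purely illustrative, with hypothetical tool names (`check_supplies`, `make_booking`) standing in for real services.

```python
# Hypothetical sketch of an "action quotient": an assistant dispatching a
# model-chosen tool call to an API wrapper. Tool names and logic here are
# illustrative stand-ins, not Pi's actual implementation.

def check_supplies(item: str) -> str:
    # Stand-in for a real inventory API call.
    return f"{item}: 12 units in stock"

def make_booking(venue: str, time: str) -> str:
    # Stand-in for a real reservations API call.
    return f"Booked {venue} at {time}"

TOOLS = {"check_supplies": check_supplies, "make_booking": make_booking}

def act(tool_name: str, **kwargs) -> str:
    """Route the tool call the model selected to the matching API wrapper."""
    if tool_name not in TOOLS:
        return "Unknown tool"
    return TOOLS[tool_name](**kwargs)

print(act("make_booking", venue="Cafe Roma", time="7pm"))
```

In practice the model would emit the tool name and arguments itself, and each wrapper would perform a real network call rather than return a canned string.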

Moreover, this AI will act in your personal interest whereas today “we all are at the mercy of … big tech companies” such as Amazon for purchases, he said.

“Now, we actually have tools that enable us to provide another party on the side of that exchange, almost adversarially interacting with the platforms (to help) you organize, prioritize, synthesize information and ideas and get things done. And that’s really what a personal AI is.”

Over the next year or two, Suleyman said, Pi will be able to accomplish even abstract goals. For example, you can ask the chatbot to help you get more organized so you will not miss appointments over the next few weeks, with the chatbot able to access email, location and other information.

This personal AI will even act as your proxy in personal settings, such as managing your relationships, recommending where to relocate or which job to take, and “ultimately, be a legal proxy for you,” buying things for you online and entering into contracts, he said. He expects this to happen in three to four years.

Suleyman has plenty of capital to fund his endeavor. Inflection AI recently landed a mammoth $1.5 billion fundraise, backed by Microsoft co-founder Bill Gates, former Google CEO Eric Schmidt, LinkedIn co-founder Reid Hoffman, Nvidia and others. It is aiming to build the world's largest AI cluster by purchasing 22,000 of Nvidia's H100 chips to train and deploy large-scale AI models.

"As we've got more chips, we train larger models and we can have more accurate AIs," Suleyman said at the conference.

Pi: A different business model

Suleyman said Pi will do away with the popular internet business model, in which a product such as Google Search or Facebook is offered free to users while advertising pays the bills. That ad-based approach, he argued, does not align the interests of the tech platform with those of the user.

“Really, the customer for Facebook and Google and the other big companies is the advertiser,” he said. “It’s not the user.”

Pi will be different since Inflection does not release its APIs, which would be needed for commercial use. Instead, “you as the consumer are the only person that pays for the AI,” Suleyman said.

As for the known risks of generative AI, including toxic content, hallucinations and bias, Suleyman said Pi was built to avoid toxic subjects. He claims that “none of the prompt hacks work against us.” Prompt hacks, which include asking the AI to pretend to be another persona, are designed to get around safeguards against disclosing dangerous or toxic responses.

For hallucinations, Suleyman said Pi has access to real-time information but admitted this remains a challenge. AI models, he said, are designed to predict the likelihood of the next word, phrase or sentence. But “they are not designed to doubt themselves or to know when they are likely not correct,” he said. “This skill of uncertainty estimation is a critical part of intelligence and actually key to making them reliable.”

“If they could consistently know when they don’t know or be able to communicate when they’re in doubt, we kind of address a big part of that hallucination problem,” Suleyman added. “If it said, ‘I am 100% sure, 80% sure, or 60% sure,’ that would be a pretty useful skill.”

However, “we are a little way away from that” and it “remains challenging to teach these models to say, ‘I don’t know’,” he said. “But I think over the next year, we’ll make a lot of progress in that area.”
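One simple form of the uncertainty estimation Suleyman describes is to turn a model's per-token probabilities into an overall confidence score and abstain below a threshold. The sketch below is a minimal illustration under that assumption; real systems are far more involved, and the scoring rule and threshold here are hypothetical.

```python
import math

# Hypothetical sketch: score a generated answer by the geometric mean of its
# token probabilities, and say "I don't know" when confidence is too low.
# The threshold and scoring rule are illustrative, not a production method.

def sequence_confidence(token_probs: list[float]) -> float:
    """Geometric mean of token probabilities as a crude confidence score."""
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(log_sum / len(token_probs))

def answer_with_confidence(text: str, token_probs: list[float],
                           threshold: float = 0.7) -> str:
    """Report the answer with a stated confidence, or abstain."""
    conf = sequence_confidence(token_probs)
    if conf < threshold:
        return "I don't know"
    return f"{text} ({conf:.0%} sure)"

print(answer_with_confidence("Paris", [0.95, 0.9, 0.92]))   # confident answer
print(answer_with_confidence("Atlantis", [0.5, 0.4, 0.6]))  # abstains
```

Token probabilities are known to be poorly calibrated on their own, which is one reason teaching models to say "I don't know" remains, as Suleyman notes, an open problem.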

As for handling bias and sensitive topics, such as the Israel-Hamas war, Suleyman said there are indisputable facts and there is sensitive information. For the latter, Pi would present both sides of a topic “while being true to observable facts,” although it may be difficult in a real-time conflict where “facts are somewhat slippery” at the moment.

Pi does have an intentional bias: Peace. It also tries to be empathetic, “forgiving of our enemies” without offering specific actions to take and generally avoids criticizing anyone on either side, Suleyman said.

When it comes to the 2024 U.S. presidential elections, Suleyman said Pi is not allowed to represent a candidate or answer questions about their position on issues. Delphi, an AI voice-cloning company, recently unveiled AI chatbots for the presidential candidates that the public can query.

“AIs should not be allowed to participate in electioneering, even if they were perfect,” Suleyman said. “That’s probably got to remain a human part of the process that’s critical to our democracy. I think it’s pretty dangerous if we start to have AIs campaigning and persuading and having conversations with people about whom to vote for.”

He said his startup is “certainly not going to do that and I think other companies shouldn’t either.” He said he is in talks with other big AI companies to agree that the use of AI in electioneering should be avoided.

“We may get it wrong,” Suleyman said. “I think the sensible thing to do is step back from it.”

About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
