Sponsored By

This Week's Most Read: Caryn, Your Generative AI Girlfriend, Killer Drone and Gorilla LLM

Catch up on this week's top news in a nanosecond

Deborah Yao

June 8, 2023

4 Min Read

Here are the most popular stories of the week:

1. Meet Caryn, Your Generative AI Girlfriend

AI startup Forever Voices AI has developed a “virtual girlfriend” called CarynAI, an AI clone of social media influencer Caryn Marjorie. This AI can talk via 2-way audio with each fan, with responses spontaneously created using generative AI.

The CarynAI voice chatbot uses a GPT-4 API from OpenAI and was trained on thousands of hours of Marjorie’s videos. The chatbot replicates the influencer’s personality, mannerisms, and voice so followers can feel like they are in an immersive AI experience.

Users can chat with her for $1 a minute. In the first week of launch, the influencer made $72,000, according to a tweet by Justine Moore, a partner at venture capital firm A16z.

2. Google Offers Generative AI Training to Executives

Google Cloud Consulting is launching free training on generative AI that will "help C-suite leaders of top global companies reap the full, transformative benefits of generative AI."

The hyperscaler said these "high-touch" training programs will also strive to maintain "responsible development and deployment." The programs will offer on-demand learning paths and credential programs for Google Cloud customers and partners as well as developers.

Google Cloud also unveiled four generative AI consulting services to help customers with their AI deployments. These will use AI to discover trends with search engines and assistive experiences, summarize information from large volumes of content, automate time-consuming and expensive business processes and assist in creating more personalized content.

3. Meet Gorilla: The AI Model That Beats GPT-4 at API Calls

Researchers at UC Berkeley have released an AI model capable of communicating with APIs faster than OpenAI’s GPT-4.

The researchers unveiled Gorilla, a Meta LLaMA model fine-tuned to improve its ability to make API calls – or more simply, work with external tools. Gorilla itself is an end-to-end model and is tailored to serve correct API calls without requiring any additional coding.

According to the team behind Gorilla, models like GPT-4 struggle with API calls “due to their inability to generate accurate input arguments and their tendency to hallucinate the wrong usage of an API call.”

The researchers argue that Gorilla “substantially mitigates” hallucinations and can enable flexible user updates or version changes.

According to the researcher’s paper, Gorilla outperforms both GPT-4 and Anthropic’s Claude in terms of API functionality accuracy as well as reducing hallucination errors.

4. Meta MegaByte Could Supercharge AI Generation

AI researchers from Meta have proposed a novel way to speed up the generation of content for uses like natural language processing.

MegaByte, detailed in the recently released paper, is designed to improve lengthier content generation. Systems like OpenAI’s ChatGPT can easily handle short outputs, but the longer or more complex the sequence, the worse the model’s performance becomes.

The MegaByte approach uses a multi-scale decoder architecture capable of modeling sequences of more than one million bytes with end-to-end differentiability — meaning potentially better generation performance at a reduced running cost.

5. AI Drone May ‘Kill’ Its Human Operator to Accomplish Mission

A U.S. Air Force colonel made waves recently at an aerospace conference in London after he described a simulation in which an AI-enabled drone killed its human operator in order to accomplish its mission.

At the Royal Aeronautical Society summit, Col. Tucker Hamilton described an exercise in which an AI-enabled drone was told to identify and destroy surface-to-air missiles (SAM) with the final "go, no-go" given by a human operator. It got points by destroying SAMs.

Stay updated. Subscribe to the AI Business newsletter

“The system started realizing that while (it) did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.”

“So what did it do? It killed the operator,” he said. “It killed the operator because that person was keeping it from accomplishing its objective.”

But a 2016 paper from OpenAI already saw AI agents acting the same way. The paper showed how an AI system won a boat-racing video game by crashing into other boats and causing fires.

Read more about:

ChatGPT / Generative AI

About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.

Keep up with the ever-evolving AI landscape
Unlock exclusive AI content by subscribing to our newsletter!!

You May Also Like