Oracle's Generative AI Apps Use OCI Stack for Continuous Innovation

Oracle Group vice president Miranda Nash explains how Oracle's new Gen AI apps empower workflows by addressing customer pain points

Ben Wodecki, Jr. Editor

March 15, 2024

4 Min Read

Earlier this week, Oracle unveiled a suite of generative AI apps that businesses can use to help automate tasks. The apps are designed to support specific use cases and are cloud-native, making them easier to integrate.

AI Business sat down with Miranda Nash, group vice president of applications development and strategy at Oracle, to discuss how these new apps can empower business workflows.

How did Oracle define those apps? Was it the customers who came to you with ideas of use cases they would like to see?

Miranda Nash: Almost everything we do is customer-driven. We're very in touch with customers; 80% of these came from customers. And a lot of that did not come from them saying: “Please bring us generative AI that does this.”

It was more about, for example, goals. Our customers would say employees get stuck and don't finish them, so we focus on that. Another example is performance reviews: managers are not doing them, or the quality is not consistent. That's what we would hear from customers.

How does this complement the Oracle Cloud Infrastructure (OCI) Generative AI service announced in January?

It completely depends on it. We take full advantage of everything in the OCI stack; that's where our technology-based innovation path comes from. We use the OCI Generative AI service, and we will start using other services like retrieval-augmented generation (RAG) and agents. Those aren't in what we announced now, but they will be coming.


The other part of it is that it benefits the tech stack. The engagement with us is very tight. My team and the team that leads the OCI Generative AI service are in meetings, really collaborating. Our use cases end up improving the OCI Generative AI service, and we've been working very closely with partners as well.

Does this flexibility add an extra layer of benefit to customers? Because in six months, another new technique or underlying system could come along and you could roll it into this.

That is the principle of the cloud: keep bringing innovation every quarter. Customers are in control; we don't force it on them, but it's easy to adopt and turn on. These embedded features are just part of their subscription, at no extra cost.

How important is it, as executive vice president Doug Kehring said during his keynote, to keep AI simple when applying these new apps? Especially for non-technical users who may not know what things like prompt engineering are.

Take prompt engineering: our customers do not need to know anything about that, except to the degree that skills are changing in the industry. For using our products and getting full value from them, there is no expectation of prompt engineering. That's important because even when folks have used ChatGPT, maybe even occasionally in their personal lives, getting enterprise-scale, reliable results out of a model is radically different from using it ad hoc as an individual.


AI requires sizable compute, which requires a lot of energy to run. Has Oracle considered using domain-specific small language models for specific apps?

We already use past generations of smaller language models, and we use supervised learning for classic, individually trained models. We are always trying to find cheaper, smaller models for any problem. It is fascinating, because the very general, powerful models almost help to uncover opportunities, and then we can go after them in more targeted ways.
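
Nash's point about matching problems to cheaper, smaller models describes what practitioners often call a model cascade. The sketch below is purely illustrative (the model and function names are hypothetical stand-ins, not Oracle's): a small, task-specific classifier handles most requests and defers to a large general model only when its confidence is low.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

def small_model(text: str) -> Prediction:
    # Stand-in for a cheap, individually trained classifier,
    # e.g., a supervised model trained for one narrow task.
    return Prediction(label="expense_report", confidence=0.93)

def large_model(text: str) -> Prediction:
    # Stand-in for an expensive call to a large general-purpose LLM.
    return Prediction(label="expense_report", confidence=0.99)

def classify(text: str, threshold: float = 0.85) -> Prediction:
    """Route to the small model when it is confident enough;
    escalate to the large model only for the hard cases."""
    pred = small_model(text)
    if pred.confidence >= threshold:
        return pred  # the cheap path handles most traffic
    return large_model(text)

print(classify("Reimburse my taxi fare from the airport."))
```

The design choice here is economic: the general model is used to discover what the task looks like, after which most of the traffic can be served by a far cheaper targeted model.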

What else can we expect to see? Will customers come forward asking for apps in more areas?

Some customers are asking us to look at the next version of what we have now. For example, today we are operating in English. That's an obvious one: we have got to be able to cover all the relevant languages for our customers. In addition, there's adjusting to the cultural norms of the target user, or to cases where the user is an agent interacting with a customer. I think of it as enhancements to what we do: refinement of the prompts and more sophistication.

The others are problems that we are not trying to solve today, which become possible only with the additional context you get from RAG. For example, an upcoming use case will be answering benefits questions in human language, where you don't want to give the wrong answer and you have a clear source of truth. It's quite straightforward for the LLM to use that as the basis. Beyond that, we are starting to look in the lab at early developments in reasoning and the power of these models to run in parallel and check each other. There's a lot that can be done there.
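
To make the RAG pattern concrete, here is a minimal, hypothetical sketch of grounded question answering over benefits policies (the document store, retriever, and prompt below are illustrative stand-ins, not Oracle's service): retrieved policy text is placed into the prompt as the sole source of truth, and the model is instructed to answer only from it.

```python
from typing import List

# Illustrative policy snippets standing in for a real document store.
BENEFITS_DOCS = [
    "Full-time employees accrue 20 vacation days per year.",
    "Dental coverage begins 30 days after the hire date.",
]

def retrieve(question: str, docs: List[str], k: int = 2) -> List[str]:
    # Naive keyword-overlap retriever; a production system would use
    # embeddings and a vector index instead.
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:k]

def build_prompt(question: str, context: List[str]) -> str:
    # Pin the model to the retrieved source of truth so it cannot
    # invent an answer.
    excerpts = "\n".join(f"- {c}" for c in context)
    return (
        "Answer ONLY from the policy excerpts below. If the answer "
        "is not present, say you don't know.\n"
        f"Excerpts:\n{excerpts}\n\n"
        f"Question: {question}\nAnswer:"
    )

question = "When does dental coverage start?"
prompt = build_prompt(question, retrieve(question, BENEFITS_DOCS))
print(prompt)  # this prompt would then be sent to an LLM endpoint
```

The instruction to refuse when the excerpts lack the answer is what keeps a "clear source of truth" use case from producing confident wrong answers.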

How does Oracle look to stay competitive when every major cloud provider is pursuing generative AI integrations?

It helps to have the infrastructure so close to us. We can be very focused on the use cases within applications, while we've got this big operation dealing with all the complexities lower down in the stack. Some of our app competitors, for example, don't have this side, or they are going outside and working with partners, and that gets more complicated. Beyond that, we have a really effective feedback loop with our customers. They are very engaged. We hear from them all the time.


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
