August 22, 2023
Last November, OpenAI stunned the world with the debut of ChatGPT, wowing mainstream audiences with the prowess of its conversational AI application. Nearly a year on, AI has become the talk of the town as the hype cycle churns at full force amid concerns that ‘robots are taking over the world.’
Seth Dobrin, the former global chief AI officer at IBM, has seen AI advances up close. He recently sat down with AI Business to cut through the AI hype. He said generative AI is good at certain tasks that are being overlooked because people are fawning over its conversational ability. Dobrin also shared the three fundamental issues he believes enterprises should focus on with generative AI. He looked to quash AI scaremongering as well, pointing instead to a lack of diversity at the leadership level as the true concern for global AI.
AI Business: When we last spoke, ChatGPT had just come out, there was no Bard, no Claude and next to no talk on legislating AI. How have you seen the AI landscape evolve since the generative AI explosion?
Seth Dobrin: Everything has changed. Back then, AI adoption was going slowly; it was more of a push from the AI leaders than a pull from the business.
Up until that point, generative AI and the whole concept of Transformers and foundation models were a thing for nerds.
Then along came ChatGPT. It took off and anyone in the world who could speak English, Spanish or French could interact with it — it was amazing how it exploded. Since then, the pace of this technological change has been incredible. …
On top of that, we're seeing a dramatic pull from business users to start using generative AI. This is from CEOs on down. The question, however, is not how we use generative AI, but what are the business problems that we need it to solve?
There's a fundamental lack of understanding of the core of this technology and how powerful it is at doing other things besides just conversing. Value will be added as organizations begin to understand the true applications of these technologies and the business problems they can solve.
AI Business: Your big focus has been safety and responsible deployments. But big names, including Microsoft, rushed out products despite recommendations from ethics teams to slow down. What’s your take here?
Dobrin: It's too late for a slowdown in the development of AI technologies; that was due before ChatGPT launched. Now the technology is widespread in the public domain. The conversation needs to focus on three fundamental issues:
1. Ensuring businesses adopt it in a meaningful way
How do we adopt this technology without running afoul of internal policies, corporate ethos or regulations, while protecting intellectual property and personnel?
Samsung employees used ChatGPT to ask it questions about their core IP, and now that IP is baked into the data used to train ChatGPT. That cost Samsung millions of dollars and months to retrain one of these models.
Then there's MosaicML, which got bought by Databricks and claims it can train and retrain models for less than half a million dollars. You can fine-tune models for your own business for less than that. You can take an off-the-shelf, open-source model, run it in your own environment and control everything to fit your own needs. If you’re careful and have rules and guardrails in your organization, that’s essentially solved.
Then there are hallucinations. The best way to mitigate them is to work within specific use cases and specific domains – but the problem is not going to go away, even when you can train your own model, because it's fundamental to this technology. You need to have a strategy to mitigate it.
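Dobrin doesn't detail a mitigation technique here, but one common way to act on his "specific use cases and specific domains" advice is to ground the model in a vetted corpus and refuse to answer outside it. The sketch below is purely illustrative: the keyword-overlap "retriever," the sample documents and the prompt template are all stand-ins for a real retrieval pipeline, not anything from the interview.

```python
# Illustrative sketch: constrain a generative model to a narrow, vetted
# domain corpus, and refuse questions with no supporting context.
# The word-overlap retriever is a toy stand-in for real embedding search.

DOMAIN_DOCS = [
    "Refund requests must be filed within 30 days of purchase.",
    "Enterprise support tickets are answered within one business day.",
]

def retrieve(question: str, docs: list[str], min_overlap: int = 2) -> list[str]:
    """Return docs sharing at least `min_overlap` words with the question."""
    q_words = set(question.lower().split())
    return [d for d in docs if len(q_words & set(d.lower().split())) >= min_overlap]

def build_prompt(question: str, docs: list[str]) -> str:
    """Build a grounded prompt; refuse when no approved context matches."""
    context = retrieve(question, docs)
    if not context:
        return "REFUSE: no approved context for this question."
    joined = "\n".join(context)
    return f"Answer ONLY from the context below.\nContext:\n{joined}\nQ: {question}"

print(build_prompt("When must refund requests be filed?", DOMAIN_DOCS))
print(build_prompt("Who won the World Cup?", DOMAIN_DOCS))
```

The design choice is the point: an out-of-domain question never reaches the model at all, which is one way a narrow use case bounds the hallucination risk Dobrin describes.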
2. Regulatory concerns
Most of these models are trained on the whole of the internet, or some version thereof. Is that data collected in a way that is aligned with regulatory policies or regulatory standards? Or your internal corporate policies or internal corporate ethos? Is that something you want to start using internally?
3. Contextual understanding
These models don't contextually understand your business. That gets back to the ability to now fine-tune models based on the context of your organization. This could mean that organizations are all going to have multiple, custom fine-tuned models for a given use case or application.
AI Business: Is there anyone whom you think is at least doing well in regulating AI?
Dobrin: No. Regulators don't understand this technology. Even the technology experts don't fully understand this because it's advancing so fast — it’s hard to keep up with AI.
We have expert-driven policy bodies in highly technical fields like medicine, the environment and nuclear energy. We need an AI regulatory agency at the ministry or cabinet level in every region and country around the world. They need to be the ones driving the policies, not the people we elect to office, because that's not what we want them to do. We don't want them to be experts in this field.
You're not going to technology-proof regulation because it's changing so fast. The best thing we can do for the time being is to do what the EU is doing, which is focus on the use cases, focus on the outcomes, and focus on the human impacts of AI. We have data protection regulations, but we need to start making sure that we apply them evenly.
The EU is putting on notice some companies that have run afoul of its tech and data regulations. I don't think anyone's paying attention to that. There are some university programs, from the likes of Cambridge and Stanford, that are paying attention to this. But it's not getting enough attention in the press. In my opinion, it needs to be top of mind instead of ‘AI robots are going to take over the world.’
AI Business: Do you feel that the leading names in AI like OpenAI CEO Sam Altman, Alphabet CEO Sundar Pichai, Anthropic CEO Dario Amodei and Google DeepMind CEO Demis Hassabis are the right people to lead the charge and influence AI legislation?
Dobrin: There's a little bit of disingenuous activity going on right now. Let's go back to last year. OpenAI released ChatGPT, which they knew had a lot of core issues. GPT-3 had been in beta for about a year at that point and for those of us that had access to the betas and the pre-releases, there were some fundamental issues about privacy, bias and hate but OpenAI chose to release this technology anyway.
The media said Google, Amazon, etc. were ‘behind Microsoft.’ None of them were behind, they were just being more responsible with the release of their technology. So, when Sam Altman shows up at Congress saying 'We need regulation, this technology is dangerous’, well, you should have thought about that back in November.
When we listen to these technology leaders, we need to look at the organizations that have already earned their trust. I think we can look at my former colleagues at IBM like chief privacy officer Christina Montgomery, as well as Dario Gil who leads their research program.
Looking at (CEO) Satya Nadella at Microsoft, they have a big business to protect. Even though they are big backers of OpenAI, I’m sure they’re being quite thoughtful and quite careful about how they're bringing this technology to their customers.
We need to look to trust technology leaders, but we also need to combine them with others to offset some of those conversations, like bringing in people from nonprofits such as former technologists from big companies who know the inner workings of these large organizations.
We need to make sure that we bring in people from south of the equator; we bring in more people from Asia to this conversation. It can't just be a one-sided Western conversation; it’s got to be a global conversation because this technology is going to or already is impacting the world.
AI Business: Following on, what are your thoughts on this new Frontier Model Forum (a group dedicated to safe AI development founded by OpenAI, Microsoft, Google and Anthropic)?
Dobrin: They want it to be inclusive. They seem to be looking for members and working out how to bring them in, but how much voice different members will get is yet to be seen.
The group is looking at the next generation of technology to determine what safeguards they need to put in place. It’s a little bit of self-protection, but that's fine.
This is a case where the underlying reason for them is irrelevant as long as the outcome is the right outcome, which is making sure this technology is safe for human consumption. Whether it's self-serving or whether they're trying to head off regulation, I don't think it’s that relevant.
What it comes down to is: what does this mean? It was kind of ambiguous what the structure would look like – who has a voice and who doesn't? If they have a huge forum and 99% of the people are sitting in the audience with no input, what value does that provide? Whereas if you have 100 people in a room who all have a voice, that's much more meaningful, as long as those people represent global society.
AI Business: After IBM, you joined the Responsible AI Institute (RAII). Now you’ve left RAII and are working on a company. Talk us through what's next for you.
Dobrin: Right now, I’m focusing on a couple of things. I still think it’s important to build software to help companies address generative AI. I’m focusing my efforts on use case-specific applications of these technologies, and I have a couple of ideas I'm still fleshing out.
I’m doing some consulting work, helping organizations use AI and build an AI strategy that's inclusive. And I’m still advocating for the safe adoption of this technology. And that's all under the arm of Qantm AI.
AI Business: How do you envisage the next 12 months evolving in AI?
Dobrin: Right now, we’re seeing leading-edge and bleeding-edge companies using generative AI technologies. In 12 months, we'll start seeing the fast followers start to adopt.
There are some core fundamental challenges in enterprises that people have been trying to solve for decades that this technology can help with. There are things AI is good at that people aren't talking about. Generative AI is good at grouping and classifying things to understand corpora of data and information that are presented to it both while it's being trained and also as you're interacting with it. This can help solve massive amounts of complex classification.
Everyone's talking about language generation right now. But the conversation part is just a way to interact with the classification part. We’re going to start seeing a lot of enterprise-grade application companies start to take off in 12 months.
Hopefully, we'll start seeing the formation of AI-specific regulatory bodies as well. Governments are starting to take it more seriously. In Europe and even in the U.K., we're starting to see individual legislators start to take this seriously and educate themselves.
We're having the wrong conversation today. The challenge is not that AI is going to take over the world; that's not going to happen in five years. The real risk is that the world gets pushed further apart because of AI. AI amplifies what we as humans have done in the past, and that will happen in less than five years if we don't address the lack of inclusivity in this technology.