Sponsored by Google Cloud
The key to safer AI is a deeper understanding of the potential that comes from broader, societal data and AI literacy
Imagine driving a car without the rules of the road. Yikes. The more cars on the road, the greater the likelihood of collision or gridlock. Arguably, anything that potentially puts the public at risk needs guidelines for keeping people safe. That very much applies to AI. As the use of AI by consumers, companies and governments grows, rules for how to use it responsibly are increasingly needed. And just as awareness of cars’ environmental impact has grown and spurred efforts to reduce it, the same is true for AI: Less power-hungry solutions are needed.
To date, much of the discourse around AI has focused on big, existential questions such as the “singularity,” the point at which AI becomes more intelligent than humans, or whether AI will replace us. That replacement ship has sailed. AI assistants and agents will take over many functions, from drafting messages to writing software code to making decisions in virtually all aspects of our lives, whether we realize it or not.
Fortunately, since the future is now, the discussion has become less dire and much more practical in terms of the solutions offered. For the most part, we’re not talking about banning AI but rather designing policies for mitigating risks and implementing mechanisms for education and enforcement. Using the driving analogy, we’re talking about speed limits and seatbelts, as well as driver’s education and licenses. Advocates of responsible AI are taking the same approach: Don’t ban the technology itself but rather put guardrails in place to ensure responsible use and mitigate risk.
The Right Rules in Place
Despite the hype and volume of anxiety-inducing news, not all is doom and gloom. AI models have improved processes and productivity across all sectors, from breast cancer detection to waste materials reduction and more. To address the more nefarious effects, organizations across the globe are already publishing guidelines and governments are passing legislation, such as the European Union’s AI Act. Technology providers are developing tools to increase AI transparency and explainability. These measures are a first step not only toward identifying and potentially rectifying risks but also toward educating users to be more aware and developers to be more conscious of the potential impact of these new technologies.
Another positive observation lies in international collaboration. Yes, there are different approaches to AI: tighter control in China and a more self-governed approach in the U.S., with the EU AI Act’s risk-oriented guidelines splitting the difference. Beyond these, the Bletchley Declaration signed in the U.K. a year ago illustrates the common recognition of risk and the interest and investment in collaboration to promote further awareness and safety.
In addition to government and industry regulation, AI and data governance within organizations is critical. To help understand and mitigate AI risks, everyone within the organization – from the shop floor to the top floor – must be data and AI literate. They must know how data is used, the value it delivers to their organizations, the potential risks to look out for and what their role is. On the more technical or practitioner side, organizations need fine-grained access and usage policies to ensure data is well-protected and used appropriately. Everyone in an organization plays a role in the value chain, whether it’s capturing data accurately, protecting data, building algorithms and applications that analyze the data, or making decisions based on the insights delivered.
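To make the idea of fine-grained access and usage policies a little more concrete, here is a minimal sketch in Python. It is illustrative only: the policy records, the column sets and the check_access helper are hypothetical examples, and a real deployment would lean on the access-control features built into its data platform.

```python
# Minimal sketch of fine-grained, purpose-aware data access checks.
# All names (Policy, POLICIES, check_access) are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Policy:
    role: str           # who the rule applies to
    dataset: str        # which dataset it governs
    columns: set[str]   # columns the role may read
    purpose: str        # approved usage purpose


POLICIES = [
    Policy("analyst", "sales", {"region", "revenue"}, "forecasting"),
    Policy("support", "customers", {"name", "ticket_history"}, "case_resolution"),
]


def check_access(role: str, dataset: str, columns: set[str], purpose: str) -> bool:
    """Allow a request only if some policy covers the role, dataset, columns and purpose."""
    return any(
        p.role == role
        and p.dataset == dataset
        and columns <= p.columns
        and p.purpose == purpose
        for p in POLICIES
    )


# An analyst may read region and revenue for forecasting...
assert check_access("analyst", "sales", {"region", "revenue"}, "forecasting")
# ...but not customer names, and not for an unapproved purpose.
assert not check_access("analyst", "customers", {"name"}, "marketing")
```

The point of the sketch is the shape of the rule, not the mechanism: access is granted per role, per dataset, per column and per stated purpose, so everyone in the value chain works only with the data they are meant to see.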
A Robust Data Foundation to Meet AI Ambitions
As we all know, there is no AI strategy without a data strategy, or more importantly, the data itself. More data and more diverse data not only fuel AI models, they also mitigate the risks of hallucination, where AI systems deliver inaccurate responses, and of AI bias, where AI systems produce results that aren’t objective or neutral. AI models don’t usually just ‘make up’ answers, but they can pull from unreliable sources, like the story about the AI that recommended adding glue to pizza sauce to prevent cheese from sliding off. Particularly in the high-stakes enterprise world, diverse, relevant and high-quality data is the primary ingredient.
In a fortuitous twist, AI is now stepping up to address issues of data quality. For example, AI-driven automation can detect anomalies, proactively fix data upon ingestion, resolve inconsistencies across entities and create synthetic data. AI can also help ensure data security by identifying vulnerabilities. That is not to say that data leaders can rest on their laurels: Responsible data and AI practices still dictate robust data governance and the use of privacy-preserving technologies.
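As a simple illustration of what an ingestion-time quality check can look like, the sketch below flags missing values and records that sit far outside the historical distribution before they reach downstream models. The column name, thresholds and historical statistics are invented for the example; production pipelines would typically rely on the validation tooling of their data platform.

```python
# Illustrative ingestion-time check: flag missing values and statistical outliers.
# The column name ("order_value"), the historical stats and the 3-sigma threshold
# are example choices, not a prescribed configuration.
import pandas as pd

# Historical statistics for the column, e.g. computed from prior, validated loads.
HIST_MEAN, HIST_STD = 110.0, 25.0


def flag_anomalies(batch: pd.DataFrame, column: str, z_threshold: float = 3.0) -> pd.DataFrame:
    """Return the incoming batch with 'is_missing' and 'is_outlier' flags added."""
    batch = batch.copy()
    values = batch[column]
    batch["is_missing"] = values.isna()
    batch["is_outlier"] = (values - HIST_MEAN).abs() > z_threshold * HIST_STD
    return batch


incoming = pd.DataFrame({"order_value": [120.0, 95.5, None, 104.2, 9_800.0]})
checked = flag_anomalies(incoming, "order_value")
print(checked[checked["is_missing"] | checked["is_outlier"]])  # rows needing review or repair
```

Flagged rows can then be quarantined, repaired automatically or routed to a human, which is exactly the kind of proactive fixing upon ingestion described above.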
Finally, the data must be relevant to the specific use case. In that way, enterprise AI is different from general-purpose AI tools. An enterprise AI model is chosen to address a specific challenge: Predicting sales, recommending a product or service, or identifying anomalies or defects in manufacturing or delays along a supply chain. The choice of AI model, including the decision to build, buy or fine-tune, can mitigate the risks of hallucination or bias. Enterprise AI is purpose-built and, as a result, can be more resource efficient.
Towards Greener AI
That brings us to another AI elephant in the room: Sustainability. AI is expected to have a large impact on climate-related fields, helping to optimize the use of fossil fuels and drive the adoption of other forms of energy. But AI itself is an energy hog. Research studies estimate that ChatGPT currently uses over half a million kilowatt-hours of electricity per day, equal to the daily consumption of roughly 17,000 average U.S. households. It’s time to apply AI to help find solutions to offset its own energy demands.
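For a rough sense of scale, the back-of-the-envelope arithmetic below reproduces that comparison. It assumes the commonly cited average of about 29 kWh of electricity per U.S. household per day, so treat it as an approximation rather than a precise measurement.

```python
# Back-of-the-envelope check of the consumption comparison above.
# Assumes ~29 kWh/day for an average U.S. household (an approximation).
CHATGPT_KWH_PER_DAY = 500_000
HOUSEHOLD_KWH_PER_DAY = 29

equivalent_households = CHATGPT_KWH_PER_DAY / HOUSEHOLD_KWH_PER_DAY
print(f"~{equivalent_households:,.0f} households' worth of daily electricity")  # ~17,241
```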
From a best practices perspective, companies must find a balance between experimenting with different AI use cases and ensuring proper use, with a genuine purpose and ultimately a return on investment. Adoption of enterprise AI with purpose-built, efficiently trained agents is a first step. Transparency across the value chain, from inputs to outputs and outcomes, enables a greater understanding of environmental impact and the trade-offs made for business value.
A Safer AI Future Starts Now
Encouraging open dialogue and making progress toward AI transparency, and ultimately explainability, are critical first steps to mitigating the risks of AI. The global collaboration already happening at events such as the AI Safety Summit, which produced the Bletchley Declaration, is encouraging. Building awareness within the enterprise – at all levels – and among consumers increases the pool of potential watchdogs and arms them with the signs to look for and the questions to ask. As they say, experience is the best teacher.
Those lessons can be applied to improving understanding and defining the requirements for the data and AI platforms of the future. Those requirements will extend current considerations around data diversity, security, governance and sustainability. But the true key to safer AI will be a deeper understanding of the potential – for both good and bad – that comes from broader, societal data and AI literacy.