The possibilities presented by artificial intelligence (AI) and advanced software give leaders, officials and the rest of society grounds for optimism. When it comes to the risks associated with AI, however, many people are asking the wrong questions because they have not been equipped to ask about what really matters. Reshaping the conversation about AI, and asking the right questions, helps everyone involved in shaping the future of sophisticated software.
AI offers many possibilities and benefits, increasing the efficiency and effectiveness of important tasks.
Sophisticated software enables humans to be more efficient at jobs that would otherwise take a long time. Tasks can be automated to a level of complexity where patterns are analyzed quickly, and the desired information is then assessed and retrieved. For example, we can use AI to assess a range of chemical compounds at speed and consider whether they have any medical value in a healthcare setting, something that would take teams of researchers months or years to do experimentally.
AI can also increase effectiveness. Using this technology, we can identify hidden patterns and intricate relationships within a complex dataset, which previously would have been nearly impossible, or in some cases completely impossible, for one person alone. For example, we can spot complex or unexpected adverse reactions to a drug before it goes to clinical trials, helping medical professionals make better decisions and assessments of potential medications.
As with any emerging technology, there are also risks, but when it comes to AI, the consequences can be felt on a monumental scale.
Because AI tools automate at extreme scale and can integrate different datasets and feed them directly into other systems, the potential damage from AI tools is far-reaching. When humans exercise judgment, errors occur on a much more ad hoc basis and can be rectified immediately by the person carrying out the task.
However, as AI and other forms of advanced data science integrate different data outputs into decision-making processes, we are seeing the walls between individual systems across society break down. The actual impacts of these decisions are far wider-reaching than many people realize. The prospect of economic collapse caused by faulty software deployed across a banking network, for example, could soon overshadow the initially feared replacement of humans.
Global safety charity Lloyd’s Register Foundation surveyed more than 125,000 people across 121 countries in its latest World Risk Poll, powered by Gallup, to gather insight into what most affects people’s safety globally. When asked about AI, 28% of the world’s population felt it would mostly harm people in their country over the next 20 years, while 39% believed it would mostly help.
But those worrying about the risk AI poses are often imagining a Terminator-style future, in which technology outsmarts humans and gains sentience, while missing a much more immediate threat. If people were empowered to ask the right questions about the increasing use of AI applications in our day-to-day activities, the number concerned about ‘harm’ would doubtless be much higher.
One of the biggest risks of using AI tools to assist in decision-making processes is the lack of clear parameters to protect people when things go wrong. AI systems are designed to reflect, learn and self-correct after deployment to fix any issues. However, a lack of oversight and an overreliance on these tools lead to unexpected consequences when a system doesn’t work as intended and individuals are left without any tangible guardrails to protect them from harm.
The consequences of the flawed assumption that automated systems can be treated as infallible played out just a few months ago, when the Post Office Horizon IT scandal in the U.K. saw corporate and government officials finally held to account over faulty software, installed in the 1990s, that incriminated innocent people. For people to fully weigh and measure the potential risks of algorithms and AI systems being applied to their daily lives, we must give them the right tools and vocabulary to understand them.
People are not asking the right questions about the use of AI as urgently or as vociferously as they should be. Society, including policymakers and decision-makers, is not yet approaching this issue with the language or attitude it requires. Naturally, people feel more comfortable asking questions about privacy than about other areas of risk associated with AI, but conversations about how the technology works and its potential societal impact still need to happen.
Sense about Science, a U.K. charitable organization that promotes the public understanding of science, has worked internationally with software developers, leading mathematicians, and community groups to figure out what the right questions are.
Communities who want to have conversations about the quality or understand the strengths and weaknesses of AI systems should be asking:
Where has the data come from?
What assumptions are being made about the data and the way it is modeled?
Is the technology or system strong enough to bear the weight that needs to be put on it?
These questions are designed to go beyond the surface level. They empower communities to make informed decisions and understand the potential risks of AI tools. Additionally, the questions can help policymakers who commission and implement AI tools in decision-making to understand the strengths and limitations of the tools when it comes to life, limb and property.
In the spirit of Carl Sagan’s assertion that "extraordinary claims require extraordinary evidence," systems with disproportionately large societal impact should be subject to higher standards of validation.
To ask the right questions, communities must feel empowered to scrutinize data and information. This is why Sense about Science founded the Risk Know-How project, in partnership with Lloyd’s Register Foundation. It aims to help communities around the world navigate risk information by assessing claims and data about risk, and weighing up trade-offs to make informed decisions in their own context.
In collaboration with risk specialists and with people who help their communities make sense of risk, the project has created a framework that sets out the key concepts needed to understand risk. The benefits of Risk Know-How span cultures and contexts, from fishermen in Fukushima using water safety data to help locals understand whether fish caught and sold at local markets is safe to eat, to farmers in West Africa choosing between drought-resistant and high-yield crops based on meteorological forecasts.
Rather than being lectured by professionals or decision-makers, local communities now feel empowered to go deeper than the surface data and ask what it means in their context, so they can make the trade-offs needed in real life.
Ultimately, there are urgent conversations that all members of society need to have about AI. But these conversations must be meaningful and beneficial to the people asking the questions. Not enough focus is being brought to the right areas of sophisticated software, and we risk overlooking urgent safety concerns amid a culture of tech startups and the early adoption of ‘exciting’ technology in some of society’s most important sectors.