Recent advancements in AI have rightfully gained the attention of lawmakers and regulators
It is without question that artificial intelligence is the most revolutionary, impactful and rapidly expanding technology today. From major tech companies to everyday consumers, we are developing and experimenting with AI in incredible ways. It is also no secret that this new world comes with real risks that must be taken seriously as innovation forges ahead. That is why many in the AI space are leading a conversation around responsible AI and working to ensure the new Congress and regulators find the right path in their efforts to oversee AI.
Responsible AI is a term gaining currency, but most people either haven’t encountered it or haven’t fully considered what it actually means. Movies and memes might lead you to believe it means avoiding the dystopian futures of the Terminator or The Matrix. While that may be true in some ways, responsible AI is far more relatable to the here and now. Simply put, responsible AI is about ensuring accuracy, transparency and effective oversight of AI tools and applications.
Accurate information, transparent processes and best-in-class people overseeing products are paramount to meeting customers’ expectations, and they are foundational pillars of successful businesses.
The recent advancements in artificial intelligence have rightfully gained the attention of American lawmakers and regulators and will be an important issue for newly elected leaders to address. Policymakers clearly recognize the need to take action on AI, though numerous attempts at passing legislation overseeing private development and use of AI at the state level have failed, with Colorado’s AI Act the lone exception.
The United States is not alone in taking its time. Outside of the EU, most countries are adopting a wait-and-see approach to identify specific risks and regulate at the sector level, focusing efforts instead on standing up AI safety institutes. In the U.S., Congress has held more than 100 hearings on the subject, President Biden has signed multiple Executive Orders and federal regulators have proposed rules, opened investigations and taken other steps to zero in on the implications of the new technology. Despite these efforts, the complexity of AI governance, as seen in global initiatives like the Bletchley Declaration, and a lack of consensus on the scope of oversight mean the finish line remains stubbornly on the horizon.
Effective oversight of innovative new technologies can often prove challenging for governments. With AI, policymakers are trying to regulate an industry that is building the airplane while flying it. The market has yet to determine where, and at what scale, AI will make sense to leverage across the myriad purposes for which it may, or may not, prove useful. The implications and realized risks of AI in markets remain largely unclear, and government overreach could significantly curtail innovation and diminish its potential societal benefits. We remain optimistic about forthcoming legislation on AI in Congress, but the risk of overreach persists, as the scope of proposed rules from federal regulators has already shown.
Responsible AI development is not just about the building and training of AI models themselves; it is rooted in sound policy around the building blocks of AI, such as data quality and governance, transparency, risk mitigation and adherence to industry best practices that help ensure final products reflect the elements of responsible AI. Policymakers should focus their efforts beyond just AI as a technology and improve the quality of the building blocks that underpin our most important AI tools.
The good news is that companies are already developing AI products and tools on a platform of responsible AI that can and should inform policymakers. As strategic leaders for our respective companies, we are committed to demonstrating these best practices and advocating for policies that will allow this technology to flourish in responsible ways. I have personally worked to incorporate NIST’s Risk Management Framework into my company’s AI application development and strive to maintain a human-in-the-loop to ensure the quality of user-facing AI outputs.
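To make the human-in-the-loop idea concrete: in practice it can be as simple as a gate that releases an AI draft to users only when it clears a confidence bar or a human reviewer approves it. The following is a minimal, hypothetical Python sketch; the names, threshold, and confidence field are illustrative assumptions, not part of NIST’s framework or any specific product.

```python
# Hypothetical human-in-the-loop gate for user-facing AI output.
# All names and the 0.9 threshold are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

def release(draft: Draft,
            reviewed_ok: Optional[bool] = None,
            threshold: float = 0.9) -> Optional[str]:
    """Return the draft text only if it clears the gate.

    High-confidence drafts pass automatically; everything else is
    held until a human reviewer explicitly approves it.
    """
    if draft.confidence >= threshold:
        return draft.text          # auto-release: above the bar
    if reviewed_ok:
        return draft.text          # human reviewer signed off
    return None                    # held pending human review
```

The design choice is that the default path is to hold, not to publish: a low-confidence output reaches a user only after an affirmative human decision.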
Accelerating technological advances cannot and will not wait for policymakers. There are serious consequences should they do nothing, just as there are consequences should they act on inadequate information or impose onerous policy regimes. Done right, policy can help usher in a new world of advances, unimaginable today, in the way we live now and into the future.
I am hopeful. I recall some years ago when the “World Wide Web” was in its infancy and was rapidly expanding into the workplace and people’s homes and lives, there was a fear that the law could not evolve with this unknown technology. But the law did evolve and policymakers responded.
The stakes are high. The opportunities are endless. And it’s all possible if we do it right.