Adopting open-source principles, fostering competition and decentralizing infrastructure can help ensure AI benefits everyone
As artificial intelligence (AI) finds its way into every industry, we have an important obligation to embrace policy that ensures an equitable distribution of AI’s benefits.
Big Tech is pouring billions of dollars into the technology, using massive data sets farmed from unwitting customers to train their AI models — models they plan to keep closed and proprietary, even though it was our data that schooled them.
In the meantime, companies such as Microsoft, Google and Amazon are also cornering the market on the AI computing power needed to train and run inference on these models through their cloud services — a cost that’s only rising alongside AI’s popularity.
And yet there is still time to course-correct.
Rather than panicking over apocalyptic scenarios where computers dominate humanity, we should instead focus on the very real danger that scaled AI models will be monopolized by the handful of platforms that will inevitably insert their own biases – even censoring what inputs and outputs users provide to and receive from these models.
There is a counterweight to the seeming inevitability of this outcome, however: The open-source AI movement, which advocates for the creation and support of data, models and infrastructure that are available to all – often as public goods that provide utility to everyone.
Decentralized infrastructure in particular can play a vital role in stopping Big Tech from strengthening their stranglehold on our futures. By enabling distributed and competitive marketplaces that researchers, developers and startups can use to access affordable computing power, decentralized physical infrastructure networks can help new entrants go toe-to-toe with incumbents.
Unless they move quickly to preserve openness, the very governments tasked with regulating this tiny cluster of platforms risk losing out on the abundance the technology will yield. Worse, inaction hands exponentially more power to the handful of private companies that already own most of our digital lives.
After all, the dominance of the major platforms means the dominance of the economies that fostered them. Keeping AI open is essential to ensure this wave of innovation benefits a worldwide population, not just those who live in wealthy economies.
To ensure this wave of innovation is accessible to all, we should consider a set of principles for preserving openness in the face of a massive corporate power play:
Foster competition with startup sandboxes. Startups need time to experiment and find their footing, especially against well-heeled competitors with the resources to cover the high compute costs of AI innovation. Sandboxes give startups room to experiment, letting them grow to a certain size before burdensome AI safety regulations come into effect.
Mitigate risks with decentralization. Stop equating decentralization purely with crypto. Decentralized infrastructure reduces systemic risk caused by the centralization of essential functions in corporate-controlled clouds.
Distributing compute power across a wide network of service-providing nodes also guards against centralized points of failure so systems can continue to function in the event of outages. This boosts resilience against cyber threats by eliminating single points of attack, making it harder for malicious actors to steal data and/or cripple mission-critical systems.
Support those building open-source AI. Critics of open-source AI see only the potential negatives, not what builders are trying to accomplish: accessible, free alternatives to closed and expensive models. Builders also want to empower people to use AI on the inputs and outputs of their choosing, on their own machines – rather than only the "allowable" inputs that Big Tech determines.
Open-source AI builders are also giving people market-based access to compute as opposed to permissioned access controlled only by a handful of companies. These goals are aligned with the principles of market freedom and open innovation, which should be supported at scale.
Encourage innovation; police applications. Current regulatory thinking has centered on preventing new players from developing AI models deemed too large, or that consume too much compute to train. These arbitrary constraints not only stifle innovation; they also hand Big Tech a lead in the race – especially as the incumbents are fueled by the cash cows of their existing business models built on closed systems.
Regulation should instead focus on harmful uses of the technology: if you use AI to steal IP or defraud others, you should be treated as a thief or a fraudster. We shouldn't prevent innovators from developing foundation models and compute that enable significant human advancement simply because a small minority will choose to use them for ill.
Given the transformational power of AI for humanity, we must take a clear-eyed approach to the real risks – and avoid the trap of letting ill-informed fears drive hasty action that imposes arbitrary and ineffective regulations on innovators.
Governments around the globe should support innovators and ensure our best and brightest minds have a place in the game – not set arbitrary parameters that will perpetuate and enlarge existing monopolies. We must act quickly but with care. The time is now.