by Ritika Gunnar


AUSTIN – We’ve heard for years now that we’re on the cusp of the AI revolution—a monumental shift on par with the printing press, the transistor, or the Internet. Of course, this is all true, and the hope that AI will create vast new opportunities in every industry is catching on.

In February, the White House issued an executive order launching the American AI Initiative, which directs federal agencies to prioritize AI investments in their R&D missions. Some experts estimate that AI could add as much as $15 trillion to the global economy by 2030.

However, to reach the heights of AI’s potential, the industry will need to overcome a major hurdle. AI requires access to massive troves of data: natural language processing, for example, can consume petabytes of training data in the course of developing speech or text analysis models.




Many departments within large companies invest in new AI applications with a specific goal in mind. For example, an IT department adopts AI to perform smarter cybersecurity, while a sales department develops a chatbot to assist customers in partnership with its human agents.

Bring together stores of data across departments to power the AI revolution

In these cases, AI implementations happen quickly, and not necessarily with an overarching strategy that takes into account the needs of the entire organization. The result is often a set of information silos: bringing together stores of data from the various departments and training new sets of algorithms on them can be time-consuming and expensive.

We are firmly within the cloud computing era, and data quite often lives in various third-party clouds and databases. Those third parties often have complex licensing agreements, proprietary tools, and protocols.

Taking these into account, it can be extremely difficult and expensive to migrate data from vendor to vendor. As a result, large businesses’ data stores end up spread across multiple vendors that were chosen for any number of reasons. Sometimes, different clouds or vendors make sense for some projects, but not for others.




Bringing AI to the data is key to this revolution

This can be a huge missed opportunity when it comes to AI, which works best when it can access as much data as possible. However, there’s good news. Even though it can be difficult to bring the data to the AI, organizations can still reap rewards if they are able to bring the AI to the data.

The future of AI is cross-platform, so AI tools must be cross-compatible. That’s why we’ve opened up all Watson services, like Watson Assistant and Watson OpenScale, to run and deploy in any environment. We call it Watson Anywhere. This means clients can deploy Watson no matter the location: on the IBM Cloud, on other clouds such as Amazon’s, on any private, public, or hybrid multicloud, or on locally managed servers.

Watson is also able to work within any AI framework, so customers can access the broad capabilities of Watson and its family of tools to build new models within frameworks like TensorFlow, Caffe, or PyTorch. Businesses can add Watson AI capabilities to their applications in any location and take advantage of the development tools, machine-learning models, and management services provided.




This ability to bring AI to the data, wherever it lives, rather than moving the data from one place to another at no small expense, gives businesses the flexibility and freedom they need to scale and embrace a multicloud platform.

The history of consumer technology is one of cycles, and one of those cycles seems to be a vacillation between open and closed systems. In the 80s, the companies that won were the ones that allowed their hardware to run different types of software and allowed their software to be run on different types of hardware.

In the 90s, the internet service providers that lost were the ones that insisted on controlling the browsing experience within their own walled gardens. Right now, the cloud is moving from a closed phase to one of openness, and companies that cling to closed systems are going to have difficulty, whereas the ones that play well with others, even if it means forgoing temporary gains, are going to see long-term rewards.

Join the IBM Watson team and 20,000 other technology and business leaders at The AI Summit London, June 12-13


Ritika Gunnar is VP of Offerings at IBM Watson
