Keeping AI cost-effective in the move to cloud

How can AI application developers ensure that their move from on-premise to cloud is as painless and efficient as possible?

March 15, 2023


At a Glance

  • Preparation is everything: Assess your requirements, and how your possible cloud partners match these
  • Keep it clean: AI can be power-hungry, so understand your providers' sustainability credentials
  • Be pragmatic about speed and scale: If you need real-time speeds for users, don't compromise

Most AI projects start as small, experimental tests hosted on a server in-house, and eventually graduate to cloud environments, where their uptime, security, scalability and maintenance can be assured. However, this migration – the ‘teenage’ stage of an AI application’s lifecycle – is often the most difficult.

Moving an AI application to the cloud isn’t just a matter of ensuring greater scalability and improving uptime – it’s often a matter of cost. AI applications usually rely heavily on GPUs and similar accelerators, which represent a significant investment for any startup or lab; delivering that level of performance at scale in-house is often out of the question from a CapEx point of view.

Furthermore, AI application developers often quickly reach the limits of their in-house machines; AI needs to be trained on large datasets, which can mean running out of RAM and storage space. Upgrading to a high-performance machine in the cloud can remove this bottleneck at both the development and production stages. However, there are a number of factors that teams should be aware of and prepare for if they are to make the migration to cloud as painless as possible.

Research and preparation are key. For example, understanding portability and working with a containerization platform such as Docker can greatly help both before and after migration. Even before moving to a third-party datacentre, working in a containerized environment means that coworkers and collaborators can quickly replicate the app and its dependencies and run it under exactly the same conditions. Moreover, having an AI application already running in a container minimizes re-configuration during the migration process itself.
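As an illustration only – the base image, file names and entry point below are hypothetical, not a prescribed setup – a minimal Dockerfile for a Python-based AI service might look like this:

```dockerfile
# Hypothetical example: containerizing a small Python inference service.
# Pinning the base image and dependency versions is what makes the
# container reproducible for collaborators and portable to the cloud.
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first, so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how to run it
COPY . .
CMD ["python", "serve.py"]
```

Collaborators can then rebuild and run the identical environment with `docker build` and `docker run`, and the same image can later be deployed unchanged to a cloud provider.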

Sustainability is also an important consideration. A University of Massachusetts Amherst research team found that training the GPT-2 algorithm (ChatGPT’s older sibling) produced approximately 282 tons of CO2 equivalent – a similar amount to what the entire global clothing industry generated in producing polyester in 2015. AI application developers should be considering sustainability from the get-go, as well as understanding how their partners manage recycling and electronic waste.

At a more specific level, it’s important to be clear about scaling. Discussing the specifics of app functionality with cloud providers – who will be using the app, and what that means for the technical architecture – ensures that no aspect is overlooked. Most large-scale cloud providers offer automatic and effectively unlimited scaling, but there’s a significant difference between the set-up needed for a system receiving ten requests a day and one receiving ten thousand a minute, so it’s important to be explicit about instance ranges, for example.
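Pinning down an instance range usually means writing it into the deployment configuration. As a sketch only – assuming a Kubernetes-based deployment, with illustrative names and thresholds; managed cloud autoscalers expose similar knobs under different syntax – it could look like this:

```yaml
# Hypothetical example: a Kubernetes HorizontalPodAutoscaler declaring an
# explicit instance range for an inference service.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inference-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inference-api
  minReplicas: 2        # keep a floor so responses stay instant
  maxReplicas: 20       # cap spend during traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The point is the explicit floor and ceiling: the floor protects responsiveness, the ceiling protects the budget.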

Similarly, latency considerations are crucial; the likes of chatbots and other real-time systems need to respond instantly to web users. This means that both code and infrastructure must be sufficiently low-latency, and developers and deployers will need to shave off every possible millisecond. In terms of deployment, this means, for example, keeping compute resources as close as possible to (or co-located with) the data.
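Shaving off milliseconds starts with measuring them. A minimal sketch (the function name and the stand-in workload are hypothetical) for comparing median and tail latency before and after a deployment change:

```python
import time
import statistics

def measure_latency(fn, runs=100):
    """Time repeated calls to fn and report p50/p95 latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }

# Trivial stand-in for a model inference call
stats = measure_latency(lambda: sum(range(10_000)))
print(stats)
```

Tail percentiles (p95, p99) matter more than averages for real-time users: it’s the slowest responses that chatbot users actually notice.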

If AI is to reach its full potential, especially in the face of the current energy crisis, it needs to be deployed as efficiently and effectively as possible. Thankfully, if AI developers plan carefully, choose their partners well, and streamline their processes when moving applications from the on-premise training-wheels environment to the bigger, more flexible world of cloud, they will considerably improve their chances of a successful re-deployment – keeping costs down while meeting the needs of end-users.

If you’d like to read our whitepaper on AI in the cloud, understand how OVHcloud’s solutions can support AI throughout its lifecycle, or get in touch for a conversation with an expert, you can find out more here:
