AI requires IT workloads unlike anything that financial organizations have seen before

AI Business

May 5, 2020

by Spencer Lamb, KAO Data

The last 40 years have seen more change in the financial trading sector than the previous 140. The 1980s witnessed a new breed of salesmen on the trading floors, while the 'Big Bang' deregulation of 1986 changed stock markets around the world.

Many myths surround the perceived secrets of success within this market.

They include the legend of location: companies building trading offices close to the landfall sites of ultra-low-latency international connectivity links, in order to gain valuable milliseconds on trades.

There are whispers regarding technologies, such as in the early 2000s, when arrays of programmable logic supported the fastest processors, running parallel, computationally intensive workloads and dramatically increasing the capabilities of early algorithm-based trading systems. In hindsight, it is little wonder that Intel purchased Altera, the field-programmable gate array (FPGA) manufacturer, for over $16 billion in 2015.

Each of these illustrates the industry's urgent need for resilient, high-performance computing (HPC) systems that operate with minimal latency, delivering the immense computational power and data storage capabilities that enable profitable trades and investments. For many, however, Artificial Intelligence (AI) has had what can only be described as a game-changing impact.

The influence of AI

Today, conferences such as NeurIPS (formerly NIPS) and the AI Summit have fast become Meccas for the global AI industry. For many, attending these events offers direct access to new HPC technologies, expert consultants, manufacturers and data scientists; people who not only understand the convergence of software and hardware in FinTech, but who can collaborate to unlock the next phase of profitable growth. To put it bluntly, banks and hedge funds are now ready to put their money where their mouths are, investing in people and machine learning technologies to gain a competitive edge in the financial arena.

According to a new research report from Omdia, the financial services industry will be responsible for 10 percent of all spending on AI software, and by 2025 the overall AI software market could be worth around $126 billion, a significant increase from 2018, when the estimated market value was around $10.1 billion. There are also direct cost savings enabled by deploying these technologies: according to Business Insider Intelligence, AI applications could save banks as much as $447 billion by 2023.

AI requires IT workloads unlike anything that financial organizations have seen before. The combination of big data, AI and machine learning to evaluate investment opportunities and optimize trading portfolios, whilst mitigating risk, is changing quantitative data analysis techniques.

The HPC systems required to undertake this level of analysis therefore have to be underpinned by next-generation data center infrastructure.

Today, machine-learning models are used for credit decisions, to predict risk, to analyze contracts, and within both fraud detection and Anti-Money Laundering (AML) initiatives. Algorithmic trading, of course, hasn't gone away: according to Seeking Alpha, by the start of 2019, 80 percent of the daily fluctuations in US stocks were machine-led. New hardware systems, tools and technologies are therefore constantly in development to support faster and more accurate decision-making.
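To make the fraud-detection use case concrete, here is a minimal, purely hypothetical sketch using scikit-learn as an illustrative framework: a supervised classifier trained on labelled historical transactions, with invented features (amount, hour of day, merchant risk score) and synthetic data standing in for any real bank's records.

```python
# Hypothetical fraud-detection sketch: a supervised classifier over
# labelled historical transactions. Features and data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy feature matrix: [amount, hour_of_day, merchant_risk_score]
X = rng.random((5000, 3)) * [10_000, 24, 1.0]
# Toy labels: flag a small fraction of transactions as fraudulent
y = (X[:, 0] > 9_000) & (X[:, 2] > 0.8)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Score a new transaction: probability that it is fraudulent
new_txn = np.array([[9_500.0, 3, 0.9]])
print("fraud probability:", model.predict_proba(new_txn)[0, 1])
```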

Infrastructure systems today have access to millions of data points collected in many different formats: structured, semi-structured and unstructured. Larger datasets that combine stock prices, company statements, earnings reports and economic indicators with data generated by non-traditional sources such as social media, web traffic and news platforms now make it possible to build a more complete picture and deliver far better trading decisions.

The systems aggregating and processing this level of complexity help to determine which information is important and indicate where trends or changes in sentiment may generate new financial opportunities.
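A minimal sketch of that aggregation step, assuming pandas as the tooling and with every value invented: structured price data is joined with a sentiment score that is presumed to come from an NLP model run over news headlines.

```python
# Hypothetical sketch: joining structured price data with a sentiment
# signal derived from unstructured news. All values are invented.
import pandas as pd

prices = pd.DataFrame({
    "date": pd.to_datetime(["2020-05-01", "2020-05-04", "2020-05-05"]),
    "ticker": ["ACME", "ACME", "ACME"],
    "close": [101.2, 99.8, 103.5],
})

news_sentiment = pd.DataFrame({
    "date": pd.to_datetime(["2020-05-04", "2020-05-05"]),
    "ticker": ["ACME", "ACME"],
    "sentiment": [-0.4, 0.7],   # e.g. from an NLP model over headlines
})

# Combine the structured and unstructured-derived signals on date/ticker
features = prices.merge(news_sentiment, on=["date", "ticker"], how="left")
features["sentiment"] = features["sentiment"].fillna(0.0)
features["daily_return"] = features.groupby("ticker")["close"].pct_change()

print(features)
```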

Training and inference in financial services

Most AI-based products are powered by machine learning models and involve two stages of development: training and inference. Inference is the simpler of the two and can be done on edge computing devices – take the input, run it through the model, and get your results. In contrast, training, the process of generating the model from scratch using example data, requires massive amounts of storage and compute power, which often means the need for a data center.
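As a minimal sketch of the two stages, a toy PyTorch model stands in here for a production system, with random tensors in place of any real trading dataset.

```python
# Minimal sketch of training vs. inference with a toy model and random data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

# --- Training: repeatedly adjust the model's weights from example data.
# This is the compute- and storage-hungry stage.
X = torch.randn(1024, 16)          # stand-in for historical features
y = torch.randn(1024, 1)           # stand-in for the target signal
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                # gradient computation: the expensive part
    optimizer.step()

# --- Inference: a single cheap forward pass, feasible on an edge device.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16))
print(prediction)
```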

In extreme cases, the major language models that power some of the most popular online services can require petabytes of data and hundreds of kilowatts of power to generate more accurate results. Training workloads also need parallelism, which means cores, and lots of them. Targeting a training workload at a traditional CPU with up to 64 cores will produce a result, but there are far superior ways to do this.

The demand for parallelism, for example, has revitalized the GPU, which now offers thousands of compute cores on a single chip and has made Nvidia a perennial investor favorite.
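A small sketch of what that parallelism means in practice, using PyTorch purely as an illustrative framework: the same dense matrix multiplication is dispatched first to the CPU and then, if one is present, to the thousands of cores of a GPU.

```python
# Sketch: the same dense matrix multiply on CPU vs. GPU (if available).
# A GPU spreads the work across thousands of cores; a CPU cannot.
import time
import torch

def timed_matmul(device: str) -> float:
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the asynchronous GPU kernel
    return time.perf_counter() - start

print(f"cpu:  {timed_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"cuda: {timed_matmul('cuda'):.3f}s")
```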

Where there is profit there will always be competition, and the size of the prize has caused an explosion of new chip architectures from startups such as Graphcore, Ampere and Cerebras. Meanwhile, semiconductor veteran Xilinx has reimagined its FPGA technology for the needs of machine learning with its Adaptive Compute Acceleration Platform (ACAP).

The view from the data center

What unites all of this new IT capability is its power requirement and the need to remove the resulting heat safely and efficiently. Legacy data centers that use chilled air to cool the technology suite are not designed to get the most out of AI servers and will often require completely different cooling infrastructure, involving a large investment in upgrades or retrofits.

New AI server architectures, with average power usage of 30+ kW per rack, require specific cooling strategies that only the latest data center designs are capable of accommodating. For the end user, customization in design is key to anticipating ever-changing needs.
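As a rough back-of-envelope illustration of why 30+ kW racks strain traditional air cooling, the sketch below estimates the airflow needed to carry that heat away, assuming standard air properties and an indicative 12 K supply/return temperature difference; the figures are illustrative, not a design calculation.

```python
# Back-of-envelope: airflow needed to remove rack heat with air alone.
# Q = m_dot * cp * dT  =>  volumetric flow = P / (rho * cp * dT)
RHO_AIR = 1.2       # kg/m^3, approximate density of air
CP_AIR = 1005.0     # J/(kg*K), approximate specific heat of air

def airflow_m3_per_hour(power_kw: float, delta_t_k: float = 12.0) -> float:
    """Volumetric airflow required to carry away power_kw of heat
    with a supply/return temperature difference of delta_t_k."""
    watts = power_kw * 1000.0
    m3_per_s = watts / (RHO_AIR * CP_AIR * delta_t_k)
    return m3_per_s * 3600.0

for rack_kw in (5, 10, 30, 50):
    print(f"{rack_kw:>2} kW rack -> ~{airflow_m3_per_hour(rack_kw):,.0f} m^3/h of air")
```

Under these assumptions a 30 kW rack needs roughly six times the airflow of a traditional 5 kW rack, which is why denser deployments push operators towards liquid cooling.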

Liquid cooling, for example, requires infrastructure that contradicts the old data center premise that 'water and servers should never meet'. What we are witnessing is a massive increase in compute density, which in turn creates other pressures. The power envelope a facility can operate within was previously designed around nominal per-rack energy usage, yet AI servers are changing power utilization across the site.

Water-cooled processing also means new rack and enclosure architectures, which are evolving away from air-cooled cabinets. Indeed, many legacy data centers are not engineered to accommodate the floor loading that AI racks require.

Overall, providing the infrastructure that supports AI and ML workloads requires a different approach to design and cooling, and a reputation for technical excellence. For this reason, banks are frequent guests at industry events. Partnership with developers and users is essential, as AI and machine learning projects are complex: not only do they require a particular combination of data, software and skills, but they also need businesses to implement and support ever-changing types of hardware.

Although innovation in the sector is constant, designing and building the facilities to accommodate the latest software requires customizable architectures, built to the latest specifications such as those of the Open Compute Project (OCP), and which remain highly scalable to meet the demands of AI and HPC.

Kao Data was founded with the expertise to design and build bespoke, enterprise-scale data centers capable of these extreme power densities. It offers businesses in London and across the UK Innovation Corridor towards Stansted and Cambridge a first generation of facilities suitable for machine learning at hyperscale.

The technological requirements of financial services are driving innovation in colocation. Yet for these organizations, a data center is not just a secure home for their servers. Through collaboration, partnership and technical excellence, data centers now offer strategic capacity, 100% uptime and scalability, enabling ultra-fast AI processing and highly profitable financial transactions.

Spencer Lamb is VP of Sales and Marketing at KAO Data, a data center campus located near London.
