Managing Cost and Sustainability for HPC Workloads in Financial Markets

By Warren Barrie, Senior Director at Bulk Infrastructure

In December 2020, a leading group of thirty asset managers, representing over $9 trillion of assets under management, announced the Net Zero Asset Managers Initiative, a commitment to support the goal of net-zero greenhouse gas emissions by 2050 or sooner.

This reflects the emphasis that financial markets now place on sustainability and lower carbon emissions. And it’s not just asset managers who are taking action; a growing number of global banks are also setting targets to reduce their net carbon emissions – JP Morgan, Morgan Stanley, Barclays and HSBC have all recently announced similar commitments.

However, one challenge they all face is that finance is a very compute-intensive industry. High performance computing (HPC) now supports an ever-wider variety of workloads, which requires a great deal of power, and the demand is only growing. Much of the demand for HPC is driven by firms' need to process ever-expanding data volumes, both structured and unstructured. Additional factors include a range of new and changing regulatory requirements, the constant need to manage risk effectively, and the increased adoption of AI and machine-learning tools across a wide range of applications and use cases. Each of these workloads has its own unique infrastructure requirements. So how can firms ensure they are deploying the appropriate infrastructure to manage those workloads in a way that is both sustainable in carbon terms and cost-effective?

Different workloads, different requirements

One particularly onerous regulation for global banks, due to take effect in January 2022, is the Fundamental Review of the Trading Book (FRTB), a comprehensive suite of capital rules requiring banks to run more stringent models for market risk calculation. The computational workloads and data volumes necessary to meet these requirements are vastly more complex and resource-intensive than those of previous models, hence the growing need for HPC in this area. Additional banking regulations around risk and capital adequacy, such as BCBS 239 (a global standard), CCAR in the US, and the European Banking Authority's stress tests, also place a heavy data-processing burden on banks.

HPC is also increasingly being adopted on the buy-side. Quantitative hedge funds, in particular, need to process and analyze ever-growing volumes of data from a variety of sources in the search for new sources of alpha. Although much of this is structured time-series data (historical tick data from global markets, for example), there is a growing trend of firms analyzing unstructured data, such as news, sentiment indicators from social media, and various other forms of 'alternative' data (including credit card transactions, web traffic, and geospatial data) to discover trading and investment opportunities.

Additionally, the growth of passive investing and robo-advisors, which use AI and machine learning to provide automated, algorithm-driven investment services with little to no human supervision, is driving a significant demand for HPC to manage the data- and compute-intensive workloads.

All of these workloads are unique in their own way, but one common denominator is that, for the most part, they are not latency-sensitive. Unlike trading and execution algorithms, which generally need to run on servers co-located or hosted in proximity to exchanges' matching engines, HPC workloads such as these do not need to sit in expensive data centers in the vicinity of London, New York, Chicago, Frankfurt or Tokyo, where the cost of power, in both monetary and environmental terms, is high.

Our N01 data center, located in Kristiansand in the south of Norway

Sustainable, Low-Cost Power

While some of the above processes could be migrated to the cloud, in practice the public cloud is unsuitable for many of these tasks, due to security and privacy concerns and the high running costs of the heaviest workloads.

For this reason, firms are now looking at alternative ways to host their HPC infrastructure. One desirable option is the Nordic region, which offers abundant low-cost, green power and excellent connectivity to other global locations, including the market centers where trading servers are sited.

At Bulk Infrastructure Group, our Norway Data Center Campus N01 is the world's largest data center powered by 100% green energy, making us an ideal data center partner for power-intensive HPC data processing needs. We offer environmentally friendly solutions powered by fully renewable energy with low levels of CO2 emissions. The abundance of hydropower in the Nordic region also means that we can save firms millions of dollars through low and stable power prices: cost savings of up to 60 percent can be achieved compared to an equivalent installation in London, for example.

For firms wishing to migrate their HPC workloads to this low-cost, sustainable environment, we smooth the path by working with a network of established partners to plan and execute projects on aggressive timelines. This ranges from providing support and advice on complying with tax rules to handling the logistics of receiving and managing equipment, including acting as importer of record.

In conclusion, as financial markets firms increasingly look for new ways to manage the cost and sustainability of their HPC workloads, they should consider the benefits of working with an infrastructure partner that can satisfy their power-intensive data processing needs with high availability, cost efficiency, responsiveness and low CO2 emissions.

Get in touch today

Do you have any questions, or would you like to learn more about Bulk and our solutions? Please get in touch today.