While CPUs are great at being the brains, FPGAs are the real workhorses.
In the fast-moving world of banking, time is almost literally money.
With banks and other financial institutions looking for every possible advantage over their competitors, technology can play a crucial role in tackling their day-to-day challenges.
Much of what tech can do for a bank revolves around risk analytics and high-frequency trading. The former involves measuring risk – commonly expressed as Value at Risk (VaR) – and the risk distribution, based on the bank’s positions and market variables. The latter requires market data to be decoded as quickly as possible to inform trading decisions – in as little as tens of nanoseconds, far faster than any human could react.
Other key benefits include ultra-low-latency trading, algorithmic trading, and several data analytics use cases such as networking acceleration.
These are all areas where Intel® FPGAs can help. Fundamentally, a Field Programmable Gate Array (FPGA) is a chip that can be reprogrammed to perform different functions, and FPGAs offer a number of advantages that make them well suited to the situations above.
“FPGAs are designed to be highly flexible, not just in how you can implement your compute but also in the I/O,” says Ronak Shah, Intel’s Director of Marketing, AI and Market Analytics. “FPGAs come with hundreds or thousands of high-speed I/O, which can operate at very high rates, so they can be configured for many different types of interfaces.”
While CPUs (Central Processing Units) are great at being the brains, FPGAs are the real workhorses, taking the strain off CPUs by running a single algorithm quickly and efficiently. One of the most frequently used techniques in the risk analytics space is the Monte Carlo method.
Monte Carlo methods have been used by banks for many years for model calibration, valuation and hedging, risk management, and portfolio optimisation. They take statistical inputs such as the mean and standard deviation of market returns and then perturb the model over millions of iterations to estimate the worst-case scenarios that could occur. On a CPU these runs can take up to four hours, so they tend to be done overnight, but with FPGA acceleration this time can be significantly reduced.
Increasingly, though, the benefit of using FPGAs isn’t just about speed. With Intel’s next generation of FPGAs implementing a cache-coherent interface called Compute Express Link (CXL), an algorithm can be split, with different parts of it run on the CPU (host processor) and the FPGA (offload accelerator). Both devices access the same memory, so you don’t have to push and pull data between the host-attached DDR (double data rate) memory and the accelerator’s DDR memory, which frees you to experiment with where – and how best – to split the algorithm.
“Historically, although FPGAs are high-performing, they have a reputation for being difficult to work with,” explains Shah, “so we’re developing software-based flows to enable access through heterogeneous programming languages such as OpenCL™. We're also building an array of libraries where we've gone down and optimised a lot of these building block functions, so the end user can stay in a C++-based language and access these libraries to accelerate the algorithms much faster.”
Monte Carlo can be inefficient compared with other approaches, such as partial differential equation (PDE) methods and quadrature, but those cannot solve many of the problems Monte Carlo can. With major plans afoot for the next twelve months, Intel is focused on solving this financial services industry (FSI) challenge.
For More Information: