GPU Technology Conference

March 17-20, 2015 | San Jose, California


TALK


S5666 - A True Story: GPU in Production for Intraday Risk Calculations

Régis Fricker Quantitative Analyst, Société Générale
Régis Fricker has been a quantitative analyst at Société Générale since 2007. He is in charge of GPU projects for the fixed income and forex library. He received an MSc in engineering from Centrale Lille in 2004 and an MSc in financial mathematics from Paris VII in 2006.

Explore a real-life case of GPU implementation in the context of trading and risk management of numerically demanding financial computations. In this talk, we present how, at Société Générale, we overcame the practical difficulties and technical puzzles to put GPUs into a concrete production environment. We first show how GPUs have changed the life of the trading desks by speeding up their pricing capabilities and delivering faster risk analyses. We then examine specific questions such as: How do you use NVIDIA GPUs from a managed (.NET) library? How do you use this technology in the specific context of distributed financial calculation? Insights will be provided on the problems we encountered at each step and on the innovative solutions we implemented to address them.

Level: Beginner
Type: Talk
Tags: Finance

Day: Wednesday, 03/18
Time: 15:00 - 15:25
Location: Room 210C
View Recording
View PDF

S5334 - A Fast, Portable and Robust Calibration Approach for Stochastic Volatility Models

Matthew Dixon Assistant Professor, University of San Francisco
Matthew Dixon is an Assistant Professor at the University of San Francisco and teaches in both the MSAN and MSFA programs. Matthew has a background as a quant in industry and consults in the areas of algorithmic trading, financial risk management and venture capital. He co-chairs the IEEE/ACM workshop on high performance computational finance at SuperComputing and has published around 20 peer-reviewed technical articles. Matthew has held postdoctoral and visiting research professor positions at Stanford and UC Davis. He is a certified financial risk manager and holds a PhD in Applied Math from Imperial College, an MS in Parallel and Scientific Computing (with distinction) from Reading University and an MEng in Civil Engineering from Imperial College.

The ability to rapidly recalibrate financial derivative models such as stochastic volatility models reduces model risk arising from reliance on stale option chain quotes. This talk addresses the following objectives: (1) gain insight into the challenges of robustly recalibrating stochastic volatility (SV) models and how frequent recalibration reduces pricing error; (2) learn about the challenges of deploying the same modeling codebase on GPUs and multi-core CPUs; and (3) understand how the Xcelerit platform can be used to efficiently deploy SV models written in C++ on GPUs and multi-core CPUs.
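
The per-quote independence that makes frequent recalibration parallelizable can be sketched in a few lines. This is an illustrative Python toy (a single Black-Scholes volatility fitted to a synthetic chain by grid search), not the Heston-style calibration or the Xcelerit code from the talk:

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price, used here as the pricing kernel to calibrate."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def calibrate_vol(quotes, S, T, r, grid=None):
    """Recalibrate a single volatility parameter to a fresh option chain by
    brute-force search over a grid. Each grid point is independent work,
    which is what makes this style of calibration easy to parallelize."""
    grid = grid or [0.05 + 0.005 * i for i in range(100)]
    def sse(sigma):
        return sum((bs_call(S, K, T, r, sigma) - p) ** 2 for K, p in quotes)
    return min(grid, key=sse)

# Synthetic chain generated with sigma = 0.25, then recovered by calibration.
true_sigma = 0.25
chain = [(K, bs_call(100.0, K, 1.0, 0.01, true_sigma)) for K in (80, 90, 100, 110, 120)]
fitted = calibrate_vol(chain, S=100.0, T=1.0, r=0.01)
```

A real SV calibration replaces the grid search with a least-squares optimizer and the Black-Scholes kernel with the SV pricer, but the independent-evaluation structure is the same.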

Level: Beginner
Type: Talk
Tags: Finance

Day: Wednesday, 03/18
Time: 15:30 - 15:55
Location: Room 210C
View Recording

S5126 - GPU Accelerated Backtesting and Machine Learning for Quant Trading Strategies

Daniel Egloff Partner, InCube Group, and Managing Director, QuantAlea
Highly-Rated Speaker
In 2008 Daniel Egloff set up his own software engineering and consulting company and founded QuantAlea by the end of 2009. Since then he has advised several high profile clients on quantitative finance, software development and high performance computing. In 2014 QuantAlea and InCube merged and he became partner of InCube Group and Managing Director of QuantAlea. He is a well-known expert in GPU computing and parallel algorithms and successfully applied GPUs in productive systems for derivative pricing, risk calculations and statistical analysis. Before setting up his own company he had spent more than fifteen years in the financial service industry, where his work revolved around derivative pricing, risk management with a special focus on market and credit risk, and high performance computing on clusters and grids. He studied mathematics, theoretical physics and computer science at the University of Zurich and the ETH Zurich, and has a Ph.D. in mathematics from the University of Fribourg, Switzerland.

In algorithmic trading, large amounts of time series data are analyzed to derive buy and sell orders such that the strategy is profitable while risk measures remain at acceptable levels. Bootstrapping walk-forward optimization is becoming increasingly popular as a way to avoid curve fitting and data snooping. It is computationally extremely expensive but distributes very well across a GPU cluster. We present a framework for bootstrapping walk-forward optimization of trading strategies on GPU clusters, which allows us to analyze strategies in minutes instead of days. Moreover, we show how signal generation can be combined with machine learning to make strategies more adaptive, further improving robustness and profitability.
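
The walk-forward scheme described above (optimize in-sample, evaluate on the next out-of-sample slice, roll the window forward) can be sketched as follows. This is an illustrative Python toy with a hypothetical momentum rule and synthetic prices, not the speakers' framework:

```python
import random

def walk_forward(prices, window=50, step=25, params=(5, 10, 20)):
    """Walk-forward optimization: pick the best lookback in-sample, then
    evaluate it on the next out-of-sample slice. Every (window, parameter)
    pair is independent work, which is why it maps well to a GPU cluster."""
    def pnl(series, lookback):
        # Toy momentum rule: hold long when price exceeds its lookback mean.
        out = 0.0
        for t in range(lookback, len(series) - 1):
            mean = sum(series[t - lookback:t]) / lookback
            if series[t] > mean:
                out += series[t + 1] - series[t]
        return out

    oos = 0.0
    for start in range(0, len(prices) - window - step, step):
        insample = prices[start:start + window]
        best = max(params, key=lambda p: pnl(insample, p))
        oos += pnl(prices[start + window:start + window + step], best)
    return oos

# Synthetic random-walk price series.
random.seed(42)
prices = [100.0]
for _ in range(400):
    prices.append(prices[-1] + random.gauss(0.05, 1.0))
result = walk_forward(prices)
```

The bootstrapping variant from the talk additionally resamples the windows, multiplying the workload again, which is exactly the dimension distributed to the cluster.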

Level: All
Type: Talk
Tags: Finance; Machine Learning & Deep Learning

Day: Wednesday, 03/18
Time: 16:00 - 16:50
Location: Room 210C
View Recording

S5677 - Enabling Financial Service Firms to Compute Heterogeneously with Gateware Defined Networking (GDN)

John Lockwood CEO, Algo-Logic Systems, Inc.
John W. Lockwood, CEO of Algo-Logic Systems, Inc., is an expert in building FPGA-accelerated applications. He has founded three companies focused on low latency networking, Internet security, and electronic commerce and has worked at the National Center for Supercomputing Applications (NCSA), AT&T Bell Laboratories, IBM, and Science Applications International Corp (SAIC). As a professor at Stanford University, he managed the NetFPGA program from 2007 to 2009 and grew the Beta program from 10 to 1,021 cards deployed worldwide. As a tenured professor, he created and led the Reconfigurable Network Group within the Applied Research Laboratory at Washington University in St. Louis. He has published over 100 papers and patents on topics related to networking with FPGAs and served as principal investigator on dozens of federal and corporate grants. He holds BS, MS, and PhD degrees in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign and is a member of IEEE, ACM, and Tau Beta Pi.

Stock, futures, and option exchanges; market makers; hedge funds; and traders require real-time knowledge of the best bid and ask prices for the instruments that they trade. By monitoring live market data feeds and computing an order book with Field Programmable Gate Array (FPGA) logic, these firms can track the balance of pending orders for equities, futures, and options with sub-microsecond latency. Tracking the open orders by all participants ensures that the market is fair, liquidity is made available, trades are profitable, and jitter is avoided during bursts of market activity.
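
A minimal order-book tracker illustrates the state the talk describes maintaining per instrument; this Python sketch is illustrative only (the production system computes it in FPGA logic at sub-microsecond latency):

```python
class OrderBook:
    """Minimal price-level book tracking best bid/ask, the quantity a
    hardware feed handler maintains per instrument."""
    def __init__(self):
        self.bids = {}   # price -> open quantity
        self.asks = {}

    def add(self, side, price, qty):
        book = self.bids if side == "buy" else self.asks
        book[price] = book.get(price, 0) + qty

    def cancel(self, side, price, qty):
        book = self.bids if side == "buy" else self.asks
        book[price] -= qty
        if book[price] <= 0:
            del book[price]

    def best_bid(self):
        return max(self.bids) if self.bids else None

    def best_ask(self):
        return min(self.asks) if self.asks else None

# Replay a few hypothetical feed messages.
book = OrderBook()
book.add("buy", 99.5, 100)
book.add("buy", 99.0, 50)
book.add("sell", 100.5, 75)
book.cancel("buy", 99.5, 100)   # the former best bid is pulled
```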

Level: All
Type: Talk
Tags: OpenPOWER; Supercomputing; Finance

Day: Wednesday, 03/18
Time: 17:45 - 18:00
Location: Room 220C
View Recording

S5273 - Real-Time Heston Stochastic Volatility Tracking via GPUs for Streaming Transactions Data

Yong Zeng Professor, University of Missouri at Kansas City
Yong Zeng is a professor in the Department of Mathematics and Statistics at the University of Missouri-Kansas City. His main research interests include mathematical finance, financial econometrics, stochastic nonlinear filtering, and Bayesian statistical analysis. Notably, he has developed statistical analysis via filtering for financial ultra-high-frequency data. He has published in Mathematical Finance, International Journal of Theoretical and Applied Finance, Applied Mathematical Finance, Applied Mathematics and Optimization, IEEE Transactions on Automatic Control, and Statistical Inference for Stochastic Processes, among others. He co-edited 'State Space Models: Applications to Economics and Finance', a 2013 Springer volume. He has held visiting professorships at Princeton University and the University of Tennessee. He received his B.S. from Fudan University in 1990, M.S. from the University of Georgia in 1994 and Ph.D. from the University of Wisconsin-Madison in 1999, all in statistics.

Volatility is influential in investment, risk management and security valuation, and is regarded as one of the most important financial market indicators. For a model that fits the stylized facts of transactions data well, this session demonstrates how online tracking of Heston stochastic volatility is made possible by GPU computing. The evolving distribution of the volatility (and other state variables) as each new trade occurs is governed by a stochastic partial differential equation (SPDE). Numerically solving this SPDE as new data flows in provides the volatility tracking. The algorithm can be parallelized: each group of threads solves a PDE using the red-black Gauss-Seidel algorithm. The workload sharing among GPUs is embarrassingly parallel, and the code scales linearly with the number of GPUs.
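
The red-black Gauss-Seidel scheme the session relies on can be sketched on a toy 1D problem. Points of one colour depend only on points of the other colour, so each half-sweep updates all of its points independently, which is the parallelism a GPU thread group exploits. Illustrative Python, not the session's SPDE solver:

```python
def red_black_gauss_seidel(f, n, iters=5000):
    """Red-black Gauss-Seidel for u'' = f on [0,1] with u(0) = u(1) = 0,
    discretized on n interior points. Each colour sweep is embarrassingly
    parallel because same-colour points never read each other."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)
    for _ in range(iters):
        for colour in (1, 0):                 # red sweep, then black sweep
            for i in range(1 + colour, n + 1, 2):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f(i * h))
    return u

# u'' = -2 has exact solution u(x) = x(1 - x), so we can check the error.
n = 15
u = red_black_gauss_seidel(lambda x: -2.0, n)
h = 1.0 / (n + 1)
err = max(abs(u[i] - (i * h) * (1 - i * h)) for i in range(n + 2))
```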

Level: Intermediate
Type: Talk
Tags: Finance; Supercomputing; Big Data Analytics

Day: Thursday, 03/19
Time: 09:30 - 09:55
Location: Room 210C
View Recording

S5547 - Retail Bank: 400 Times Faster

Jun Xie Chief Technology Officer, Lactec
Dr. Jun Xie is one of the pioneers of data warehouse and data mining techniques as applied to the telecom and banking industries. He has 20 years of experience across more than 40 large projects. He currently leads a team focused on developing and applying GPU techniques in the banking industry.

We present a database query engine that uses the GPU to speed up queries against a database table. Applied to the CRM project of a large bank, it delivered queries 400 times faster than DB2.
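
The core of such a query engine is a predicate scan over column-stored data, where every row can be evaluated independently (on a GPU, one thread per row). A serial Python sketch with a hypothetical CRM-style table, not the presented engine:

```python
def column_scan(columns, predicate):
    """Columnar selection: evaluate the predicate row by row over column
    arrays and return matching row ids. The per-row independence is what
    a GPU engine parallelizes."""
    n = len(next(iter(columns.values())))
    return [i for i in range(n)
            if predicate({name: col[i] for name, col in columns.items()})]

# Hypothetical table stored column-wise (structure-of-arrays layout).
table = {
    "balance": [1200.0, 50.0, 98000.0, 4300.0],
    "region":  ["N", "S", "N", "W"],
}
rows = column_scan(table, lambda r: r["balance"] > 1000.0 and r["region"] == "N")
```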

Level: All
Type: Talk
Tags: Finance; Big Data Analytics; Data Center, Cloud Computing & HPC

Day: Thursday, 03/19
Time: 10:00 - 10:25
Location: Room 210C
View Recording
View PDF

S5227 - Financial Risk Modeling on Low-power Accelerators: Experimental Performance Evaluation of TK1 with FPGAs

Rajesh Bordawekar Research Staff Member, IBM T. J. Watson Research Center
Rajesh Bordawekar is a research staff member in the Programming Technologies department at the IBM T. J. Watson Research Center. Rajesh studies interactions between applications, programming languages/runtime systems, and computer architectures. He is interested in understanding how modern hardware (multi-core processors, GPUs, and SSDs) impacts the design of optimal algorithms for main-memory and out-of-core problems. His current interest is exploring software-hardware co-design of analytics workloads. Specifically, he has been investigating how GPUs can be used to accelerate key analytics kernels in text analytics, data management, graph analytics, and deep learning.

We experimentally implement key financial risk modeling algorithms (e.g., Monte Carlo pricing) on the NVIDIA TK1 and compare its performance against an FPGA implementation. We compute both FLOPS/dollar and FLOPS/watt, and describe the pros and cons of the two architectures for implementing financial risk modeling algorithms.

Level: Intermediate
Type: Talk
Tags: Finance; Embedded Systems; Developer - Algorithms

Day: Thursday, 03/19
Time: 10:30 - 10:55
Location: Room 210C
View Recording
View PDF

S5570 - Accelerating Derivatives Contracts Pricing Computation with GPGPUs

Daniel Augusto Magalhães Borges da Silva Manager, BMFBOVESPA
Daniel is the manager in charge of risk management and calculation systems at BMFBOVESPA. He received his B.S. in Computer Science from PUC-SP and a graduate degree in software engineering from UNICAMP. Daniel has an MBA in Derivatives and Capital Markets from the BMFBOVESPA/USP Educational Institute and is currently enrolled in a Master's program in Economics at FEA-USP.
Alexandre Barbosa Associate Director Pricing and Risk Systems, BMFBOVESPA
Alexandre Barbosa is an Associate Director at BMFBOVESPA. He has a Bachelor of Information Systems and a MBA in Capital Markets and Derivatives. He is responsible for Pricing Systems, Risk Calculation Systems and Risk Scenarios Management Systems.

Explore the techniques used by BMFBOVESPA, the Brazilian stock exchange, in the implementation of its new Close-out Risk Evaluation (CORE) system, which saved $5 billion in collateral. CORE uses a set of GPGPUs to produce future price estimates on time. Session topics are: a high-level overview of the BMFBOVESPA clearing house; the CORE risk system; coding guidelines and the interface between CPU and GPU, as calculation routines needed to be the same for both environments; the use of GPUs to calculate 1.32 billion prices and its importance in a crisis event; performance analysis comparing CPU and GPU timings, showing that the CPU alone is not powerful enough; the multi-GPU, three-tier production environment; and daily usage and market results. No prior knowledge is required to attend this session.
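
The "same calculation routines for both environments" guideline can be illustrated by writing the pricing routine once and injecting the execution backend. A hypothetical Python sketch (serial `map` standing in for a GPU map), not BMFBOVESPA's code:

```python
def revalue(portfolio, scenarios, price_fn, map_fn=map):
    """Revalue a portfolio under every scenario. The pricing routine is
    written once; only the injected map_fn changes between the serial
    (CPU) and parallel (GPU) environments."""
    return list(map_fn(
        lambda s: sum(price_fn(inst, s) for inst in portfolio),
        scenarios))

# Toy linear instrument: value = quantity * scenario price shift.
portfolio = [("fut", 10), ("fut", -4)]
price_fn = lambda inst, shift: inst[1] * shift
values = revalue(portfolio, [0.0, 1.0, 2.0], price_fn)
```

With this structure the scenario axis (1.32 billion prices in CORE's case) is handed wholesale to whichever backend is plugged in.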

Level: All
Type: Talk
Tags: Finance

Day: Thursday, 03/19
Time: 14:00 - 14:25
Location: Room 210C
View Recording
View PDF

S5249 - Groovy and GPU: Enhancing Pricing Performance and Quant Productivity

Felix Grevy Director of Product Management, Misys
Felix Grevy, Director of Product Management at Misys, has worked in the finance industry for 15 years. After various roles in development, sales and product management, he currently leads the technology strategy for Capital Markets, where GPU computing is one of the key levers for delivering value to customers.
Bram Leenhouwers Senior Architect, Misys
Bram Leenhouwers is a senior architect at Misys. He leads the Fusion Parallel Platform team responsible for the Misys GPU pricing platform. Prior to Misys, he created two startups and worked at ESI, a security software company. When he's not optimizing OpenCL code, you can find him playing guitar or video games with Dévi, his hardcore gamer wife.

Discover how Misys quants use a Groovy DSL to write efficient GPU-enabled pricing models without any OpenCL or CUDA knowledge. Allowing progressive migration from legacy code to GPU-enabled models, this framework leverages GPGPU strengths to achieve high-performance pricing with a short learning curve. The session consists of a global overview of the framework, along with simple pricing examples demonstrating the strengths and ease of use of this approach. We will also discuss how technical concerns are separated from financial modeling to maximize quants' efficiency while leaving room for continuous platform improvement on the development side.
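
The idea of a payoff DSL that hides the execution backend can be sketched with operator overloading: the quant composes an expression, and the framework decides where and how to evaluate it. This illustrative Python stands in for the Groovy DSL described in the session:

```python
class Expr:
    """Tiny payoff-DSL sketch: expressions are composed in plain syntax
    and evaluated later by whatever backend the framework chooses."""
    def __init__(self, fn):
        self.fn = fn
    def __add__(self, other):
        return Expr(lambda s: self.fn(s) + _lift(other)(s))
    def __sub__(self, other):
        return Expr(lambda s: self.fn(s) - _lift(other)(s))
    def __call__(self, spot):
        return self.fn(spot)

def _lift(x):
    """Allow plain numbers to appear inside expressions."""
    return x.fn if isinstance(x, Expr) else (lambda s: x)

def pos(e):
    """Positive part, the building block of option payoffs."""
    return Expr(lambda s: max(e(s), 0.0))

spot = Expr(lambda s: s)
call_90 = pos(spot - 90.0)        # payoff of a 90-strike call, written DSL-style
payoff_at_100 = call_90(100.0)
```

In a real framework the expression tree would be compiled to a GPU kernel rather than evaluated with Python closures; the point is that the quant never touches OpenCL or CUDA.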

Level: Beginner
Type: Talk
Tags: Finance

Day: Thursday, 03/19
Time: 14:30 - 14:55
Location: Room 210C
View Recording
View PDF

S5125 - Designing a GPU-Based Counterparty Credit Risk System

Patrik Tennberg Senior Solution Architect, TriOptima
Patrik Tennberg manages the development team for TriOptima's new credit risk analytics service triCalculate and also serves as architect and developer. Prior to joining TriOptima in 2011, Patrik was CTO of Informed Portfolio Management, where he managed development teams in Sweden and Russia. During his 16+ years in the finance industry, he has also worked at NASDAQ OMX and Cinnober Financial Technology as an architect and senior developer, and at Nordea as a Business IT Architect responsible for, among other areas, Credit and Loans. Patrik holds a BSc in Computer Science and Mathematics from the University of Umeå and an MBA from the Stockholm School of Economics.

Counterparty credit risk calculations such as CVA, DVA and FVA are complex and time-consuming. Using GPUs can drastically cut execution time at the cost of increased complexity. In this talk I will discuss our counterparty credit risk engine and how we were able to drastically cut development time while creating an environment in which quants can be productive without detailed knowledge of GPUs, multithreading and memory consumption. I will also discuss multi-GPU programming and how you can seamlessly provide a design where one physical GPU (e.g., a K40) is divided into several logical GPUs without impacting the programming model. The advantage of this approach is better GPU utilization without added complexity. I will also cover memory management and CPU/GPU multithreading.
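
The structure of a CVA calculation (expected positive exposure per time step, weighted by default probability and discounted) can be sketched as a toy Monte Carlo. Illustrative Python with made-up parameters and a single risk factor, not triCalculate's engine:

```python
import math
import random

def toy_cva(paths, steps, lgd=0.6, hazard=0.02, r=0.01, sigma=0.3):
    """Toy unilateral CVA: simulate a driftless exposure process, average
    its positive part per time step (expected exposure), then weight by
    the marginal default probability and discount. The path loop is the
    axis a GPU engine parallelizes."""
    dt = 1.0 / steps
    ee = [0.0] * steps                        # expected positive exposure
    for _ in range(paths):
        x = 0.0
        for t in range(steps):
            x += sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
            ee[t] += max(x, 0.0) / paths
    cva = 0.0
    for t in range(steps):
        surv0 = math.exp(-hazard * t * dt)         # survival to step start
        surv1 = math.exp(-hazard * (t + 1) * dt)   # survival to step end
        df = math.exp(-r * (t + 1) * dt)           # discount factor
        cva += lgd * df * ee[t] * (surv0 - surv1)
    return cva

random.seed(1)
cva = toy_cva(paths=2000, steps=12)
```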

Level: Intermediate
Type: Talk
Tags: Finance; Big Data Analytics

Day: Thursday, 03/19
Time: 15:00 - 15:50
Location: Room 210C
View Recording
View PDF

S5259 - Optimizing High-Dimensional Dynamic Stochastic Economic Models for MPI+GPU Clusters

Simon Scheidegger Postdoc, University of Zurich
Dr. Simon Scheidegger is a postdoc at the Department of Banking and Finance at the University of Zurich, in the group of F. Kübler, where he has worked since 2012. In 2010 he obtained his PhD in theoretical physics at the University of Basel (supervisors: Prof. Dr. M. Liebendörfer and Prof. Dr. F.-K. Thielemann) and was awarded the Faculty Prize of the Department of Science. From 2010 to 2012 he worked as a credit risk modeler at Credit Suisse. His current research covers high performance computing in finance and economics, the numerical solution of real business cycle models, overlapping generations models and optimal taxation problems.

In this talk we present programming and optimization techniques for exposing the potential of CSCS's Cray XC30 "Piz Daint" cluster for economic modelling. Macroeconomic phenomena are often modeled as constrained optimization problems. At a limited level of detail it is often possible to find local solutions to such problems, useful only for examining macroeconomic dynamics around a steady state. Solving a model globally with high heterogeneity (different types of consumers, sectors, or countries) leads to a dramatic increase in computational and storage costs. Our solver combines adaptive sparse grids with an MPI+GPU implementation, which allows us to compute global solutions for, e.g., international real business cycle models with unprecedentedly high heterogeneity.
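
The point of sparse grids is the gap between full-tensor and sparse point counts as dimension grows. A small Python sketch counting the nodes of a standard boundary-free sparse grid against the full grid of the same resolution (illustrative counting only, not the talk's adaptive solver):

```python
def full_grid_points(d, level):
    """Points in a full tensor grid with 2**level - 1 points per axis."""
    return (2 ** level - 1) ** d

def sparse_grid_points(d, level):
    """Points in a standard boundary-free sparse grid of the same level:
    sum over multi-indices l with |l|_1 <= level + d - 1 of the product
    of 2**(l_i - 1) points per axis, counted by direct recursion."""
    def count(dim, budget):
        if dim == 0:
            return 1
        total = 0
        # Each axis level l contributes 2**(l-1) hierarchical points.
        for l in range(1, budget - (dim - 1) + 1):
            total += 2 ** (l - 1) * count(dim - 1, budget - l)
        return total
    return count(d, level + d - 1)

full = full_grid_points(4, 6)      # 63**4 points in the full grid
sparse = sparse_grid_points(4, 6)  # far fewer in the sparse grid
```

Already at four dimensions and level six the sparse grid needs orders of magnitude fewer points; the high-heterogeneity models in the talk push the dimension far higher, where full grids are simply infeasible.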

Level: Intermediate
Type: Talk
Tags: Finance; Developer - Performance Optimization; Supercomputing

Day: Thursday, 03/19
Time: 16:00 - 16:25
Location: Room 210C
View Recording
View PDF

S5522 - Optimizing Performance of Financial Risk Calculations

Amit Kalele Associate Consultant, Tata Consultancy Services Limited
Amit Kalele is an associate consultant with the Center of Excellence for Optimization and Parallelization and leads its GPU initiative. His primary areas of work are performance engineering, high performance computing and parallel programming. Prior to TCS he was a scientist at Computational Research Laboratories. Amit received his Ph.D. in Electrical Engineering from IIT Bombay in 2005.
Pradeep Gupta Manager - Developer Technology, NVIDIA
Pradeep Kumar Gupta is currently working as Manager, Developer Technology, Asia South at NVIDIA Pune, India.
Mahesh Barve Assistant Consultant, Tata Consultancy Services, India
Mahesh Barve is associated with the Center of Excellence for Parallelization and Optimization at Tata Consultancy Services. His areas of interest are performance optimization and parallel computing. Mahesh received his M.Tech in Electrical Engineering from IIT Bombay in 2005 and has previously worked as a researcher at the Tata Institute of Fundamental Research and at Symantec Corp.

Risk management is a classical problem in finance. Value at Risk (VaR) and Incremental Risk Charge (IRC) are important measures for quantifying market and credit risk. The large number of instruments or assets and their frequent revaluation make these significant computational tasks. Because these computations are repeated many times in back-testing, deal synthesis and batch jobs that run overnight or for days, a significant reduction in turnaround time can be achieved. Current state-of-the-art platforms such as the K40 GPU not only enable fast computation but also reduce computational cost in terms of energy. In this talk we present performance tuning of VaR estimation, option pricing and IRC calculation on the latest NVIDIA platforms.
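
Historical-simulation VaR, the simplest member of the family of measures discussed, can be sketched in a few lines. Illustrative Python with synthetic P&L data; the sort/selection over large scenario sets is the part that maps well to a GPU:

```python
def historical_var(pnl, confidence=0.99):
    """Historical-simulation VaR: the loss threshold exceeded on only
    (1 - confidence) of past days, expressed as a positive number."""
    losses = sorted(-p for p in pnl)            # losses as positive numbers
    k = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[k]

# 1000 synthetic daily P&L figures: mostly small gains, ten large losses.
pnl = [0.1] * 990 + [-5.0, -7.0, -2.0, -9.0, -4.0, -1.0, -3.0, -6.0, -8.0, -10.0]
var_99 = historical_var(pnl, confidence=0.99)   # exceeded on 9 of 1000 days
```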

Level: All
Type: Talk
Tags: Finance; Developer - Performance Optimization

Day: Thursday, 03/19
Time: 16:30 - 16:55
Location: Room 210C
View Recording

S5498 - Big Data in Real Time: An Approach to Predictive Analytics for Alpha Generation and Risk Management in the Financial Markets

Yigal Jhirad Portfolio Manager, Cohen & Steers
Yigal D. Jhirad, Senior Vice President, is Director of Quantitative Strategies and a Portfolio Manager for Cohen & Steers' options and real assets strategies. Mr. Jhirad heads the firm's Investment Risk Committee. Prior to joining the firm in 2007, he was an executive director in the institutional equities division of Morgan Stanley, where he headed the company's portfolio and derivatives strategies effort. He was responsible for developing, implementing and marketing quantitative and derivatives products to a broad array of institutional clients, including hedge funds, active and passive funds, pension funds and endowments. Mr. Jhirad holds a BS from the Wharton School. He is a Financial Risk Manager (FRM), as certified by the Global Association of Risk Professionals.
Blay Tarnoff Senior Application Developer, Consultant, Cohen & Steers
Blay Tarnoff is a senior applications developer and database architect. He specializes in array programming and database design and development. He has developed equity and derivatives applications for program trading, proprietary trading, quantitative strategy, and risk management. He is currently a consultant at Cohen & Steers and was previously at Morgan Stanley.

Our presentation this year updates the signal-processing aspect of the presentation we gave last year, "An Approach to Parallel Processing of Big Data in Finance for Alpha Generation and Risk Management". We will demonstrate the use of signal processing on financial time-series data to reveal market patterns and signals that may be evolving in real time. We will implement a signal filtering algorithm in real time on securities price time-series data and develop a cluster chart that organizes these patterns visually.
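
The simplest member of the signal-filter family applied to tick data is an exponentially weighted moving average, which smooths noise in a single streaming pass. An illustrative Python sketch, not the presenters' algorithm:

```python
def ewma_filter(series, alpha=0.2):
    """Exponentially weighted moving-average filter: one streaming pass
    over a price series, so it can run in real time as ticks arrive.
    Smaller alpha smooths more; larger alpha tracks the input faster."""
    out = []
    level = series[0]
    for x in series:
        level = alpha * x + (1.0 - alpha) * level
        out.append(level)
    return out

# A step in the price level is tracked gradually rather than jumped to.
smoothed = ewma_filter([10.0, 10.0, 10.0, 20.0, 20.0, 20.0], alpha=0.5)
```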

Level: All
Type: Talk
Tags: Finance; Big Data Analytics; Supercomputing

Day: Thursday, 03/19
Time: 17:00 - 17:25
Location: Room 210C
View Recording

S5360 - Potential Future Exposure and Collateral Modelling of the Trading Book Using NVIDIA GPUs

Grigorios Papamanousakis Quantitative Researcher, Aberdeen Asset Management
Grigorios Papamanousakis is a quantitative researcher in the Liability Driven Investments team at Aberdeen Asset Management. He implements collateral management and counterparty credit risk models for insurance books on high performance computing platforms. Previously, Grigorios worked in the Royal Bank of Scotland's Economic Capital Modelling team on valuation, stress-testing and capital modelling of the global loan and credit portfolios of RBS Group. Grigorios trained as an applied mathematician in Greece and a financial engineer in Scotland.
Jinzhe Yang Quantitative Researcher, Aberdeen Asset Management
Jinzhe has taken up the EU-sponsored position implementing real-time simulations for asset and liability modelling. The aim is to achieve a real-time planning and optimisation tool that enables optimal counterparty selection at the point of trading, by minimising PFE and making the most out of available collateral. Jinzhe trained as a computer scientist with particular focus on high-performance computing in climate modelling, data management, and computational finance, using new ultra-high performance processor architectures such as GPUs and FPGAs and scalable platforms such as grids and clouds.
Grzegorz Kozikowski Researcher, University of Manchester
Grzegorz Kozikowski is a researcher in risk management and computer science at the University of Manchester, working on high-performance computing and numerical methods with applications in quantitative finance. Previously, Grzegorz was a junior consultant at IBM Global Business Services and a CUDA developer at IBM Software Group. His current research investigates the application of HPC architectures (GPUs, FPGAs, multi-core) to hedging, interest rate derivatives, option pricing and model calibration. His interests include high performance computing, Monte Carlo simulation and software development.

We consider the problem of calculating the collateral exposure of a large derivative book (interest rate swaps, swaptions, inflation swaps, equity options, CDS and cross-currency swaps) of a global asset manager. In our presentation we explain how we construct a multi-period, multi-curve, stochastic basis spread model for the calculation of Potential Future Exposure and future collateral requirements within an NVIDIA GPU framework. The complexity arising from 1 million scenarios x 100,000 deals x 100 time steps x 10+ curves makes this an ideal acceleration case for NVIDIA Tesla GPUs. We present the GPU architecture within our framework and the acceleration results.
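
The per-time-step quantile structure of a PFE profile can be sketched with a toy one-factor simulation. Illustrative Python with made-up parameters, far simpler than the multi-curve model of the talk; the scenario axis (1 million in the talk) is the natural GPU dimension:

```python
import math
import random

def pfe_profile(paths, steps, sigma=0.25, quantile=0.95):
    """Potential Future Exposure: per time step, take the chosen quantile
    of simulated positive exposure across scenarios. Exposure here is a
    toy driftless diffusion floored at zero."""
    dt = 1.0 / steps
    exposures = [[] for _ in range(steps)]
    for _ in range(paths):
        x = 0.0
        for t in range(steps):
            x += sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
            exposures[t].append(max(x, 0.0))
    profile = []
    for t in range(steps):
        e = sorted(exposures[t])
        profile.append(e[int(quantile * (len(e) - 1))])
    return profile

random.seed(7)
profile = pfe_profile(paths=2000, steps=10)   # exposure fans out over time
```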

Level: Intermediate
Type: Talk
Tags: Finance; Big Data Analytics

Day: Thursday, 03/19
Time: 17:30 - 17:55
Location: Room 210C
View Recording
View PDF


POSTER


P5293 - Accelerating PCA for Applications in Finance Using cuBLAS

Easwar Subramanian Scientist, TCS
Easwar Subramanian works as a scientist at the Quantitative Finance Lab, TCS, Hyderabad, India. His primary research interests are computational problems in financial risk.

In this work, we provide a way to accelerate the computation of principal components of large correlation matrices. The correlation matrix is formed from the historical prices of the financial assets in a client portfolio. A partial eigenspectrum containing the leading eigenvalues and eigenvectors of the correlation matrix is found using the DSYEVR routine of the LAPACK package. The DSYTRD routine was ported to the GPU using cuBLAS. A multi-threaded, multi-stream implementation enabled high gains when processing multiple PCA requests simultaneously.
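
The leading eigenpair the poster extracts with DSYEVR can be approximated by power iteration, whose inner loop is exactly the matrix-vector product that cuBLAS accelerates. An illustrative pure-Python sketch, not the poster's LAPACK-based implementation:

```python
def power_iteration(matrix, iters=500):
    """Leading eigenpair of a symmetric positive-definite matrix by power
    iteration. Each step is one matrix-vector product plus a norm, i.e.
    level-2 BLAS work of the kind offloaded to cuBLAS."""
    n = len(matrix)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        lam = norm            # converges to the leading eigenvalue
    return lam, v

# A 2x2 correlation matrix with correlation 0.8: eigenvalues are 1.8 and 0.2.
corr = [[1.0, 0.8], [0.8, 1.0]]
lam, vec = power_iteration(corr)
```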

Level: All
Type: Poster
Tags: Finance

Day: Monday, 03/16
Time: 17:00 - 20:00
Location: Grand Ballroom 220A
View PDF

P5294 - Accelerating OLAP Operations in In-GPU-Memory MOLAP Databases

Alexander Haberstroh Senior Software Engineer, Jedox
Alexander Haberstroh studied computer science with a focus on image processing at the University of Freiburg, Germany, where he obtained his Master's degree in 2010. During his studies he was working on a CUDA project, developing algorithms for comparing depth maps, which is used in mobile robot mapping. Since 2011, he has been working at Jedox, concentrating on GPU algorithms for multidimensional OLAP databases.

In times of big data, multidimensional OLAP databases easily grow to billions of data cells, and providing the desired results in a reasonable time is quite a challenge. An in-GPU-memory approach avoids the bottleneck of memory transfers but has to deal with the limited memory resources of GPUs. This poster describes what an appropriate storage model could look like and shows how two important OLAP operations can be implemented on the GPU to gain tremendous speedups over a multithreaded CPU implementation.
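
One of the central OLAP operations is the roll-up, which reduces cells over the dimensions being aggregated away; every output coordinate is an independent reduction, which is the parallelism a GPU implementation exploits. An illustrative Python sketch of the operation (not Jedox's implementation):

```python
def rollup(cells, keep):
    """MOLAP roll-up: sum sparse data cells over all dimensions not listed
    in `keep`. Each output key accumulates independently."""
    out = {}
    for coords, value in cells.items():
        key = tuple(coords[d] for d in keep)
        out[key] = out.get(key, 0.0) + value
    return out

# Hypothetical cube cells keyed by (product, region, month) -> sales.
cube = {
    ("tv", "EU", "jan"): 10.0,
    ("tv", "EU", "feb"): 20.0,
    ("tv", "US", "jan"): 5.0,
    ("radio", "EU", "jan"): 7.0,
}
by_product = rollup(cube, keep=(0,))   # aggregate away region and month
```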

Level: All
Type: Poster
Tags: Finance; Big Data Analytics

Day: Monday, 03/16
Time: 17:00 - 20:00
Location: Grand Ballroom 220A
View PDF
