GPU Technology Conference

April 4-7, 2016 | Silicon Valley
S6295 - Counterparty Credit Risk and IM Computation for CCP on Multicore Systems

Prasad Pawar Developer, Tata Consultancy Services
Prasad Pawar is currently a developer at TCS in the Parallelization and Optimization group of HPC. He received an M.S. in computer science and engineering from Kolhapur University, Maharashtra, in 2008 and a B.S. in computer science and engineering from Aurangabad University in 2005. He has seven years of experience in the HPC domain, holds one patent, and has published his research at various national and international conferences. His research interests include high performance computing, parallelization and optimization, multicore programming, GPGPUs, and algorithms.
Amit Kalele Consultant, Tata Consultancy Services
Amit Kalele has 10 years of industry experience in high performance computing. His primary area of focus is performance optimization and parallelization of applications on multi core and many core CPUs/GPUs. Amit received his Ph.D. from the Department of Electrical Engineering, IIT Bombay.

We'll present how performance optimization using the latest features of Kepler GPUs enabled a critical risk estimation application in a trading system to achieve near real-time performance, compared with 25 minutes on legacy systems. Counterparty credit risk is the risk that the counterparty to a transaction defaults before the final settlement of the transaction's cash flows. A central counterparty (CCP) calculates the mark-to-market margin requirement for each member and blocks it from the member's collateral if the margin is not sufficient. The GPU-based approach also minimizes the risk for the CCP.
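The margin mechanism described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the presenters' implementation; the function name, data layout, and numbers are invented for clarity.

```python
def margin_to_block(positions, prices, collateral):
    """Return the amount a CCP would block from a member's collateral.

    positions:  {instrument: signed quantity}
    prices:     {instrument: (entry_price, current_price)}
    collateral: free collateral currently posted by the member
    """
    # Net mark-to-market P&L across the member's positions
    # (gains on some instruments offset losses on others).
    mtm = sum(qty * (cur - entry)
              for inst, qty in positions.items()
              for entry, cur in [prices[inst]])
    requirement = max(0.0, -mtm)          # margin is needed only on a net loss
    return min(requirement, collateral)   # cannot block more than is posted

# Example: a member long 100 units of a future that dropped from 50 to 48.
blocked = margin_to_block({"FUT_A": 100}, {"FUT_A": (50.0, 48.0)}, 500.0)
print(blocked)  # 200.0
```

The real computation is embarrassingly parallel across members and scenarios, which is what makes it a natural fit for GPUs.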

Level: All
Type: Talk
Tags: Performance Optimization; Finance

Day: Wednesday, 04/06
Time: 09:00 - 09:50
Location: Room 212A

S6589 - Algorithmic Trading Strategy Performance Improvement Using Deep Learning

Masahiko Todoriki Assistant Vice President, Mizuho Securities Co., Ltd.
Masahiko Todoriki is the project manager and lead developer of the AI platform project at Mizuho Securities Co., Ltd., in the Wholesale IT Strategy Department. He also works as a quantitative analyst in the Sales Trading Department, providing feedback from research and analysis using the latest technologies. From 2009 to 2014, he worked on the development, team management, and execution performance analysis of algorithmic trading strategies at Mizuho. Before Mizuho, he ran a firm consulting on and developing algorithmic trading strategies for FX and commodity futures. Masahiko majored in pure physics at Waseda University.

Learn how we improved stock price prediction accuracy for instruments listed on the Tokyo Stock Exchange, cutting training time to 30 minutes from two hours. We used GPGPUs to speed up data preprocessing and deep learning. As part of the training dataset, we used ticks (every single trade) and quotes (every single change in the order book) to detect micro-changes in the market. As a result, we consistently obtain better accuracy than the historical probability.
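For context on the baseline mentioned above, here is one plausible (assumed) reading of "the historical probability": a majority-direction predictor that any trained model must beat. The helper names and toy data are invented for illustration.

```python
from collections import Counter

def historical_baseline(train_moves):
    """Return the majority direction (+1 up, -1 down) seen in training."""
    return Counter(train_moves).most_common(1)[0][0]

def baseline_accuracy(train_moves, test_moves):
    """Accuracy of always predicting the historically dominant direction."""
    guess = historical_baseline(train_moves)
    return sum(m == guess for m in test_moves) / len(test_moves)

train = [+1, +1, -1, +1, -1, +1]       # 4 ups, 2 downs -> always predict up
test  = [+1, -1, +1, +1]
print(baseline_accuracy(train, test))  # 0.75
```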

Level: Intermediate
Type: Talk
Tags: Finance; Deep Learning & Artificial Intelligence; Big Data Analytics

Day: Wednesday, 04/06
Time: 09:30 - 09:55
Location: Marriott Salon 1

S6123 - Effects of GPU, AAD and XVA on the Future Computing Architecture of Banks

Pierre Spatz Head of Quantitative Research, Murex
Pierre Spatz heads the quantitative analysis team of Murex, a world leader in trading and risk management software. He holds a M.S. in computer engineering and applied mathematics from ENSIMAG in Grenoble, France.

The 2008 crisis dramatically changed the way banks approach financial computing. While the complexity and diversity of traded products have been reduced, volumes and regulatory computation needs have exploded, budgets have tightened, and we see no relief in sight. Several solutions have been implemented to cope with today's workload, including GPUs (powerful parallel coprocessors), AAD (adjoint algorithmic differentiation), or both. All these methods imply at least a partial rewrite of the code. We'll review our experience, examine how well each solution fits different test cases on current and future hardware, and extrapolate what the future calculation servers of banks will look like.
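Since AAD is central to this talk, a minimal reverse-mode automatic differentiation sketch may help fix ideas. This is a generic textbook construction in plain Python, not Murex's implementation; the `Var`/`backward` names are invented.

```python
class Var:
    """Minimal reverse-mode AD node; operator overloads build the tape."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0
    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    """Propagate d(out)/d(node) through the tape in reverse topological order."""
    topo, seen = [], set()
    def build(v):
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                build(p)
            topo.append(v)
    build(out)
    out.grad = 1.0
    for v in reversed(topo):
        for p, local in v.parents:
            p.grad += local * v.grad

# f(x, y) = x*y + x  ->  df/dx = y + 1, df/dy = x
x, y = Var(3.0), Var(4.0)
f = x * y + x
backward(f)
print(x.grad, y.grad)  # 5.0 3.0
```

The appeal for risk is that one reverse sweep yields all sensitivities of a price at a cost comparable to a handful of pricings, instead of one bump-and-reprice per risk factor.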

Level: All
Type: Talk
Tags: Finance

Day: Wednesday, 04/06
Time: 10:00 - 10:50
Location: Marriott Salon 1

S6702 - Automated Creation of Tests from CUDA Kernels

Oleg Rasskazov Vice President, Quantitative Research, JP Morgan Chase
For the last eight years, Oleg has worked in Quantitative Research at JP Morgan, focusing on high performance computing for equities, commodities, and FX. He holds a Ph.D. in applied mathematics, with a focus on computer-assisted proofs.

JP Morgan has been using GPUs extensively since 2011 to speed up risk calculations and reduce computational costs. The computational library runs a large number of kernels, both hand-written and auto-generated, with a complex data flow. As we upgraded CUDA drivers, runtimes, and hardware, we occasionally saw regressions in performance and numerical values, and recognized the need for a test suite that would simplify the submission of issue reproducers without sharing the whole proprietary library. This talk presents an automated approach to converting individual kernel launches into standalone test cases, subject to some restrictions on the GPU code structure.
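The record-and-replay idea behind such a test suite can be illustrated abstractly. The sketch below uses a plain Python function as a stand-in for a CUDA kernel launch; the `record_launch`/`replay` API is hypothetical, not JP Morgan's tool.

```python
def record_launch(name, args, run, store):
    """Run a 'kernel' once, capturing its inputs and outputs as a test case."""
    outputs = run(*args)
    store[name] = {"inputs": args, "expected": outputs}
    return outputs

def replay(name, run, store, tol=1e-12):
    """Re-run a captured case (e.g. after a driver upgrade) and compare."""
    case = store[name]
    got = run(*case["inputs"])
    return all(abs(a - b) <= tol for a, b in zip(got, case["expected"]))

# Stand-in for a kernel: scale a vector by a constant.
def scale_kernel(xs, a):
    return [a * x for x in xs]

store = {}
record_launch("scale_case1", ([1.0, 2.0], 3.0), scale_kernel, store)
print(replay("scale_case1", scale_kernel, store))  # True
```

A real version would serialize device buffers and launch configuration, so the case can be shipped as a standalone reproducer without the surrounding library.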

Level: Intermediate
Type: Talk
Tags: Finance; Tools & Libraries

Day: Wednesday, 04/06
Time: 14:00 - 14:25
Location: Marriott Salon 1

S6309 - Capitalico - Chart Pattern Matching in Financial Trading Using RNN

Hitoshi Harada CTO, Alpaca
Hitoshi Harada is CTO at Alpaca, a company using AI technology to automate professional human tasks. Before Alpaca, he worked in the database industry and community for 10 years as a major feature contributor to PostgreSQL, a kernel architect of the MPP database Greenplum, and a contributor to the open source in-database machine learning library MADlib. He has extensive experience in distributed systems, data science, and machine learning for industrial applications.

Discretionary trading based on technical analysis and momentum strategies in the financial markets has been difficult to automate with quant-style rigid conditional programming, as it involves a great deal of fuzziness and the subtleties of human perception. Our application, Capitalico, analyzes financial time-series data and trader behavior to solve this problem using RNN/LSTM networks. In this talk, we'll introduce the problem and our approach, and detail pitfalls and practices, such as how we choose networks and parameters to achieve the best accuracy and performance with deep learning on GPUs. Because we borrowed great ideas from past deep learning applications, we'll help you understand how we converted those ideas into our solution and how to apply deep learning to your own problem.
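As background on the LSTM building block mentioned above, here is a single scalar LSTM step in plain Python. The weights and toy input sequence are invented for illustration; this is not Capitalico's network.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One scalar LSTM step; W holds (w_x, w_h, bias) per gate i/f/o/g."""
    gate = lambda k, f: f(W[k][0] * x + W[k][1] * h + W[k][2])
    i = gate("i", sigmoid)      # input gate: admit new information
    f = gate("f", sigmoid)      # forget gate: decay old cell state
    o = gate("o", sigmoid)      # output gate: expose cell state
    g = gate("g", math.tanh)    # candidate cell value
    c_new = f * c + i * g       # cell state carries long-range memory
    h_new = o * math.tanh(c_new)
    return h_new, c_new

W = {k: (0.5, 0.5, 0.0) for k in "ifog"}   # toy shared weights
h, c = 0.0, 0.0
for x in [1.0, -1.0, 1.0]:                 # a toy "price move" sequence
    h, c = lstm_step(x, h, c, W)
print(-1.0 < h < 1.0)  # hidden state stays bounded -> True
```

The gating is what lets the network retain or discard pattern context over long tick sequences, which rigid conditional rules cannot do.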

Level: Intermediate
Type: Talk
Tags: Finance; Deep Learning & Artificial Intelligence

Day: Wednesday, 04/06
Time: 14:30 - 14:55
Location: Marriott Salon 1

S6239 - From CVA to the Resolution of a Large Number of Small Random Systems

Lokman Abbas Turki Lecturer, LPMA, Paris 6 University
Lokman Abbas Turki is a lecturer at the Laboratoire de Probabilités et Modèles Aléatoires (LPMA). Prior to this position, he spent two years as a postdoc at TU Berlin working on probability problems related to market impact and liquidity. Before that, he earned his Ph.D. in probability and worked for a few months at INRIA as an expert in GPU parallelization of financial algorithms. During his Ph.D., he built strong relationships with financial institutions such as Crédit Agricole and Pricing Partners.

The credit valuation adjustment (CVA) simulation is a typical example of a problem that can be successfully tackled using GPUs. It also shows how challenges from a real-world application require advanced computing optimizations. This presentation covers both the algorithmic aspect of using GPUs for the CVA and the implementation optimizations that should be performed when solving a large number of small systems. The algorithmic part involves a nested Monte Carlo method, for which we establish a judicious choice relating the number of inner and outer simulated trajectories. The implementation part presents and compares, on a large number of small systems, the LDLt factorization, the Householder reduction, and the divide-and-conquer diagonalization.
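The nested Monte Carlo structure — outer paths for the market state, inner paths for conditional revaluation — can be sketched as follows. The model (driftless lognormal, forward-style payoff) and all parameters are illustrative assumptions, not the presenter's setup; the talk's point is precisely how to balance `n_inner` against `n_outer`.

```python
import math, random

def nested_mc_epe(n_outer, n_inner, t, T, s0=100.0, sigma=0.2, seed=0):
    """Expected positive exposure at time t of a forward-like payoff S_T - K,
    with K = s0, under a driftless lognormal model (toy example)."""
    rng = random.Random(seed)
    def step(s, dt):
        # One lognormal step over dt.
        return s * math.exp(-0.5 * sigma**2 * dt
                            + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
    acc = 0.0
    for _ in range(n_outer):
        s_t = step(s0, t)                       # outer: market state at t
        v = sum(step(s_t, T - t) - s0           # inner: revalue trade to T
                for _ in range(n_inner)) / n_inner
        acc += max(v, 0.0)                      # keep only positive exposure
    return acc / n_outer

epe = nested_mc_epe(n_outer=200, n_inner=50, t=1.0, T=2.0)
print(epe > 0.0)  # True
```

Every inner path is independent, so the outer-times-inner product maps naturally onto thousands of GPU threads; the statistical question is how to split a fixed budget between the two loops.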

Level: Intermediate
Type: Talk
Tags: Finance; Algorithms; Performance Optimization

Day: Wednesday, 04/06
Time: 15:00 - 15:50
Location: Marriott Salon 1

S6400 - Quants Coding CUDA® in .NET: Pitfalls and Solutions

Benjamin Eimer Quantitative Developer, Chatham Financial
Benjamin Eimer has been a quantitative developer at Chatham Financial for the past four years, where he focuses on model development and performance. Before moving into finance, Ben worked for the National Institute for Occupational Safety and Health as an aerosol scientist. He received his Ph.D. in physics, with an emphasis in computational modeling and materials science, from New Mexico State University in 2006.

We'll cover some of the lessons we've learned developing a hybrid GPU/CPU linear algebra library in .NET to accelerate the financial risk and derivative pricing models developed by our quant team. The library allows our team to transition to GPU computing incrementally within our extensive .NET codebase. We'll present some of the difficulties encountered when .NET's automatic garbage collection interacts with low-level memory management, and how we addressed them. Solving these problems is essential for running CUDA code as part of a highly available web service architecture.
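The core pitfall — nondeterministic garbage collection versus unmanaged GPU allocations — is language-agnostic. The sketch below illustrates the deterministic-disposal pattern (analogous to .NET's `IDisposable`/`using`) with a toy Python class; the `DeviceBuffer` type is invented for illustration.

```python
class DeviceBuffer:
    """Toy stand-in for an unmanaged (e.g. GPU) allocation. Relying on the
    garbage collector to free it is nondeterministic and can exhaust device
    memory; a context manager releases it at a known point instead."""
    live = 0  # count of currently allocated buffers

    def __init__(self, size):
        self.size = size
        DeviceBuffer.live += 1

    def close(self):
        # Idempotent explicit release (the Dispose() analogue).
        if self.size is not None:
            DeviceBuffer.live -= 1
            self.size = None

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

with DeviceBuffer(1 << 20) as buf:
    pass                      # buffer is guaranteed released on scope exit
print(DeviceBuffer.live)      # 0
```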

Level: All
Type: Talk
Tags: Finance; Tools & Libraries; Programming Languages

Day: Wednesday, 04/06
Time: 16:00 - 16:25
Location: Marriott Salon 1

S6205 - Towards Efficient Option Pricing in Incomplete Markets

Shih-Hau Tan Ph.D. Student, University of Greenwich
Shih-Hau Tan graduated from National Tsing Hua University in Taiwan, completed his M.S. at the University of Nice Sophia Antipolis with an internship at INRIA in France, and is now working in London on the EU-funded ITN-STRIKE Marie Curie project on computational finance. His research interests include nonlinear option pricing for incomplete markets, applications in commodity markets, and high performance computing with implementation on GPUs.

Nonlinear option pricing is a new approach for traders, hedge funds, and banks to obtain more accurate option prices and to perform fast model calibration using huge amounts of market data. Numerically, the main problem is to solve fully nonlinear PDEs; strategies such as Newton's method and the ADI scheme are employed. Batch operations are also used to solve many different option pricing problems together at the same time. We'll introduce how to use OpenACC and CUDA libraries to accelerate the whole computation. The complexity analysis will be shown first. We obtain around 2X speedup using OpenACC, and around 5X using cuSPARSE for solving tridiagonal systems and cuBLAS for computing level-2 functions.
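Tridiagonal solves are the inner kernel that cuSPARSE batches in this setting. As a reference point, here is the classic Thomas algorithm in plain Python — a generic textbook routine, not the presenter's code:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-, b = main, c = super-
    diagonal, d = right-hand side (a[0] and c[-1] are ignored)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 3x3 system [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8]  ->  x ~ [1, 2, 3]
x = thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
print([round(v, 6) for v in x])  # [1.0, 2.0, 3.0]
```

Each Newton/ADI sweep produces one such system per grid line, so a batched GPU solver handles thousands of these small independent systems in one call.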

Level: All
Type: Talk
Tags: Finance; OpenACC

Day: Wednesday, 04/06
Time: 16:30 - 16:55
Location: Marriott Salon 1
