GPU Technology Conference

March 17-20, 2015 | San Jose, California

Schedule Planner


HANDS-ON LAB


S5651 - Hands-on Lab: Getting Started with CUDA C/C++

Mark Ebersole CUDA Educator, NVIDIA
Highly-Rated Speaker
As CUDA Educator at NVIDIA, Mark Ebersole teaches developers the benefits of GPU computing using the NVIDIA CUDA parallel computing platform and programming model. With more than ten years of experience as a systems programmer, Mark has spent much of his time at NVIDIA as a GPU systems diagnostics programmer, developing a tool to test, debug, validate, and verify GPUs from pre-emulation through bringup and into production. Before joining NVIDIA, he worked at IBM developing Linux drivers for the IBM iSeries server. Mark holds a BS degree in math and computer science from St. Cloud State University.

In this hands-on lab, you will learn how to work with the CUDA platform to accelerate C and C++ code on a massively parallel NVIDIA GPU. We'll start with the basics of writing in a CUDA-enabled language, work through accelerating sections of code on the GPU, learn how to error check, and more! As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Beginner
Type: Hands-on Lab
Tags: Developer - Programming Languages

Day: Tuesday, 03/17
Time: 13:00 - 14:20
Location: Room 211A

S5652 - Hands-on Lab: Accelerating Code on GPUs with Compiler Directives

Mathew Colgrove Dev Tech Software Engineer, NVIDIA
Mathew Colgrove is a Dev Tech Software Engineer with NVIDIA's Portland Group team. Mat's primary role is to help users port code to accelerators using OpenACC and CUDA Fortran, as well as assisting with general programming questions. Prior to his current position, he was the Quality Assurance manager responsible for both building and maintaining PGI's proprietary automated testing environments. Mat is also NVIDIA's representative to SPEC (www.spec.org) on the CPU and HPG committees.

Learn how to use OpenACC and OpenMP directives to quickly start accelerating your applications in this hands-on lab. You will learn how to identify your GPU, what language features you can use, the most common directives to insert, how to build your program, and how to run your program. Sample programs and self-guided exercises will be provided. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Beginner
Type: Hands-on Lab
Tags: Developer - Programming Languages

Day: Tuesday, 03/17
Time: 13:00 - 14:20
Location: Room 211B

S5774 - Self-Paced Labs: Real-World, Online Learning Modules for Beginners and Experts

Whether you're new to CUDA, looking to brush up your GPU computing skills, or looking to learn how to program on multiple GPUs, you'll benefit from NVIDIA's self-paced labs during GTC. Located on the main conference concourse, our full suite of self-paced labs is available every hour. Seats are limited, so reserve your space by signing up for the time slot of your choice here: Tuesday Labs

Level: All
Type: Hands-on Lab
Tags: Developer - Programming Languages; Developer - Tools & Libraries; Developer - Performance Optimization

Day: Tuesday, 03/17
Time: 13:00 - 17:00
Location: Concourse

S5653 - Hands-on Lab: Scaling to Multi-GPU Acceleration

Justin Luitjens Developer Technologies Engineer, NVIDIA
Highly-Rated Speaker
Justin Luitjens is a Developer Technologies Engineer at NVIDIA. He has been with NVIDIA for three years, focusing on accelerating customer applications.

Multi-GPU systems provide higher performance per dollar than single-GPU systems, which has led to their rapid adoption. This hands-on lab will teach you the basics of writing CUDA C/C++ code for multi-GPU applications. We will start with an application that utilizes a single GPU and together extend it to work on multiple GPUs. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Intermediate
Type: Hands-on Lab
Tags: Developer - Performance Optimization

Day: Tuesday, 03/17
Time: 15:00 - 16:20
Location: Room 211B

S5674 - Hands-on Lab: Introduction to Machine Learning with GPUs: Handwritten Digit Classification

Jonathan Bentz Solutions Architect, NVIDIA
Jonathan Bentz is a Solutions Architect at NVIDIA, focusing on higher education and research customers. In this role he works as a technical resource for customers to support their use of GPU computing. He delivers GPU programming workshops to train users and help raise awareness of GPU computing. He also works with application developers to assist in optimization for GPUs. Jonathan obtained his PhD in physical chemistry and his MS in computer science from Iowa State University.

In this lab you'll be using a neural network to classify handwritten digits from the MNIST database. You'll have the chance to explore CPU and GPU performance and adjust the training parameters to explore tradeoffs between accuracy and performance. We'll discuss convolutional neural networks and learn about NVIDIA's cuDNN library for implementing deep neural networks. This lab is intended to give you a practical overview of neural networks and convolutional neural networks on GPUs. You will gain insight into some of the computational challenges found in deep learning today and how they're being solved using GPU computing. We'll briefly discuss some of the third-party machine learning frameworks being deployed with GPUs. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser. Prerequisites: (1) intermediate knowledge of CUDA C/C++ and (2) intermediate knowledge of GPU libraries.

Level: Advanced
Type: Hands-on Lab
Tags: Machine Learning & Deep Learning

Day: Tuesday, 03/17
Time: 15:00 - 16:20
Location: Room 211A

S5775 - Self-Paced Labs: Real-World, Online Learning Modules for Beginners and Experts

Whether you're new to CUDA, looking to brush up your GPU computing skills, or looking to learn how to program on multiple GPUs, you'll benefit from NVIDIA's self-paced labs during GTC. Located on the main conference concourse, our full suite of self-paced labs is available every hour. Seats are limited, so reserve your space by signing up for the time slot of your choice here: Wednesday Labs

Level: All
Type: Hands-on Lab
Tags: Developer - Programming Languages; Developer - Tools & Libraries; Developer - Performance Optimization

Day: Wednesday, 03/18
Time: 09:00 - 17:00
Location: Concourse

S5656 - Hands-on Lab: Debugging and Automated Error Checking Tools and Techniques for GPU Programming

Vyas Venkataraman Senior System Software Engineer, NVIDIA
Highly-Rated Speaker
Vyas is the lead engineer on the CUDA Debugger API and CUDA-MEMCHECK. He has been at NVIDIA since 2010. Vyas received his PhD from Boston University.
Nikita Shulga Software Engineer, NVIDIA
Nikita Shulga is a Software Engineer in the CUDA Developer Tools team. He works on the development of the CUDA-GDB and visual tools. Nikita joined NVIDIA in 2012. He holds an MS degree in Applied Math and Physics from Moscow Institute of Physics and Technology, Russia.

While GPUs can provide impressive performance-per-watt improvements over a traditional CPU, developing massively parallel applications is typically more complicated than dealing with just a few cores. In this hands-on lab, learn about the many powerful tools available to help you debug and check for race conditions in your CUDA-accelerated application. We will cover tools such as CUDA-GDB, memcheck, initcheck, racecheck, and more! As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Intermediate
Type: Hands-on Lab
Tags: Developer - Performance Optimization

Day: Wednesday, 03/18
Time: 09:30 - 10:50
Location: Room 211B

S5711 - Hands-on Lab: Multi GPU Programming with MPI and OpenACC

Jiri Kraus Compute DevTech Software Engineer, NVIDIA
Highly-Rated Speaker
Jiri Kraus is a developer in NVIDIA's European Developer Technology team. As a consultant for GPU HPC applications at the NVIDIA Jülich Applications Lab, Jiri collaborates with local developers and scientists at the Jülich Supercomputing Centre and the Forschungszentrum Jülich. Before joining NVIDIA, Jiri worked on the parallelization and optimization of scientific and technical applications for clusters of multicore CPUs and GPUs at Fraunhofer SCAI in St. Augustin. He holds a Diploma in Mathematics from the University of Cologne, Germany.

In this session you will learn how to program multi-GPU systems and GPU clusters using the Message Passing Interface (MPI) and OpenACC. The session starts with a quick introduction to MPI and how a CUDA-aware MPI implementation can be used with OpenACC. Other topics covered are: how to handle GPU affinity in multi-GPU systems, overlapping communication with computation to hide communication times, and using the NVIDIA performance analysis tools. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Beginner
Type: Hands-on Lab
Tags: Developer - Algorithms; Developer - Programming Languages; Supercomputing

Day: Wednesday, 03/18
Time: 09:30 - 10:50
Location: Room 211A

S5647 - Hands-on Lab: DIY Deep Learning for Vision with Caffe

Evan Shelhamer PhD Student / Lead Developer of Caffe, UC Berkeley
Evan Shelhamer is a PhD student at UC Berkeley advised by Trevor Darrell as a member of the Berkeley Vision and Learning Center. His research is on deep learning and end-to-end optimization for vision. He is the lead developer of the Caffe deep learning framework and takes his coffee black.
Yangqing Jia Research Scientist, Google
Yangqing Jia finished his Ph.D. in computer vision at UC Berkeley supervised by Trevor Darrell in May 2014. He is now a research scientist at Google. His main interests lie in large-scale and cognitive science inspired vision systems. His work focuses on enabling efficient learning of state-of-the-art features and human-like concept generalization from perceptual inputs. He was in the GoogLeNet team that won several of the ILSVRC 2014 challenges. He was also the recipient of the best paper award at ECCV 2014. He is the original author and a core developer of Caffe.

This tutorial is designed to equip researchers and developers with the tools and know-how needed to incorporate deep learning into their work. Both the ideas and implementation of state-of-the-art deep learning models will be presented. While deep learning and deep features have recently achieved strong results in many tasks, a common framework and shared models are needed to advance further research and applications and reduce the barrier to entry. To this end we present the Caffe framework that offers an open-source library, public reference models, and working examples for deep learning. Join our tour from the 1989 LeNet for digit recognition to today's top ILSVRC14 vision models. Follow along with do-it-yourself code notebooks. While focusing on vision, general techniques are covered.

Level: All
Type: Hands-on Lab
Tags: Machine Learning & Deep Learning; Computer Vision & Machine Vision

Day: Wednesday, 03/18
Time: 14:00 - 15:20
Location: Room 211A

S5657 - Hands-on Lab: Optimizing CUDA Application Performance with NVIDIA's Visual Profiler

Yu Zhou Senior System Software Engineer, NVIDIA
Yu Zhou is a Senior Software Engineer in the CUDA Developer Tools team. His main focus is on developing and improving the CUDA profiling tools. He's interested in optimization, scaling and user experience. Yu joined NVIDIA in 2011. He holds an MS degree in Electrical Engineering from Stanford University, and a Bachelor's degree from Nanjing University, China.
Mayank Kaushik Senior Software Engineer, NVIDIA
Mayank is a Senior Software Engineer in the CUDA Devtools group at NVIDIA. He works on the development of the CUDA profiler and the CUDA debugger on mobile and desktop platforms. Previously he worked in the device driver group at NVIDIA. He has been at NVIDIA since 2007. Mayank holds an MS degree in Computer Engineering from the University of Texas at Austin, and a Bachelor's degree in Electrical Engineering from Delhi College of Engineering, India.

This hands-on session takes you through the various steps involved in optimizing your CUDA application. NVIDIA's CUDA Visual Profiler, a cross-platform performance profiling tool that delivers vital feedback for optimizing CUDA C/C++ applications, will be used on sample application code to dig out the various performance limiters and assist in fine-tuning the code. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Intermediate
Type: Hands-on Lab
Tags: Developer - Performance Optimization

Day: Wednesday, 03/18
Time: 14:00 - 15:20
Location: Room 211B

S5574 - Hands-on Lab: Applied Deep Learning for Vision, Natural Language and Audio with Torch7

Soumith Chintala Research Engineer, Facebook
Soumith is a Research Engineer at Facebook AI Research. Prior to joining Facebook in August 2014, Soumith worked at MuseAmi, where he built deep learning models for music and vision targeted at mobile devices. In the past, Soumith worked on state-of-the-art deep learning models for pedestrian detection, natural image OCR, and depth images, among others, while driving his research heavily using CUDA and multiple GPUs.

This is a hands-on tutorial targeted at machine learning enthusiasts and researchers and covers applying deep learning techniques on classifying images, videos, audio and natural language data. The session is driven in Torch: a scientific computing platform that has great toolboxes for deep learning and optimization among others, and fast CUDA backends with multi-GPU support. Torch is supported by Facebook, Google, Twitter and a strong community who actively open-source their code and packages.

Level: Beginner
Type: Hands-on Lab
Tags: Machine Learning & Deep Learning; Computer Vision & Machine Vision; Signal & Audio Processing

Day: Wednesday, 03/18
Time: 15:30 - 16:50
Location: Room 211A

S5722 - Hands-on Lab: Deep Belief Networks Using ArrayFire (Presented by ArrayFire)

Umar Arshad Senior Software Developer and Technical Trainer, ArrayFire
Highly-Rated Speaker
Umar Arshad is a Senior Software Developer and a Technical Trainer at ArrayFire. Umar graduated with a degree in computer science from Georgia State University with a concentration in Parallel Programming and Machine Learning. At ArrayFire, he contributes to the open source ArrayFire acceleration library. Umar also provides ArrayFire consulting services and in-person trainings for CUDA and OpenCL.

Working on image processing, computer vision, or machine learning? This tutorial will give you hands-on experience implementing Deep Belief Networks using ArrayFire and other CUDA tools. Deep Belief Networks are a specialized neural network that can be efficiently optimized on the GPU. Learn the best practices for implementing parallel versions of popular algorithms on GPUs. Instead of reinventing the wheel, you will learn where to find, and how to use, excellent versions of these algorithms already available in the CUDA and ArrayFire libraries. You will walk away equipped with the best tools and knowledge for implementing accelerated image processing and machine learning. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Intermediate
Type: Hands-on Lab
Tags: Developer - Tools & Libraries; Developer - Algorithms; Machine Learning & Deep Learning

Day: Wednesday, 03/18
Time: 15:30 - 16:50
Location: Room 211B

S5776 - Self-Paced Labs: Real-World, Online Learning Modules for Beginners and Experts

Whether you're new to CUDA, looking to brush up your GPU computing skills, or looking to learn how to program on multiple GPUs, you'll benefit from NVIDIA's self-paced labs during GTC. Located on the main conference concourse, our full suite of self-paced labs is available every hour. Seats are limited, so reserve your space by signing up for the time slot of your choice here: Thursday Labs

Level: All
Type: Hands-on Lab
Tags: Developer - Programming Languages; Developer - Tools & Libraries; Developer - Performance Optimization

Day: Thursday, 03/19
Time: 09:00 - 18:00
Location: Concourse

S5654 - Hands-on Lab: Introduction to Python GPU Acceleration

Mark Ebersole CUDA Educator, NVIDIA
Highly-Rated Speaker
As CUDA Educator at NVIDIA, Mark Ebersole teaches developers the benefits of GPU computing using the NVIDIA CUDA parallel computing platform and programming model. With more than ten years of experience as a systems programmer, Mark has spent much of his time at NVIDIA as a GPU systems diagnostics programmer, developing a tool to test, debug, validate, and verify GPUs from pre-emulation through bringup and into production. Before joining NVIDIA, he worked at IBM developing Linux drivers for the IBM iSeries server. Mark holds a BS degree in math and computer science from St. Cloud State University.

Python is one of the fastest growing languages today. Great community support, a wealth of available tools, and the ability to quickly iterate on algorithms have made it very popular in the scientific community. In this hands-on lab, we'll use Continuum Analytics' Numba compiler to accelerate Python code on the GPU. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Beginner
Type: Hands-on Lab
Tags: Developer - Programming Languages

Day: Thursday, 03/19
Time: 09:30 - 10:50
Location: Room 211A

S5721 - Hands-on Lab: In-Depth Performance Analysis for OpenACC/CUDA/OpenCL Applications with Score-P and Vampir

Guido Juckeland Leader Hardware Accelerator Group, NVIDIA
Guido received his Ph.D. for his work on performance analysis for hardware accelerators. He coordinates the work of the CCoE at Technische Universität Dresden and also represents TU Dresden at the SPEC High Performance Group and OpenACC committee.
Robert Henschel Manager, Scientific Applications and Performance Tuning, Research Technologies, Pervasive Technology Institute, Indiana University
Robert received a master's degree in computer science from Technische Universität Dresden, Germany. He now runs the Scientific Applications and Performance Tuning group at Indiana University, focused on optimizing scientific applications.

The session will take you step-by-step through the performance analysis of highly parallel GPU applications using any combination of MPI, OpenMP, pthreads, OpenACC, CUDA, or OpenCL. It takes the standard NVIDIA performance tools to the next level by providing the capability to also study node-parallel application behavior. During the session you will use Score-P to generate an event log of application activity (a trace file) and will discover multiple ways to use this data to analyze the application execution. Vampir and Score-P are the de facto standard for large-scale, high-resolution performance analysis. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Advanced
Type: Hands-on Lab
Tags: Developer - Performance Optimization; Developer - Tools & Libraries

Day: Thursday, 03/19
Time: 09:30 - 10:50
Location: Room 211B

S5859 - OpenPOWER Firmware Training Lab

Patrick Williams POWER Firmware Development, IBM
Patrick is a Senior Software Engineer at IBM where he has worked for the last 11 years with the Power Firmware Development organization. He is currently the maintainer of the Hostboot project which is used for booting OpenPower systems.

An architectural overview, a selected deep dive, and a hands-on lab covering building, modifying, and testing OpenPOWER firmware.

Level: All
Type: Hands-on Lab
Tags: OpenPOWER; Embedded Systems; Education & Training

Day: Thursday, 03/19
Time: 12:00 - 13:20
Location: Hall 1-2

S5655 - Hands-on Lab: CUDA Application Development Life Cycle with NVIDIA® Nsight™ Eclipse Edition

Nikita Shulga Software Engineer, NVIDIA
Nikita Shulga is a Software Engineer in the CUDA Developer Tools team. He works on the development of the CUDA-GDB and visual tools. Nikita joined NVIDIA in 2012. He holds an MS degree in Applied Math and Physics from Moscow Institute of Physics and Technology, Russia.

CUDA application development made easy with NVIDIA's Integrated Development Environment on Linux and Mac. Here's your opportunity to go through a step-by-step, hands-on exercise on editing, compiling, debugging and profiling a CUDA application using Nsight™ Eclipse Edition. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Intermediate
Type: Hands-on Lab
Tags: Developer - Performance Optimization

Day: Thursday, 03/19
Time: 13:00 - 14:20
Location: Room 211A

S5732 - Hands-on Lab: Deep Learning with the Theano Python Library

Frédéric Bastien Team Lead - Software Infrastructure, Machine Learning Laboratory at the University of Montreal
Frédéric Bastien is Team Lead - Software Infrastructure at LISA (Machine Learning Laboratory at the University of Montréal, Canada) and lead developer of the Theano library. In 2007, he finished a master's degree in computer architecture at the University of Montreal and has since been working at the LISA lab.

This hands-on lab introduces the Theano framework, a software compiler/library based on Python. Learn how to get started with the software, and work through a few useful machine learning examples accelerated on an NVIDIA GPU. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Beginner
Type: Hands-on Lab
Tags: Developer - Tools & Libraries; Machine Learning & Deep Learning

Day: Thursday, 03/19
Time: 14:00 - 15:20
Location: Room 211B

S5814 - Hands-on Lab: Developing, Debugging and Optimizing GPU Codes for High Performance Computing with Allinea

Beau Paisley Support Engineer, Allinea Software
Beau is a computer science and mathematics graduate from the College of William and Mary and performed graduate studies in electrical engineering at Purdue University. Beau has over twenty-five years of experience in development, marketing, and sales roles with research, academic, and startup organizations. He has previously held positions with NCAR, the Applied Physics Lab, and several startup and early-growth technical computing companies. Beau is now a Support Engineer with Allinea Software.

We will bring CUDA into a compute-intensive application by learning how to use CUDA-enabled development tools in the process of profiling, optimization, editing, building, and debugging. Using the Allinea Forge development toolkit, we cover how to profile an existing application and identify the most compute-intensive code regions. We then replace these regions with CUDA implementations and review the results, before turning to the task of debugging the GPU-enabled code to fix an error introduced during the exercise. We will learn debugging techniques for CUDA and debug using Allinea Forge to produce correct, working, high-performance GPU-accelerated code. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Beginner
Type: Hands-on Lab
Tags: Developer - Performance Optimization

Day: Thursday, 03/19
Time: 16:00 - 17:20
Location: Room 211A

S5822 - Hands-on Lab: Accelerate a Machine Learning C++ example with Thrust

Levi Barnes Developer Tech Software Engineer, NVIDIA
Levi Barnes is a CUDA Developer Technology Engineer at NVIDIA. He supports GPU-acceleration for quantum chemistry, fluid dynamics and signal processing. Levi holds a Ph.D. in Physics from the University of California at San Diego. Prior to joining NVIDIA, he developed mask synthesis software for Synopsys, Inc.

In this lab, we will learn the basics of Thrust and CUB in the context of a basic machine learning algorithm. Thrust is an easy way to get up and running with GPUs; in 80 minutes we'll have a working prototype with great performance. We will also examine some of the new C++11 features and how they can simplify CUDA programming. No knowledge of GPU programming or machine learning is assumed. As we'll be using GPUs hosted in the cloud, all you are required to bring is a laptop with a modern browser.

Level: Beginner
Type: Hands-on Lab
Tags: Developer - Programming Languages

Day: Thursday, 03/19
Time: 16:00 - 17:20
Location: Room 211B

S5777 - Self-Paced Labs: Real-World, Online Learning Modules for Beginners and Experts

Whether you're new to CUDA, looking to brush up your GPU computing skills, or looking to learn how to program on multiple GPUs, you'll benefit from NVIDIA's self-paced labs during GTC. Located on the main conference concourse, our full suite of self-paced labs is available every hour. Seats are limited, so reserve your space by signing up for the time slot of your choice here: Friday Labs

Level: All
Type: Hands-on Lab
Tags: Developer - Programming Languages; Developer - Tools & Libraries; Developer - Performance Optimization

Day: Friday, 03/20
Time: 09:00 - 15:00
Location: Concourse

S5710 - Hands-on Lab: In-Situ Data Analysis and Visualization: ParaView, Catalyst and VTK-m

Marcus Hanwell Technical Leader, Kitware, Inc.
Marcus D. Hanwell is a Technical Leader in the Scientific Computing group at Kitware, Inc. He leads the Open Chemistry project, which focuses on developing open-source tools for chemistry, bioinformatics, and materials science research. He completed an experimental PhD in Physics at the University of Sheffield, a Google Summer of Code developing Avogadro and Kalzium, and a postdoctoral fellowship combining experimental and computational chemistry at the University of Pittsburgh before moving to Kitware, Inc. in late 2009. He is a member of the Blue Obelisk, blogs, is @mhanwell on Twitter, and is active on Google+. He has also written several guest posts for opensource.com and the Kitware Source. He is passionate about open science, open source, and making sense of increasingly large scientific data to understand the world around us. Dr. Hanwell has played a key role in developing new development workflows as Kitware's open source projects moved to Git, works on Gerrit code review integration, runs CDash@Home cloud-based testing with Gerrit code review, and contributes to next-generation build systems in the VTK, ITK, and Titan projects. He has also been awarded and led a Phase I and Phase II SBIR project to further develop open-source chemistry tools for computational chemistry, and has taken part in international collaborations to establish open standards for data exchange in chemistry. Additionally, Dr. Hanwell has been an active member of several open-source communities. He is one of the core developers of Avogadro, an open-source, 3D, cross-platform molecular visualization and editing application/library. He has been an active member of the Gentoo and KDE communities, and is a member of the KDE e.V. His work in Avogadro was featured by Trolltech on their "Qt in Use" pages, and he was selected as a Qt Ambassador. He won a Blue Obelisk award for his work in open chemistry, and continues to develop and promote open approaches in chemistry and related fields.
He has also worked throughout his career on approaches that use both experimental and computational approaches to validate theories and models, and continues to seek ways in which software tools can be created to make comparison and validation simpler.
Robert Maynard R & D Engineer, Kitware, Inc.
Robert Maynard joined Kitware in March 2010 as a Research and Development Engineer. He received his B.S. in Computer Science from Laurentian University in May 2007. After graduating, Robert spent three years at MIRARCO, where he was the primary programmer on ParaViewGeo, a fork of ParaView designed for the mining industry. He is an active contributor to the CMake, DAX, ParaView, and VTK projects. He has extensively contributed to the build systems of multiple open source projects.

As datasets from computational efforts are growing rapidly due to increasing computing power, scientists, engineers and medical researchers are looking for efficient and cost effective ways to enable data analysis and visualization. ParaView is an open-source, multi-platform data analysis and visualization application. It was developed to analyze extremely large datasets using distributed memory computing resources. Attendees will learn the basics of using ParaView, ParaView Catalyst and developing VTK-m worklets with hands-on exercises. The hands-on lab features detailed guidance in implementing solutions in C++, with some Python examples. Attendees should bring laptops to install software and follow along with the demonstrations.

Level: Intermediate
Type: Hands-on Lab
Tags: Visualization - In-Situ & Scientific; Developer - Tools & Libraries; Supercomputing

Day: Friday, 03/20
Time: 09:30 - 10:50
Location: Room 211B

S5895 - Accelerating Computer Vision Algorithms on CUDA-Capable Embedded Platforms

Alexander Smorkalov Senior Software Engineer, Itseez
Alexander Smorkalov is a Senior Software Engineer at Itseez, leading a team of developers maintaining the OpenCV library on mobile and embedded platforms. He received a master's degree from Nizhny Novgorod State University, in Russia. His professional interests include: system programming, computer vision, and performance acceleration of multimedia applications. Alexander is a contributor to OpenCV library development, and has worked on porting it to Android, Windows RT and Embedded Linux platforms.

Computer Vision is becoming a part of everyday life, and is an integral feature of modern smart devices. The OpenCV library (http://opencv.org) provides powerful tools for initial algorithm development, and enables deployment on a wide spectrum of target platforms, ranging from servers to embedded and mobile devices. Despite rapid growth in available computational power, application performance and responsiveness still remain a key challenge. In this hands-on lab with Jetson TK1 dev kits, we will study how an OpenCV-based application can be ported to the NVIDIA Jetson embedded platform, and then accelerated using CUDA technology. We will start with a computational photography application that uses only CPU cores for processing. Next, we will profile the application and see that the CPU is not powerful enough to process high resolution images with acceptable performance. After that, we will replace detected hotspots with CUDA implementations, enabling decent processing speed. Finally, some tips and best practices for cross-platform development will be given.

Level: Beginner
Type: Hands-on Lab
Tags: Embedded Systems; Computer Vision & Machine Vision

Day: Friday, 03/20
Time: 09:30 - 10:50
Location: Room 211A

S5823 - Hands-on Lab: Accessing the GPU from Java

Matthew Peters Software Engineer, IBM
Matthew Peters is a software engineer working directly on the C and C++ code within IBM's J9 Java Virtual Machine. He has worked on various areas of the JVM, including garbage collection and locking, and led the performance test team, analysing performance problems and putting reliable performance data in front of decision makers at many levels within IBM. In the last year Matthew has led the team adding GPU support into the JVM. Matthew holds a BA in Mathematics from University of Cambridge and an MSc in Software Engineering from University of Oxford.

Got your data in the Java heap? Want to use the GPU on it? The language of choice for using the GPU may be C or C++, but you might not have that choice. If your data is already in the Java heap, or you are integrating with an application written in Java, or even if it's just that your favorite programming language is Java... until now your only option if you wanted to use the GPU from Java was to write a lot of ugly Java-to-C interface code. No longer. This hands-on lab will look at two different ways to use IBM's Java 8 JVM to access the GPU from Java on a Power-based system. Using Conway's Game of Life as a simple programming example, you will try out CUDA4J, a Java API to CUDA which allows a Java program to move data between the Java heap and the GPU and invoke kernels written in C in a way that is natural and expressive to Java programmers and minimizes the amount of C code required. You will also try out the new support for automatic parallelisation of Java 8 lambda expressions, where Java code is compiled and run on the GPU with not a line of C code in sight. As this lab will run on a Power+GPU cluster, please make sure you have an SSH client installed on your machine before attending!

Level: Beginner
Type: Hands-on Lab
Tags: Developer - Programming Languages

Day: Friday, 03/20
Time: 13:30 - 14:50
Location: Room 211B

S5850 - Project Tango UnitySDK & C/C++ API Introduction

Jason Guo Developer Relations Engineer, Google
Jason Guo is a Developer Relations Engineer on Google's Project Tango team. He received his master's degree from Carnegie Mellon University's Entertainment Technology Center. Jason has a background in human-computer interaction, digital media, and computer graphics. With a great passion for exploring new interactive experiences, Jason has been involved with multiple AR/VR projects related to depth sensing and 6-DOF motion tracking. Jason is also one of the major contributors to the Project Tango examples and demos.
Chase Cobb Software Engineer, Google
Chase Cobb is a Software Engineer on Google's Project Tango team. Chase has four years of experience in game development and has worked on everything from mobile titles to triple-A console games. His expertise is in mobile game development using the Unity game engine. Over the last year, Chase has been developing the Project Tango Unity SDK and has created many technical demos, as well as sample projects.

UnitySDK Session (40 min.): Part I of this session walks the user through porting an existing game to the Tango platform, using motion tracking, and demonstrating the best developer practices. We will introduce developers to the UI/UX library and demonstrate the benefits it provides in terms of user feedback. C/C++ API Session (40 min.): In Part II, developers will create a JNI-based motion tracking example from an Android Studio skeleton project. We will walk through the Tango API, including configuration, the Android lifecycle, and implementing basic motion tracking functionalities. Seating is limited. Please fill out the form to reserve your seat and a Project Tango device here: Reserve Your Spot

Level: All
Type: Hands-on Lab
Tags: Augmented Reality & Virtual Reality; Computer Vision & Machine Vision; Game Development

Day: Friday, 03/20
Time: 13:30 - 14:50
Location: Room 211A
