GPU Technology Conference

March 24-27, 2014 | San Jose, California
Slidecasts of GTC sessions are available now for conference registrants – please “Sign In” to view.
PDFs of presentation slides will be available by mid-April. Registrants must log in to view slidecasts and PDFs.
For non-registrants, this GTC content will be available at the end of April on GTC On Demand.

GPU Technology Conference Schedule Planner


TALK


S4743 - First In Vivo Medical Images Using Photon-Counting, Real-Time GPU Reconstruction

Augustus Lowell ( VP of Engineering Technology, Triple Ring Technologies )
Gus Lowell has over 20 years of experience in systems and software engineering, product specification development, and product planning, with involvement in the medical device field, semiconductor processing, and internet servers. His background includes expertise in pipelined data processing, image and signal processing algorithm development, distributed processing, embedded systems, real-time and event-driven systems, fault-detection and safety-critical systems, and structured analysis and object-oriented design. He has designed high-speed digital imaging processors and systems, as well as analog/digital data-acquisition, machine-control, and sensor interfaces. He is skilled in a number of assembly languages as well as C++, C, Pascal, and Fortran. Gus has held senior engineering and project management positions at both Fortune 100 and start-up companies, including Abbott Laboratories, Tetris Systems, and NexRay, and has worked in the military space program for the United States Air Force. Gus holds three patents and has authored a number of scientific papers. He received his BS in Electrical Engineering from the Massachusetts Institute of Technology.

Triple Ring Technologies has worked on several generations of a cardiology imaging system. The unique x-ray imaging chain allows for up to 20x radiation exposure reduction and 3D localization. Each generational improvement in image quality required a 10x or more increase in the number of computations required to process the images. With sample rates of nearly 1 Msps and high-density detectors comprising over 200,000 elements, the latest generation system generates 160 billion samples per second and processes them into real-time images useful to a cardiologist. Historically, the processing elements used to achieve required computation rates were created using pipelined parallel processing stages in state-of-the-art FPGAs, or exotic massively-parallel processor arrays. The latest generation of NVIDIA GPUs have changed this. We have recently implemented a novel image processor using an array of nine GPUs. We will show the first cardiac imaging study using this approach.
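
As an illustration of the multi-GPU structure described above, the following is a minimal sketch (not the actual Triple Ring pipeline) of splitting one detector frame across however many GPUs are present, with one stream per device. The kernel name, buffer sizes, and arithmetic are hypothetical placeholders for the real reconstruction stages.

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Placeholder per-sample kernel; the real photon-counting reconstruction
// stages are far more involved and are not public.
__global__ void processSamples(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 0.5f * in[i];   // stand-in arithmetic
}

int main()
{
    int numGpus = 0;
    cudaGetDeviceCount(&numGpus);
    const int samplesPerGpu = 1 << 20;   // one GPU's slice of a detector frame
    std::vector<float*> dIn(numGpus), dOut(numGpus);
    std::vector<cudaStream_t> streams(numGpus);

    // One buffer pair and one stream per device.
    for (int g = 0; g < numGpus; ++g) {
        cudaSetDevice(g);
        cudaMalloc(&dIn[g],  samplesPerGpu * sizeof(float));
        cudaMalloc(&dOut[g], samplesPerGpu * sizeof(float));
        cudaMemset(dIn[g], 0, samplesPerGpu * sizeof(float));
        cudaStreamCreate(&streams[g]);
    }

    // Launch asynchronously on every GPU, then wait for all of them.
    for (int g = 0; g < numGpus; ++g) {
        cudaSetDevice(g);
        processSamples<<<(samplesPerGpu + 255) / 256, 256, 0, streams[g]>>>(
            dIn[g], dOut[g], samplesPerGpu);
    }
    for (int g = 0; g < numGpus; ++g) {
        cudaSetDevice(g);
        cudaStreamSynchronize(streams[g]);
        cudaStreamDestroy(streams[g]);
        cudaFree(dIn[g]);
        cudaFree(dOut[g]);
    }
    printf("Processed one frame slice on %d GPU(s)\n", numGpus);
    return 0;
}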

Session Level: All
Session Type: Talk
Tags: Real-Time Graphics Applications; Medical Imaging & Visualization; Clusters & GPU Management; Computational Physics

Day: Tuesday, 03/25
Time: 13:00 - 13:50
Location: Room LL21F

S4421 - GPU Computing with MATLAB

Andy Thé ( Sr. Product Marketing Manager - Image Processing , MathWorks )
Andy holds a B.S. in Electrical Engineering from Georgia Institute of Technology and a B.A. in Business from Kennesaw State University. Before joining MathWorks, Andy spent 12 years as a field applications engineer focused on embedded processors at Texas Instruments, and 3 years as a software marketing manager for real-time software at IntervalZero.

Learn how to use NVIDIA GPUs to accelerate computationally intensive MATLAB applications in areas such as image processing, signal processing, and computational finance. We will use an image processing example to demonstrate how you can speed up your MATLAB code by using built-in GPU enabled functionality or by replacing key computations with CUDA kernels. We will also illustrate how MATLAB can be used as a development environment and test framework for CUDA kernel evaluation, visualization, and validation.
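
As a hedged illustration of the "replace key computations with CUDA kernels" workflow mentioned above (not the example used in the session), a kernel like the one below can be compiled to PTX with nvcc -ptx and loaded from MATLAB through parallel.gpu.CUDAKernel; the kernel name and point-wise operation are hypothetical.

// imadjust_kernel.cu  --  compile with:  nvcc -ptx imadjust_kernel.cu
// A MATLAB script can load the resulting PTX via parallel.gpu.CUDAKernel and
// pass it gpuArray data; the kernel only assumes a flat single-precision image.
extern "C" __global__
void imadjust(float* img, int n, float gain, float offset)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        img[i] = gain * img[i] + offset;   // simple point-wise adjustment
}

On the MATLAB side, the kernel object's ThreadBlockSize and GridSize properties are set before invoking it with feval on a gpuArray.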

Session Level: Beginner
Session Type: Talk
Tags: Programming Languages & Compilers; Video & Image Processing; Medical Imaging & Visualization

Day: Tuesday, 03/25
Time: 14:00 - 14:50
Location: Room LL21F

S4247 - A GPU-Based Free-Viewpoint Video System for Surgical Training

Pierre Boulanger ( Professor, University of Alberta )
Dr. Boulanger worked for 18 years at the National Research Council of Canada as a senior research officer, where his primary research interests were 3D computer vision, rapid product development, and virtualized reality systems. He now holds a double appointment as a professor in the University of Alberta's Department of Computing Science and its Department of Radiology and Diagnostic Imaging. His main research and teaching focus is virtualized reality systems. He is also principal investigator for stereo IPTV at TRLabs. In 2004, Dr. Boulanger was awarded an iCORE/TRLabs industrial chair in Collaborative Virtual Environments and is now the CISCO chair in healthcare solutions. He has published more than 270 scientific papers in various journals and conferences and is on the editorial board of two major academic journals. Dr. Boulanger serves on many international committees and frequently lectures on rapid product development and virtualized reality. He is the Director of the Advanced Man Machine Interface Laboratory and the scientific director of the Servier Virtual Cardiac Center. On the commercial side, Dr. Boulanger is the president of PROTEUS Consulting Inc., an Alberta-based consulting firm specializing in virtual reality applications.

In this presentation, we propose a novel GPU-based algorithm capable of generating free viewpoints from a network of fixed HD video cameras. This free viewpoint TV system consists of two main sub-systems: a real-time depth estimation sub-system, which extracts a disparity map from a network of cameras, and a synthetic viewpoint generation sub-system that uses the disparity map to interpolate new views between the cameras. In this system, we use a space-sweep algorithm to estimate depth information, which is amenable to parallel implementation. The view generation sub-system generates new synthetic images from 3D vertices and renders them from an arbitrary viewpoint specified by the user. Both steps are computationally intensive, but the computations can easily be decomposed and thus can be efficiently implemented in parallel using CUDA. A surgical training application is presented.
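
To make the space-sweep idea concrete, here is a simplified sketch that assumes two rectified cameras, so that each depth plane reduces to a horizontal pixel shift; the actual system sweeps planes through 3D space using full camera calibration, and all names below are hypothetical.

#include <math.h>

// Simplified plane-sweep cost volume: for every pixel and every depth
// hypothesis d, compare the reference image against the second view shifted
// by d pixels.  Launch with a 2D grid covering the W x H image, e.g.
//   planeSweepCost<<<dim3((W+15)/16,(H+15)/16), dim3(16,16)>>>(...);
__global__ void planeSweepCost(const float* __restrict__ ref,
                               const float* __restrict__ other,
                               float* cost,          // W * H * D cost volume
                               int W, int H, int D)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;

    for (int d = 0; d < D; ++d) {
        int xs = x - d;                   // column in the other view for this depth
        float c = (xs >= 0)
                    ? fabsf(ref[y * W + x] - other[y * W + xs])
                    : 1e30f;              // no overlap: mark as very expensive
        cost[(d * H + y) * W + x] = c;    // per-pixel, per-depth matching cost
    }
}
// The depth map is then the arg-min over d for each pixel, typically after
// aggregating the cost over a small window.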

Session Level: Beginner
Session Type: Talk
Tags: Computer Vision; Video & Image Processing; Virtual & Augmented Reality; Medical Imaging & Visualization; Recommended for All Press

Day: Tuesday, 03/25
Time: 15:00 - 15:25
Location: Room 212B

S4434 - Real-Time 4K JPEG2000 for Broadcast and Digital Cinema

Jiri Matela ( CEO, Comprimato )
Jiri Matela received BSc and MSc degrees in Computer Science from Masaryk University in Brno, Czech Republic, in 2007 and 2009. He is currently working toward a PhD at Masaryk University, focusing on image compression, reformulation of image processing algorithms for massively parallel GPU architectures, and high-speed networks. He is a member of the team that recently received the ACM Multimedia Best Open-Source Software Award for the real-time image compression and video transmission application UltraGrid, and that demonstrated one of the first real-time compressed transmissions of video in 8K Ultra High-Definition resolution. Jiri is the founder of Comprimato Systems, a company focusing on GPU-accelerated image compression and video codecs.

JPEG2000 is the compression standard for digital cinema post-production and an emerging standard for broadcast contribution and archiving. Until now, the JPEG2000 format has been considered computationally too heavy for anything other than standardized applications such as cinema distribution. We present a successful GPU design and implementation of a JPEG2000 codec that allows real-time film compression and decompression in digital cinema and broadcast applications. Fast GPU processing will help to further spread JPEG2000 as an archiving and mezzanine format.

Session Level: Intermediate
Session Type: Talk
Tags: Media & Entertainment Summit; Video & Image Processing; Medical Imaging & Visualization

Day: Tuesday, 03/25
Time: 17:30 - 17:55
Location: Room 211B

S4636 - Deep Neural Networks for Visual Pattern Recognition

Dan Ciresan ( Researcher, IDSIA )
Dr. Dan Ciresan received his PhD from "Politehnica" University of Timisoara, Romania. He first worked as a postdoc before becoming a senior researcher at IDSIA, Switzerland. Dr. Ciresan is one of the pioneers of using CUDA for Deep Neural Networks (DNNs). His methods have won five international competitions on topics such as classifying traffic signs, recognizing handwritten Chinese characters, segmenting neuronal membranes in electron microscopy images, and detecting mitosis in breast cancer histology images. Dr. Ciresan has published his results in top-ranked conference proceedings and journals. His DNNs have significantly improved the state of the art on several image classification tasks.

GPU-optimized Deep Neural Networks (DNNs) excel on image classification, detection and segmentation tasks. They are the current state-of-the-art method for many visual pattern recognition problems, often by a significant margin. DNNs are already better than humans at recognizing handwritten digits and traffic signs. Complex handwritten Chinese characters are recognized with almost human performance. DNNs are successfully used for automotive problems like traffic sign and pedestrian detection; they are fast and extremely accurate. DNNs help the field of connectomics by making it possible to segment and reconstruct the neuronal connections in large sections of brain tissue for the first time. This will bring a new understanding of how biological brains work. Detecting mitotic cells in breast cancer histology images can be done quickly and efficiently with DNNs. Segmenting blood vessels from retinal images with DNNs helps diagnosticians to detect glaucoma.

Session Level: Beginner
Session Type: Talk
Tags: Computer Vision; Machine Learning & AI; Medical Imaging & Visualization; Automotive

Day: Tuesday, 03/25
Time: 17:30 - 17:55
Location: Room LL21B

S4259 - Optimization of a CUDA-based Monte Carlo Code for Radiation Therapy

Nick Henderson ( Postdoctoral Scholar, Stanford University, Institute for Computational and Mathematical Engineering )
Nick Henderson is a Postdoctoral scholar with the CUDA Center of Excellence in the Institute for Computational and Mathematical Engineering at Stanford University.

Learn about optimization efforts in G4CU, a CUDA Monte Carlo code for radiation therapy. G4CU is based on the core algorithm and physics processes in Geant4, a toolkit for simulating particles traveling through and interacting with matter. The techniques covered will include the use of texture references for look-up tables, device configuration for different simulation components, and scheduling of work for different particle types.
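
The talk mentions texture reads for look-up tables; the sketch below shows the general pattern with the texture-object API, binding a hypothetical cross-section table to a 1D texture and fetching it from a kernel. It is an illustration of the technique, not code from G4CU.

#include <cuda_runtime.h>
#include <cstdio>

// Look up a (hypothetical) cross-section table through the texture path.
__global__ void lookupKernel(cudaTextureObject_t lut, const int* bin, float* xs, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        xs[i] = tex1Dfetch<float>(lut, bin[i]);   // cached, read-only fetch
}

int main()
{
    const int nBins = 1024, nParticles = 4096;
    float *dTable, *dXs; int *dBin;
    cudaMalloc(&dTable, nBins * sizeof(float));
    cudaMalloc(&dBin, nParticles * sizeof(int));
    cudaMalloc(&dXs, nParticles * sizeof(float));
    cudaMemset(dTable, 0, nBins * sizeof(float));
    cudaMemset(dBin, 0, nParticles * sizeof(int));

    // Describe the linear device buffer as a 1D texture of floats.
    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeLinear;
    res.res.linear.devPtr = dTable;
    res.res.linear.desc = cudaCreateChannelDesc<float>();
    res.res.linear.sizeInBytes = nBins * sizeof(float);
    cudaTextureDesc td = {};
    td.readMode = cudaReadModeElementType;
    cudaTextureObject_t lut = 0;
    cudaCreateTextureObject(&lut, &res, &td, nullptr);

    lookupKernel<<<(nParticles + 255) / 256, 256>>>(lut, dBin, dXs, nParticles);
    cudaDeviceSynchronize();

    cudaDestroyTextureObject(lut);
    cudaFree(dTable); cudaFree(dBin); cudaFree(dXs);
    printf("done\n");
    return 0;
}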

Session Level: Intermediate
Session Type: Talk
Tags: Computational Physics; Medical Imaging & Visualization; Numerical Algorithms & Libraries

Day: Wednesday, 03/26
Time: 09:00 - 09:25
Location: Room 212A

S4422 - A New GPU-Based Level Set Method for Medical Image Segmentation

Wenzhe Xue ( Research Assistant, Mayo Clinic Arizona; Arizona State University )
Wenzhe Xue is working towards his Ph.D. in Biomedical Informatics at ASU, and is a research assistant in the Medical Imaging Informatics (MII) lab at Mayo Clinic Arizona, under the supervision of Dr. Ross Mitchell. Wenzhe works on developing novel GPU-based level set methods for medical image segmentation and validating them on both synthetic and real clinical image data. He aims to provide an accurate, precise, and fast tool for quantitative imaging in cancer treatment research and studies.

We have developed a new approach to measure lesion volumes in medical images using GPU programming. The approach is based on the level set method and minimizes the number of voxels included in the computational domain with unique efficiency. The underlying cost function and specifics of the level sets approach are not limited by the implementation, and multiple methods for determining the boundary progression speed are possible. We have experimented with intensity-based approaches as well as higher-order feature spaces using multiple image contrasts. We have tested our approach on synthetic images and in a clinical setting. GPU programming also enables real-time 3D rendering and visualization of the propagating level set surface volume. This GPU-enabled combination of speed and interactivity makes our approach an excellent candidate for use in oncology where change in tumor volume guides clinical decision making and assessment of treatment effectiveness.
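
A minimal sketch of the kind of update such a method performs, assuming an explicit time step and a precomputed list of "active" voxels (the restricted computational domain the abstract refers to); the actual cost function, speed terms, and narrow-band bookkeeping are not shown, and all names are hypothetical.

#include <math.h>

// One explicit level-set update over the active voxel list only.
// phiIn/phiOut: level set function (double-buffered to avoid read/write races),
// speed: per-voxel speed F, active: flat indices of voxels in the domain.
// Launch with one thread per active voxel.
__global__ void levelSetStep(const float* __restrict__ phiIn, float* phiOut,
                             const float* __restrict__ speed,
                             const int* __restrict__ active, int nActive,
                             int nx, int ny, int nz, float dt)
{
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= nActive) return;
    int idx = active[k];
    int x = idx % nx, y = (idx / nx) % ny, z = idx / (nx * ny);
    if (x < 1 || y < 1 || z < 1 || x >= nx - 1 || y >= ny - 1 || z >= nz - 1) return;

    // Central differences for |grad phi|.
    float gx = 0.5f * (phiIn[idx + 1]       - phiIn[idx - 1]);
    float gy = 0.5f * (phiIn[idx + nx]      - phiIn[idx - nx]);
    float gz = 0.5f * (phiIn[idx + nx * ny] - phiIn[idx - nx * ny]);
    float gradMag = sqrtf(gx * gx + gy * gy + gz * gz);

    // Level set evolution: dphi/dt = F * |grad phi|.
    phiOut[idx] = phiIn[idx] + dt * speed[idx] * gradMag;
}
// Voxels outside the active list keep their previous phi; a full implementation
// copies them forward and maintains the narrow band explicitly.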

Session Level: Beginner
Session Type: Talk
Tags: Medical Imaging & Visualization; Video & Image Processing; Combined Simulation & Real-Time Visualization; Recommended Press Session – HPC-Science; Recommended for All Press

Day: Wednesday, 03/26
Time: 09:00 - 09:25
Location: Room LL21B

S4270 - Computation of Mutual Information Metric for Image Registration on Multiple GPUs

Andrew Adinetz ( Researcher, Julich Supercomputing Centre, Forschungszentrum Jülich )
Andrew V. Adinetz received his M.S. degree in Computer Science in 2006 from Lomonosov Moscow State University, and his Ph.D. in Computer Science in 2009, also from MSU. He is currently working as a researcher at Forschungszentrum Jülich (NVidia Application Lab, Jülich Supercomputing Centre). His current research interests include GPU programming, algorithm design for many-core architectures, high-performance computing, and programming languages.

Because of their computational power, GPUs are widely used in the field of image processing. While registration of brain images has previously been accelerated with GPUs, registration of human brain images presents new challenges due to the large amount of data, with images not fitting in the memory of a single device. We present how we address these challenges with a multi-GPU approach, and describe in detail how we overcome the challenges arising from highly irregular communication during metric computation. Our evaluation demonstrates that adequate performance is achieved with multiple GPUs even with a high volume of communication.
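
The mutual information metric is driven by a joint intensity histogram of the two images; the sketch below shows a common single-GPU pattern for building it (per-block shared-memory histograms merged with global atomics). The multi-GPU partitioning and communication scheme discussed in the talk is not reproduced here, and the bin count and image type are assumptions.

#include <cuda_runtime.h>

#define BINS 32   // joint histogram of BINS x BINS intensity bins (assumed)

// Build the joint histogram of a fixed and a (transformed) moving image.
// Each block accumulates a partial histogram in shared memory, then merges
// it into the global histogram with atomics.
__global__ void jointHistogram(const unsigned char* __restrict__ fixedImg,
                               const unsigned char* __restrict__ movingImg,
                               unsigned int* hist, int n)
{
    __shared__ unsigned int sh[BINS * BINS];
    for (int i = threadIdx.x; i < BINS * BINS; i += blockDim.x)
        sh[i] = 0;
    __syncthreads();

    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x) {
        int bf = fixedImg[i]  * BINS / 256;   // map 8-bit intensity to a bin
        int bm = movingImg[i] * BINS / 256;
        atomicAdd(&sh[bf * BINS + bm], 1u);
    }
    __syncthreads();

    for (int i = threadIdx.x; i < BINS * BINS; i += blockDim.x)
        atomicAdd(&hist[i], sh[i]);
}
// Mutual information then follows from the normalized joint and marginal
// distributions:  MI = sum_ij p(i,j) * log( p(i,j) / (p_f(i) * p_m(j)) ).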

Session Level: Beginner
Session Type: Talk
Tags: Medical Imaging & Visualization; Recommended Press Session – HPC-Science; Recommended for All Press

Day: Wednesday, 03/26
Time: 09:30 - 09:55
Location: Room LL21B

S4148 - Experiences Porting Real Time Signal Processing Pipeline CUDA Kernels to Kepler and Windows 8

Ismayil Guracar ( Senior Key Expert, Siemens Medical Solutions USA, Inc. Ultrasound Business Unit )
Ismayil Guracar has been working in the ultrasound imaging field for over 27 years. He is currently a Senior Key Expert with the Innovations Applications Group at Siemens Healthcare, Ultrasound Business Unit in Mountain View, California. His interests include ultrasound image formation and high performance real time signal processing, especially using GPUs. He holds 64 US patents and has pioneered new ultrasound technologies in the areas of parametric and molecular imaging and contributed to the development of many successful diagnostic medical imaging products.

The move to the Kepler generation of GPU cards created new challenges and opportunities for an existing medical ultrasound imaging product with high-performance real-time signal processing kernels based on CUDA running on the Fermi-based Quadro 2000 and WinXP. The initial port to Kepler and Win8 only required a new driver; however, significant degradation in execution speed was noted compared to the earlier generation. I will show how various causes of the slowdown were identified and the strategies we developed, including increasing instruction-level parallelism (ILP), to refactor kernels to achieve the full potential of the Kepler architecture.
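
As a hedged illustration of the ILP strategy mentioned above (not one of the actual Siemens kernels), compare a one-element-per-thread kernel with a variant that gives each thread four independent elements, so the scheduler can overlap the loads and arithmetic and hide latency.

// Before: one element per thread (little instruction-level parallelism).
__global__ void scaleOnePerThread(const float* in, float* out, int n, float g)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = g * in[i];
}

// After: four independent elements per thread.  The independent loads and
// multiplies can be overlapped by the hardware, improving throughput on Kepler.
__global__ void scaleFourPerThread(const float* in, float* out, int n, float g)
{
    int i = 4 * (blockIdx.x * blockDim.x + threadIdx.x);
    float a0 = (i + 0 < n) ? in[i + 0] : 0.f;
    float a1 = (i + 1 < n) ? in[i + 1] : 0.f;
    float a2 = (i + 2 < n) ? in[i + 2] : 0.f;
    float a3 = (i + 3 < n) ? in[i + 3] : 0.f;
    if (i + 0 < n) out[i + 0] = g * a0;
    if (i + 1 < n) out[i + 1] = g * a1;
    if (i + 2 < n) out[i + 2] = g * a2;
    if (i + 3 < n) out[i + 3] = g * a3;
}
// The second kernel is launched with a quarter as many threads, e.g.
//   scaleFourPerThread<<<(n/4 + 255)/256, 256>>>(d_in, d_out, n, gain);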

Session Level: Intermediate
Session Type: Talk
Tags: Medical Imaging & Visualization; Signal & Audio Processing; Performance Optimization

Day: Wednesday, 03/26
Time: 10:00 - 10:50
Location: Room LL21B

S4342 - CUDA-Accelerated MATLAB without Parallel Computing Toolbox for 3D Medical Image Segmentation

Jung W. Suh ( Senior Research Scientist, KLA-Tencor )
Jung W. Suh is a senior algorithm engineer and research scientist at KLA-Tencor. Dr. Suh received his Ph.D. from Virginia Tech in 2007 for his 3D medical image processing work. He was involved in the development of MPEG-4 and Digital Mobile Broadcasting (DMB) systems at Samsung Electronics, and was a senior scientist at HeartFlow, Inc., prior to joining KLA-Tencor. His research interests are in the fields of biomedical image processing, pattern recognition, machine learning, and image/video compression. He has more than 30 journal and conference papers and 6 patents.

Learn how to accelerate your MATLAB code using CUDA without the Parallel Computing Toolbox. Although the Parallel Computing Toolbox is useful for acceleration, it may not be accessible to every MATLAB user and may have limitations in fully exploiting the power of both MATLAB and CUDA. For general acceleration of MATLAB applications, GPU utilization through c-mex provides more flexibility and power in many situations. This session will walk through the MATLAB implementation of atlas-based 3D hippocampus segmentation for MRI images as an example. Atlas-based segmentation is widely used in neuroimage analysis due to its reliable segmentation results, even for challenging target objects with ambiguous and complicated boundaries. However, it requires high computational power because 3D image registration is used during the segmentation process. This session will show each step of CUDA optimization of our atlas-based segmentation MATLAB code, from profiling to CUDA conversion through c-mex.
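
The c-mex route described above generally pairs a CUDA kernel with a mex gateway that moves the MATLAB array to the device, runs the kernel, and copies the result back. The sketch below shows that pattern with a trivial stand-in kernel; the names are hypothetical and the real registration/segmentation kernels are far more involved. The .cu file is compiled with nvcc and linked into a MEX binary.

#include "mex.h"
#include <cuda_runtime.h>

// Hypothetical kernel standing in for one step of the segmentation pipeline.
__global__ void scaleKernel(float* x, int n, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

// c-mex gateway:  y = cuda_scale(single(x), s)
void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[])
{
    if (nrhs != 2 || !mxIsSingle(prhs[0]))
        mexErrMsgIdAndTxt("cuda_scale:args", "Usage: cuda_scale(single(x), s)");

    mwSize n = mxGetNumberOfElements(prhs[0]);
    float  s = (float)mxGetScalar(prhs[1]);

    // Output array, initialized as a copy of the input.
    plhs[0] = mxDuplicateArray(prhs[0]);
    float* h = (float*)mxGetData(plhs[0]);

    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    scaleKernel<<<((int)n + 255) / 256, 256>>>(d, (int)n, s);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
}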

Session Level: Intermediate
Session Type: Talk
Tags: Medical Imaging & Visualization; Video & Image Processing; Computer Vision

Day: Wednesday, 03/26
Time: 14:00 - 14:25
Location: Room LL21B

S4886 - Working with the Latest Oculus Rift Hardware and Software

Michael Antonov ( Chief Software Architect, Oculus )
Michael was co-founder and CTO of Scaleform, the #1 user interface (UI) technology provider in the video game market, which was acquired by Autodesk in March 2011. At Scaleform, he was the lead architect of the Scaleform GFx hardware-accelerated Flash vector graphics engine. Michael is an expert in complex multi-threaded architecture, computer graphics, programming language design, and engineering management.

Oculus VR is revolutionizing the way people experience 3D worlds. The company's first product, Oculus Rift, is a virtual reality headset that allows users to step inside virtual environments. It provides an immersive, stereoscopic 3D experience with an ultra-wide field of view and super-low-latency head tracking. Since the debut of the Oculus Rift development kit at Game Developers' Conference 2013, Oculus has added a high-definition display, positional tracking and low-persistence support. Also, we've made critical improvements to the Oculus SDK, adding new features while making things simpler and reducing latency. In this talk, we'll discuss everything you need to get started integrating the latest Oculus Rift hardware with your 3D environment. The talk includes an overview of the latest hardware, a technical breakdown for engineers and a game design discussion. We'll also talk about our vision for future hardware development leading to the consumer Rift.

Session Level: All
Session Type: Talk
Tags: Media & Entertainment Summit; Virtual & Augmented Reality; Defense; Medical Imaging & Visualization

Day: Wednesday, 03/26
Time: 14:00 - 14:25
Location: Room 211B

S4206 - Hardware and Software Design for a 1000 FPS Real-Time Soft-Field Tomography System

Patrik Gebhardt ( Ph.D. Student, Ruhr-University Bochum )
Patrik is a Ph.D. student at the Institute of Electronic Circuits researching the combination of several tomographic imaging techniques for determining the volume fractions of the components of multi-phase flows. During his studies of Electrical Engineering, he developed GPU-based PIC and FDTD software for plasma and metamaterial simulation.

See how to build a high-speed, low-latency real-time measurement system for industrial process tomography based on Electrical Impedance Tomography, capable of generating more than 1000 cross-sectional images of a pipe per second, using (A) FPGAs for data acquisition from several ADCs and preprocessing in parallel, and (B) GPUs for solving the underlying PDE and reconstructing these images with a latency of approximately 50 ms. Examples of the signal processing algorithms as well as the methods used to accelerate the reconstruction process on GPUs will be given.
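
The GPU part of such a pipeline typically reduces to repeatedly applying the sparse system matrix that comes from discretizing the EIT forward problem. A hedged sketch of the basic building block, a CSR sparse matrix-vector product with one thread per row, is shown below; the actual solver and reconstruction algorithm from the talk are not reproduced.

// Sparse matrix-vector product y = A*x in CSR format -- the core building
// block of conjugate-gradient style solvers for the discretized EIT problem.
// One thread per matrix row (a simple variant; more elaborate schemes assign
// a warp per row).  Launch with nRows threads in total.
__global__ void spmvCsr(const int* __restrict__ rowPtr,
                        const int* __restrict__ colIdx,
                        const float* __restrict__ val,
                        const float* __restrict__ x,
                        float* y, int nRows)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= nRows) return;
    float sum = 0.f;
    for (int j = rowPtr[row]; j < rowPtr[row + 1]; ++j)
        sum += val[j] * x[colIdx[j]];
    y[row] = sum;
}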

Session Level: Beginner
Session Type: Talk
Tags: Signal & Audio Processing; Medical Imaging & Visualization

Day: Wednesday, 03/26
Time: 14:30 - 14:55
Location: Room 210D

S4364 - Enabling Real-Time Cancer Research Tools: Accelerating Analysis of Cell Responses to Chemotactic Gradients

Jimmy Pettersson ( GPU Computing Specialist, High Performance Consulting )
Jimmy has over 4 years of experience programming GPUs in areas ranging from signal & image processing to finance and medical applications.

Learn how we used CUDA to accelerate cancer research by building a complete real-time automated analysis tool for research scientists. By shortening an analysis process from days to less than a minute, we're enabling scientists to spend more time focusing on their work: cancer research, molecular drug screening on a cellular level, and more. The talk will also get into some of the computational challenges and algorithm design opportunities that were seized upon.

Session Level: Intermediate
Session Type: Talk
Tags: Medical Imaging & Visualization; Bioinformatics & Genomics

Day: Wednesday, 03/26
Time: 14:30 - 14:55
Location: Room LL21B

S4645 - Scalable Rendering Architecture: Challenges & Approaches

Ketan Mehta ( Principal Software Engineer, Vital Images Inc )
Ketan has been working on and leading rendering at Vital Images for 7+ years, during which the technology moved from standalone workstations to enterprise rendering.

Learn about challenges in deploying medical imaging advanced visualization in enterprise data-centers and the impacts of virtualization. This talk will discuss and dispel some myths about virtualization with GPUs. The talk will also cover key challenges of designing scalable rendering architecture that can support tens to hundreds of users, focusing on system and architecture challenges along with software design concerns that need to be addressed. Active participation and sharing of experiences from the audience is welcome and encouraged.

Session Level: Intermediate
Session Type: Talk
Tags: Medical Imaging & Visualization; Remote Graphics & Cloud-Based Graphics; Clusters & GPU Management

Day: Wednesday, 03/26
Time: 15:00 - 15:25
Location: Room LL21B

S4513 - GPU Acceleration of Processing and Visualization for Various Optical Coherence Tomography Methodologies

Kevin Wong ( Graduate Student Researcher, Simon Fraser University )
Kevin Wong is a Master’s Student in the Biomedical Optics Research Group at Simon Fraser University. He received his Bachelor of Applied Science degree in Biomedical Engineering at Simon Fraser University. He developed his interest in GPU computing during a research project on the acceleration of Fourier Domain Optical Coherence Tomography processing. His graduate research concentrates on further improving high performance computing by exploring the potential of multi-GPU solutions for the FDOCT processing pipelines.

The goal of this session is to explore the many GPU computing applications for accelerating the processing pipeline and visualization algorithms in Fourier Domain Optical Coherence Tomography (FDOCT) for medical applications, such as ophthalmic imaging. We will describe the GPU programming techniques that we used for accelerating and optimizing real-time FDOCT processing, which is currently the fastest GPU implementation for FDOCT to the best of our knowledge. We will demonstrate two additional novel variations of functional OCT imaging made possible by GPU acceleration: real time speckle variance OCT (svOCT) to visualize the vasculature network in the retina, and wavefront sensorless adaptive optics OCT for ultrahigh resolution volumetric imaging. We will present videos to illustrate the use of our GPU-based FDOCT processing and the imaging applications in a clinical environment.
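
FDOCT processing is dominated by a per-A-line Fourier transform, which on the GPU is naturally expressed as one batched cuFFT plan per frame. The sketch below shows only that step, with assumed line and batch sizes; the group's full pipeline (spectral resampling, dispersion compensation, log compression, volume rendering) is not shown. Link against cuFFT (nvcc ... -lcufft).

#include <cufft.h>
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    const int samplesPerAline = 2048;   // spectrometer samples per A-scan (assumed)
    const int alinesPerFrame  = 512;    // A-scans batched into one B-scan (assumed)

    cufftComplex* d_data = nullptr;
    cudaMalloc(&d_data, sizeof(cufftComplex) * samplesPerAline * alinesPerFrame);
    cudaMemset(d_data, 0, sizeof(cufftComplex) * samplesPerAline * alinesPerFrame);

    // One batched plan transforms every A-line of the frame in a single call.
    cufftHandle plan;
    cufftPlan1d(&plan, samplesPerAline, CUFFT_C2C, alinesPerFrame);
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);   // in-place FFT
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(d_data);
    printf("Transformed %d A-lines\n", alinesPerFrame);
    return 0;
}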

Session Level: All
Session Type: Talk
Tags: Medical Imaging & Visualization

Day: Wednesday, 03/26
Time: 15:30 - 15:55
Location: Room LL21B

S4363 - Accelerated X-Ray Imaging: Real-Time Multi-Plane Image Reconstruction with CUDA

Prashanth Bhat ( CTO, Manipal Dot Net Pvt. Ltd. )
Dr. Prashanth Bhat is Chief Technology Officer and Executive Director (Software) at Manipal Dot Net Pvt. Ltd., India, a technology outsourcing company that takes on software development and hardware design projects for clients worldwide. His areas of expertise include high-performance parallel computing, GPU acceleration using CUDA, search engine technology, and embedded systems. Prior to joining Manipal Dot Net in 2007, he worked in the search engine industry for over eight years, during his tenures at Yahoo! Inc (USA), Overture Services, and Alta Vista Search. In these roles, he contributed to the core search engine, the Contextual Match advertising infrastructure, and a distributed machine learning architecture. As a summer intern at HP Research Labs, Palo Alto, he developed new process scheduling techniques for HP's high-end parallel servers. Dr. Bhat graduated with a PhD in Computer Engineering from the University of Southern California, Los Angeles. He holds 3 US patents in the field of high-performance computing and search engines, and has authored about 15 international publications.

Explore the realm of modern X-ray Fluoroscopy, where ever-increasing data rates and computational requirements are the norm. This session presents an efficient and scalable CUDA solution for multi-plane image reconstruction, an essential yet computationally challenging component of these systems. Our parallelization strategy incorporates several non-trivial techniques to improve performance: (a) reduce run-time computations by using pre-computed LUTs; (b) reduce memory bandwidth consumption by accumulating computations in registers before writing to memory; (c) exploit 2D data locality by using the GPU's texture memory and cache; (d) optimize occupancy by tuning the thread-block configuration. We present experimental results on three Kepler GPUs: GeForce GTX690, Tesla K10, and Tesla K20. On the GTX690, we show real-time rates of 15 fps for 32 1000x1000 image planes, with speed-ups of 6000x over a CPU implementation, and 10x over an alternative CUDA approach. On both Tesla GPUs, we show linear scaling, making a multi-GPU solution viable.
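
A hedged sketch of two of the listed techniques, the precomputed geometry LUT (a) and register accumulation before writing to memory (b); the data layout, tap count, and names are assumptions, not the authors' implementation.

// Back-project one detector frame into PLANES output planes per launch.
// lutSrc/lutW hold precomputed geometry: which detector samples contribute
// to each output pixel and with what weight.  Contributions are accumulated
// in registers and written to global memory once per plane.
#define PLANES 4   // planes handled per thread (register-resident accumulators)
#define TAPS   8   // detector samples per output pixel (assumed)

__global__ void reconstructPlanes(const float* __restrict__ frame,
                                  const int*   __restrict__ lutSrc, // [PLANES][nPix][TAPS]
                                  const float* __restrict__ lutW,   // [PLANES][nPix][TAPS]
                                  float* out,                       // [PLANES][nPix]
                                  int nPix)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;   // output pixel index
    if (p >= nPix) return;

    float acc[PLANES] = {0.f};                       // register accumulators
    for (int k = 0; k < PLANES; ++k) {
        int base = (k * nPix + p) * TAPS;
        for (int t = 0; t < TAPS; ++t)
            acc[k] += lutW[base + t] * frame[lutSrc[base + t]];  // LUT-driven gather
    }
    for (int k = 0; k < PLANES; ++k)
        out[k * nPix + p] = acc[k];                  // one coalesced store per plane
}

The occupancy tuning (d) then amounts to balancing PLANES (register pressure) against the thread-block size.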

Session Level: All
Session Type: Talk
Tags: Medical Imaging & Visualization; Video & Image Processing; Ray Tracing

Day: Wednesday, 03/26
Time: 16:00 - 16:50
Location: Room LL21B

S4297 - Optimization Opportunities and Pitfalls when Implementing High Performance 2D Convolutions

Ian Wainwright ( GPU Computing Specialist and Consultant, High Performance Consulting Sweden )
Highly-Rated Speaker
Ian Wainwright works as a software consultant in GPU computing, mainly with medical imaging, signal processing for the aerospace and defence industry, and finance. He received a master's in engineering physics with a major in computational science from Uppsala University, Sweden.

Learn how to develop high performance 2D convolutions using Kepler specific features, such as warp-shuffle and __restrict__ pointers. Alternative strategies, such as FFT-based and shared memory-based implementations and their disadvantages, will also be presented.
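
As a hedged sketch of the warp-shuffle and __restrict__ techniques the talk covers (not the speaker's code), the kernel below performs a 5-tap row convolution in which each thread keeps its pixel in a register and obtains its neighbours' pixels through warp shuffles, falling back to a global load only at warp and image borders. The talk targets Kepler's __shfl intrinsic; the _sync spelling used here is the current form of the same operation.

__constant__ float c_filter[5];   // 5-tap filter, uploaded with cudaMemcpyToSymbol

// Launch with blockDim.x == 32 so each warp covers 32 contiguous pixels of a row,
// e.g. rowConvolve<<<dim3((W+31)/32,(H+7)/8), dim3(32,8)>>>(d_in, d_out, W, H);
__global__ void rowConvolve(const float* __restrict__ in,
                            float* __restrict__ out, int width, int height)
{
    int x = blockIdx.x * 32 + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int lane = threadIdx.x;                        // 0..31, lane within the warp

    // Clamp the load so every lane participates in the shuffles below.
    int xc = min(x, width - 1);
    int yc = min(y, height - 1);
    float center = in[yc * width + xc];

    float sum = 0.f;
    #pragma unroll
    for (int k = -2; k <= 2; ++k) {
        // Neighbour's pixel via register exchange within the warp.
        float v = __shfl_sync(0xffffffffu, center, lane + k);
        // At warp or image borders the neighbour is not in this warp: reload.
        if (lane + k < 0 || lane + k > 31 || x + k < 0 || x + k >= width)
            v = in[yc * width + min(max(x + k, 0), width - 1)];
        sum += c_filter[k + 2] * v;
    }
    if (x < width && y < height)
        out[y * width + x] = sum;
}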

Session Level: Advanced
Session Type: Talk
Tags: Video & Image Processing; Signal & Audio Processing; Medical Imaging & Visualization

Day: Thursday, 03/27
Time: 10:00 - 10:25
Location: Room 210E


PANEL


S4633 - Real-Time Functional Brain Imaging: How GPU Acceleration Redefines Each Stage

Adam Gazzaley ( Associate Professor , UCSF )
Dr. Adam Gazzaley obtained an M.D. and a Ph.D. in Neuroscience at the Mount Sinai School of Medicine in New York, completed clinical residency in Neurology at the University of Pennsylvania, and postdoctoral training in cognitive neuroscience at UC Berkeley. He is the founding director of the Neuroscience Imaging Center at the UC San Francisco, an Associate Professor in Neurology, Physiology and Psychiatry, and Principal Investigator of a cognitive neuroscience laboratory. His laboratory studies neural mechanisms of perception, attention and memory, with an emphasis on the impact of distraction and multitasking on these abilities. His unique research approach utilizes a powerful combination of human neurophysiological tools, including functional magnetic resonance imaging (fMRI), electroencephalography (EEG) and transcranial stimulation (TES). A major accomplishment of his research has been to expand our understanding of alterations in the aging brain that lead to cognitive decline. His most recent studies explore how we may enhance our cognitive abilities via engagement with custom designed video games, neurofeedback and TES. Dr. Gazzaley has authored over 80 scientific articles, delivered over 300 invited presentations around the world, and his research and perspectives have been consistently profiled in high-impact media, such as The New York Times, New Yorker, Wall Street Journal, TIME, Discover, Wired, PBS, NPR, CNN and NBC Nightly News. Recently, he wrote and hosted the nationally televised, PBS-sponsored special “The Distracted Mind with Dr. Adam Gazzaley”. Awards and honors for his research include the Pfizer/AFAR Innovations in Aging Award, the Ellison Foundation New Scholar Award in Aging, and the Harold Brenner Pepinsky Early Career Award in Neurobehavioral Science.
Tim Mullen ( Ph.D Candidate, Swartz Center for Computational Neuroscience, Institute for Neural Computation, UC San Diego )
Tim Mullen was born in La Plata, Argentina and raised throughout the Indian subcontinent, South America, and Europe. He obtained dual B.A.s in Computer Science (high honors) and Cognitive Neuroscience (highest honors) from UC Berkeley ('08). At Palo Alto Research Center he developed novel applications of wearable brain-computer interface technology for human-computer interaction. He then obtained the M.S. in Cognitive Science from UC San Diego ('11) where he is completing his Ph.D at the Institute for Neural Computation. The recipient of multiple fellowships, his research includes the application of dynamical systems analysis and statistical machine learning theory to human electrophysiological recordings with the goal of improving detection and prediction of both cognitive and neuropathological events. Mullen is a firm believer in translating theoretical results into practical application. He is actively involved in advancing open-source scientific software, such as the widely used EEGLAB platform, as well as wearable neurotechnology and brain-computer interfaces. An avid musician, Mullen has also developed a number of new-media installations and performances that invite the participant to explore intricate relationships among mind, matter, and music. He is currently an Artistic Partner for classical arts organization Mainly Mozart, where he directs the annual Mozart & the Mind Festival.
Christian Kothe ( Staff Research Associate , UCSD )
Christian Kothe received the M.S. degree in computer science from the Berlin Institute of Technology, Berlin, Germany, in 2009. Since 2010, he has been a Research Associate at the Swartz Center for Computational Neuroscience, University of California San Diego, La Jolla. His current research interests include machine learning, signal processing, optimization, and their application to neuroscience problems such as source-space modeling and in particular cognitive state assessment.
Oleg Konings ( Algorithm Engineer, Gazzaley Lab at UCSF )
Equity option floor trader at the Pacific Stock Exchange (9 years). High-frequency trading algorithm development (8 years). Specialist in converting serial CPU dynamic programming and graph algorithms to CUDA GPU implementations (see his GitHub site for examples). Division I TopCoder algorithm competitor (blue, rated 1469; currently ranked in the top 15% in the United States).

Learn how massively parallel CPU-GPU architectures and distributed optimization algorithms are advancing the state of the art in real-time non-invasive electroencephalography (EEG) and brain-machine interfaces (BCI), offering new perspectives on how we study and interface with the human brain. Specifically, we will discuss recent efforts to accelerate key computationally intensive inference problems. These include accurate neuronal source reconstruction, large-scale dynamical system identification, graph-theoretic connectivity analysis, and statistical machine learning for improved neuronal and cognitive state inference. We will examine distributed implementations of Alternating Direction Method of Multipliers (ADMM) convex optimization, using cuBLAS and custom CUDA kernels. Among these, a CUDA implementation of the sum-of-norms regularization (group lasso) will be discussed and compared to a serial C++ implementation and an optimized multi-core CPU MATLAB implementation.
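
For the group lasso, the ADMM z-update is a block (group-wise) soft-thresholding, which maps naturally to one thread block per group. The sketch below is a generic illustration of that step under the usual scaled-form ADMM notation, not the authors' implementation, and assumes a fixed block size of 256 threads; the x-update (a regularized least-squares solve using cuBLAS) is not shown.

#include <math.h>

// ADMM z-update for the group lasso (sum-of-norms regularizer):
//   z_g = max(0, 1 - kappa / ||v_g||) * v_g     with v = x + u
// One thread block per group; the group's Euclidean norm is computed with a
// shared-memory reduction, then every element of the group is scaled.
__global__ void groupSoftThreshold(const float* __restrict__ v,
                                   float* z,
                                   const int* __restrict__ groupStart, // nGroups+1 offsets
                                   float kappa)
{
    int g = blockIdx.x;
    int lo = groupStart[g], hi = groupStart[g + 1];

    __shared__ float partial[256];
    float s = 0.f;
    for (int i = lo + threadIdx.x; i < hi; i += blockDim.x)
        s += v[i] * v[i];
    partial[threadIdx.x] = s;
    __syncthreads();
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            partial[threadIdx.x] += partial[threadIdx.x + stride];
        __syncthreads();
    }
    float norm = sqrtf(partial[0]);
    float scale = (norm > kappa) ? (1.f - kappa / norm) : 0.f;

    for (int i = lo + threadIdx.x; i < hi; i += blockDim.x)
        z[i] = scale * v[i];
}
// Launch with one 256-thread block per group:
//   groupSoftThreshold<<<nGroups, 256>>>(d_v, d_z, d_groupStart, kappa);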

Session Level: Intermediate
Session Type: Panel
Tags: Medical Imaging & Visualization; Real-Time Graphics Applications; Performance Optimization

Day: Wednesday, 03/26
Time: 17:00 - 17:50
Location: Room LL21B


KEYNOTE


S4780 - Keynote: Video Games and the Future of Cognitive Enhancement

Adam Gazzaley ( Associate Professor, UCSF )
Dr. Adam Gazzaley obtained an M.D. and a Ph.D. in Neuroscience at the Mount Sinai School of Medicine in New York, completed clinical residency in Neurology at the University of Pennsylvania, and postdoctoral training in cognitive neuroscience at UC Berkeley. He is the founding director of the Neuroscience Imaging Center at the UC San Francisco, an Associate Professor in Neurology, Physiology and Psychiatry, and Principal Investigator of a cognitive neuroscience laboratory. His laboratory studies neural mechanisms of perception, attention and memory, with an emphasis on the impact of distraction and multitasking on these abilities. His unique research approach utilizes a powerful combination of human neurophysiological tools, including functional magnetic resonance imaging (fMRI), electroencephalography (EEG) and transcranial stimulation (TES). A major accomplishment of his research has been to expand our understanding of alterations in the aging brain that lead to cognitive decline. His most recent studies explore how we may enhance our cognitive abilities via engagement with custom designed video games, neurofeedback and TES. Dr. Gazzaley has authored over 80 scientific articles, delivered over 300 invited presentations around the world, and his research and perspectives have been consistently profiled in high-impact media, such as The New York Times, New Yorker, Wall Street Journal, TIME, Discover, Wired, PBS, NPR, CNN and NBC Nightly News. Recently, he wrote and hosted the nationally televised, PBS-sponsored special "The Distracted Mind with Dr. Adam Gazzaley". Awards and honors for his research include the Pfizer/AFAR Innovations in Aging Award, the Ellison Foundation New Scholar Award in Aging, and the Harold Brenner Pepinsky Early Career Award in Neurobehavioral Science.

A fundamental challenge of modern society is the development of effective approaches to enhance brain function and cognition in both healthy and impaired individuals. For the healthy, this serves as a core mission of our educational system, and for the cognitively impaired this is a critical goal of our medical system. Unfortunately, there are serious and growing concerns about the ability of either system to meet this challenge. I will describe an approach developed in our lab that uses custom-designed video games to achieve meaningful and sustainable cognitive enhancement (e.g., Anguera, et al. Nature 2013), as well as the next stage of our research program, which uses video games integrated with technological innovations in software (e.g., brain computer interface algorithms, GPU computing) and hardware (e.g., virtual reality headsets, mobile EEG, transcranial electrical brain stimulation) to create a novel personalized closed loop system. I will share with you a vision of the future in which high-tech is used as an engine to enhance our brain's information processing systems, thus reducing our reliance on non-specific drugs to treat neurological and psychiatric conditions and allowing us to better target our educational efforts.

This keynote will be preceded by naming the winner of the CUDA Center of Excellence Achievement Award, winner for Best Poster, and the new CUDA Fellows, followed by the launch announcement of the Global Impact Award. (Award ceremony duration approximately 15 minutes).

Session Level: All
Session Type: Keynote
Tags: Medical Imaging & Visualization; Video & Image Processing; Recommended for All Press

Day: Thursday, 03/27
Time: 10:30 - 12:00
Location: Hall 3


SPECIAL EVENT


S4888 - Birds of a Feather Breakfast: Performance Optimization & Medical Imaging

Join peers who share common interests over breakfast and participate in discussions without any pre-planned agenda. We've organized this year's BoFs around core topic areas and have assigned discussion leaders for each topic. Topics will be announced one week before the conference and will be listed on each BoF table. Breakfast can be purchased onsite.

Session Level: All
Session Type: Special Event
Tags: Performance Optimization; Medical Imaging & Visualization; NVIDIA - Special Event

Day: Tuesday, 03/25
Time: 08:00 - 08:50
Location: Room 220A
