Deep Learning Accelerators for Client Systems

Thu, November 1 | 10:00 AM - 10:45 AM | 208A

Conference: ESC Minneapolis 2018

Track: ESC Track D: Advanced Technologies

Format: 45-Minute

Pass Type: Conference Pass (Paid)

Considerable attention is currently focused on enabling deep-learning-based usages and experiences on edge platforms. Edge solutions vary widely depending on their emphasis: "always-on, ultra-low-power operation," "efficiency driven by battery life," or "performance where prediction accuracy is the highest priority."

This session focuses on how first-party ISVs, external ISVs, or OEM/ODM players can select the right embedded accelerator (CPU, GPU, VPU, NPU, or a dedicated DL accelerator) to enable a given user experience. Does the experience require custom programmability (offered by DSPs or general-purpose compute engines), or should more emphasis be placed on fixed-function accelerators tailored to CNN or DNN support? What metrics drive the HW selection?

Once a HW choice is made, what are the challenges in the SW world of maintaining a consistent user experience that is truly agnostic of the underlying HW? Should OS providers such as MSFT, Google, and Apple carry this burden, or do SoC players such as Intel, QCM, and NVIDIA also play a role?

Level: N/A


Speakers

Vinesh Sukumar

Director

Intel

Role: Speaker