Deep Learning Accelerators for Client Systems
Thu., November 1 | 10:00 AM - 10:45 AM | 208A
Conference: ESC Minneapolis 2018
Track: ESC Track D: Advanced Technologies
Pass Type: Conference Pass (Paid)
This session focuses on how first-party ISVs, external ISVs, and OEM/ODM players can select the right embedded accelerator (CPU, GPU, VPU, NPU, or a purpose-built DL accelerator) to enable a given user experience. Does that experience require custom programmability (offered by DSPs or general-purpose compute engines), or should more emphasis be placed on fixed-function accelerators tailored to CNN or DNN support? What metrics should drive the HW selection?
Once a HW choice is made, what challenges arise on the SW side in maintaining a consistent user experience that is truly agnostic of the underlying HW? Should OS providers like MSFT, Google, and Apple carry this burden, or do SoC players like Intel, QCM, and NVIDIA have a role to play?