Nvidia wants its new Drive PX 2 supercomputer to drive your next car.
Nvidia’s autonomous vehicle platform.
Like almost all current self-driving car architectures, Nvidia’s starts with massive cloud-based deep-learning compute power. To hear Nvidia CEO Jen-Hsun Huang tell it, everyone has given up on traditional object recognition algorithms and is moving to the booming technology of deep learning. Deep learning is characterized by software that determines its own features for use in solving recognition and other problems. Previously, for tasks such as object, gesture, and facial recognition, developers had to painstakingly identify characteristics that could be measured and used as features to help their software learn. Improving results meant hand-tuning those features. When processor power was more limited, that was the only practical approach.
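The contrast can be sketched in a few lines. The toy below (an illustration, not anything from Nvidia's stack, and logistic regression rather than a deep network) pits a human-chosen feature against weights the software learns for itself on a synthetic edge-detection task; the point is that the learned model discovers its own measure of the input instead of relying on the designer's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: classify 8x8 "images" by whether they contain a vertical edge.
def make_image(has_edge):
    img = rng.normal(0.0, 0.1, (8, 8))
    if has_edge:
        img[:, 4:] += 1.0  # brighter right half creates a vertical edge
    return img

X = np.array([make_image(i % 2 == 0) for i in range(200)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(200)])

# --- Hand-engineered approach: a developer picks the feature by hand ---
def handcrafted_feature(img):
    # Right-half minus left-half brightness: a human-designed edge measure.
    return img[:, 4:].mean() - img[:, :4].mean()

hand_pred = (np.array([handcrafted_feature(img) for img in X]) > 0.5).astype(float)

# --- Learned approach: the model discovers its own weights from data ---
flat = X.reshape(len(X), -1)   # flatten each image to a 64-vector
w, b, lr = np.zeros(64), 0.0, 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(flat @ w + b)))   # sigmoid prediction
    w -= lr * (flat.T @ (p - y)) / len(y)       # logistic-loss gradient step
    b -= lr * (p - y).mean()

learned_pred = (1.0 / (1.0 + np.exp(-(flat @ w + b))) > 0.5).astype(float)
print("handcrafted accuracy:", (hand_pred == y).mean())
print("learned accuracy:", (learned_pred == y).mean())
```

Both approaches solve this easy task, but only the first required a human to invent the feature; deep networks extend the second idea through many stacked layers.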
Nvidia calls its system for automotive deep learning DIGITS. It’s designed to pipe its results to the runtime DriveWorks platform powered by its Drive PX 2. There is one sleight-of-hand here, though. Developers still need to decide which inputs to measure, and how to calibrate and quantify them before feeding them into the software. It’s not like a HAL 9000 that can be unleashed on the world without guidance and suddenly learn how to drive or sing.
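That remaining human role looks something like the sketch below, a hypothetical preprocessing step (not Nvidia code): the scaling constants and cutoffs are decisions the developer makes before the network ever sees the data.

```python
import numpy as np

def preprocess_camera(frame, mean, std):
    # Convert 0-255 pixels to zero-mean, unit-variance floats. The mean and
    # std are statistics the developer measured and chose, not learned values.
    return (frame.astype(np.float32) / 255.0 - mean) / std

def preprocess_radar(ranges, max_range=200.0):
    # Clip and scale range readings to [0, 1]. The 200 m cutoff is a
    # developer-chosen calibration constant, also not learned.
    return np.clip(ranges, 0.0, max_range) / max_range

frame = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
x_cam = preprocess_camera(frame, mean=0.5, std=0.25)
x_radar = preprocess_radar(np.array([5.0, 150.0, 350.0]))
print(x_radar)  # the 350 m reading is clipped to 1.0
```

Only after this human-designed calibration do the inputs reach the learning system, which is the guidance the HAL 9000 fantasy leaves out.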
Whatever solution automakers develop, Nvidia wants to make sure it will run on its processors. It has worked hard to ensure that essentially every major deep learning software toolkit can run on its CUDA platform. In turn, CUDA is binary compatible across Nvidia processors ranging from Jetson to Titan X, and now Drive PX. Whether it’s the training software (Nvidia DIGITS, usually cloud-based) or the in-vehicle runtime (Nvidia DriveWorks), Nvidia wants to ensure that its processors are involved.