In this chapter of our thought leadership series, AI Business caught up with Kari Ann Briski, Director of Deep Learning Software Product at NVIDIA. Based in San Francisco, Kari works with researchers and enterprise customers to bring the benefits of deep learning to their applications.
Deep learning is being applied to big data problems across computer vision, image recognition, speech recognition, and autonomous vehicles. It holds enormous potential to cure disease, build smart cities, and revolutionize analytics. More than 19,000 companies are already using deep learning to transform their capabilities. In our interview, Kari reveals how NVIDIA is powering the AI revolution with the variety of frameworks and solutions on offer.
NVIDIA Corporation is an American technology company founded in 1993 in Santa Clara, California. The chip company designs, develops, and markets three-dimensional (3D) graphics processors and related software. It became known for popularizing the graphics processing unit (GPU) with the launch of the GeForce 256 in 1999. Back then, the GPU was used primarily for computer graphics and image processing in video games. Scientists later found that the same processing power that allows GPUs to render images in video games is ideal for the number-crunching required by deep learning – the advanced machine learning technique used in artificial intelligence and cognitive computing. Today NVIDIA’s GPUs (most recently Volta) lay the foundations for some of the major advancements in AI. Its GPUs for deep learning are available in desktops, notebooks, servers, and supercomputers around the world, as well as in cloud services from Amazon, IBM, Microsoft, and Google.
Kari began our interview by emphasizing that NVIDIA’s major goal is making AI accessible.
“NVIDIA wants to democratize AI. That means providing a platform that others can build their solutions on top of. The aim is to help everyone succeed and we do this by providing a platform to all businesses, researchers, and data centers around the world. We don’t pick favorites and we work hard to ensure all deep learning frameworks and models accelerate on our platform.
“Additionally, where NVIDIA can add unique value, like autonomous vehicles, smart cities, and healthcare, we dive in. We work with partners because we’re excited to see the future and believe we can make a difference.”
Kari continues, “Internally NVIDIA employs researchers, domain-specific scientists, and solution architects who use our own platform to solve emerging and complicated problems. The atmosphere is one of genuine excitement to see where we can go next.”
Deep Learning SDK
The NVIDIA Deep Learning SDK is an essential resource for GPU developers. It provides a range of powerful tools and libraries for designing and deploying GPU-accelerated deep learning applications, and it accelerates many deep learning frameworks, including Caffe, CNTK, TensorFlow, Theano, and Torch. Kari attributes its success to its accessibility.
“The NVIDIA Deep Learning SDK contains solutions for both neural network training and inference. It’s a platform that can satisfy the needs of both advanced deep learning researchers and applied deep learning practitioners. It supports a variety of organizations with many different use cases, whether the AI application is deployed in a data center, on an embedded device, or in the cloud.”
“I’ve personally seen so many fun and interesting AI applications, from large organizations to small businesses and individuals who previously knew nothing about deep learning,” Kari explains. “They used NVIDIA DIGITS (our GUI tool for deep learning) to create neural networks that are deployed in production environments today.”
Kari states, “It’s transformative because it shows anyone can do it. We are providing a platform for rapid AI development previously not attainable. You can now deploy that neural network to infer answers or make predictions that will serve up solutions and insight like never before.”
Next-generation GPU: Volta
Kari tells us about Volta, the state-of-the-art, next-generation GPU platform unveiled in May 2017. NVIDIA claims that Volta, with over 21 billion transistors, is the most powerful GPU architecture the world has ever seen. The result is an unparalleled level of speed. Kari believes Volta is the new driving force behind AI and will accelerate AI development for all users.
“I’m going to geek out a bit here, but stay with me. Volta is our latest architecture for deep learning training and inference. It introduces Tensor Core technology, which provides 8x more throughput using a mixed-precision methodology. A TensorOp can accumulate half-precision inputs into either single- or half-precision outputs. This allows accurate training of a neural network with FP16 storage.”
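The idea Kari describes – multiplying half-precision (FP16) inputs while accumulating the running sum in single precision (FP32) – can be illustrated with a short NumPy sketch. This is purely an illustrative approximation of why accumulation precision matters, not NVIDIA's Tensor Core implementation:

```python
import numpy as np

# Illustrative sketch only: multiply FP16 inputs, but compare what happens
# when the running sum is kept in FP16 versus FP32. Tensor Cores do this
# in hardware; here we just mimic the numerics in software.
rng = np.random.default_rng(0)
a = rng.standard_normal(10_000).astype(np.float16)
b = rng.standard_normal(10_000).astype(np.float16)

# Accumulate entirely in FP16: every addition rounds to half precision,
# so rounding error grows with the length of the sum.
acc_fp16 = np.float16(0.0)
for x, y in zip(a, b):
    acc_fp16 += x * y  # float16 product, float16 addition

# Same FP16 inputs, but products accumulated in FP32
# (the mixed-precision style Kari describes).
acc_fp32 = np.float32(0.0)
for x, y in zip(a, b):
    acc_fp32 += np.float32(x) * np.float32(y)

# High-precision reference for comparison.
ref = float(np.dot(a.astype(np.float64), b.astype(np.float64)))

print("FP16-accumulated error:", abs(float(acc_fp16) - ref))
print("FP32-accumulated error:", abs(float(acc_fp32) - ref))
```

Running the sketch shows the FP32-accumulated sum staying far closer to the FP64 reference, which is the accuracy benefit mixed precision buys while still halving the storage and memory bandwidth for the inputs.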
“There have been a couple of large companies boasting that they can train ImageNet in under an hour. That’s cool, but small companies don’t have access to that type of infrastructure or scale (256 Tesla Pascal GPUs). Using a DGX-V (8 Volta GPUs) you can train ImageNet in 8 hours (If you were to train ImageNet today on a public cloud instance using 8 K80s it would take over 48 hours).”
“Ultimately, the new Volta architecture will encourage enterprises to enter the AI game in a cost-effective manner, allowing anyone to perform more rapid experimentation on neural networks and achieve faster time to delivery on inference applications and services.”
Sectors poised for change and transformation
“Transportation will experience the most immediate, rapid change. I’m also excited about the impact on the health industry, where we are likely to see better living through faster diagnoses and new drug discoveries, along with access to more healthcare services and second “opinions” on that X-ray or MRI.”
“Music and storytelling should get very interesting very fast. Remember the “choose your own adventure” books from when you were a kid, with different story lines and endings depending on your choices? Well, AI could provide that sort of service on the fly, based on your interests. No more heartache after you’ve finished a series of books or binge-watched a TV series: a neural network could create a new book for you based on the books or series you like.”
“I’m also excited about communication. I’m a big Star Trek fan, and the universal translator has to be coming soon – you’ve already seen recurrent neural networks in production on social media sites, seamlessly translating posts into your native language. I’m also very hyped about robotic assistants trained with reinforcement learning.”
Driving towards the Smart City
Kari argues that the $10 trillion transportation industry is the next sector poised for change. She emphasizes that through NVIDIA Metropolis, the company’s edge-to-cloud platform, NVIDIA is committed to driving towards the smart city.
“There is tremendous potential for self-driving transportation across a spectrum of use cases, from personal and shared vehicles to bus and van transport. This will have a huge impact on productivity, access to goods and services, and care for the elderly. From an infrastructure and community perspective, it will be refreshing to see parking lots turned into recreational areas, fewer accidents involving teenage and drunk drivers, and more synchronization in high-traffic areas.”
“Beyond autonomous cars, NVIDIA is ‘all in’ when it comes to building safer, smarter cities, and we have the right edge-to-cloud platform to achieve that goal. AI is changing how we capture, inspect, and analyze data, and our solution, Metropolis, provides the tools and technologies for intelligent video analytics.”
Open Source is key
We asked Kari about regional development, and she believes that the open nature of AI research means development will take place everywhere, transcending national boundaries.
“I think both North America and Asia will have a strong presence. What I love about AI research is that, for the most part, it’s open. The community shares with each other. Research conclusions on new architectures of deep neural networks are frequently published so it doesn’t matter where you are, or where you live, it’s about the possibility of the outcome. When it comes to “the race”, the trend is that the IP is in the data that trains your neural network – which is why we’ve seen a strong presence from Amazon, Facebook, and Google in North America and Alibaba, Baidu and Tencent in China.”
“NVIDIA heavily contributes to open source projects, both in the frameworks (deep learning libraries) as well as posting neural networks that we have researched for specific AI applications. Most of the deep learning frameworks are developed in open source and there is a good community that provides checks and balances on each other.”
Kari Ann Briski will be speaking at the AI Summit San Francisco, and she shared her expectations: “I’m looking forward to taking it all in, hearing from both the thought leaders and the disruptors. I want to hear new use cases and seek input on how to better our software products and platform to power the fourth industrial revolution!”