Blaize’s Pathfinder P1600, integrated with a depth camera, aims to offer a cheaper depth-sensing option

Ben Wodecki, Jr. Editor

June 30, 2021

2 Min Read

American chip design startup Blaize and eYs3D Microelectronics have unveiled a reference design for advanced depth perception in robotics, security, and autonomous vehicles that they say offers a cheaper alternative to Lidar-based systems.

The design integrates the Blaize Pathfinder P1600 system-on-module for edge AI applications with a stereo vision sensor developed by eYs3D that promises “millimeter-level accuracy of depth at optimal range.”

“The Blaize and eYs3D integration enables faster time-to-market for systems incorporating visual simultaneous location and mapping (VSLAM), facial feature depth recognition, and gesture-based commands,” Rajesh Anantharaman, Blaize’s senior products director, said.

Goodbye Lidar, hello Pathfinder?

Blaize, known as Thinci until 2019, was founded in California in 2010. The company recently expanded into China, Taiwan, and Southeast Asia, with Weikeng Group set to distribute its products in new markets.

eYs3D is a Taiwan-based provider of computer vision platforms. The company’s chips are currently embedded in Valve’s Index VR headset and in some consumer cleaning robots.

The integration enables the P1600 to convert the depth camera’s USB output to high-speed Ethernet connectivity for enhanced video processing. Software development kits for the reference design will accommodate a wide range of operating systems, programming languages, and development tools, the companies said.

"Depth-sensing technology has been widely adopted commercially in consumer and industrial applications in the last few years,” James Wang, eYs3D’s chief strategy officer, said.

“We are now seeing growing applications in robotics, 3D scene learning, drones, smart retail, and other markets.”

Reliance on Lidar for device autonomy is already being questioned, most vocally by Tesla CEO Elon Musk, an advocate of a vision-only approach in autonomous vehicles who believes cameras are faster than either Lidar or radar.

His beliefs are being implemented in the company’s cars: newly built North American Model Y and Model 3 vehicles feature no radar, relying instead on cameras and machine learning for their Autopilot and advanced driver assistance systems.

Tesla’s latest self-driving system collates video at 36 frames per second from the eight cameras surrounding the vehicle, providing information on the car’s surroundings.

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

