Nvidia's head of autonomous driving opens up about his plan to beat Waymo and Tesla.

The Verge | 🔬 Research
#cars #tesla #chatgpt #nvidia #review #waymo #autonomous-driving
Source: The Verge · Summarized and analyzed by Genesis Park

Summary

Xinzhou Wu, Nvidia's head of autonomous driving, offers CEO Jensen Huang a ride whenever he is confident in the stability of the company's hands-free autonomous driving system. Recently, the two drove through downtown San Francisco in a Mercedes-Benz CLA sedan equipped with MB.Drive Assist Pro, a driver-assist system partly designed by Nvidia and similar to Tesla's. Aiming to surpass Waymo and Tesla, they demonstrated the technology's competitiveness even in congested traffic.

Full text

Every six months or so, Nvidia’s head of automotive, Xinzhou Wu, invites CEO Jensen Huang to go for a ride in a vehicle equipped with the company’s hands-free autonomous driving system. But only when Wu has “good confidence” in the system’s driving capabilities.

Xinzhou Wu says you don’t need millions of miles of driving data to beat Tesla. All you need are the right sensors and an AI system that can actually reason.

Recently, the two went for a drive from Woodside, California, to downtown San Francisco in a Mercedes CLA sedan with MB.Drive Assist Pro, a hands-free driver-assist system partly designed by Nvidia that’s similar to Tesla’s Full Self-Driving. The mood was light, even if the traffic was pretty heavy. “Let me know when you’re in autonomous mode,” Huang said to Wu, according to a video of the ride provided to The Verge, “then I can be less concerned about my safety.”

Over the course of the 22-minute video, the Mercedes navigates Huang and Wu through a series of everyday obstacles, like construction sites, double-parked cars, and lanes narrowly channeled through rows of orange cones. Nvidia’s system seems quite capable, though the video is edited and not presented in real time. (Nvidia spokesperson Jessica Soares later said there were no disengagements during the ride.) Still, it seemed not dissimilar from my own experience last year riding shotgun with Nvidia executives in a Mercedes with the hands-free driving system activated. I was impressed by the system’s ability to handle traffic signals, four-way stops, double-parked cars, unprotected left turns, and all the pedestrians and cyclists and scooter-riders that San Francisco can throw at you.
If Tesla can do it with a bit of silicon and a bunch of cameras, it stands to reason that the world’s most valuable company could figure it out too.

‘The ChatGPT moment for physical AI’

After years of operating behind the scenes, Nvidia is attempting to stake out a more prominent leadership position on autonomous driving. Not only is it supplying the chips to companies like Tesla, but it’s also offering its own AI-powered driving features to partners like Mercedes, Jaguar Land Rover, and Lucid. At CES earlier this year, Huang unveiled Alpamayo, a portfolio of AI models, simulation blueprints, and datasets that can give vehicles Level 4 autonomy, allowing them to fully drive themselves under specific conditions. Huang touted the announcement as “the ChatGPT moment for physical AI.”

In the car with Wu, Huang is less bombastic and more introspective, but no less bullish on the technology’s future. “I think the challenge, of course, is Alpamayo, as incredibly smart as it is — and it can reason about the circumstance — we don’t know what it can’t do,” he said. “And so that’s the challenge, and that’s the reason why our classical stack is so incredibly important.”

Huang boasts that Nvidia’s approach to autonomous driving is “unique” because it combines an end-to-end AI model with a traditional, human-engineered “classical” stack. Pure end-to-end models are difficult to verify for safety, he theorizes. In contrast, the classical stack follows well-established engineering protocols and processes that make it easier to verify certain behaviors are safe enough. By combining both approaches, Nvidia’s system can benefit from a human-like driving style while still maintaining a safety framework grounded in traditional rules of the road.
Huang’s claim of a unique approach in the industry doesn’t completely hold up; other AV operators also utilize end-to-end neural networks in tandem with explicit safety rules governing how a vehicle should respond. But it is certainly true that end-to-end learning, which tends to be less robotic and more human-like in its driving, is becoming more in vogue. Waymo relies on a hybrid system, while Tesla relies exclusively on end-to-end neural networks. In an interview, Wu said that end-to-end models are better able to respond to things like speed bumps or lane changes without feeling mechanical or overly robotic. “That’s why it’s really the ChatGPT moment,” he said. “It’s like only when your car really drives with confidence … then basically customers will feel more willing to use it.”

Tesla and the high cost of self-driving

I asked Wu how he thought Nvidia’s approach compared to Tesla’s Full Self-Driving, which has driven over 8.5 billion miles but has been implicated in a number of troubling safety incidents, including 23 injuries and at least two fatalities. Last December, a Nvidia executive told me that the company had tested the two systems against each other.

This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
