Industry Applications

Rivian builds its own AI chip for self-driving, touting efficiency, performance, and ASIL compliance


Rivian’s decision to design its own AI silicon marks a clear shift from relying on off‑the‑shelf processors to a purpose‑built solution for self‑driving cars. The company says the new neural engine is aimed at the kind of compute load that typical automotive CPUs can’t handle, targeting a throughput that rivals high‑end data‑center GPUs while staying within the power envelope of a vehicle. In an industry where safety certifications can make or break a product, meeting the Automotive Safety Integrity Level (ASIL) thresholds is as critical as raw speed.

Rivian’s engineers claim the architecture balances those demands, pairing high efficiency and performance with the stringent risk classification standards that govern safety‑critical automotive electronics.



Rivian says the chip's architecture will deliver "advanced levels of efficiency, performance, and Automotive Safety Integrity Level compliance," referencing the risk classification system for safety-critical automotive electronics. Rivian estimates its neural engine can perform 800 trillion operations per second (TOPS), while its third-generation computer can reach 1,600 trillion 8-bit integer operations per second (INT8 TOPS) when utilizing data sparsity. For comparison, Nvidia's H100-class GPUs are quoted at 3,000-3,900 INT8 TOPS on datasheets with sparsity, while Google's TPU v5e is estimated at 393 INT8 TOPS per chip.
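The gap between the 800 TOPS neural-engine figure and the 1,600 INT8 TOPS "with sparsity" figure is consistent with the common industry convention of doubling a dense rating under structured sparsity (as Nvidia does with its 2:4 scheme). A minimal sketch of that arithmetic, where the doubling factor is an assumption rather than a Rivian-confirmed detail:

```python
# Quoted INT8 throughput figures from the article (vendor datasheet or
# estimated numbers, not independent measurements).
QUOTED_INT8_TOPS = {
    "Rivian gen-3 computer (with sparsity)": 1600,
    "Nvidia H100 (datasheet, with sparsity)": 3900,  # upper end of quoted range
    "Google TPU v5e (per chip, estimated)": 393,
}

def dense_equivalent(sparse_tops: float, sparsity_speedup: float = 2.0) -> float:
    """Estimate the dense-math rating behind a 'with sparsity' figure.

    Structured sparsity typically doubles the headline TOPS number by
    skipping zeroed weights, so the speedup factor defaults to 2.0.
    """
    return sparse_tops / sparsity_speedup

for name, tops in QUOTED_INT8_TOPS.items():
    print(f"{name}: {tops} INT8 TOPS")

# Rivian's 1,600 sparse INT8 TOPS implies roughly 800 dense TOPS,
# which lines up with the 800 TOPS neural-engine figure.
print(dense_equivalent(1600))  # -> 800.0
```

This is only a plausibility check of the quoted numbers; actual sparsity speedups depend on how much of a given model's weights can be pruned.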

(Google recently announced its seventh-generation TPUs, with clustered pods capable of over 40 exaflops.) The chip can also process 5 billion pixels per second, and it features RivLink, a low-latency interconnect that allows multiple chips to be linked to multiply processing power.
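To put the 5-billion-pixels-per-second figure in context, a back-of-the-envelope estimate of how many camera streams that throughput could cover. The camera resolution and frame rate below are illustrative assumptions, not Rivian-published specs:

```python
# Rough capacity check for the claimed 5 billion pixels/second.
CHIP_PIXELS_PER_SEC = 5_000_000_000

# Hypothetical automotive camera spec (assumed, not from Rivian):
ASSUMED_CAMERA_MEGAPIXELS = 8   # 8 MP sensor
ASSUMED_FPS = 30                # 30 frames per second

# Pixels each such camera produces per second: 8e6 * 30 = 240 million.
pixels_per_camera = ASSUMED_CAMERA_MEGAPIXELS * 1_000_000 * ASSUMED_FPS

# How many such streams fit within the chip's stated pixel throughput.
max_cameras = CHIP_PIXELS_PER_SEC // pixels_per_camera
print(max_cameras)  # -> 20
```

Under these assumptions the chip could ingest roughly twenty high-resolution camera feeds, comfortably above typical multi-camera autonomous-driving sensor suites.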

The processor is supported by an in-house AI compiler and platform software. Most notably, the announcement of proprietary silicon aligns Rivian with Tesla, the other major automaker trying to brute-force its way to self-driving cars by making its own chips, while the rest of the auto industry increasingly lines up behind Nvidia. Rivian is an EV-only manufacturer, just like Tesla, and has said that vertical integration is a key element of its future growth.

Rivian will use a variety of sensors to power its autonomous driving, including lidar.

Related Topics: #AI #Rivian #AI silicon #neural engine #ASIL #INT8 TOPS #GPU #TPU

Rivian’s new AI chip promises 1,600 trillion operations per second, with a neural engine rated at 800 TOPS. The company says the architecture targets higher efficiency, performance, and ASIL compliance. Yet the announcement comes after years of development by rivals such as Tesla, leaving analysts to wonder how quickly Rivian can translate raw compute into reliable autonomous driving.

The chip’s specifications are impressive on paper, but real‑world validation remains pending. Without independent testing, it’s unclear whether the claimed safety‑integrity level will hold under diverse driving conditions. Rivian’s move signals a willingness to invest in in‑house silicon rather than relying on third‑party solutions.

Whether this strategy will close the gap with established players is still an open question. For now, the hardware exists; the software stack, integration challenges, and regulatory approvals are yet to be demonstrated. Stakeholders will be watching the next milestones closely.

It is a bold step. Future test drives will reveal the chip’s latency, power draw, and how it interacts with Rivian’s sensor suite.


Common Questions Answered

What compute performance does Rivian claim for its new neural engine and third‑generation computer?

Rivian states the neural engine can deliver 800 trillion operations per second (TOPS), while the third‑generation computer can achieve 1,600 trillion 8‑bit integer operations per second (INT8 TOPS) by leveraging data sparsity. These figures are intended to rival high‑end data‑center GPUs while staying within a vehicle's power envelope.

How does Rivian's AI chip architecture address Automotive Safety Integrity Level (ASIL) compliance?

The chip’s architecture is designed to meet ASIL requirements, which classify the risk level of safety‑critical automotive electronics. Rivian emphasizes that the hardware provides advanced efficiency and performance while adhering to the stringent safety standards demanded by the automotive industry.

Why is Rivian moving from off‑the‑shelf processors to a purpose‑built AI silicon solution?

Rivian believes that conventional automotive CPUs cannot handle the heavy compute loads required for autonomous driving, prompting the development of a custom neural engine. This shift aims to combine data‑center‑level throughput with vehicle‑compatible power consumption and safety certifications.

What challenges remain for Rivian's AI chip despite its impressive specifications?

Although the chip promises high TOPS numbers on paper, real‑world validation of its reliability for autonomous driving is still pending. Analysts are watching how quickly Rivian can translate raw compute power into a safe, functional self‑driving system compared to rivals like Tesla.
