🤖 AI Summary
At Tesla’s annual meeting, Elon Musk said the company may both outsource chip production (including talks with Intel Foundry) and ultimately build its own “TeraFab” — a semiconductor complex with capacity beyond TSMC’s existing “Gigafabs,” which run more than 100,000 wafer starts per month. He framed the move as necessary to secure the massive supply of AI processors Tesla needs for cars, robots, and data centers (the Dojo supercomputer project was canceled, and Tesla plans to use in‑house AI5 chips alongside tens of thousands of Nvidia GPUs). Nvidia CEO Jensen Huang quickly warned that advanced chip manufacturing is much more than building plants — it’s “engineering, science and artistry.”
The announcement matters because creating a leading‑edge foundry is astronomically expensive, slow, and technically intricate: a single fab costs tens of billions of dollars, multi‑fab complexes (e.g., TSMC’s Arizona project) can exceed $100B, and process development for modern nodes takes five or more years, spanning thousands of tightly coupled front‑end, middle, and back‑end‑of‑line (FEOL/MOL/BEOL) steps, TCAD modeling, PDK/SPICE/tool integration, and long yield‑ramp cycles. Tool and talent bottlenecks — ASML lithography machine availability, experienced fab engineers — further complicate a newcomer’s path. If Tesla succeeds at this vertical integration, it could reshape supply chains and competition for AI silicon; if not, the effort will illustrate the steep tradeoff between supply security and the immense capital, time, and expertise required to become an IDM.
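To make the scale concrete, here is a back‑of‑envelope sketch of how wafer starts translate into chip supply, using the standard gross‑dies‑per‑wafer approximation. The die size (~300 mm²) and yield (60%) are illustrative assumptions, not figures from the article; only the 100,000 wafer‑starts‑per‑month Gigafab threshold comes from the summary above.

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Common approximation: wafer area / die area, minus an edge-loss
    correction for partial dies at the wafer's circular boundary."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def monthly_good_chips(wafer_starts: int, die_area_mm2: float, yield_fraction: float) -> int:
    """Good chips per month = wafer starts x gross dies/wafer x yield."""
    return round(wafer_starts * gross_dies_per_wafer(die_area_mm2) * yield_fraction)

# Assumed numbers: a ~300 mm^2 AI accelerator die at 60% yield, on 300 mm
# wafers, at the Gigafab-scale 100,000 wafer starts/month cited above.
print(gross_dies_per_wafer(300.0))            # ~197 gross dies per wafer
print(monthly_good_chips(100_000, 300.0, 0.60))
```

Even at this throughput, roughly 12 million good dies a month, a newcomer only reaches it after the multi‑year yield ramp the summary describes — early yields on a new process are far below 60%.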