Tesla, Inc., the clean-energy company behind some of the world's most desired electric cars (and plenty more besides), appears to be gearing up to improve its datacenter infrastructure. According to a new job listing on Tesla's corporate website (spotted by Electrek), the company is looking to hire a "Sr. Engineering Program Manager, Data Centers." That kind of hire is usually a good sign for any company planning to operate datacenters built on custom or proprietary silicon - perhaps Tesla is looking to build a dojo for its Dojo AI accelerators?
The position is based in Austin, Texas, where the company has several facilities focused on manufacturing and R&D. That doesn't mean its impact will be confined to Austin, however - especially considering reports that Tesla has taken over at least one of X's (formerly Twitter) datacenters in Sacramento. It's also unclear whether Tesla is being hyperbolic in describing these datacenters as "first of their kind." There are multiple ways to fit that definition that don't involve as much engineering work as one would expect.
Tesla originally announced its Dojo D1 (the product name has apparently since changed to Dojo V1) in 2021, promising to increase the amount of processing power available for training the company's self-driving AI systems. At the time, these Dojo ASICs (Application-Specific Integrated Circuits) were meant to carry up to 50 billion transistors each, delivering around 362 TeraFLOPS of compute per custom chip.
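For a sense of how that per-chip figure scales up, here is a minimal back-of-envelope sketch. The 362 TFLOPS number comes from Tesla's 2021 announcement; the 25-chip "training tile" and 120-tile ExaPOD layout are assumptions based on Tesla's AI Day presentation and may well have changed since.

```python
# Back-of-envelope scaling of Tesla's announced Dojo D1 figures.
# Per-chip throughput (362 TFLOPS at BF16/CFP8) is from the 2021
# announcement; the tile and ExaPOD counts are assumptions taken
# from Tesla's AI Day presentation.

D1_TFLOPS = 362          # BF16/CFP8 throughput per D1 chip, in TFLOPS
CHIPS_PER_TILE = 25      # D1 dies per "training tile" (assumed)
TILES_PER_EXAPOD = 120   # tiles per ExaPOD (assumed)

tile_pflops = D1_TFLOPS * CHIPS_PER_TILE / 1_000
exapod_eflops = tile_pflops * TILES_PER_EXAPOD / 1_000

print(f"Per tile:   ~{tile_pflops:.1f} PFLOPS")     # ~9.1 PFLOPS
print(f"Per ExaPOD: ~{exapod_eflops:.2f} EFLOPS")   # ~1.09 EFLOPS
```

Those totals line up with the roughly 9 PFLOPS per tile and ~1.1 ExaFLOPS per ExaPOD Tesla touted in 2021, which suggests the headline numbers are straightforward multiples of the per-chip spec rather than measured end-to-end throughput.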
"Exponential improvement coming to FSD Beta once Dojo is up and running." (pic.twitter.com/iDddgQ0Lzl) - June 21, 2023
It's unclear whether Tesla has toyed with the design in the meantime (although we'd say that's likely). What we do know is that Elon Musk has clarified that the first-generation chip won't handle general AI processing and will instead focus on accelerating "video training" for the firm's computer vision systems. According to Musk, V2 of Dojo will address these limitations and eventually become a full-fledged, general AI processor not unlike Nvidia's hot-off-the-presses H100 and its DGX GH200 supercomputing system.
Tesla has also purchased a number of GPU accelerators from Nvidia. The company was already bragging about its fleet when it had "only" deployed 7,360 A100 accelerators. That number has since grown, and the company aims to have as many as 100 ExaFLOPs of compute on hand by October 2024 (a figure that already counts the deployment of its Dojo supercomputer).
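To put that 100 ExaFLOPs target in perspective, here is a rough order-of-magnitude check. The A100's 312 TFLOPS dense BF16 figure is Nvidia's published spec, but how Tesla counts its FLOPS (precision, sparsity, the Dojo-versus-GPU mix) is an assumption on our part, so treat this purely as a sense-of-scale exercise.

```python
# Order-of-magnitude check on Tesla's 100 ExaFLOPs target.
# A100 dense BF16 throughput is Nvidia's published spec; the precision
# Tesla uses for its headline figure is assumed, not confirmed.

A100_TFLOPS_BF16 = 312   # dense BF16 TFLOPS per A100 (Nvidia spec)
TARGET_EFLOPS = 100      # Tesla's stated October 2024 goal

fleet_eflops = 7_360 * A100_TFLOPS_BF16 / 1_000_000
gpus_for_target = TARGET_EFLOPS * 1_000_000 / A100_TFLOPS_BF16

print(f"7,360 A100s: ~{fleet_eflops:.1f} EFLOPS BF16")  # ~2.3 EFLOPS
print(f"A100-equivalents for {TARGET_EFLOPS} EFLOPS: "
      f"~{gpus_for_target:,.0f}")                       # ~320,500
```

In other words, the 7,360-GPU deployment Tesla bragged about amounts to a small fraction of the stated goal, which is presumably why Dojo (and newer, denser accelerators) figure so heavily in the plan.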
Tesla designing and ordering its own custom ASICs and general-purpose AI accelerators should give the company increased control over feature sets and reduce the Total Cost of Ownership (TCO) of its datacenters. But then again, all companies designing silicon these days mostly take honey from the same TSMC-branded pot - and there's only so much capacity to go around.
So while Tesla is likely to enjoy several benefits by designing and integrating its own High-Performance Computing (HPC) systems, it's unlikely that Elon Musk will find fewer reasons to complain that "everyone and their dog" is buying GPUs. There are only so many wafers to distribute across the cadre of TSMC clients, and most would also love to stay at the cutting edge of fabrication.