[Hacker News Digest] Google's First Tensor Processing Unit: Architecture
-
Title: Google's First Tensor Processing Unit: Architecture
Text:
Url: https://thechipletter.substack.com/p/googles-first-tpu-architecture
Post by: c_joly
Comments:
hipadev23: How is it that Google invented the TPU and Google Research came up with <i>the</i> paper on LLM and NVDA and AI startup companies have captured ~100% of the value
nl: On the podcast interview Groq CEO Jonathon Ross did[1], he talked about the creation of the original TPUs (which he built at Google). Apparently it was originally an FPGA he did in his 20% time, because he sat near the team that was having inference speed issues.<p>They got it working, then Jeff Dean did the math and they decided to do an ASIC.<p>Now of course Google should spin off the TPU team as a separate company. It's the only credible competition NVidia has, and the software support is second only to NVidia's.<p>[1] <a href="https://open.spotify.com/episode/0V9kRgNS7Ds6zh3GjdXUAQ?si=qE3MLNuFQLCFmSnvhG87jQ&nd=1&dlsi=3752b373bdfd4f2c" rel="nofollow">https://open.spotify.com/episode/0V9kRgNS7Ds6zh3GjdXUAQ?si=q...</a>
formercoder: Googler here, if you haven't looked at TPUs in a while, check out the v5. They support PyTorch/JAX now, which makes them much easier to use than when they were TF-only.
xrd: This article really connected a lot of abstract pieces together, showing how they flow through silicon. I really enjoyed seeing the simple CISC instructions and how they basically map onto LLM inference steps.
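The mapping xrd describes can be sketched in a few lines. The TPU paper's instruction set is dominated by a handful of CISC instructions (Read_Host_Memory, Read_Weights, MatrixMultiply, Activate, Write_Host_Memory); the toy class below is an illustrative simulation of that sequence driving one dense layer, not Google's actual ISA or hardware behavior — names, buffer layout, and dimensions are assumptions for the sketch.

```python
# Illustrative sketch of the TPU-style instruction sequence for one
# dense layer. The method names mirror the instruction names in the
# TPU paper; the internals (Python lists, a simple ReLU) are toy
# stand-ins for the real unified buffer, weight FIFO, and accumulators.

def matmul(a, w):
    # a: [rows][k], w: [k][cols] -> [rows][cols]
    return [[sum(a[i][t] * w[t][j] for t in range(len(w)))
             for j in range(len(w[0]))] for i in range(len(a))]

def relu(m):
    return [[max(0.0, x) for x in row] for row in m]

class ToyTPU:
    """One dense layer expressed as a TPU-style instruction sequence."""
    def __init__(self):
        self.unified_buffer = None   # activations live here
        self.weight_fifo = None      # weights stream in from host DRAM
        self.accumulators = None     # matmul results land here

    def read_host_memory(self, activations):
        self.unified_buffer = activations

    def read_weights(self, weights):
        self.weight_fifo = weights

    def matrix_multiply(self):
        self.accumulators = matmul(self.unified_buffer, self.weight_fifo)

    def activate(self):
        self.unified_buffer = relu(self.accumulators)

    def write_host_memory(self):
        return self.unified_buffer

tpu = ToyTPU()
tpu.read_host_memory([[1.0, -2.0]])         # 1x2 input activations
tpu.read_weights([[1.0, 0.0], [0.0, 1.0]])  # 2x2 identity weights
tpu.matrix_multiply()
tpu.activate()
print(tpu.write_host_memory())              # prints [[1.0, 0.0]] after ReLU
```

An LLM inference pass is essentially this five-instruction loop repeated per layer, which is why such a small instruction set covers the workload.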
kleton: Which ocean creature name is the current TPU?