Samsung’s new HBM3E memory technology hits 1.2TB/s — you can bet that it will power Nvidia’s next AI monster GPU, the GH200

Key Takeaways:

– Samsung has developed Shinebolt, its HBM3E memory, which is more than 50% faster than its predecessor and reaches per-pin speeds of up to 9.8Gb/s, or over 1.2TB/s of bandwidth per stack (see the sketch after this list).
– Shinebolt is designed for high-performance GPUs used in AI processing and large language models (LLMs), and it is expected to be integrated into components developed by Nvidia.
– High-bandwidth memory (HBM) is faster and more energy efficient than conventional RAM, and Samsung’s HBM3E uses non-conductive film (NCF) technology to stack layers higher and maximize thermal conductivity.
– Samsung claims that HBM3E will power the next generation of AI applications, improving AI training and inference in data centers and reducing total cost of ownership.
– Samsung has signed an agreement with Nvidia to supply HBM3 memory units, and it is expected that HBM3E components will be included in this partnership once they enter mass production.
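
As a sanity check on how the two headline figures relate, here is a minimal arithmetic sketch converting the per-pin rate into per-stack bandwidth. It assumes the standard 1024-bit HBM interface width and decimal units (1TB = 10^12 bytes), neither of which the article states explicitly:

```python
# Rough conversion from per-pin data rate to per-stack bandwidth.
# Assumes the standard 1024-bit-wide HBM interface (not stated in
# the article) and decimal units, as marketing figures typically use.

PER_PIN_GBPS = 9.8      # HBM3E (Shinebolt) per-pin rate, in Gb/s
INTERFACE_WIDTH = 1024  # bits (pins) per HBM stack

stack_gbps = PER_PIN_GBPS * INTERFACE_WIDTH  # gigabits per second, per stack
stack_tbps = stack_gbps / 8 / 1000           # terabytes per second, per stack

print(f"{stack_tbps:.2f} TB/s per stack")    # -> "1.25 TB/s per stack"
```

That 1.25TB/s result is where the "over 1.2TB/s" headline number comes from; the 9.8Gb/s and 1.2TB/s figures are the same speed expressed at different granularities, not two separate measurements.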

TechRadar:

Samsung’s latest memory technology has hit staggering speeds of 9.8Gb/s per pin – or more than 1.2TB/s of bandwidth per stack – meaning it’s more than 50% faster than its predecessor.

Samsung’s HBM3E memory, codenamed Shinebolt, is the latest in a series of high-performance memory products the company has developed for the age of cloud computing and its increased demand for resources.


AI Eclipse TLDR:

Samsung has developed a new memory technology called HBM3E, codenamed Shinebolt, which has achieved per-pin speeds of 9.8Gb/s (over 1.2TB/s of bandwidth per stack), making it more than 50% faster than its predecessor. Designed for the age of cloud computing and growing demand for resources, it succeeds Icebolt and is aimed at high-performance GPUs used for AI processing and LLMs. The technology uses 3D stacking, layering chips on top of each other to maximize thermal conductivity and achieve higher speeds and efficiency. Samsung claims that HBM3E will power the next generation of AI applications, improving AI training and inference in data centers. The memory is expected to appear in Nvidia’s next-gen AI chip, the GH200, as part of a partnership agreement between the two companies, and Samsung is set to supply approximately 30% of Nvidia’s memory by 2024.
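
The "more than 50% faster" figure can likewise be checked against the predecessor's per-pin rate. A quick sketch, assuming the 6.4Gb/s JEDEC HBM3 baseline (a number the article itself does not quote):

```python
# Per-pin speedup of HBM3E (Shinebolt) over HBM3 (Icebolt-class parts).
# The 6.4Gb/s baseline is the JEDEC HBM3 per-pin rate, an assumption
# not taken from the article.

HBM3_GBPS = 6.4    # predecessor (HBM3) per-pin rate, in Gb/s
HBM3E_GBPS = 9.8   # HBM3E (Shinebolt) per-pin rate, in Gb/s

speedup = HBM3E_GBPS / HBM3_GBPS - 1
print(f"{speedup:.0%} faster per pin")  # -> "53% faster per pin"
```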