Samsung expects memory semiconductors to take the lead in AI supercomputing before the end of the decade, with memory chips eventually outperforming Nvidia GPUs in AI server applications. A few months ago, Kye Hyun Kyung said Samsung would ensure "memory semiconductor-centered supercomputers will be pushed out by 2028." Samsung is also reportedly preparing for large-scale manufacturing of high-bandwidth memory (HBM) chips for artificial intelligence (AI) applications this year.
According to Korean media, the South Korean conglomerate will begin bulk production of HBM chips for AI in the second half of 2023, aiming to catch up with SK Hynix, which currently leads the AI memory semiconductor market.
In 2022, SK Hynix held a 50% share of the HBM market, Samsung 40%, and Micron the remaining 10%, according to figures from TrendForce. That said, HBM accounts for only about 1% of the overall DRAM segment.
Demand for HBM solutions is expected to mount as the AI market grows. To keep pace with SK Hynix and prepare for these shifts, Samsung now plans to produce its HBM3 chips in large quantities. Buzzword or not, "AI" is driving a proliferation of AI servers, and high-bandwidth memory solutions are gaining popularity alongside them.
Samsung's HBM3 solutions are built by vertically stacking multiple DRAM dies. They come in 16GB and 24GB capacities and offer per-pin speeds of up to 6.4Gbps.
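The quoted 6.4Gbps figure is a per-pin data rate. Assuming the standard 1024-bit HBM3 stack interface defined by JEDEC (a detail not stated in the article), a rough back-of-the-envelope estimate of per-stack bandwidth looks like this:

```python
# Back-of-the-envelope HBM3 bandwidth estimate.
# Assumption: a standard HBM3 stack exposes a 1024-bit interface
# (per the JEDEC HBM3 spec); this is not stated in the article itself.
PINS_PER_STACK = 1024        # interface width, in bits
SPEED_GBPS_PER_PIN = 6.4     # per-pin data rate quoted for Samsung's HBM3

bandwidth_gbit = PINS_PER_STACK * SPEED_GBPS_PER_PIN  # total Gb/s per stack
bandwidth_gbyte = bandwidth_gbit / 8                  # convert bits to bytes

print(f"~{bandwidth_gbyte:.1f} GB/s per stack")  # ~819.2 GB/s
```

That works out to roughly 819GB/s per stack, which illustrates why HBM, rather than conventional DRAM, is the memory of choice for AI servers.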
Brian is a news author at Research Snipers, mainly covering technology news: Microsoft, Google, Facebook, Apple, Huawei, Xiaomi, and other tech stories.