H100 Hopper GPU

Nvidia Gimps H100 Hopper GPU to Sell to China as H800

You will not find Nvidia's H100 (Hopper) GPU on any list of the best graphics cards. However, the H100's strength lies in artificial intelligence (AI), making it a coveted GPU in the AI industry. And now that everyone is jumping on the AI bandwagon, Nvidia's H100 has become even more popular.

Nvidia claims the H100 delivers up to 9x faster AI training performance and up to 30x faster inference performance than the previous-generation A100 (Ampere). With that level of performance, it's easy to see why anyone would want to get their hands on an H100. In addition, Reuters reports that Nvidia has modified the H100 to comply with export rules so the chipmaker can sell the modified chip to China as the H800.

Last year, US officials introduced regulations to prevent Nvidia from selling its A100 and H100 GPUs to Chinese customers. The rules limited GPU exports to chips with chip-to-chip data transfer rates below 600GB/s. In the AI world, where systems need to move massive amounts of data to train AI models like ChatGPT, throughput is crucial. Capping the chip-to-chip data transfer rate causes a significant performance drop, because slower transfer speeds increase the time spent moving data and therefore lengthen training time.
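As a rough back-of-the-envelope illustration of why the transfer rate matters, the sketch below estimates pure transfer time for a fixed amount of data at different link speeds. The data volume is a hypothetical figure chosen purely for illustration, not a measurement from any real training run.

```python
# Back-of-the-envelope sketch: how chip-to-chip bandwidth affects
# the time GPUs spend moving data between each other.

def transfer_time_seconds(data_gb: float, bandwidth_gb_s: float) -> float:
    """Time to move `data_gb` gigabytes over a link running at `bandwidth_gb_s` GB/s."""
    return data_gb / bandwidth_gb_s

# Hypothetical volume of gradients/activations exchanged during a training run.
DATA_GB = 10_000  # 10 TB, illustrative only

full = transfer_time_seconds(DATA_GB, 600)    # 600GB/s link
capped = transfer_time_seconds(DATA_GB, 300)  # link halved to 300GB/s

print(f"600GB/s: {full:.1f} s of pure transfer")
print(f"300GB/s: {capped:.1f} s of pure transfer ({capped / full:.0f}x longer)")
```

Halving the link speed doubles the time spent on communication, and in multi-GPU training that extra waiting compounds across every synchronization step.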

With the A100, Nvidia reduced the GPU's 600GB/s interconnect to 400GB/s and rebranded it as the A800 to sell it in the Chinese market. Nvidia is taking a similar approach with the H100.

According to Reuters' source in the Chinese chip industry, Nvidia has reduced the chip-to-chip data transfer rate on the H800 to about half that of the H100, leaving the H800 with an interconnect limited to 300GB/s. That's a steeper performance cut than the one between the A100 and A800, where the A800 suffers a 33% lower chip-to-chip data transfer rate. However, the H100 is significantly faster than the A100, so Nvidia has imposed a more severe chip-to-chip transfer limit on the former.

Reuters asked an Nvidia spokesperson what sets the H800 apart from the H100. The Nvidia representative would only say that "our 800-series products are fully compliant with export control regulations."

Three of China's top technology companies are already using the H800: Alibaba Group Holding, Baidu Inc, and Tencent Holdings. China has banned ChatGPT, which is why these tech giants are racing one another to produce a domestic ChatGPT-like model for the Chinese market. While an H800 with half the chip-to-chip transfer rate will undoubtedly be slower than a full-fat H100, it still won't be slow. For companies potentially deploying thousands of Hopper GPUs, we have to wonder whether that simply means buying more H800s to do the same job as fewer H100s.

