Updated: May 29, 2023

Chipmaker Nvidia Introduces AI Products After $184 Billion Surge

Nvidia Corp. CEO Jensen Huang has announced a new group of AI-related products and services as the company looks to further capitalize on the boom that has made it the world's most valuable chipmaker.

Huang told the crowd at the Computex expo in Taiwan that the extensive new lineup includes an AI supercomputer platform, dubbed DGX GH200, that will help tech companies build alternatives to ChatGPT.

Google, Meta Platforms Inc., and Microsoft Corp. are expected to be among the first users of the hardware.

The supercomputer's innovative NVLink Switch System enables 256 GH200 Grace Hopper superchips—each of which contains an Arm-based Grace CPU and an H100 Tensor Core GPU—to function as a single GPU.

NVIDIA says that this enables the DGX GH200 to have 144 terabytes of shared memory and achieve 1 exaflop of performance.

According to the company, that is roughly 500 times the memory of a single DGX A100 system.
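As a rough sanity check, those totals can be reproduced from per-device numbers. The per-superchip and DGX A100 memory sizes used below (about 480 GB of CPU memory plus 96 GB of GPU memory per Grace Hopper superchip, and 320 GB in a base DGX A100) are assumptions drawn from NVIDIA's published specifications, not figures stated in this article.

# Back-of-the-envelope check of the DGX GH200 memory figures quoted above.
# Assumed per-device numbers (from NVIDIA spec sheets, not from this article):
#   - each GH200 Grace Hopper superchip: ~480 GB Grace CPU memory + 96 GB H100 HBM3
#   - a base DGX A100 system: 320 GB of total GPU memory (8 x 40 GB A100s)
superchips = 256
cpu_mem_gb = 480   # assumed Grace LPDDR5X per superchip
gpu_mem_gb = 96    # assumed H100 HBM3 per superchip

shared_memory_tb = superchips * (cpu_mem_gb + gpu_mem_gb) / 1000
print(f"Total shared memory: ~{shared_memory_tb:.0f} TB")  # ~147 TB, close to the quoted 144 TB

dgx_a100_gb = 320  # assumed base DGX A100 GPU memory
ratio = shared_memory_tb * 1000 / dgx_a100_gb
print(f"~{ratio:.0f}x the memory of a single DGX A100")    # ~461x, i.e. "roughly 500 times"

Under those assumptions the totals land within a few percent of the 144 terabytes and "roughly 500 times" figures quoted above.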

For comparison, the only known exascale system is Frontier at Oak Ridge National Laboratory in Tennessee, which achieved a performance of around 1.2 exaflops on the Linpack benchmark.

It is listed in the most recent ranking of the Top500 supercomputers.

That is more than twice the performance of the Japanese Fugaku system, which came in second.

Essentially, NVIDIA asserts that it has created a supercomputer on par with the most powerful system currently in existence. (Meta is building what it says will be the world's fastest AI supercomputer once it is finished.)

The DGX GH200, according to NVIDIA, features an architecture that provides 10 times more bandwidth than its predecessor, "delivering the power of a massive AI supercomputer with the simplicity of programming a single GPU."

The DGX GH200 has caught the eye of some well-known individuals.

Google Cloud, Meta, and Microsoft are expected to be the first companies to use the supercomputer to evaluate its capabilities for generative AI workloads.

NVIDIA expects DGX GH200 supercomputers to be available by the end of 2023.

The company is also developing Helios, a supercomputer that combines four DGX GH200 systems, and expects it to be operational by the end of the year.

During his keynote, Huang covered a number of other generative AI-related announcements, including one aimed at video games.

The NVIDIA Avatar Cloud Engine (ACE) for Games is a service that developers can use to build custom AI models for speech, conversation, and animation.

According to NVIDIA, ACE for Games is able to "give non-playable characters conversational skills so they can respond to questions with lifelike personalities that evolve."
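The article does not describe ACE's programming interface, so the sketch below is purely illustrative: a minimal example of how a game might pass a player's line and an NPC persona to a hosted dialogue service and read back a reply. The endpoint URL, request fields, and response shape are all hypothetical assumptions, not NVIDIA's documented API.

# Purely illustrative sketch of querying a hosted NPC-dialogue service such as
# ACE for Games. The endpoint, payload fields, and response shape below are
# hypothetical assumptions, not NVIDIA's documented API.
import requests

NPC_DIALOGUE_URL = "https://example.com/npc-dialogue"  # hypothetical endpoint

def ask_npc(npc_persona: str, player_line: str) -> str:
    """Send the player's line plus the NPC's persona and return the NPC's reply."""
    payload = {
        "persona": npc_persona,      # background text that shapes the NPC's personality
        "utterance": player_line,    # what the player just said
    }
    response = requests.post(NPC_DIALOGUE_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["reply"]  # assumed response field

if __name__ == "__main__":
    persona = "Jin, a ramen shop owner who is friendly but worried about rising crime."
    print(ask_npc(persona, "Hey Jin, how are things around here lately?"))

In a real game, the returned reply would then be fed to the speech and animation models that ACE is described as providing, but those steps are omitted here since the article gives no detail about them.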
