Google reveals its newest A.I. supercomputer, says it beats Nvidia


Google headquarters is seen in Mountain View, California, United States on September 26, 2022.

Tayfun Coskun | Anadolu Agency | Getty Images

Google published details about one of its artificial intelligence supercomputers on Wednesday, saying it is faster and more efficient than competing Nvidia systems, as power-hungry machine learning models continue to be the hottest part of the tech industry.

While Nvidia dominates the market for AI model training and deployment, with over 90% share, Google has been designing and deploying AI chips called Tensor Processing Units, or TPUs, since 2016.

Google is a major AI pioneer, and its employees have developed some of the most important advances in the field over the past decade. But some believe it has fallen behind in commercializing its inventions, and internally, the company has been racing to release products and prove it hasn't squandered its lead, a "code red" situation inside the company, CNBC previously reported.

AI models and products such as Google's Bard or OpenAI's ChatGPT, powered by Nvidia's A100 chips, require many computers and hundreds or thousands of chips working together to train models, with the computers running around the clock for weeks or months.

On Tuesday, Google said that it had built a system with over 4,000 TPUs joined with custom components designed to run and train AI models. It has been running since 2020, and was used to train Google's PaLM model, which competes with OpenAI's GPT model, over 50 days.

Google's TPU-based supercomputer, called TPU v4, is "1.2x–1.7x faster and uses 1.3x–1.9x less power than the Nvidia A100," the Google researchers wrote.

"The performance, scalability, and availability make TPU v4 supercomputers the workhorses of large language models," the researchers continued.

However, Google's TPU results were not compared with the latest Nvidia AI chip, the H100, because it is newer and was made with more advanced manufacturing technology, the Google researchers said.

Results and rankings from an industrywide AI chip test called MLPerf were released Wednesday, and Nvidia CEO Jensen Huang said the results for the newest Nvidia chip, the H100, were significantly faster than the previous generation.

"Today's MLPerf 3.0 highlights Hopper delivering 4x more performance than A100," Huang wrote in a blog post. "The next level of generative AI requires new AI infrastructure to train large language models with great energy efficiency."

The substantial amount of computer power needed for AI is expensive, and many in the industry are focused on developing new chips, components such as optical connections, or software techniques that reduce the amount of computing power needed.

The power requirements of AI are also a boon to cloud providers such as Google, Microsoft and Amazon, which can rent out computer processing by the hour and offer credits or computing time to startups to build relationships. (Google's cloud also sells time on Nvidia chips.) For example, Google said that Midjourney, an AI image generator, was trained on its TPU chips.



