Intel Data Center and Artificial Intelligence Webinar Liveblog: Roadmap, New Chips and Demos

Rivera thanked the audience for attending the webinar and shared a summary of the key announcements.

In a nutshell, Intel announced that its first-generation efficiency-focused Xeon, Sierra Forest, will come with an incredible 144 cores, offering more core density than AMD’s competing 128-core EPYC Bergamo chips. The company also showed the chip off in a demo. Intel also revealed the first details of its second-generation efficiency Xeon, Clearwater Forest, set to launch in 2025 on the future 18A node.

Intel also provided several demos, including head-to-head AI benchmarks showing a roughly 4X performance advantage for Xeon in a comparison of two 48-core chips against AMD’s EPYC Genoa, and a memory throughput benchmark showing the next-gen Granite Rapids Xeon delivering an incredible 1.5 TB/s of bandwidth in a dual-socket server.

As this is an investor event, the company will now hold a Q&A session focusing on the financial side of the presentation. We won’t cover the Q&A here unless the answers are specifically about hardware, which is our strength. If you’re more interested in the financial side of the conversation, you can see the webinar here.

(Image credit: Intel)

Lavender also outlined the company’s efforts to scale and accelerate development through the Intel Developer Cloud. Intel has 4X the number of users since it introduced the program in 2021. And with that, he handed the baton back to Sandra Rivera.

(Image credit: Intel)

Intel released SYCLomatic to automatically migrate CUDA code to SYCL.
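
To give a sense of what that kind of migration produces, here is a hand-written sketch, not actual SYCLomatic output, of a trivial CUDA vector-add kernel and a roughly equivalent SYCL 2020 version (it should build with a SYCL compiler such as icpx -fsycl; the kernel and names are illustrative):

    // Original CUDA kernel (for reference):
    //   __global__ void add(const float* a, const float* b, float* c, int n) {
    //     int i = blockIdx.x * blockDim.x + threadIdx.x;
    //     if (i < n) c[i] = a[i] + b[i];
    //   }
    // A roughly equivalent SYCL version:
    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
      constexpr int n = 1024;
      std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

      sycl::queue q;                              // default device: a GPU if present, else the CPU
      {
        sycl::buffer<float> ba(a), bb(b), bc(c);  // buffers take their size from the vectors
        q.submit([&](sycl::handler& h) {
          sycl::accessor A(ba, h, sycl::read_only);
          sycl::accessor B(bb, h, sycl::read_only);
          sycl::accessor C(bc, h, sycl::write_only, sycl::no_init);
          h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
            C[i] = A[i] + B[i];                   // same body as the CUDA kernel
          });
        });
      }                                           // buffers copy results back to the vectors here
      std::cout << "c[0] = " << c[0] << "\n";     // expect 3
      return 0;
    }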

(Image credit: Intel)

Intel’s efforts with oneAPI continue, with 6.2 million active developers using Intel tools.

(Image credit: Intel)

Intel is aiming for an open, multi-vendor approach to provide an alternative to Nvidia’s CUDA.

(Image credit: Intel)

Intel is also working to create a software ecosystem for artificial intelligence that rivals Nvidia’s CUDA. This includes taking an end-to-end approach that spans silicon, software, security, privacy, and trust mechanisms throughout the stack.

(Image credit: Intel)

Greg Lavender, Intel’s Senior Vice President and CTO, took to the webcast to discuss the democratization of AI.

(Image credit: Intel)

Rivera touted Intel’s 97% scaling efficiency in a cluster benchmark.

(Image credit: Intel)

CPUs are also good for smaller inference models, but discrete accelerators are essential for larger models. Intel uses its Gaudi accelerators and Ponte Vecchio GPUs to cater to this market. Hugging Face recently said that Gaudi delivered 3X the performance when paired with the Hugging Face Transformers library.

(Image credit: Intel)

Intel is working with content providers to run AI workloads on video streams, and AI-based compute can accelerate, compress, and encrypt data moving across the network, all on a single Sapphire Rapids CPU.

(Image credit: Intel)

Rivera summarized Intel’s extensive efforts in the AI space. Intel predicts that AI workloads will continue to run predominantly on CPUs, with 60% of all models, particularly small and midsize models, running on CPUs. Meanwhile, large models will make up about 40% of workloads and run on GPUs and other custom accelerators.

(Image credit: Intel)

Intel also has a full list of other chips for AI workloads. Intel noted that it will launch 15 new FPGAs this year, a record for its FPGA group. We haven’t heard of any major wins for the Gaudi chips yet, but Intel continues to improve the product line and has next-gen accelerators on its roadmap. The Gaudi 2 AI accelerator is shipping, and Gaudi 3 has been taped out.

(Image credit: Intel)

Rivera also announced Clearwater Forest, the follow-on to Sierra Forest. Intel didn’t share many details beyond the 2025 launch timeframe, but it did say it will use the 18A process for the chip rather than the 20A node that arrives six months earlier. This will be the first Xeon chip built on 18A.

(Image credit: Intel)

Spelman is back to show us all 144 cores of the Sierra Forest chip running in a demo.

(Image credit: Intel)

Intel’s E-core roadmap begins with the 144-core Sierra Forest, which will provide 288 cores in a dual-socket server. Sierra Forest’s 144 cores also outweigh AMD’s 128-core EPYC Bergamo in core count, but it probably doesn’t take the lead in thread count — Intel’s consumer E-cores are single-threaded, and the company didn’t disclose whether its data center E-cores support hyperthreading. AMD has shared that the 128-core Bergamo is hyperthreaded, thus providing a total of 256 threads per socket.

Rivera says Intel powered on the silicon and had an OS booting in less than 18 hours, a company record. This chip is the lead vehicle for the ‘Intel 3’ process node, so its success is paramount. Intel is confident enough that it has already made the chips available to its customers, and it showed all 144 cores in action at the event. Intel is initially targeting certain types of cloud-optimized workloads with its E-core Xeon models, but it expects them to be adopted for a much wider range of use cases once launched.

(Image credit: Intel)

Here we can see the demo.

(Image credit: Intel)

During the webinar, Intel demoed a dual-socket Granite Rapids server delivering a monstrous 1.5 TB/s of DDR5 memory bandwidth, a claimed 80% peak bandwidth increase over currently available server memory. For perspective, Granite Rapids outstrips Nvidia’s 960 GB/s Grace CPU Superchip, which was designed specifically for memory bandwidth, and AMD’s dual-socket Genoa, which has a theoretical peak of 920 GB/s.

Intel achieved this feat by using DDR5-8800 Multiplexer Combined Rank (MCR) DRAM, a new type of bandwidth-optimized memory that Intel invented. Intel has already introduced this memory with SK hynix.
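
As a rough sanity check on that 1.5 TB/s figure, here is a back-of-the-envelope calculation; the 12-channels-per-socket figure is our assumption and was not confirmed in the webinar:

    // Back-of-the-envelope peak-bandwidth estimate for a dual-socket server on
    // DDR5-8800 MCR DIMMs. The 12-channels-per-socket figure is an assumption.
    #include <cstdio>

    int main() {
      constexpr double transfers_per_sec  = 8800e6;  // DDR5-8800: 8800 MT/s per channel
      constexpr double bytes_per_transfer = 8.0;     // 64-bit data path per channel
      constexpr double channels_per_socket = 12.0;   // assumed, not confirmed by Intel here
      constexpr double sockets = 2.0;

      double peak_gbs = transfers_per_sec * bytes_per_transfer *
                        channels_per_socket * sockets / 1e9;   // ~1690 GB/s theoretical
      std::printf("theoretical peak: %.0f GB/s; demoed: 1500 GB/s (%.0f%% of peak)\n",
                  peak_gbs, 100.0 * 1500.0 / peak_gbs);
      return 0;
    }

Under those assumptions, the demoed 1.5 TB/s works out to roughly 89% of the theoretical peak.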

Granite Rapids will arrive in 2024, closely following Sierra Forest. Intel will manufacture this chip on the ‘Intel 3’ process, a greatly improved version of the ‘Intel 4’ process, which lacks the high-density libraries required for Xeon. This is the first P-core Xeon on ‘Intel 3’; it will have more cores than Emerald Rapids, higher memory bandwidth from DDR5-8800 memory, and other as-yet-unspecified I/O innovations. The chip is currently sampling to customers.

(Image credit: Intel)

Rivera showed us the company’s upcoming Emerald Rapids chip. Intel’s next-generation Emerald Rapids is scheduled to launch in Q4 of this year, a compressed timeframe relative to Sapphire Rapids’ launch just a few months ago.

Intel says it will deliver faster performance, better power efficiency and, more importantly, more cores than its predecessor. Intel says it has Emerald Rapids silicon in-house, and validation is progressing as expected, with the silicon meeting or exceeding performance and power targets.

(Image credit: Intel)

Intel’s Sapphire Rapids supports AI-boosting AMX technology, which uses different data types and matrix math to speed up performance. Lisa Spelman ran a demo showing a 48-core Sapphire Rapids outperforming a 48-core EPYC Genoa by 3.9X across a range of AI workloads.
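
For the curious, here is a minimal, hand-rolled sketch of the AMX tile-intrinsics flow that underlies this kind of acceleration. It is not Intel’s demo code: it assumes Sapphire Rapids hardware, Linux, and a compiler with AMX support (GCC 11+/Clang 12+, built with -mamx-tile -mamx-int8), and the tile shapes and values are purely illustrative:

    // Minimal AMX int8 tile multiply sketch (illustrative, not Intel's demo code).
    // Build: g++ -O2 -mamx-tile -mamx-int8 amx_sketch.cpp
    #include <immintrin.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Ask the Linux kernel for permission to use the AMX tile-data state.
    static bool request_amx() {
      constexpr int ARCH_REQ_XCOMP_PERM = 0x1023;
      constexpr int XFEATURE_XTILEDATA = 18;
      return syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA) == 0;
    }

    // 64-byte tile configuration: palette 1, three 16-row x 64-byte tiles.
    struct alignas(64) TileConfig {
      uint8_t  palette_id = 1;
      uint8_t  start_row = 0;
      uint8_t  reserved[14] = {};
      uint16_t colsb[16] = {};
      uint8_t  rows[16] = {};
    };

    int main() {
      if (!request_amx()) { std::puts("AMX not available"); return 1; }

      TileConfig cfg;
      for (int t = 0; t < 3; ++t) { cfg.colsb[t] = 64; cfg.rows[t] = 16; }
      _tile_loadconfig(&cfg);

      // A: 16x64 int8, B: 16x64 int8 (VNNI-packed), C: 16x16 int32 accumulator.
      alignas(64) int8_t  a[16 * 64], b[16 * 64];
      alignas(64) int32_t c[16 * 16] = {};
      std::memset(a, 1, sizeof(a));
      std::memset(b, 2, sizeof(b));

      _tile_loadd(1, a, 64);      // load A into tile 1 (stride 64 bytes per row)
      _tile_loadd(2, b, 64);      // load B into tile 2
      _tile_zero(0);              // clear the int32 accumulator tile
      _tile_dpbssd(0, 1, 2);      // C += A * B, int8 inputs with int32 accumulation
      _tile_stored(0, c, 64);     // write the accumulator back to memory
      _tile_release();

      std::printf("c[0] = %d\n", c[0]);  // 64 products of 1*2 summed -> 128
      return 0;
    }

In practice, the frameworks used in demos like this typically reach AMX through libraries such as oneDNN rather than hand-written intrinsics.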

(Image credit: Intel)

Intel launched Sapphire Rapids with over 450 design wins and over 200 designs shipping from major OEMs. Intel claims a 2.9X productivity increase from generation to generation.

(Image credit: Intel)

Intel has divided the Xeon roadmap into two lines, one with P-cores and the other with E-cores, each with its own advantages. P-core (Performance core) models are traditional Xeon data center processors built solely with cores that deliver the full performance of Intel’s fastest architectures. These chips are designed for peak per-core and AI workload performance. They also come with accelerators like the ones we saw with Sapphire Rapids.

The E-core (Efficiency core) lineup consists solely of chips with the smaller efficiency cores we’ve seen in Intel’s consumer chips, which eschew certain features like AMX and AVX-512 to offer higher density. These chips are designed for high energy efficiency, core density, and overall efficiency, which is attractive to hyperscalers. Intel’s Xeon processors won’t have any models with both P-cores and E-cores on the same silicon, so these are distinct families with different use cases.

The E-cores are designed to fend off Arm competitors.

(Image credit: Intel)

Intel is working to develop a broad portfolio of software solutions to complement its chip portfolio.

(Image credit: Intel)

Rivera explained that Intel has typically looked through the lens of CPUs to measure its total data center revenue, but it has now expanded its scope to include different types of compute, such as GPUs and dedicated accelerators.

(Image credit: Intel)

Sandra Rivera took the stage to outline the new data center roadmap, which will address Intel’s efforts in artificial intelligence and the total addressable market (TAM) for its data center business, which Intel values at $110 billion.

(Image credit: Intel)
