Guys, I think Intel might be into mixed martial arts. The company walked into MWC and immediately threw a hardware-software jab-cross combo designed to knock out sustainability and artificial intelligence (AI) needs for 5G and edge deployments.

First, the hardware

Let’s start with the hardware front, where there were two key announcements of note. First, Intel announced Sierra Forest, a Xeon processor for 5G core that is coming in 2024 and offers a 2.7x improvement in performance per rack compared to the prior generation of chips.

Marketing mumbo jumbo aside, Intel VP and GM for wireline and core networks Alex Quach told Silverlinings, “This is really about reducing the number of racks, reducing the power needed to bring data centers to the edge.”

But how? Well, according to Quach, Sierra Forest takes advantage of Intel’s Infrastructure Power Manager software, which allows the chip to dynamically shift power states (that is, active vs. idle) based on traffic needs at any given time. That’s a change from the historical practice of setting chips to always run at maximum power.

Quach said that until now, performance has been the number one criterion for service providers. But with electricity costs and constraints rising, and more servers making their way to the edge, operators are increasingly focused on reducing power.

Enter Power Manager. The capability has been available on Intel’s chips since Ice Lake’s release in 2021, but Quach said Intel has since worked to continually improve power performance by reducing the time it takes to change states. Chips like Sierra Forest can now drop from a 2.1 GHz clock speed to 800 MHz in nanoseconds, without stepping through intermediate frequencies, he said. When paired with commercial application software from major telco vendors, Power Manager can deliver up to 30% power savings in the core network on a runtime basis.
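Intel hasn’t published Power Manager’s internals, but the behavior Quach describes is essentially a traffic-aware frequency governor. Here’s a toy sketch of that idea: the 2.1 GHz and 800 MHz figures come from the article, while the function names, utilization thresholds and the Linux cpufreq sysfs path are illustrative assumptions, not Intel’s actual implementation.

```python
# Toy traffic-aware frequency governor. Only the 2.1 GHz / 800 MHz clocks are
# taken from the article; thresholds and the utilization API are hypothetical.

ACTIVE_KHZ = 2_100_000   # 2.1 GHz "active" clock cited in the article
IDLE_KHZ = 800_000       # 800 MHz low-power clock cited in the article

def target_frequency_khz(utilization: float, high: float = 0.6, low: float = 0.2) -> int:
    """Pick a clock target from current packet-processing utilization (0.0-1.0)."""
    if utilization >= high:
        return ACTIVE_KHZ    # bursty traffic: run at full clock
    if utilization <= low:
        return IDLE_KHZ      # quiet period: drop to the low-power state
    # In between, scale linearly; real silicon can reportedly jump between
    # states in nanoseconds without incremental steps.
    span = (utilization - low) / (high - low)
    return int(IDLE_KHZ + span * (ACTIVE_KHZ - IDLE_KHZ))

def apply_frequency(cpu: int, khz: int) -> None:
    """On Linux, the userspace cpufreq governor exposes a per-CPU setpoint."""
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_setspeed"
    with open(path, "w") as f:
        f.write(str(khz))
```

A control loop sampling utilization every few milliseconds would call `target_frequency_khz` and push the result to each core, which is the rough shape of what "dynamically shifting power states based on traffic" implies.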

Couple that with the addition of way, WAY more processing cores (288 in Sierra Forest vs. 52 in the 4th-generation processor) and that’s where all of Sierra Forest’s performance and power improvements are coming from.

The second hardware announcement was around Granite Rapids-D, Intel’s next-gen processor with built-in AI integration and Intel virtual radio access network (vRAN) boost acceleration that is due out in 2025. The company said it is currently sampling silicon and both Samsung and Ericsson are working with the chip. We’ll be tracking this chip as more information becomes available about what it means for the vRAN space.

Inferencing at the edge

On the software side, Intel brought it all together by taking the wraps off its new Edge Platform, which is… kind of exactly what it sounds like.

To be more precise, it’s a modular software abstraction layer that sits on servers to enable infrastructure management, workload optimization and AI development capabilities via a dashboard. The AI capabilities specifically are based on Intel’s OpenVINO toolkit and geared toward inferencing use cases rather than model training.
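For a sense of what OpenVINO-based inferencing looks like in practice, here is a minimal sketch using the toolkit’s Python API. The model path, device target and input shape are placeholders for illustration; the article doesn’t describe the Edge Platform’s actual programming interface.

```python
# Hypothetical single-inference pass with Intel's OpenVINO toolkit, which the
# article says underpins the Edge Platform's AI capabilities. "model.xml" and
# the 224x224 input shape are placeholder assumptions.

def main() -> None:
    import numpy as np
    from openvino import Core  # pip install openvino

    core = Core()
    model = core.read_model("model.xml")         # OpenVINO IR: .xml + .bin pair
    compiled = core.compile_model(model, "CPU")  # target the edge box's CPU

    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    result = compiled([batch])                   # one synchronous inference call
    print(result[compiled.output(0)])

# Call main() on a machine with OpenVINO and an IR model available.
```

The point of the sketch is the workflow: a pre-trained model is loaded and compiled for whatever hardware the edge box has, then fed live data, which is why inferencing (not training) is the fit for constrained edge sites.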

If this all sounds familiar, that’s because Intel teased the platform in September 2023 back when it was called Project Strata.

Pallavi Mahajan, Intel corporate VP and GM of its Network and Edge group, noted during a press briefing that deploying AI in the cloud is very different from doing so at the edge. That’s because there’s a lot of diversity of hardware, software and operating systems at the edge, with limited power and space. The platform, she added, is designed to address all those challenges to make edge AI deployments easier for service providers and enterprises alike.

So, what’s the big deal?

Well, Ron Westfall, a research director at Futurum Group, said the platform directly addresses “major pain points in the adoption and scaling of AI. This aligns with the industry-wide trend of enterprise-managed data becoming increasingly processed outside the data center or cloud.”

“The ability to process data locally can assure that organizations fulfill the laws and regulations pertaining to keeping generated data private as well as protecting all-valuable IP,” Westfall said.

And if you believe analyst Jack Gold, principal at J.Gold Associates, the platform could have a huge impact since “probably 85+% of core networks and RAN system[s] already run on Intel [hardware] now, [and] this is an attractive extension and compatible for making edge systems deployable.” That’s coupled with the fact that Gold believes 80% to 90% of AI workloads will ultimately be inference rather than training.

Basically, data centers are doing the heavy lifting for AI training today, but AI will increasingly move to the edge.

“So, it’s important to have edge systems that are optimized to process inference workloads,” Gold concluded.
