As the telecom industry gets busy with Mobile World Congress preparations, it is worthwhile to take stock of the vRAN/open RAN market and the key emerging trends, especially in compute architecture.

It is becoming clear that companies that provide advanced technologies, feature parity (or superiority) with legacy RAN, and the most power- and cost-efficient solutions will win the race.

vRAN/open RAN – operators’ primary considerations

As operators embark on their journey, it is becoming clear that it will be a two-step process: first, a single-vendor vRAN with open interfaces; second, a multi-vendor open RAN. This approach minimizes the system integration burden and enables a smooth migration. Operators are also realizing that the outlandish cost-saving claims made for open RAN are not true; if anything, the initial deployments will be more expensive. The hope, however, is that without vendor lock-in, the second step might bring cost savings.

Feature parity with established 5G networks is becoming another critical consideration. While initial vRAN/open RAN deployments supported only the simpler 4T4R and 8T8R MIMO configurations, the more advanced 32T32R and 64T64R configurations are beginning to appear. 5G support itself, and many such advanced features, arrived late in vRAN/open RAN. Parity becomes even more important when commercializing Rel. 17 and Rel. 18 features. These bring additional processing complexity, creating another major challenge: power efficiency.

In a recent survey conducted by GSMA Intelligence, energy efficiency emerged as the top consideration for operators, ranking even higher than security.

The reason energy efficiency ranks this high is twofold: first, fundamental operational and financial needs; second, climate-change compulsions. Reducing carbon footprint and becoming carbon neutral is in almost every operator's corporate charter.

The most effective place for operators to save energy is the RAN. GSMA Intelligence estimates that the RAN accounts for a whopping 73% of operators' total power consumption. That is no surprise, as each operator runs hundreds of thousands of base stations. Even a slight improvement in the energy efficiency of base station components can have a significant impact. So, suffice it to say, power consumption is one of the most important considerations, if not the most important, when operators evaluate vRAN/open RAN solutions.
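To illustrate why even a small efficiency gain matters at this scale, here is a rough, back-of-the-envelope sketch in Python. The site count, per-site power draw, and efficiency gain are assumed figures chosen purely for illustration; only the 73% RAN share comes from the GSMA Intelligence estimate above.

```python
# Illustrative back-of-the-envelope calculation. All figures are assumptions,
# except the 73% RAN share cited from GSMA Intelligence in the text.
SITES = 100_000            # assumed base station count for a large operator
AVG_SITE_POWER_KW = 2.0    # assumed average power draw per site, in kW
RAN_SHARE = 0.73           # RAN share of total network power (GSMA Intelligence)
EFFICIENCY_GAIN = 0.05     # assumed 5% efficiency improvement in RAN silicon

HOURS_PER_YEAR = 24 * 365

ran_energy_gwh = SITES * AVG_SITE_POWER_KW * HOURS_PER_YEAR / 1e6
savings_gwh = ran_energy_gwh * EFFICIENCY_GAIN
total_network_gwh = ran_energy_gwh / RAN_SHARE

print(f"RAN energy per year:        {ran_energy_gwh:,.0f} GWh")
print(f"Total network energy:       {total_network_gwh:,.0f} GWh")
print(f"Saved by a 5% RAN gain:     {savings_gwh:,.0f} GWh/year")
```

Under these assumed numbers, a 5% gain in the RAN alone frees up roughly 88 GWh per year for a single operator, which is why component-level efficiency gets so much attention.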

The best compute architecture

One of the key things that has held back vRAN and open RAN for this long, while the core network has been virtualized for years, is the critical and demanding nature of RAN workloads. The complexity lies in Layer-1 (aka physical layer, or PHY) processing.

vRAN/open RAN comprises three parts. First is the Central Unit (CU), which manages Radio Resource Control and Packet Data Convergence Protocol functions. Second is the Distributed Unit (DU), which manages Radio Link Control, Medium Access Control, and PHY. Third is the Radio Unit (RU), which manages digital-to-analog conversion, MIMO antenna management, and other functions.

From a protocol perspective, the CU manages Layer-3 and part of Layer-2. The DU manages part of Layer-2 and part of Layer-1. The RU manages the remaining portion of Layer-1. The complexity, latency constraints, and processing needs increase drastically as you move down from Layer-3 to Layer-1. In fact, Layers 2 and 1 together consume almost 90% of the RAN's processing power.

The crucial Layer-1 functionality is divided into two parts: Low-Phy and High-Phy. Low-Phy is managed by the Radio Unit (RU). High-Phy, which includes the most demanding functions such as demodulation, beamforming, channel coding, and Forward Error Correction (FEC), is managed by the DU. These functions are highly latency-sensitive and consume a significant portion of the 90% processing share mentioned above.
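To keep the split straight, the short sketch below summarizes the functional decomposition described above as a simple Python data structure. It is purely illustrative and glosses over the finer points of the formal 3GPP/O-RAN split options.

```python
# Illustrative summary of the vRAN/open RAN functional split described above.
# Simplified; the formal 3GPP/O-RAN split definitions carry more nuance.
functional_split = {
    "CU": {  # Central Unit
        "layers": ["Layer-3", "upper Layer-2"],
        "functions": ["RRC", "PDCP"],
    },
    "DU": {  # Distributed Unit
        "layers": ["lower Layer-2", "High-Phy (upper Layer-1)"],
        "functions": ["RLC", "MAC", "demodulation", "beamforming",
                      "channel coding", "FEC"],
    },
    "RU": {  # Radio Unit
        "layers": ["Low-Phy (lower Layer-1)", "RF"],
        "functions": ["digital-to-analog conversion", "MIMO antenna management"],
    },
}

# Roughly 90% of RAN processing sits in Layer-2 and Layer-1 (the DU and RU).
for unit, detail in functional_split.items():
    print(unit, "->", ", ".join(detail["functions"]))
```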

Regarding compute, there is consensus on using dedicated, optimized silicon, such as application-specific integrated circuits (ASICs), for the RU. The common industry perception was that general-purpose compute, often called COTS (Commercial Off The Shelf) servers based on x86 or Arm processors, is good enough for the CU and DU. However, big data center operators such as hyperscalers and large enterprises realized long ago that generic compute is highly inefficient for complex networking and security workloads, such as IPSec and encryption. Such functions are typically offloaded to optimized Accelerators known as DPUs or SmartNICs. As the industry starts on the vRAN journey, it is similarly beginning to realize that COTS servers are also inefficient for High-Phy.

High-Phy is where the 5G rubber hits the road; it is the essence of 5G radio technology. High-Phy functions make or break vRAN/open RAN. RAN vendors spend years, if not decades, optimizing the performance of these functions, and these functions also give vendors an opportunity to differentiate. It is therefore straightforward and logical that purpose-built, optimized Accelerators are an absolute necessity for this critical workload.

So, the DU will be a mix of a COTS server (aka the host processor) running Layer-2 and one or more Accelerators running High-Phy (and networking functions), connected through the standard PCIe interface widely used in the IT industry. This setup has many other advantages. Being optimized for specific radio workloads, Accelerators are far more energy efficient, require far less cooling, and have a smaller PCB footprint. They are easy and cost-effective to scale: for example, you can add more Accelerators, rather than expensive COTS processors, to increase capacity or introduce new features such as URLLC. The PCIe interface ensures full interoperability and eliminates vendor lock-in, be it for Accelerators or host processors. It also allows Accelerator vendors to differentiate. Finally, the Accelerator + PCIe + host processor setup, in the true spirit of open RAN, offers the best-of-breed combination: the best Accelerators from radio experts and COTS processors from generic compute experts.
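A minimal sketch of that scaling argument is below, assuming hypothetical per-card and per-host capacity figures (none of these numbers come from any vendor specification). The point is simply that capacity grows by adding Accelerator cards until a host's PCIe slots or Layer-2 capacity run out.

```python
import math

# Hypothetical capacity figures, for illustration only.
CELLS_PER_ACCELERATOR = 16   # assumed High-Phy capacity of one PCIe Accelerator card
PCIE_SLOTS_PER_SERVER = 4    # assumed free PCIe slots per COTS host
L2_CELLS_PER_HOST = 64       # assumed Layer-2 capacity of one COTS host

def du_servers_needed(target_cells: int) -> tuple[int, int]:
    """Return (hosts, Accelerator cards) needed to serve a target cell count.

    Capacity is added primarily by inserting Accelerator cards; a new host
    is only needed when its PCIe slots or Layer-2 capacity are exhausted.
    """
    cards = math.ceil(target_cells / CELLS_PER_ACCELERATOR)
    hosts_for_cards = math.ceil(cards / PCIE_SLOTS_PER_SERVER)
    hosts_for_l2 = math.ceil(target_cells / L2_CELLS_PER_HOST)
    return max(hosts_for_cards, hosts_for_l2), cards

for cells in (32, 64, 128):
    hosts, cards = du_servers_needed(cells)
    print(f"{cells} cells -> {hosts} host(s), {cards} Accelerator card(s)")
```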

Theoretically, Accelerators could be FPGAs or optimized processors (ASICs or standard/semi-custom baseband processors). However, because of the specialized nature of the workload, it makes sense to use optimized processors rather than FPGAs, from power, performance, and PCB-space considerations. Of course, not all Accelerators are created equal. They differ in terms of the number of Layer-1 functions supported and their configuration.

Accelerators can be deployed in look-aside or in-line configurations. As the names suggest, in the look-aside configuration, the Accelerator acts as a side gig, relying on the host processor to communicate with Low-Phy. In the in-line configuration, the Accelerator is in charge and communicates directly with Low-Phy. Given that the functions the Accelerator runs are so critical and latency-sensitive, it is a no-brainer to use the in-line configuration. Marvell's recently announced OCTEON 10 Fusion is an excellent example of an optimized, in-line Accelerator.
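The difference between the two configurations can be sketched as two data paths. The Python below is a simplified illustration (the function names and the PCIe-crossing counter are hypothetical, not taken from any real vRAN stack); it shows why look-aside keeps the host on the latency-critical path and adds extra trips across the PCIe bus, while in-line hands off only the processed data.

```python
# Simplified illustration of the uplink data path in the two configurations.
# All names are hypothetical; real stacks involve many more stages than shown.

def look_aside_uplink(samples):
    """Host terminates fronthaul and bounces latency-critical work to the Accelerator."""
    pcie_crossings = 0
    data = host_receive_fronthaul(samples)         # host CPU terminates fronthaul from the RU
    pcie_crossings += 1                            # host -> Accelerator (e.g., FEC / channel decode)
    data = accelerator_high_phy(data)
    pcie_crossings += 1                            # Accelerator -> host with decoded data
    return host_layer2(data), pcie_crossings

def in_line_uplink(samples):
    """Accelerator terminates fronthaul and runs the High-Phy chain itself."""
    pcie_crossings = 0
    data = accelerator_receive_fronthaul(samples)  # Accelerator talks to Low-Phy directly
    data = accelerator_high_phy(data)
    pcie_crossings += 1                            # single hand-off of decoded data to Layer-2
    return host_layer2(data), pcie_crossings

# Placeholder stubs so the sketch runs end to end.
def host_receive_fronthaul(x): return x
def accelerator_receive_fronthaul(x): return x
def accelerator_high_phy(x): return x
def host_layer2(x): return x

for name, path in (("look-aside", look_aside_uplink), ("in-line", in_line_uplink)):
    _, crossings = path(samples=[0] * 8)
    print(f"{name}: {crossings} PCIe crossing(s) on the latency-critical path")
```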

Closing thoughts

It’s pretty clear that a combination of dedicated, optimized, in-line Accelerators for High-Phy (and networking), ASICs for the RU, and COTS host servers for everything else is the optimal compute configuration for vRAN/open RAN. The question for operators then becomes how to choose the best vendor for their network. That boils down to whoever offers the best performance (processing power, capacity, and power consumption) and advanced features, such as 64T64R massive MIMO, beam steering, beamforming techniques, and carrier aggregation, in a standards-compliant, virtualized architecture. Vendors’ track record and experience in cellular infrastructure also matter.

Obviously, whichever vendor scores high on these parameters will win in the marketplace. The beauty of open RAN is that operators have the luxury of selecting the best of breed, be it COTS servers, Accelerators, cloud providers, or others, for a true multi-vendor open RAN. However, that creates system integration challenges, which is a topic for another day and another article.

Original article can be seen at: