By Prof. Carla Fabiana Chiasserini, Politecnico di Torino, Italy

Prof. Carla Fabiana Chiasserini, member of the PREDICT-6G Consortium on behalf of Politecnico di Torino, Italy, highlights the challenges posed by the ubiquitous use of machine learning and how the PREDICT-6G project is developing solutions to make it sustainable.

Machine Learning (ML) is all around: it is becoming an essential component of many user applications and network services. However, training and executing an ML model may take a significant toll on the computational and network infrastructure due to its high resource demand. Consequently, current implementations of ML operations are heavy energy consumers, which makes the pervasiveness of ML we are witnessing unsustainable.

PREDICT-6G is committed to finding breakthrough approaches to take on and solve this challenge. Specifically, it has tackled the optimal configuration of virtualized radio interfaces and of user applications at the network edge, a setting in which ML can be the problem and the solution at the same time.

Network Function Virtualization (NFV) and edge computing are indeed disrupting the way mobile services can be offered through the mobile network infrastructure. Third parties such as vertical industries and over-the-top players can now partner with mobile operators to reach their customers directly and deliver a plethora of services with substantially reduced latency and bandwidth consumption. Video streaming, gaming, virtual reality, safety services for connected vehicles, and IoT can all benefit from the combination of NFV and edge computing: when implemented through virtual machines or containers in servers co-located with base stations (or nearby), these services enjoy low latency and jitter, while storing and processing data locally.

The combination of NFV, edge computing, and an efficient radio interface, e.g., O-RAN, is therefore a powerful means to offer mobile services with high quality of experience (QoE). However, user applications are not the only ones that can be virtualized: network services such as data radio transmission and reception are nowadays virtualized and implemented through Virtual Network Functions (VNFs) as well, and both types of virtual services, the user's and the network's, may be highly computationally intensive. On the other hand, computational capacity at the network edge is limited. It follows that, in the edge ecosystem, user applications and network services compete for resources; designing automated and efficient resource orchestration mechanisms for such conditions of resource scarcity is therefore critical.
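As a minimal, purely hypothetical illustration of this contention, the Python snippet below checks whether the aggregate CPU demand of co-located user applications and network VNFs fits within a limited edge server budget; when it does not, some orchestration policy must decide how to trim the allocations. All names and figures are invented for the example and are not PREDICT-6G measurements.

# Hypothetical example of compute contention at an edge server.
# Demands are fractions of the server's CPU capacity; values are invented.
edge_cpu_capacity = 1.0

demands = {
    "video_analytics_app": 0.45,   # virtualized user application
    "vran_decoding_vnf": 0.40,     # network service VNF
    "vran_coding_vnf": 0.30,       # network service VNF
}

total = sum(demands.values())
if total <= edge_cpu_capacity:
    print("All services fit:", demands)
else:
    # Naive fallback: scale every allocation proportionally. A real
    # orchestrator would instead make fair, QoE-aware decisions about
    # which services to throttle and by how much.
    scale = edge_cpu_capacity / total
    allocation = {name: round(d * scale, 2) for name, d in demands.items()}
    print("Demand exceeds capacity; proportional scaling gives:", allocation)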

Further, looking more closely at the computational demand of virtualized user applications and at that of network service VNFs, one can notice that they certainly depend on the amount of data each service has to process, but they are also entangled. As an example, consider a user application at the edge and the (de-)modulation and (de-)coding functions in a virtualized radio access network (vRAN). For downlink traffic, the application bitrate determines the amount of data to be processed by the vRAN; conversely, for uplink traffic, the data processed by the vRAN is the input to the application service. A negative correlation, however, may also exist: the more data compression is performed by a user application, the higher its computational demand, but the smaller the amount of data to be transmitted and the fewer the computing resources required by the vRAN. In a nutshell, a correlation exists between the amount of data processed or generated by virtual applications at the edge and by network service VNFs, and such correlation can be positive or negative depending on the type of VNFs involved. Experimental tests performed within PREDICT-6G clearly show such correlation.
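To make this entanglement concrete, the toy Python sketch below models how an edge application's compression effort affects both its own compute demand and that of the vRAN VNFs. All coefficients and function names are hypothetical placeholders rather than PREDICT-6G measurements: raising the compression effort increases the application's CPU demand while reducing the traffic the vRAN has to process, reproducing the negative correlation described above, while both demands grow with the application bitrate.

# Illustrative-only model of the coupling between an edge application's
# compute demand and a vRAN's compute demand. Coefficients are hypothetical;
# real values would come from profiling the deployed VNFs.

def app_cpu_demand(bitrate_mbps: float, compression: float) -> float:
    # CPU share needed by the user application (toy linear model).
    # Higher compression effort raises the application's own demand.
    return 0.02 * bitrate_mbps * (1.0 + 2.0 * compression)

def vran_cpu_demand(bitrate_mbps: float, compression: float) -> float:
    # CPU share needed by the vRAN (de-)modulation/(de-)coding VNFs.
    # Compression shrinks the data actually transmitted, so vRAN demand drops.
    tx_rate_mbps = bitrate_mbps * (1.0 - 0.5 * compression)
    return 0.03 * tx_rate_mbps

if __name__ == "__main__":
    for compression in (0.0, 0.4, 0.8):
        app = app_cpu_demand(bitrate_mbps=50.0, compression=compression)
        vran = vran_cpu_demand(bitrate_mbps=50.0, compression=compression)
        print(f"compression={compression:.1f}  app CPU={app:.2f}  vRAN CPU={vran:.2f}")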

Owing to these complex dynamics, PREDICT-6G has developed a scalable reinforcement learning-based framework for resource orchestration at the edge, which leverages a Pareto analysis for provably fair and efficient decisions. The developed framework, named VERA [1], meets the target values of latency and throughput for over 96% of the observation period, and its scaling cost is 54% lower than that of a traditional, centralized framework based on deep Q-networks.
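As a purely illustrative aid, and not a description of the actual VERA algorithm published in [1], the sketch below shows what a Pareto analysis over candidate resource allocations can look like: only the non-dominated configurations, i.e., those for which no other candidate is at least as good in every metric and strictly better in at least one, are retained, and a learning agent can then choose fairly among them. The candidate values are made up.

# Minimal sketch of Pareto-dominance filtering over candidate allocations.
# This is NOT the VERA algorithm; it only illustrates the idea of keeping
# the non-dominated configurations among which an RL agent could then pick.

from typing import List, Tuple

# Each candidate: (latency_ms, negative_throughput_mbps), both to be minimized.
Candidate = Tuple[float, float]

def dominates(a: Candidate, b: Candidate) -> bool:
    # a dominates b if it is no worse in every metric and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    # Return the non-dominated candidates.
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

if __name__ == "__main__":
    # Hypothetical (latency, -throughput) pairs for four allocation choices.
    options = [(12.0, -80.0), (9.0, -60.0), (15.0, -85.0), (9.0, -75.0)]
    print(pareto_front(options))   # (9.0, -60.0) is dominated and dropped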

[1] S. Tripathi, C. Puligheddu, S. Pramanik, A. Garcia-Saavedra and C. F. Chiasserini, “Fair and Scalable Orchestration of Network and Compute Resources for Virtual Edge Services,” in IEEE Transactions on Mobile Computing, doi: 10.1109/TMC.2023.3254999.

If you want to stay up to date on our project, subscribe to our newsletter and follow us on Twitter and LinkedIn!