Virtual Open Systems Scientific Publications
vFPGAmanager: A Hardware-Software Framework for Optimal FPGA Resources Exploitation in Network Function Virtualization
Event
European Conference on Networks and Communications (EuCNC 2019), Valencia, Spain, June 18-21.
Keywords
acceleration, FPGA, virtualization, vFPGAmanager, cloud, computing, SR-IOV, network functions virtualization, virtual network functions.
Authors
Spyros Chiotakis, Sébastien Pinneterre, Michele Paolino.
Acknowledgement
This work was partly funded by the European Commission under the European Union's Horizon 2020 program, grant agreement number 761557, Next Generation Platform as a Service (NGPaaS) project. The paper solely reflects the views of the authors. The Commission is not responsible for the contents of this paper or any use made thereof.
Abstract
The emergence of Network Function Virtualization (NFV) has turned dedicated hardware devices (routers, firewalls, wide area network accelerators) into virtual machines (VMs). However, the execution of these Virtualized Network Functions (VNFs) on general purpose hardware results in lower performance and higher energy consumption. Field Programmable Gate Arrays (FPGAs) are a promising solution to improve performance and energy consumption thanks to their reconfigurable fabric and acceleration capabilities. FPGA capabilities are generally made available to computing systems through the PCIe bus. Today, thanks to the SR-IOV technology, a single PCIe device can offer multiple virtual interfaces (Virtual Functions, or VFs) to the applications (or VMs). This enables static allocation of FPGA portions to each virtual machine, achieving the best acceleration performance thanks to the direct communication of the guest applications with the FPGA fabric.
However, SR-IOV has a strong impact on FPGA resource usage. In fact, the number of available VFs (i.e., interfaces for connection to the VMs) is limited. This is a problem, especially in containerized or cloud native environments where thousands of guests need to run concurrently. In addition, static assignment of FPGA resources might not always be the best strategy, especially when the VNFs attached to the VFs underutilize the FPGA resources. In that case, a VNF that rarely uses the FPGA prevents a new VNF from accessing the accelerators.
This paper aims to address these limitations by assigning one FPGA VF to multiple VNFs. To do this, we extended the FPGA Virtualization Manager (vFPGAmanager), an SR-IOV based acceleration framework for FPGAs. We measured the CPU-FPGA throughput of two containers using different hardware accelerators through the same VF. The results show a penalty of 30% when two VNFs request acceleration at the same time (worst case scenario). In the best case, i.e., when the VNFs do not request FPGA acceleration concurrently, performance is close to native.
Introduction
Network operators came together in 2012 to create a roadmap to address the accumulation of proprietary, dedicated hardware appliances needed to implement basic network functions (routers, firewalls, wide area network acceleration). The idea behind this proposal was to virtualize these hardware devices and replace them with Virtual Network Functions (VNFs). This idea, standardized as Network Function Virtualization, allows vendors to reduce equipment costs, scale their software and accelerate time to market, since there is no dependency on dedicated hardware.
In this context, hardware accelerators such as Field Programmable Gate Arrays (FPGAs) are of utmost importance to improve power consumption and provide carrier grade performance. Today, FPGAs are generally connected to server systems through PCIe to achieve high throughput and sustain high-speed data streams between the accelerators implemented in the FPGA and the server applications executed on the CPU. In virtualized environments, the Single Root I/O Virtualization (SR-IOV) standard is used on top of PCIe to provide VMs with direct access to the FPGA card and achieve close to native performance. To do this, a single PCIe device exposes multiple virtual interfaces called Virtual Functions (VFs), which can be used independently by each VM to directly access the hardware, thus minimizing the virtualization overhead.
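As an illustration of how such VFs are created on a Linux host, the following minimal sketch enables a number of VFs on an SR-IOV capable PCIe device through the standard sysfs interface; the PCI address used here is a hypothetical placeholder and is not taken from the paper.

```python
# Minimal sketch: enabling SR-IOV Virtual Functions on a Linux host
# through the standard sysfs interface. The PCI address below is a
# hypothetical placeholder, not the device used in the paper.
from pathlib import Path

PF_ADDR = "0000:03:00.0"  # hypothetical Physical Function address
PF_SYSFS = Path("/sys/bus/pci/devices") / PF_ADDR

def enable_vfs(num_vfs: int) -> None:
    """Request num_vfs Virtual Functions from the Physical Function."""
    total = int((PF_SYSFS / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"device supports at most {total} VFs")
    # Writing to sriov_numvfs asks the PF driver to create the VFs; each
    # VF then appears as an independent PCIe function that can be passed
    # through to a VM or container.
    (PF_SYSFS / "sriov_numvfs").write_text(str(num_vfs))

def list_vfs() -> list[str]:
    """Return the PCI addresses of the VFs created for this PF."""
    return sorted(link.resolve().name for link in PF_SYSFS.glob("virtfn*"))

if __name__ == "__main__":
    enable_vfs(4)
    print(list_vfs())
```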
However, SR-IOV limits the interaction between VNFs and accelerators to a one-to-one static assignment. This limits the maximum number of accelerated VNFs to the number of VFs available in the system (e.g., Xilinx UltraScale+ FPGAs support 252 VFs, Intel Arria 10 FPGAs support 2048 VFs). This limitation is even more important today, with the cloud native trend that sees hypervisors used together with unikernels and containers to architect each VNF as thousands of microservices running concurrently.
In addition to this, static assignment of FPGA resources might not always be the best strategy, especially when the VNFs that are attached to VFs underutilize the FPGA resource. In that case, a VNF that rarely uses the FPGA prevents a new VNF from accessing the accelerators.
This paper aims to address these limitations by assigning one FPGA VF to multiple VNFs. To do this, we extended the FPGA Virtualization Manager (vFPGAmanager), an SR-IOV based acceleration framework for FPGAs. We measured the CPU-FPGA throughput of two containers using different hardware accelerators through the same VF. The results show a penalty of 30% when two VNFs request acceleration at the same time (worst case scenario). In the best case, i.e., when the VNFs do not request FPGA acceleration concurrently, performance is close to native.
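To make the sharing idea concrete, the snippet below is a purely illustrative sketch, not the vFPGAmanager implementation: two VNFs that share a single VF must serialize their acceleration requests, so a waiting cost appears only when both request the FPGA at the same time, which mirrors the best-case/worst-case behaviour described above.

```python
# Purely illustrative sketch (not the vFPGAmanager implementation):
# two VNFs sharing a single VF must serialize their acceleration
# requests; the wait time models the worst-case penalty observed when
# both VNFs request the FPGA concurrently.
import threading
import time

vf_lock = threading.Lock()  # models exclusive access to the shared VF

def accelerate(vnf_name: str, job_ms: float) -> None:
    """Run one acceleration request of job_ms milliseconds on the shared VF."""
    start = time.perf_counter()
    with vf_lock:                      # only one VNF drives the VF at a time
        time.sleep(job_ms / 1000.0)    # stands in for the CPU-FPGA transfer
    waited = time.perf_counter() - start - job_ms / 1000.0
    print(f"{vnf_name}: waited {waited * 1000:.1f} ms for the VF")

# Worst case: both VNFs request acceleration at the same time.
t1 = threading.Thread(target=accelerate, args=("VNF-A", 10))
t2 = threading.Thread(target=accelerate, args=("VNF-B", 10))
t1.start(); t2.start(); t1.join(); t2.join()
```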
The rest of this paper is organized as follows: Section II provides the motivation for our experiment. Section III covers background knowledge about the PCIe SR-IOV standard and the vFPGAmanager implementation, while Section IV shows the vFPGAmanager extensions developed to enable VF sharing. Section V presents an overview of the architecture of our experiment from software down to hardware, the benchmark results and the FPGA device utilization. Section VI discusses related work and, finally, Section VII concludes the paper with future directions.