Virtual Open Systems Scientific Publications
Event
The 27th edition of the European Conference on Networks and Communications (EuCNC 2018) was held in Ljubljana, Slovenia, on June 18-21. EuCNC is a successful series of technical conferences in the field of telecommunications, sponsored by IEEE ComSoc and EURASIP and financially supported by the European Commission, focusing on communication networks and systems and reaching out to services and applications.
Contributed slides presentation
The slides presented at this conference are made publicly available by Virtual Open Systems.
Keywords
Micro-VM, container, Docker, virtual machine, unikernel, KVM, Network Function Virtualization, virtualization benchmark.
Authors
Ashijeet Acharya, Jérémy Fanguède, Michele Paolino and Daniel Raho.
Acknowledgment
This research work has been supported by the European Union's Horizon 2020 research and innovation programme, project Next Generation Platform as a Service (NGPaaS), under Grant Agreement No. 761557. This work reflects only the authors' view and the European Commission is not responsible for any use that may be made of the information it contains.
Abstract
The Network Functions Virtualization paradigm has emerged as a new concept in networking which aims at cost reduction and ease of network scalability by leveraging virtualization technologies and commercial off-the-shelf hardware to decouple the software implementation of network functions from the underlying hardware.
Recently, lightweight virtualization techniques have emerged as efficient alternatives to traditional Virtual Network Functions (VNFs) developed as VMs. At the same time ARMv8 servers are gaining traction in the server world, mostly because of their interesting performance per watt characteristics.
In this paper, the CPU, memory and Input/Output (I/O) performance of such lightweight techniques is compared with that of classic virtual machines on both x86 and ARMv8 platforms. In particular, we selected KVM as the hypervisor solution, Docker and rkt as container engines, and Rumprun and OSv as unikernels. On x86, our results for CPU- and memory-bound workloads show slightly better performance for containers and unikernels, while both perform almost twice as well as KVM for network I/O operations. This highlights performance issues of the Linux tap bridge with KVM, which can easily be overcome by using a user space virtual switch such as VOSYSwitch or OVS/DPDK. On ARM, KVM and containers produce similar results for CPU and memory workloads, with the exception of network I/O operations, where KVM proves to be the fastest. We also discuss several shortcomings of unikernels on ARM, which explain their lack of stable support for this architecture.
Introduction
Virtualization has matured over the years in cloud-based environments, providing an efficient way to use the available computing resources and improve the quality of services. Recently, it has become a vital component in the rise of new networking solutions such as Software Defined Networking (SDN) and Network Functions Virtualization (NFV). Current network operators' networks are composed of diverse network functions and a variety of proprietary hardware appliances. Often, launching a new network service requires complex new hardware design, coupled with the space and energy costs of its installation.
NFV tries to address these issues by using standard virtualization technologies to decouple network hardware from the software implementation of network functions, enabling faster network service provisioning. NFV implements these network functions in software as Virtual Network Functions (VNFs), which can run on a wide range of standard industry servers and can easily be ported or instantiated at different locations without installing specialized hardware equipment at every site.
In order to keep up with the demands of the market, virtualization technologies such as hypervisors have seen major improvements on both x86 and ARM, as both architectures have introduced hardware extensions to support virtualization. However, VMs have not proved efficient in specific use cases where smooth scaling and application density come into play. These shortcomings are mostly due to the large size of the guest OS, which wastes a huge amount of system resources. To overcome this, new virtualization techniques such as container-based virtualization and unikernels have emerged as efficient lightweight alternatives to traditional VMs for small or medium-sized application deployments.
Containers, in contrast to VMs, avoid deploying a full-blown copy of an operating system (OS) for each instance and instead share the same base OS. Every application running inside a container is provided with the set of binaries and libraries required for its execution. Unikernels focus on reducing the size of traditional VMs by eliminating unnecessary components: they are built by combining the specialized application image with only the OS software parts needed to make it run as intended.
Containers and unikernels both benefit from their small size through faster boot times, a reduced attack surface, smaller resource footprints and more room for optimization. Figure 1 highlights these architectural differences among the three technologies.
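The three deployment models above can be illustrated with typical launch commands. This is a minimal sketch, not the paper's exact benchmark setup: image names, memory sizes and binary names (guest.img, myapp.bin) are placeholders chosen for illustration.

```shell
# Full VM under KVM/QEMU: boots its own guest kernel and full OS image
qemu-system-x86_64 -enable-kvm -m 512 \
    -drive file=guest.img,format=raw -nographic

# Container under Docker: no guest kernel, shares the host OS kernel;
# only the application's binaries and libraries ship in the image
docker run --rm -it alpine /bin/sh

# Unikernel under Rumprun: the application is linked with a minimal
# library OS and booted directly as a KVM guest
rumprun kvm -i -M 256 myapp.bin
```

The progression from top to bottom trades generality for footprint: the VM carries a full OS, the container carries only user-space dependencies, and the unikernel carries only the OS components its single application needs.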
In this paper we compare the performance of hypervisors, containers and unikernels on the x86 and ARMv8 architectures. Unikernel benchmarks do not appear in the ARMv8 results, however, because unikernels are not yet mature enough on this architecture. The hypervisor we selected is the well-known KVM, the container engines we chose are Docker and rkt, and Rumprun and OSv are our unikernel candidates. This study focuses on how these solutions compare to each other in terms of system- and network-I/O-specific metrics, which prove vital in the context of NFV.