Artifacts for Developing Performance-Aware NFV Applications Using DPDK
A summarized analysis of the key performance technologies in NFV
Delivering end-to-end solutions
What Does the Customer Really Need?
Recently in customer workshops there have been many queries asking us to quantify the performance of the NFVI solution and to position our cloud against other market propositions such as Red Hat, Mirantis and CloudBand.
To craft the required solution, we decided to persuade the customer by focusing on devising the best-fit solution instead of just showing product functions. Should we use DPDK, EVS, or SR-IOV? How do we resolve the compromise of not using the OVS vSwitch while still gaining all the advantages of the cloud?
What Are the Options, and What Does Their Analysis Show?
First we will explain what DPDK can achieve, and then what it cannot. The DPDK architecture shown below is taken from the OpenStack Icehouse documentation. The kernel is bypassed by introducing an abstraction layer that makes it possible for user space, i.e. the instance workloads, to reach the NIC through user-space drivers, as seen below.
Traditionally it is the kernel that processes packets in the OS: a packet must pass through all the layers of the OSI stack before it can reach the NIC.
A process's virtual memory comprises user space (where the NFV program runs) and kernel space (which controls and services it through the system-call API, historically implemented via interrupts), and every crossing between the two costs processing time. Similarly, default pages are 4 KB in size, while huge pages can be as large as 1 GB, which is exactly what DPDK uses. Bigger pages mean more TLB hits and less time spent replacing pages from local disk.
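To make the huge-page benefit concrete, the arithmetic below compares how much memory a fixed number of TLB entries can cover with 4 KB pages versus 1 GB huge pages. This is a conceptual sketch, not DPDK code, and the 64-entry TLB size is an illustrative assumption, not a value from any specific CPU.

```python
# Conceptual sketch: TLB reach with 4 KB pages vs 1 GB huge pages.
# The 64-entry TLB size is an illustrative assumption.

TLB_ENTRIES = 64

def tlb_reach(page_size_bytes, entries=TLB_ENTRIES):
    """Memory (bytes) addressable without taking a TLB miss."""
    return entries * page_size_bytes

reach_4k = tlb_reach(4 * 1024)      # 64 * 4 KiB  = 256 KiB
reach_1g = tlb_reach(1024 ** 3)     # 64 * 1 GiB  = 64 GiB

print(f"4 KB pages cover {reach_4k // 1024} KiB without a TLB miss")
print(f"1 GB huge pages cover {reach_1g // 1024 ** 3} GiB without a TLB miss")

# Pages needed to map a 2 GiB packet-buffer pool:
buf = 2 * 1024 ** 3
print(buf // (4 * 1024), "small pages vs", buf // 1024 ** 3, "huge pages")
```

With 4 KB pages the same TLB covers only a few hundred kilobytes, so a large packet-buffer pool constantly misses in the TLB; with 1 GB pages the whole pool fits in a handful of entries.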
So, to sum up, DPDK is a complete user-space implementation with the ability to take advantage of compiler optimizations using the full instruction set of the hardware and OS.
So what exactly is DPDK in the IT world?
The Data Plane Development Kit (DPDK) was initially started by Intel under the BSD open-source license, and in 2013 it became an independent open-source community via DPDK.org. DPDK is a data-plane software development kit that can be used to optimize packet processing. It consists of a set of data-plane libraries and network interface controller (NIC) drivers that can be used to develop fast packet-processing applications on x86 platforms, although it is not limited to x86.
The main DPDK libraries are:
- Environment Abstraction Layer (EAL) — interface for gaining access to lower-layer resources; it hides the environment specifics from applications.
- Memory Manager — API for allocating memory; pools are created in huge-page memory, and a ring is used to store free objects. The ring essentially marks which memory blocks, like heap entries, are free and which are occupied.
- Ring Manager — lockless FIFO API for the ring structure, supporting multiple producers and consumers.
- Memory Pool Manager — allocates pools of objects in memory.
- Timer Manager — timer services for DPDK execution units with ability to execute functions asynchronously.
- Poll Mode Drivers (PMD) — drivers for 1 GbE, 10 GbE, and 40 GbE NICs, and for virtualized virtio Ethernet controllers.
- Queue Manager — implements safe lockless queues instead of spinlocks, allowing different software components to process packets while avoiding unnecessary waits.
- Packet Flow Classification — the DPDK flow classifier implements hash-based flow classification to optimize processing.
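As a rough illustration of how the ring and mempool managers fit together, the sketch below pre-allocates fixed-size buffers and uses a FIFO ring of free indices to track which buffers are available. This is conceptual Python, not the real DPDK API, which uses lockless rings allocated in huge-page memory.

```python
from collections import deque

# Conceptual sketch of a DPDK-style mempool backed by a FIFO ring of
# free buffer indices. Illustrative only; real DPDK rings are lockless
# and live in huge-page memory.

class MemPool:
    def __init__(self, n_bufs, buf_size):
        self.bufs = [bytearray(buf_size) for _ in range(n_bufs)]
        self._idx = {id(b): i for i, b in enumerate(self.bufs)}
        self.free_ring = deque(range(n_bufs))   # ring of free buffer indices

    def alloc(self):
        """Take a pre-allocated buffer; None when exhausted (no malloc in the fast path)."""
        return self.bufs[self.free_ring.popleft()] if self.free_ring else None

    def free(self, buf):
        """Return a buffer by pushing its index back onto the ring."""
        self.free_ring.append(self._idx[id(buf)])

pool = MemPool(n_bufs=4, buf_size=2048)
pkt = pool.alloc()          # grab a 2 KB buffer from the pool
pool.free(pkt)              # hand it back; no allocator call either way
```

The key design point the sketch shows: all memory is allocated once at startup, so the packet fast path never touches a general-purpose allocator, only ring enqueue/dequeue operations.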
Finally, DPDK uses a polling model on top of huge-page memory, so that packet-processing time is minimized compared to traditional interrupt handling. The trade-off is that the dedicated polling cores run continuously busy, but this model is what enables the kernel bypass.
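The difference between interrupt-driven and poll-mode processing can be sketched as follows. This is a conceptual simulation, not a real PMD (which spins on NIC descriptor rings in C): the loop repeatedly asks the "NIC" for a burst of packets instead of sleeping until an interrupt fires, trading a fully busy core for the elimination of interrupt and context-switch overhead. The 32-packet burst size mirrors a common default for DPDK's `rte_eth_rx_burst()`.

```python
# Conceptual sketch of a DPDK-style poll-mode receive loop.
# rx_burst drains up to BURST packets per call, loosely mimicking
# rte_eth_rx_burst(); a dedicated core would spin on this at 100%.

BURST = 32

def rx_burst(queue, max_pkts=BURST):
    """Poll the queue and return up to max_pkts packets (may be empty)."""
    pkts = queue[:max_pkts]
    del queue[:max_pkts]
    return pkts

def poll_loop(queue, budget):
    """Busy-poll for `budget` iterations; return the number of packets processed."""
    processed = 0
    for _ in range(budget):
        for pkt in rx_burst(queue):      # no interrupt, no syscall
            processed += 1               # process the packet in user space
    return processed

queue = [f"pkt{i}" for i in range(100)]
print(poll_loop(queue, budget=10))       # → 100
```

Note that most iterations return an empty burst when traffic is light; the core keeps spinning regardless, which is exactly the CPU cost the polling model accepts in exchange for low, predictable latency.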
What Results Can We Achieve?
High performance, which can be quantified as follows:
– 600% improvement in PPS per core (256-byte packets)
– 300% improvement in Gbps per core
– 100% improvement in single-trip latency
– Rich network features: VxLAN, security, QoS, Gi-LAN service chaining, and RBAC
What we can achieve is huge, but still not line-rate forwarding. That is why DPDK mainly targets processing performance in virtualized applications such as IMS and the circuit-switched core, not throughput-intensive applications such as the packet core, firewalls and caching. The compromise of not using the OVS switch architecture is solved by introducing service chaining and enabling DC gateways, so that the L3 functionality of OVS is not needed to build the complete solution. This is also the direction the ETSI NFV Phase 2 master white paper recommends to the industry for improving the performance of the environment; see the performance paper on the ETSI website for details.
In fact, over the last year ETSI has increasingly encouraged EPA (Enhanced Platform Awareness) architectures, and OpenStack flavors already include extra_specs key-value pairs that can identify the specific features desired to accelerate performance. Because the intention here is to stay within the OS and driver layers, EPA will be discussed later; realizing it requires the VNF, NFVO, VIM and hypervisor all working together.
Courtesy of OpenStack, docs.openstack.org
Intel White paper
I am a Senior Architect with a passion for architecting and delivering solutions that address business adoption of cloud and automation/orchestration, covering both the telco and IT applications industries.
My work in carrier digital transformation involves architecting and deploying platforms for both telco and IT applications: clouds (both OpenStack and container platforms), carrier-grade NFV, SDN and infrastructure networking, DevOps CI/CD, orchestration (both NFVO and end-to-end service orchestration), and edge and 5G platforms for both consumer and enterprise business. On the DevOps side I am deeply interested in TaaS platforms and the journey towards unified clouds, including transition strategies for a successful migration to the cloud.
Please write to me at firstname.lastname@example.org