Difference: Virt (5 vs. 6)

Revision 6 (2010-02-17) - AnastasiosNanos


Virtualization

Intro

  Virtualization is the art of subdividing resources provided by modern computers in order to achieve maximum performance, isolated execution, maximum
[…]
 try to experiment on Virtualized I/O, especially in Network Device Virtualization.
Storage and Network I/O

 

We believe that modern High Performance Interconnection Networks provide

[…]
 Machines by utilizing commodity hardware and innovative resource-sharing virtualization architectures.
High-performance I/O in Virtualized Environments

Device access in virtualization environments is often realized by specific layers within the hypervisor that allow VMs to interface with the hardware. A common practice for such an interface is a split driver model: these layers host a backend driver, while guest VM kernels host a frontend driver that exposes a generic device API to the guest kernel or user space. The backend exports a communication mechanism to the frontend, along with interrupt routing, page flipping, and shared-memory techniques.
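The interplay of the three pieces above (shared memory, an event channel, and a backend that drains requests toward the real device) can be sketched as a toy simulation. All class and function names here are illustrative, not Xen's actual API:

```python
# Toy sketch of the split driver model: a frontend places requests on a
# shared ring and signals the backend over an event channel; the backend
# drains the ring and hands packets to the real device driver.
from collections import deque

class EventChannel:
    """Stand-in for the hypervisor's inter-domain notification primitive."""
    def __init__(self):
        self.pending = 0
    def notify(self):
        self.pending += 1
    def poll(self):
        fired, self.pending = self.pending > 0, 0
        return fired

class SharedRing:
    """Stand-in for the shared-memory ring both drivers map."""
    def __init__(self):
        self.requests = deque()

class Frontend:
    """Guest-side driver: exposes a generic send() to the guest kernel."""
    def __init__(self, ring, chan):
        self.ring, self.chan = ring, chan
    def send(self, packet):
        self.ring.requests.append(packet)
        self.chan.notify()           # interrupt-like kick to the backend

class Backend:
    """Driver-domain side: drains the ring into the device."""
    def __init__(self, ring, chan, nic):
        self.ring, self.chan, self.nic = ring, chan, nic
    def run_once(self):
        if self.chan.poll():
            while self.ring.requests:
                self.nic.append(self.ring.requests.popleft())

nic_wire = []                        # pretend physical medium
ring, chan = SharedRing(), EventChannel()
fe, be = Frontend(ring, chan), Backend(ring, chan, nic_wire)
fe.send(b"hello")
be.run_once()
print(nic_wire)                      # [b'hello']
```

Note that every packet crosses the guest/driver-domain boundary through the backend; this extra hop is precisely the overhead the rest of the section tries to avoid.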

On the other hand, intelligent interconnects provide NICs that offload protocol processing and achieve fast message exchange, making them suitable for HPC applications. These NICs feature dedicated hardware such as DMA engines, volatile memory, I/O or network processors, and an interface to the physical medium. To avoid the overhead of user-space to kernel-space communication, HPC interconnects often adopt a user-level networking approach: the NIC can export a virtual instance of a network interface directly to an application. Our work focuses on integrating these semantics into the VMM split driver model.
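The user-level networking idea can be modeled roughly as follows. The NIC hands each application its own virtual endpoint, so a send is a direct "doorbell" to the hardware with no shared kernel path in between. The names (`IntelligentNIC`, `open_endpoint`) are hypothetical, not a real driver API:

```python
# Hypothetical sketch of user-level networking: the NIC exports one
# virtual interface ("endpoint") per application, so sends bypass the
# shared kernel path entirely.
class VirtualEndpoint:
    def __init__(self, app_id, nic):
        self.app_id, self.nic = app_id, nic
    def send(self, msg):
        # Direct doorbell: in hardware this would be a write to registers
        # mapped into the application's address space -- no system call.
        self.nic.wire.append((self.app_id, msg))

class IntelligentNIC:
    def __init__(self):
        self.endpoints = {}
        self.wire = []               # pretend physical medium
    def open_endpoint(self, app_id):
        # In hardware: map doorbell registers and DMA-able buffers
        # straight into the requesting application's address space.
        ep = self.endpoints[app_id] = VirtualEndpoint(app_id, self)
        return ep

nic = IntelligentNIC()
ep1 = nic.open_endpoint("app1")
ep2 = nic.open_endpoint("app2")
ep1.send(b"rdma-put")
ep2.send(b"msg")
print(nic.wire)                      # [('app1', b'rdma-put'), ('app2', b'msg')]
```

The contrast with the split driver sketch is the point: here each application talks to its own NIC-provided endpoint, whereas the split model funnels all traffic through one backend.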

To evaluate our framework, we experiment with optimizing the data exchange path between an application running in a Xen VM and a 10Gbps interface. To provide intelligence at the network controller, we apply our approach to a Myrinet Myri-10G NIC and to a custom 10GbE interface consisting of an I/O processor, a number of DMA engines, and a commodity 10GbE NIC.

The split driver model poses difficulties for user-level direct NIC access in VMs. To enable VMM-bypass techniques, we must give VMs direct access to certain NIC resources. The building block of our framework is the backend, which allows the frontend to communicate with the NIC's core driver; the frontend communicates with the backend via an event channel mechanism. In contrast to Xen's netfront / netback architecture, our framework uses the backend, in conjunction with the NIC's core driver, to grant pages to the VM's user space and install mappings that simulate the native case, while the netfront driver uses these channels as a data path (to send or receive packets).
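The page-granting step can be illustrated with a heavily simplified grant-table model: the driver domain grants a page to a specific guest, the guest maps it, and from then on writes land directly in the shared page without copies. Grant-table semantics are reduced to the bare minimum here and all identifiers are made up for illustration:

```python
# Rough model of page granting: the backend, cooperating with the NIC's
# core driver, grants pages to a guest; the guest maps them and uses them
# as a direct, zero-copy data path.
class GrantTable:
    def __init__(self):
        self._grants = {}            # grant_ref -> (page, grantee)
        self._next = 0
    def grant(self, page, grantee):
        ref = self._next
        self._next += 1
        self._grants[ref] = (page, grantee)
        return ref
    def map(self, ref, domain):
        page, grantee = self._grants[ref]
        if grantee != domain:
            raise PermissionError("page not granted to this domain")
        return page                  # guest now holds a direct mapping

table = GrantTable()
page = bytearray(4096)              # stand-in for one machine page
ref = table.grant(page, grantee="guest1")
mapping = table.map(ref, domain="guest1")
mapping[:5] = b"data!"              # guest writes straight into the shared page
print(bytes(page[:5]))              # b'data!'
```

Because `mapping` and `page` are the same buffer, the driver domain sees the guest's write immediately; that is the property the real grant mechanism provides for the data path.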

MyriXen (ongoing)

  Data access in HPC infrastructures is realized via user-level networking and OS-bypass techniques through which nodes can communicate
[…]
 in clusters of VMs provided by Cloud Computing infrastructures with near-native performance.

Summary

Our framework allows VMs to share an HPC NIC efficiently and exchange messages over the network. It is a thin split driver layer running on top of the NIC's core driver, consisting of a backend driver in the driver domain and frontend drivers in the VMs. Our current agenda is to evaluate the prototype in order to estimate the framework's efficiency. In the future, we plan to experiment with fine-tuning the NIC's intelligence and to propose a high-performance interconnection architecture for virtualized environments based on commodity components.

 See the Virtualization Section in our Bibliography Portal for selected publications concerning Virtualization techniques.
 