
Revision 6: 2008-03-11 - VangelisKoukis


High Performance Systems and Interconnects

The rapid development of network subsystems for cluster and high-performance computing has shifted the main concern in data transfer: the bottleneck of cluster communication no longer lies in the network architecture, but in the latency induced by the operating system. Research in recent years has demonstrated that user-level networking can outperform the traditional networking schemes found in clusters, which demand high CPU usage and large memory consumption. High-speed interconnects that run low-level message-passing system software allow userspace networking to minimize the use of host memory bandwidth and increase the speed of data transfers. To achieve high-speed user-level data exchange between cluster nodes, one has to account for different I/O architectures in systems software, such as modern network interfaces running custom firmware that drives data through shorter paths in the memory hierarchy.
Recent advances in interconnection technologies have made clustered systems built out of commodity components an attractive option for providing scalable computational and storage infrastructure in a cost-effective manner. Such systems comprise numerous hardware components (processors, memory, peripheral buses, interconnect NICs, storage controllers and media) and software components (embedded firmware, OS kernel, NIC and block device drivers, file systems, communication middleware, application libraries and software). Understanding the intricate interactions between them, streamlining their semantics and improving their individual performance is key to achieving good overall performance.

Our research studies the effects of shared architectural resources on SMP clusters and focuses on I/O and scheduling techniques to better adapt the execution of compute and I/O-intensive applications to the underlying architecture.


MemBUS concerns the design and implementation of memory- and network-bandwidth-aware scheduling policies that reduce the impact of memory and network contention and improve system throughput for multiprogrammed workloads on clusters of SMPs.


gmblock is an ongoing effort to implement efficient block-level storage sharing over modern processor- and DMA-enabled interconnects such as Myrinet. Its design focuses on the flow of data in a network block device system, aiming to improve the efficiency of remote block I/O operations by completely removing the CPU and main memory from the critical path. Data is pipelined directly from the storage medium to the interconnect NIC and link, reducing the impact of network I/O on locally executing processes.


