High Performance Systems and Interconnects

Recent advances in interconnection technologies have made clustered systems built out of commodity components an attractive option for providing scalable computational and storage infrastructure in a cost-effective manner. Such systems comprise numerous hardware components (processors, memory, peripheral buses, interconnect NICs, storage controllers and media) and software components (embedded firmware, OS kernel, NIC and block device drivers, file systems, communication middleware, application libraries and software). Understanding the intricate interactions among them and streamlining their semantics is key to achieving good performance.

Our research studies the effects of shared architectural resources on SMP clusters and focuses on I/O and scheduling techniques to better adapt the execution of compute- and I/O-intensive applications to the underlying architecture.


MemBUS concerns the design and implementation of memory- and network-bandwidth-aware scheduling policies, in order to reduce the impact of memory and network contention and improve system throughput for multiprogrammed workloads on clusters of SMPs.


gmblock is an ongoing effort to implement efficient block-level storage sharing over modern processor- and DMA-enabled interconnects such as Myrinet. Its design focuses on the flow of data in a network block device system, aiming to improve the efficiency of remote block I/O operations by completely removing the CPU and main memory from the critical path. Data are pipelined directly from the storage medium to the interconnect NIC and link, thus reducing the impact of network I/O on the locally executing processes.


Topic revision: r7 - 2008-03-12 - VangelisKoukis
