Difference: HPSN (1 vs. 7)

Revision 7 (2008-03-12) - VangelisKoukis

Line: 1 to 1
 

High Performance Systems and Interconnects

Changed:
<
<
Recent advances in interconnection technologies have made clustered systems built out of commodity components an attractive option for providing scalable computational and storage infrastructure in a cost-effective manner. Such systems comprise numerous hardware (processors, memory, peripheral buses, interconnect NICs, storage controllers and media) and software (embedded firmware, OS kernel, NIC and block device drivers, file systems, communication middleware, application libraries and software) components. Understanding the intricate interactions between them, streamlining their semantics and improving their performance is key to achieving good performance.
>
>
Recent advances in interconnection technologies have made clustered systems built out of commodity components an attractive option for providing scalable computational and storage infrastructure in a cost-effective manner. Such systems comprise numerous hardware (processors, memory, peripheral buses, interconnect NICs, storage controllers and media) and software (embedded firmware, OS kernel, NIC and block device drivers, file systems, communication middleware, application libraries and software) components. Understanding the intricate interactions between them and streamlining their semantics is key to achieving good performance.
  Our research studies the effects of shared architectural resources on SMP clusters and focuses on I/O and scheduling techniques to better adapt the execution of compute and I/O-intensive applications to the underlying architecture.
Line: 11 to 11
 

gmblock

Changed:
<
<
gmblock is an ongoing effort to implement efficient block-level storage sharing over modern processor- and DMA-enabled interconnects such as Myrinet. Its design focuses on the flow of data in a network block device system, aiming to improve the efficiency of remote block I/O operations by completely removing the CPU and main memory from the critical path. Data are pipelined, directly from the storage medium to the interconnect NIC and link, thus reducing the impact of network I/O on the locally executing processes.
>
>
gmblock is an ongoing effort to implement efficient block-level storage sharing over modern processor- and DMA-enabled interconnects such as Myrinet. Its design focuses on the flow of data in a network block device system, aiming to improve the efficiency of remote block I/O operations by completely removing the CPU and main memory from the critical path. Data are pipelined directly from the storage medium to the interconnect NIC and link, thus reducing the impact of network I/O on the locally executing processes.
 

Publications

Revision 6 (2008-03-11) - VangelisKoukis

Line: 1 to 1
Deleted:
<
<
META TOPICPARENT name="HPC"
 

High Performance Systems and Interconnects

Changed:
<
<
Nowadays, the increasing development of network subsystems in cluster and high performance computing has shifted the main concern surrounding data transfer; the bottleneck of cluster communication no longer lies in the network architecture, but rather in the latency induced by the operating system. Recent years research has demonstrated that user-level networking can overcome traditional networking schemes (found in clusters) that demand high CPU usage and large memory consumption. High speed interconnects that utilize low-level message passing system software allow userspace networking to minimize the use of host memory bandwidth and increase the speed of data transfer schemes. In order to achieve high speed user-level data exchange between cluster nodes, one has to account for different i/o architectures in systems software (such as modern network interfaces running custom firmware that drives data through shorter paths in the memory hierarchy subsystem).
>
>
Recent advances in interconnection technologies have made clustered systems built out of commodity components an attractive option for providing scalable computational and storage infrastructure in a cost-effective manner. Such systems comprise numerous hardware (processors, memory, peripheral buses, interconnect NICs, storage controllers and media) and software (embedded firmware, OS kernel, NIC and block device drivers, file systems, communication middleware, application libraries and software) components. Understanding the intricate interactions between them, streamlining their semantics and improving their performance is key to achieving good performance.

Our research studies the effects of shared architectural resources on SMP clusters and focuses on I/O and scheduling techniques to better adapt the execution of compute and I/O-intensive applications to the underlying architecture.

MemBUS

MemBUS concerns the design and implementation of memory- and network-bandwidth-aware scheduling policies, in order to reduce the impact of memory and network contention and improve system throughput for multiprogrammed workloads on clusters of SMPs.
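To illustrate one possible reading of "bandwidth aware" scheduling, the toy C sketch below pairs the runnable task with the highest measured memory-bandwidth demand with the one with the lowest on a two-way SMP node for each quantum, so that two memory-bound tasks do not saturate the shared bus at the same time. This is only a hedged illustration, not MemBUS code; the task list, bandwidth figures and pairing heuristic are assumptions made for the example, and the actual policies are described in the publications below.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical runnable task with a measured memory bandwidth demand. */
struct task {
    const char *name;
    double mem_bw;      /* MB/s, made-up figures for illustration */
};

/* Sort tasks by bandwidth demand, highest first. */
static int by_bw_desc(const void *a, const void *b)
{
    const struct task *x = a, *y = b;
    return (x->mem_bw < y->mem_bw) - (x->mem_bw > y->mem_bw);
}

int main(void)
{
    struct task runnable[] = {
        { "stream-like kernel", 2400.0 },
        { "cpu-bound solver",    150.0 },
        { "fft",                1800.0 },
        { "integer workload",    300.0 },
    };
    size_t n = sizeof(runnable) / sizeof(runnable[0]);

    qsort(runnable, n, sizeof(runnable[0]), by_bw_desc);

    /* Co-schedule the most memory-intensive task with the least
     * memory-intensive one on each 2-way node for the next quantum. */
    for (size_t i = 0; i < n / 2; i++)
        printf("node %zu next quantum: %s + %s\n",
               i, runnable[i].name, runnable[n - 1 - i].name);

    return 0;
}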

gmblock

gmblock is an ongoing effort to implement efficient block-level storage sharing over modern processor- and DMA-enabled interconnects such as Myrinet. Its design focuses on the flow of data in a network block device system, aiming to improve the efficiency of remote block I/O operations by completely removing the CPU and main memory from the critical path. Data are pipelined, directly from the storage medium to the interconnect NIC and link, thus reducing the impact of network I/O on the locally executing processes.

 

Publications

Revision 5 (2008-03-11) - VangelisKoukis

Line: 1 to 1
 
META TOPICPARENT name="HPC"

High Performance Systems and Interconnects

Line: 7 to 7
 

Publications

Changed:
<
<
  • E. Koukis, A. Nanos and N. Koziris, “Synchronized Send Operations for Efficient Streaming Block I/O over Myrinet,” Proceedings of the Workshop on Communication Architecture for Clusters (CAC 2008), held in conjunction with the 22nd International Parallel and Distributed Processing Symposium (IPDPS 2008), Miami, FL, USA, 14-18 April, 2008, to appear
>
>
  \ No newline at end of file

Revision 4 (2008-03-11) - VangelisKoukis

Line: 1 to 1
 
META TOPICPARENT name="HPC"

High Performance Systems and Interconnects

Line: 7 to 7
 

Publications

Deleted:
<
<

-- ArisSotiropoulos - 06 Mar 2008

 \ No newline at end of file
Added:
>
>
  • E. Koukis, A. Nanos and N. Koziris, “Synchronized Send Operations for Efficient Streaming Block I/O over Myrinet,” Proceedings of the Workshop on Communication Architecture for Clusters (CAC 2008), held in conjunction with the 22nd International Parallel and Distributed Processing Symposium (IPDPS 2008), Miami, FL, USA, 14-18 April, 2008, to appear
  • E. Koukis and N. Koziris, “Efficient Block Device Sharing over Myrinet with Memory Bypass,” Proceedings of the 21st International Parallel and Distributed Processing Symposium (IPDPS 2007), p. 29, Long Beach, CA, USA, 26-30 March, 2007
  • E. Koukis and N. Koziris, “Memory and Network Bandwidth Aware Scheduling of Multiprogrammed Workloads on Clusters of SMPs,” Proceedings of the 12th International Conference on Parallel and Distributed Systems (ICPADS 2006), pp. 345-354, Minneapolis, MN, USA, 12-15 July, 2006
  • E. Koukis and N. Koziris, “Memory Bandwidth Aware Scheduling for SMP Cluster Nodes,” Proceedings of the 13th Euromicro Conference on Parallel, Distributed and Network-based Processing (PDP '05), pp. 187-196, Lugano, Switzerland, 6-11 Feb. 2005
 \ No newline at end of file

Revision 3 (2008-03-09) - AnastasiosNanos

Line: 1 to 1
 
META TOPICPARENT name="HPC"

High Performance Systems and Interconnects

Nowadays, the increasing development of network subsystems in cluster and high performance computing has shifted the main concern surrounding data transfer; the bottleneck of cluster communication no longer lies in the network architecture, but rather in the latency induced by the operating system. Recent years research has demonstrated that user-level networking can overcome traditional networking schemes (found in clusters) that demand high CPU usage and large memory consumption. High speed interconnects that utilize low-level message

Changed:
<
<
passing system software allow userspace networking to minimize the use of host memory bandwidth and increase the speed of data transfer schemes. In order to achieve high speed user-level data exchange between cluster nodes, one has to account for different i/o architectures in systems software (such as modern network interfaces running custom firmare that drives data through shorter paths in the memory hierarchy subsystem).
>
>
passing system software allow userspace networking to minimize the use of host memory bandwidth and increase the speed of data transfer schemes. In order to achieve high speed user-level data exchange between cluster nodes, one has to account for different i/o architectures in systems software (such as modern network interfaces running custom firmware that drives data through shorter paths in the memory hierarchy subsystem).
 

Publications

Revision 2 (2008-03-08) - AnastasiosNanos

Line: 1 to 1
 
META TOPICPARENT name="HPC"

High Performance Systems and Interconnects

Added:
>
>
Nowadays, the increasing development of network subsystems in cluster and high performance computing has shifted the main concern surrounding data transfer; the bottleneck of cluster communication no longer lies in the network architecture, but rather in the latency induced by the operating system. Recent years research has demonstrated that user-level networking can overcome traditional networking schemes (found in clusters) that demand high CPU usage and large memory consumption. High speed interconnects that utilize low-level message passing system software allow userspace networking to minimize the use of host memory bandwidth and increase the speed of data transfer schemes. In order to achieve high speed user-level data exchange between cluster nodes, one has to account for different i/o architectures in systems software (such as modern network interfaces running custom firmare that drives data through shorter paths in the memory hierarchy subsystem).
 

Publications

Revision 1 (2008-03-06) - ArisSotiropoulos

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="HPC"

High Performance Systems and Interconnects

Publications

-- ArisSotiropoulos - 06 Mar 2008

 