gmblock
To meet the I/O needs of HPC applications running on top of them, cluster filesystems are deployed, enabling access to a common filesystem namespace and concurrent I/O operations on shared data. Most high-performance cluster filesystems are shared-disk filesystems [IBM GPFS, Red Hat GFS, Oracle OCFS2], meaning that all participating nodes need block-level access to a shared storage pool with Direct-Attached Storage semantics (e.g., as SCSI/SAS devices). Traditionally, FibreChannel-based Storage Area Networks (SANs) have been used to meet this requirement in enterprise environments. However, considerations of cost-effectiveness, redundancy and reliability have shifted the focus from deploying dedicated SANs to providing block-level access to shared storage over the same interconnect used for IPC. This is made possible by a Network Block Device (nbd) layer, which allows cluster nodes to contribute part of their local storage in order to form virtual, shared, block-level storage pools.
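To illustrate the idea, the following is a minimal sketch of the kind of request/reply exchange an nbd-style layer carries over the cluster interconnect between a client node and the node contributing its local storage. The structure layout and field names are hypothetical and do not correspond to any specific nbd implementation or to gmblock's own message format.

```c
/* Hypothetical sketch of an nbd-style block request/reply exchange.
 * Field names and layout are illustrative only; real nbd protocols
 * (and gmblock's GM-based messages) differ in detail.
 */
#include <stdint.h>

enum blk_req_type { BLK_READ = 0, BLK_WRITE = 1 };

/* Request sent by a client node to the node exporting the storage */
struct blk_request {
    uint32_t type;      /* BLK_READ or BLK_WRITE */
    uint64_t handle;    /* matches the request to its reply */
    uint64_t offset;    /* byte offset into the exported device */
    uint32_t length;    /* number of bytes to transfer */
    /* for BLK_WRITE, 'length' bytes of data follow the header */
};

/* Reply returned by the storage-contributing node */
struct blk_reply {
    uint64_t handle;    /* copied from the matching request */
    int32_t  error;     /* 0 on success, errno-style code otherwise */
    /* for BLK_READ, 'length' bytes of data follow the header */
};
```

In such a scheme, the client side exposes an ordinary block device to the shared-disk filesystem and translates its I/O into requests like the above, while the exporting node services them against its local disks.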
The gmblock project encompasses our work on designing and implementing scalable block-level storage sharing over Myrinet, so that shared-disk filesystems may be deployed over a shared-nothing architecture. In this case every cluster node assumes a dual role: it is both a compute and a storage node. This has several distinct advantages: