gmblock
To meet the I/O needs of HPC applications, clusters deploy cluster filesystems, which provide access to a common filesystem namespace and allow concurrent I/O operations on shared data. Most high-performance cluster filesystems are shared-disk filesystems [IBM GPFS, Redhat GFS, Oracle OCFS2], meaning that all participating nodes need block-level access to a shared storage pool with Direct-Attached Storage semantics (e.g., SCSI/SAS devices). Traditionally, FibreChannel-based Storage Area Networks (SANs) have been used to meet this requirement in enterprise environments. However, concerns of cost-effectiveness, redundancy and reliability have shifted the focus from deploying dedicated SANs to providing block-level access to shared storage over the same interconnect used for IPC. This is made possible by a Network Block Device (nbd) layer, which allows cluster nodes to contribute part of their local storage in order to form virtual, shared, block-level storage pools.
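To make the nbd idea concrete, the following is a minimal, hypothetical sketch of a user-level server that exports part of a node's local storage: it accepts a fixed-format read request (offset and length) and returns the corresponding bytes of a local device or file. The plain-TCP transport, the `blk_request` wire format and the single-request-per-connection behaviour are assumptions made purely for illustration; gmblock itself implements this functionality over Myrinet's GM layer, not over TCP.

<verbatim>
/*
 * Hypothetical sketch of an nbd-style export: a node serves block read
 * requests against a local device or file. Illustrative only; this is
 * NOT gmblock's implementation, which works over Myrinet/GM.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdint.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Assumed wire format: a fixed-size read request (same byte order on both ends). */
struct blk_request {
    uint64_t offset;   /* byte offset into the exported device */
    uint32_t length;   /* number of bytes to read */
} __attribute__((packed));

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <device-or-file> <port>\n", argv[0]);
        return 1;
    }

    int dev = open(argv[1], O_RDONLY);
    if (dev < 0) { perror("open"); return 1; }

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_addr.s_addr = htonl(INADDR_ANY),
                                .sin_port = htons(atoi(argv[2])) };
    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(srv, 8) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {
        int cli = accept(srv, NULL, NULL);
        if (cli < 0)
            continue;

        struct blk_request req;
        /* One request per connection, for brevity. */
        if (read(cli, &req, sizeof(req)) == (ssize_t)sizeof(req)) {
            char *buf = malloc(req.length);
            if (buf) {
                ssize_t n = pread(dev, buf, req.length, (off_t)req.offset);
                if (n > 0 && write(cli, buf, (size_t)n) < 0)  /* ship block data back */
                    perror("write");
                free(buf);
            }
        }
        close(cli);
    }
}
</verbatim>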
The gmblock project encompasses our work on designing and implementing scalable block-level storage sharing over Myrinet, so that shared-disk filesystems may be deployed over a shared-nothing architecture. In this case every cluster node assumes a dual role: it is both a compute and a storage node. This has several distinct advantages:
  - Cost-effectiveness: There is no need to equip every cluster node with both a cluster NIC and a FibreChannel HBA. The SAN can be eliminated altogether and the resources redirected to acquiring more compute nodes. Instead of having to maintain two distinct networks, the cluster interconnect carries the storage traffic as well.
  - Scalability: The number of links to storage increases with the number of nodes. Adding a new compute node to the system also increases the aggregate I/O bandwidth.
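A matching client-side sketch follows: a compute node connects to a peer acting as a storage node and fetches a single block from its exported device. The peer address (192.168.1.10), port (9000), block size and request layout are the same hypothetical ones used in the server sketch above, chosen only for illustration, and do not reflect gmblock's actual protocol.

<verbatim>
/*
 * Hypothetical client-side counterpart: a compute node reads one block
 * from a peer's exported storage. Illustrative only; not gmblock's code.
 */
#include <stdio.h>
#include <unistd.h>
#include <stdint.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

struct blk_request {                /* must match the server's assumed layout */
    uint64_t offset;
    uint32_t length;
} __attribute__((packed));

int read_remote_block(const char *ip, int port,
                      uint64_t offset, void *buf, uint32_t length)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &peer.sin_addr);
    if (connect(s, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        close(s);
        return -1;
    }

    struct blk_request req = { .offset = offset, .length = length };
    if (write(s, &req, sizeof(req)) != (ssize_t)sizeof(req)) {
        close(s);
        return -1;
    }

    /* Read until the requested block has arrived or the peer hangs up. */
    uint32_t got = 0;
    while (got < length) {
        ssize_t n = read(s, (char *)buf + got, length - got);
        if (n <= 0)
            break;
        got += (uint32_t)n;
    }
    close(s);
    return got == length ? 0 : -1;
}

int main(void)
{
    char block[4096];
    /* 192.168.1.10:9000 is a placeholder for a peer's exported device. */
    if (read_remote_block("192.168.1.10", 9000, 0, block, sizeof(block)) == 0)
        printf("read first 4 KiB of the peer's storage pool\n");
    return 0;
}
</verbatim>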