gmblock
The GM message-passing infrastructure is extended to support the creation and mapping of large message buffers that reside not in host RAM but in the SRAM onboard the Myrinet NIC. The GM firmware is enhanced to allow these buffers to be used transparently in message send and receive operations. The buffers are mapped into the server's VM space, and Linux's VM subsystem is extended to support “direct I/O” from and to these areas. The net result is that, when the userspace server issues a read() or write() system call, the Linux kernel programs the DMA engines on the storage medium to exchange data with the Myrinet NIC directly over the peripheral bus, without any CPU or main memory involvement, in a completely transparent way [IPDPS 2007]. The concept is applicable to any interconnect that features a programmable NIC and exports on-board memory to the PCI physical address space.
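To make the data path concrete, the following is a minimal sketch of what the gmblock server's read path could look like from userspace. The gmblock_alloc_sram_buffer() wrapper is a hypothetical stand-in for the extended GM API that allocates a Lanai-SRAM buffer and maps it into the process's VM space; the pread() itself is plain POSIX direct I/O, which, with the extended Linux VM support described above, causes the disk controller to DMA straight into NIC SRAM over the peripheral bus.

```c
/*
 * Sketch of the gmblock server data path (read side). The function
 * gmblock_alloc_sram_buffer() is hypothetical: it stands in for the
 * extended GM user API that returns a buffer backed by Lanai SRAM,
 * mapped into our virtual address space.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

extern void *gmblock_alloc_sram_buffer(size_t len); /* hypothetical */

int main(void)
{
    size_t len = 1 << 20;                  /* 1 MiB block request */
    void *sram = gmblock_alloc_sram_buffer(len);

    /* O_DIRECT bypasses the page cache, so the transfer is pure DMA. */
    int fd = open("/dev/sdb", O_RDONLY | O_DIRECT);
    if (fd < 0 || sram == NULL) {
        perror("setup");
        return EXIT_FAILURE;
    }

    /* Disk -> NIC SRAM directly over the PCI bus; no copy through
     * host RAM, no CPU involvement in the data movement itself. */
    if (pread(fd, sram, len, 0) != (ssize_t)len)
        perror("pread");

    /* ... the SRAM buffer is then handed to GM for the network send ... */
    close(fd);
    return EXIT_SUCCESS;
}
```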
To compensate for the limited amount of memory available on the Myrinet NIC and to allow larger requests to make progress without any CPU involvement, gmblock was extended [CAC 2008] to support synchronized send operations. Their semantics allow the storage medium and the Myrinet NIC to coordinate the flow of data in a peer-to-peer manner, pipelining data from the storage medium to the Lanai SRAM and onto the fiber link. A performance evaluation of gmblock showed a significant increase in throughput, reduced pressure on the memory and peripheral buses, and much improved execution times for concurrently executing compute-intensive applications, compared to TCP/IP- and GM-based nbd implementations.
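The sketch below illustrates the double-buffered pipelining that synchronized sends enable, under stated assumptions: disk_dma_start()/disk_dma_wait() and gm_sync_send_chunk() are hypothetical helpers standing in for the storage-controller DMA programming and the synchronized GM send. In the real system the disk controller and the Lanai coordinate peer-to-peer without the host CPU; the host-driven loop here only shows how the DMA of chunk i+1 overlaps the wire transmission of chunk i through two SRAM staging buffers.

```c
/*
 * Double-buffered pipeline over two Lanai-SRAM staging buffers.
 * disk_dma_start(), disk_dma_wait() and gm_sync_send_chunk() are
 * hypothetical names for illustration only.
 */
#include <stddef.h>
#include <sys/types.h>

#define SRAM_CHUNK (256 * 1024)   /* assumed SRAM staging-buffer size */

extern void disk_dma_start(int fd, void *dst, size_t len, off_t off);
extern void disk_dma_wait(int fd);
extern void gm_sync_send_chunk(void *src, size_t len);

void serve_large_read(int fd, void *sram[2], size_t total)
{
    size_t off = 0;
    int cur = 0;

    /* Kick off the DMA of the first chunk into SRAM buffer 0. */
    disk_dma_start(fd, sram[cur], total < SRAM_CHUNK ? total : SRAM_CHUNK, 0);

    while (off < total) {
        size_t len = total - off < SRAM_CHUNK ? total - off : SRAM_CHUNK;
        size_t next_off = off + len;

        disk_dma_wait(fd);                 /* chunk `cur` is now in SRAM */

        if (next_off < total) {            /* prefetch the next chunk    */
            size_t nlen = total - next_off < SRAM_CHUNK
                              ? total - next_off : SRAM_CHUNK;
            disk_dma_start(fd, sram[!cur], nlen, next_off);
        }

        gm_sync_send_chunk(sram[cur], len); /* SRAM -> fiber link        */
        off = next_off;
        cur = !cur;
    }
}
```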
Publications