Parallel Systems

The main objective of this activity is to combine the advantages of distributed memory architectures (scalability) with those of the shared memory programming paradigm (ease and safety of programming). Related activities include the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.

In this context, our research has focused on applying transformations to nested for-loops in order to execute them efficiently on Non-Uniform Memory Access (NUMA) machines, such as the SCI clusters of the laboratory. In particular, we apply a transformation called tiling, or supernode transformation, in order to minimize the effect of communication latency on the total parallel execution time of the algorithms. Tiling groups neighboring computation points of the nested loop into blocks called tiles or supernodes, thus increasing the computation grain and decreasing both the communication volume and the communication frequency. Applying these tiling techniques, we have developed a tool that accepts C-like nested loops and partitions them into groups/tiles with small inter-tile communication requirements. The tool automatically generates efficient message passing code (using MPI) to be executed on SMPs or clusters. Future work includes comparisons of certain variations of tiling (shape, size, etc.) and of code generation techniques, based on experimental results obtained by applying the tool on an SCI cluster.
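As a rough illustration of the tiling transformation (a sketch only, not the tool's actual output), the following C fragment shows a simple two-dimensional loop nest before and after rectangular tiling; the array size N and the tile size TILE are hypothetical parameters chosen here for the example:

    /* Illustrative sketch: a 2-D loop nest before and after a rectangular
       tiling (supernode) transformation. N and TILE are hypothetical
       parameters; the actual tool also selects tile shape/size and emits
       the corresponding MPI code, which is not reproduced here. */
    #define N    1024
    #define TILE 64

    double A[N][N];

    void original(void)
    {
        for (int i = 1; i < N; i++)
            for (int j = 1; j < N; j++)
                A[i][j] = A[i - 1][j] + A[i][j - 1];  /* point-wise computation */
    }

    void tiled(void)
    {
        /* The outer loops enumerate tiles (supernodes); the inner loops sweep
           the points inside each tile, so each tile becomes a coarse-grain
           unit of computation with data exchanged only at tile boundaries. */
        for (int ii = 1; ii < N; ii += TILE)
            for (int jj = 1; jj < N; jj += TILE)
                for (int i = ii; i < ii + TILE && i < N; i++)
                    for (int j = jj; j < jj + TILE && j < N; j++)
                        A[i][j] = A[i - 1][j] + A[i][j - 1];
    }

In the tiled form, a whole tile is computed before any boundary data need to be exchanged, which is what increases the computation grain and reduces the communication frequency.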

In addition, we explore several methods, such as overlapping communication with computation, in order to further reduce the total execution time of the transformed code. The targeted communication platforms include SCI and GM message passing over the Myrinet interconnect, among others.
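The following minimal sketch shows the overlap pattern we target, expressed here with standard non-blocking MPI calls along a one-dimensional pipeline of processes; the per-tile computation, buffer sizes, and tags are placeholders, not the generated code. Each process posts the receive for its incoming boundary, computes its tile, and then starts a non-blocking send of its outgoing boundary, so that the transfer proceeds concurrently with the next tile's computation:

    /* Minimal sketch of communication/computation overlapping along a
       one-dimensional pipeline of MPI processes. The tile computation and
       buffer layout are placeholders; only the overlap pattern matters. */
    #include <mpi.h>

    #define TILES    16
    #define BOUNDARY 64

    static void compute_tile(int t, const double *in, double *out)
    {
        for (int k = 0; k < BOUNDARY; k++)   /* placeholder computation */
            out[k] = in[k] + t;
    }

    int main(int argc, char **argv)
    {
        int rank, size;
        double recv_buf[BOUNDARY] = {0.0}, send_buf[BOUNDARY];
        MPI_Request sreq = MPI_REQUEST_NULL, rreq = MPI_REQUEST_NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (int t = 0; t < TILES; t++) {
            /* Post the receive for this tile's incoming boundary early. */
            if (rank > 0)
                MPI_Irecv(recv_buf, BOUNDARY, MPI_DOUBLE, rank - 1, t,
                          MPI_COMM_WORLD, &rreq);

            /* The previous tile's send must drain before its buffer is
               reused; until then it overlaps with the work done above. */
            MPI_Wait(&sreq, MPI_STATUS_IGNORE);
            MPI_Wait(&rreq, MPI_STATUS_IGNORE);

            compute_tile(t, recv_buf, send_buf);

            /* Start sending this tile's outgoing boundary without blocking,
               so the transfer overlaps with the next tile's computation. */
            if (rank < size - 1)
                MPI_Isend(send_buf, BOUNDARY, MPI_DOUBLE, rank + 1, t,
                          MPI_COMM_WORLD, &sreq);
        }

        MPI_Wait(&sreq, MPI_STATUS_IGNORE);
        MPI_Finalize();
        return 0;
    }

In the actual pipelined schedules described in the publications below, the same idea is applied per tile along the chosen time hyperplane, and platform-specific transfer primitives (e.g., over SCI or Myrinet/GM) may take the place of the MPI calls.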

Publications

  • G. Goumas, A. Sotiropoulos, N. Koziris, "Minimizing Completion Time for Loop Tiling with Computation and Communication Overlapping", Proceedings of the 2001 International Parallel and Distributed Processing Symposium (IPDPS2001), IEEE Press, San Francisco, California, April 2001 (Best Paper Award) (pdf)
This paper proposes a new method for minimizing the execution time of nested for-loops transformed by tiling. In our approach, we are interested not only in tile size and shape, chosen according to the required communication-to-computation ratio, but also in the overall completion time. We select a time hyperplane that executes the different tiles much more efficiently by exploiting the inherent overlapping between the communication and computation phases of successive, atomic tile executions. We assign tiles to processors according to the tile space boundaries, thus taking the iteration space bounds into account. Our schedule considerably reduces the overall completion time under the assumption that some part of every communication phase can be efficiently overlapped with atomic, pure tile computations. The overall schedule resembles a pipelined datapath, where computations are no longer interleaved with sends and receives to non-local processors. Experimental results on a cluster of Pentiums, using various MPI send primitives, show that the total completion time is significantly reduced.

  • N. Koziris, A. Sotiropoulos, G. Goumas, "A Pipelined Schedule to Minimize Completion Time for Loop Tiling with Computation and Communication Overlapping", Journal of Parallel and Distributed Computing, Volume 63, Issue 11, November 2003, pp. 1138-1151 (pdf)
  • M. Athanasaki, A. Sotiropoulos, G. Tsoukalas, N. Koziris, P. Tsanakas, "Hyperplane Grouping and Pipelined Schedules: How to Execute Tiled Loops Fast on Clusters of SMPs", The Journal of Supercomputing, Volume 33, Issue 3, September 2005, pp. 197-226 (pdf)
  • A. Sotiropoulos, G. Tsoukalas, N. Koziris, "Efficient Utilization of Memory Mapped NICs onto Clusters using Pipelined Schedules", Proceedings of the 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid, Berlin, Germany, May 2002 (pdf)
  • A. Sotiropoulos, G. Tsoukalas, N. Koziris, "Enhancing the Performance of Tiled Loop Execution onto Clusters using Memory Mapped Interfaces and Pipelined Schedules", Proceedings of the 2002 Workshop on Communication Architecture for Clusters (CAC2002), held in conjunction with the Int'l Parallel and Distributed Processing Symposium (IPDPS2002), Fort Lauderdale, Florida, April 2002 (pdf)
  • M. Athanasaki, A. Sotiropoulos, G. Tsoukalas, N. Koziris, "Pipelined Scheduling of Tiled Nested Loops onto Clusters of SMPs using Memory Mapped Network Interfaces", Proceedings of the ACM/IEEE Supercomputing 2002: High Performance Networking and Computing Conference (SC2002), Baltimore, Maryland, November 2002 (pdf)
  • A. Sotiropoulos, G. Tsoukalas, N. Koziris, "A Pipelined Execution of Tiled Nested Loops onto a Cluster of PCs using PCI-SCI NICs", Proceedings of the 2001 SCI-Europe Conference, Dublin, Ireland, October 2001 (ps)
  • M. Athanasaki, A. Sotiropoulos, G. Tsoukalas, N. Koziris, "A Pipelined Execution of Tiled Nested Loops on SMPs with Computation and Communication Overlapping", Proceedings of the Workshop on Compile/Runtime Techniques for Parallel Computing, held in conjunction with the 2002 International Conference on Parallel Processing (ICPP-2002), Vancouver, Canada, August 2002 (pdf)
  • A. Sotiropoulos, N. Koziris, "A Pipelined Schedule for Loop Tiling to Minimize Overall Completion Time", Proceedings of the 8th Panhellenic Conference on Informatics, Nicosia, Cyprus, November 2001 (ps)