Line: 1 to 1
PDSG Research
Added:
> > Here is the new website presenting CSLab research.
Parallel, High-Performance Systems
We are interested in how advances in technology and architecture can lead to better performance in parallel, high-performance systems. The main focus of our work is to develop optimizations for applications, ranging from extremely parallel ones (e.g. SpMV) to inherently serial ones (e.g. Dijkstra's algorithm). These applications are studied on a variety of systems, such as PC clusters, multicore and multithreaded platforms, as well as emerging architectures like GPGPUs and the Cell B/E. At the same time, we also examine the underlying hardware structures and try to improve their performance. In this context, current research efforts range from algorithms for partitioning resources (e.g. caches) between multiple cores to the development of efficient I/O and scheduling techniques.
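As a concrete illustration of the kind of kernel mentioned above, here is a minimal sketch of sparse matrix-vector multiplication (SpMV) over the common CSR storage format. The struct layout, field names, and the OpenMP pragma are illustrative assumptions, not code from any CSLab tool.

```c
#include <stddef.h>

/* Sparse matrix in Compressed Sparse Row (CSR) format.
 * Layout and field names are illustrative, not from a CSLab tool. */
typedef struct {
    size_t nrows;
    const size_t *row_ptr;  /* nrows + 1 entries: row i occupies
                               [row_ptr[i], row_ptr[i+1]) in col_idx/val */
    const size_t *col_idx;  /* column index of each nonzero */
    const double *val;      /* value of each nonzero */
} csr_t;

/* y = A * x. Every row is independent, which is why SpMV is cited
 * above as an "extremely parallel" kernel: rows can be split across
 * threads, cores, or cluster nodes. Its performance is usually bound
 * by irregular accesses to x, which makes it a popular optimization
 * target. */
void spmv(const csr_t *A, const double *x, double *y)
{
    #pragma omp parallel for schedule(dynamic, 64)  /* illustrative parallelization */
    for (size_t i = 0; i < A->nrows; i++) {
        double sum = 0.0;
        for (size_t j = A->row_ptr[i]; j < A->row_ptr[i + 1]; j++)
            sum += A->val[j] * x[A->col_idx[j]];
        y[i] = sum;
    }
}
```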
Line: 1 to 1
PDSG Research
Parallel, High-Performance Systems
Line: 47 to 47
Changed:
< <
> >
List of conferences
Line: 1 to 1
PDSG Research
Parallel, High-Performance Systems
Line: 47 to 47
Changed:
< <
> >
List of conferences
Line: 1 to 1
PDSG Research
Parallel, High-Performance Systems
Line: 47 to 47
Added:
> >
List of conferences
Line: 1 to 1
PDSG Research
Parallel, High-Performance Systems
Line: 47 to 47
Added:
> > List of conferences
Here is a list of conferences that are of interest to the Distributed Systems group at CSLab.
PDSG People
Line: 1 to 1
PDSG Research
Parallel, High-Performance Systems
Line: 25 to 25
Added:
> > List of conferences
Here is a list of conferences that are of interest to the Parallel and High-Performance Systems group at CSLab.
Acknowledgements
Research in these areas is performed using Simics, running under an academic site license kindly provided by Virtutech. Intel Hellas has generously contributed to the infrastructure of our group.
Line: 1 to 1
PDSG Research
Parallel, High-Performance Systems
Line: 40 to 40
Added:
> >
Line: 1 to 1
PDSG Research
Parallel, High-Performance Systems
Line: 24 to 24
Added:
> > Acknowledgements
Research in these areas is performed using Simics, running under an academic site license kindly provided by Virtutech. Intel Hellas has generously contributed to the infrastructure of our group.
Distributed Systems
Line: 1 to 1
Changed:
< < PDSG Research
> > PDSG Research
Changed:
< <
> > Parallel, High-Performance Systems
Added:
> > We are interested in how advances in technology and architecture can lead to better performance in parallel, high-performance systems. The main focus of our work is to develop optimizations for applications, ranging from extremely parallel ones (e.g. SpMV) to inherently serial ones (e.g. Dijkstra's algorithm). These applications are studied on a variety of systems, such as PC clusters, multicore and multithreaded platforms, as well as emerging architectures like GPGPUs and the Cell B/E. At the same time, we also examine the underlying hardware structures and try to improve their performance. In this context, current research efforts range from algorithms for partitioning resources (e.g. caches) between multiple cores to the development of efficient I/O and scheduling techniques.
Added:
> >
Distributed Systems
PDSG People
Line: 1 to 1
PDSG Research
Deleted:
< <
Changed:
< <
> >
Deleted:
< <
PDSG People
Line: 1 to 1
PDSG Research
Added:
> >
Changed:
< < Research
Courses
CSLab
<!-- Old version, knikas @ 24/01/2009:
Bibliography Portal
Activities / Projects
People
-->
> > PDSG People
Line: 1 to 1
PDSG Research
Parallel, High-Performance Systems
Line: 22 to 22
Changed:
< <
> >
Line: 1 to 1
PDSG Research
Line: 11 to 11
Added:
> >
Line: 1 to 1
PDSG Research
Line: 20 to 20
Changed:
< <
> >
Line: 1 to 1
Changed:
< < CSLab Research
> > PDSG Research
Research
Courses
CSLab
<!-- Old version, knikas @ 24/01/2009:
Bibliography Portal
Activities / Projects
-->
Line: 1 to 1
CSLab Research
Line: 23 to 23
Added:
> >
Line: 1 to 1
CSLab Research
Deleted:
< < The main objective of this activity is to combine the advantages of distributed memory architectures (scalability) with the advantages of the shared memory programming paradigm (easy and secure programming). Other relevant activities are the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.
Deleted:
< < In this context, our research has focused on applying transformations to nested for-loops, in order to execute them efficiently on Non-Uniform Memory Access (NUMA) machines, such as the SCI clusters of the laboratory. In particular, we apply a transformation called tiling, or supernode transformation, in order to minimize the effect of communication latency on the total parallel execution time of the algorithms. The tiling method groups neighboring computation points of the nested loop into blocks called tiles or supernodes, thus increasing the computation grain and decreasing both the communication volume and frequency. Applying these tiling techniques, we have developed a tool which accepts C-like nested loops and partitions them into groups/tiles with small inter-communication requirements. The tool automatically generates efficient message passing code (using MPI) to be executed on SMPs or clusters. Future work includes comparisons of certain variations of tiling (shape, size, etc.) and code generation techniques, based on experimental results from applying the tool on an SCI cluster.
Changed:
< < In addition, we explore several methods, such as overlapping communication with computation, in order to further reduce the total execution time of the transformed code. The targeted communication platforms include SCI, GM message passing over the Myrinet interconnect, etc.
> > Research
Courses
CSLab
<!-- Old version, knikas @ 24/01/2009:
Bibliography Portal
Activities / Projects
People
-->
Line: 1 to 1
CSLab Research
The main objective of this activity is to combine the advantages of distributed memory architectures (scalability) with the advantages of the shared memory programming paradigm (easy and secure programming). Other relevant activities are the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.

In this context, our research has focused on applying transformations to nested for-loops, in order to execute them efficiently on Non-Uniform Memory Access (NUMA) machines, such as the SCI clusters of the laboratory. In particular, we apply a transformation called tiling, or supernode transformation, in order to minimize the effect of communication latency on the total parallel execution time of the algorithms. The tiling method groups neighboring computation points of the nested loop into blocks called tiles or supernodes, thus increasing the computation grain and decreasing both the communication volume and frequency. Applying these tiling techniques, we have developed a tool which accepts C-like nested loops and partitions them into groups/tiles with small inter-communication requirements. The tool automatically generates efficient message passing code (using MPI) to be executed on SMPs or clusters. Future work includes comparisons of certain variations of tiling (shape, size, etc.) and code generation techniques, based on experimental results from applying the tool on an SCI cluster.

In addition, we explore several methods, such as overlapping communication with computation, in order to further reduce the total execution time of the transformed code. The targeted communication platforms include SCI, GM message passing over the Myrinet interconnect, etc.
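The tiling transformation described above can be shown with a minimal sketch in C. The loop body, the array `a`, and the constants `N` and `T` are illustrative assumptions, not output of the code-generation tool; a 2-D nest with uniform dependences is chosen because its point-to-point dependences are what make untiled distribution communication-heavy.

```c
#define N 1024   /* problem size; illustrative */
#define T 64     /* tile (supernode) size; a tuning parameter */

double a[N][N];

/* Original C-like nest (one statement, uniform dependences):
 *   for (i = 1; i < N; i++)
 *     for (j = 1; j < N; j++)
 *       a[i][j] = a[i-1][j] + a[i][j-1];
 *
 * Tiled (supernode) version: the iteration space is grouped into
 * T x T blocks. A distributed implementation then communicates once
 * per tile boundary instead of once per iteration point, which is
 * the reduction in message volume and frequency described above.
 * Visiting tiles, and points within a tile, in lexicographic order
 * still respects both dependences. */
void tiled_nest(void)
{
    for (int ii = 1; ii < N; ii += T)               /* loop over tiles */
        for (int jj = 1; jj < N; jj += T)
            for (int i = ii; i < ii + T && i < N; i++)   /* points inside a tile */
                for (int j = jj; j < jj + T && j < N; j++)
                    a[i][j] = a[i - 1][j] + a[i][j - 1];
}
```

Choosing the tile shape and size trades computation grain against communication, which is exactly the comparison named as future work above.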
Deleted:
< <
Research Areas
Line: 1 to 1
CSLab Research
The main objective of this activity is to combine the advantages of distributed memory architectures (scalability) with the advantages of the shared memory programming paradigm (easy and secure programming). Other relevant activities are the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.
Line: 7 to 7
In addition, we explore several methods, such as overlapping communication with computation, in order to further reduce the total execution time of the transformed code. The targeted communication platforms include SCI, GM message passing over the Myrinet interconnect, etc.
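The overlap of communication and computation mentioned here is typically expressed with non-blocking MPI calls. Below is a minimal sketch under assumed names: `halo_step`, `compute_interior`, and `compute_boundary` are hypothetical helpers, not taken from the group's tool.

```c
#include <mpi.h>

/* Hypothetical helpers standing in for the generated computation:
 * compute_interior() touches only points needing no remote data,
 * compute_boundary() touches points that consume the received halo. */
void compute_interior(void);
void compute_boundary(const double *halo, int n);

/* Overlap a halo exchange with useful computation. MPI_Irecv/MPI_Isend
 * return immediately; the interior work proceeds while the interconnect
 * (e.g. SCI, or GM over Myrinet, underneath MPI) moves the data, and
 * MPI_Waitall guarantees completion before the halo is consumed. */
void halo_step(double *send_buf, double *recv_buf, int n,
               int neighbor, MPI_Comm comm)
{
    MPI_Request reqs[2];

    MPI_Irecv(recv_buf, n, MPI_DOUBLE, neighbor, 0, comm, &reqs[0]);
    MPI_Isend(send_buf, n, MPI_DOUBLE, neighbor, 0, comm, &reqs[1]);

    compute_interior();        /* overlapped with the transfers above */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    compute_boundary(recv_buf, n);
}
```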
Research Areas
Changed:
< <
> >
Line: 1 to 1
Changed:
< < Welcome to the CSLab web
This is the CSLab collaboration wiki, implemented with TWiki. Everyone can use it to collaborate with other people. Several sections have already been created, but feel free to add or change anything you want.
> > CSLab Research
The main objective of this activity is to combine the advantages of distributed memory architectures (scalability) with the advantages of the shared memory programming paradigm (easy and secure programming). Other relevant activities are the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.
Added:
> > In this context, our research has focused on applying transformations to nested for-loops, in order to execute them efficiently on Non-Uniform Memory Access (NUMA) machines, such as the SCI clusters of the laboratory. In particular, we apply a transformation called tiling, or supernode transformation, in order to minimize the effect of communication latency on the total parallel execution time of the algorithms. The tiling method groups neighboring computation points of the nested loop into blocks called tiles or supernodes, thus increasing the computation grain and decreasing both the communication volume and frequency. Applying these tiling techniques, we have developed a tool which accepts C-like nested loops and partitions them into groups/tiles with small inter-communication requirements. The tool automatically generates efficient message passing code (using MPI) to be executed on SMPs or clusters. Future work includes comparisons of certain variations of tiling (shape, size, etc.) and code generation techniques, based on experimental results from applying the tool on an SCI cluster.
Changed:
< < Research Interests
> > In addition, we explore several methods, such as overlapping communication with computation, in order to further reduce the total execution time of the transformed code. The targeted communication platforms include SCI, GM message passing over the Myrinet interconnect, etc.
Research Areas
Deleted:
< <
Changed:
< < Users
Projects
Lab Management
> >
Line: 1 to 1
Welcome to the CSLab web
Added:
> > This is the CSLab collaboration wiki, implemented with TWiki. Everyone can use it to collaborate with other people. Several sections have already been created, but feel free to add or change anything you want.
Deleted:
< < Hello World! Hey, kkourt, great job! Test Yo, this is ArisSotiropoulos, aka sotirop!
Changed:
< < CSLab Web Utilities
> > Research Interests
Added:
> > Users
Projects
Lab Management
Line: 1 to 1
Welcome to the CSLab web
Hello World! Hey, kkourt, great job! Test
Added:
> > Yo, this is ArisSotiropoulos, aka sotirop!
CSLab Web Utilities
Line: 1 to 1
Welcome to the CSLab web
Hello World!
Changed:
< <
> > Hey, kkourt, great job! Test
CSLab Web Utilities
Line: 1 to 1
Welcome to the CSLab web
Hello World!
Line: 1 to 1
Welcome to the CSLab web
Changed:
< < Available Information
> > Hello World!
CSLab Web Utilities
Line: 1 to 1
Welcome to the CSLab web
Available Information
Line: 1 to 1
Changed:
< < Welcome to the home of TWiki.CSLab. This is a web-based collaboration area for ...
> > Welcome to the CSLab web
Changed:
< <
> > Available Information
Changed:
< < Site Tools of the CSLab Web
> > CSLab Web Utilities
Deleted:
< < No permission to view TWiki.WebSiteTools
Notes:
No permission to view TWiki.YouAreHere
No permission to view TWiki.SiteMap
Line: 1 to 1
Welcome to the home of TWiki.CSLab. This is a web-based collaboration area for ...
Changed:
< < Maintenance of the CSLab web
> > Site Tools of the CSLab Web
Notes:
Changed:
< <
> >
Line: 1 to 1
Welcome to the home of TWiki.CSLab. This is a web-based collaboration area for ...
Line: 18 to 18
Changed:
< < No permission to view TWiki.TWikiWebsTable
> >
Line: 1 to 1
Welcome to the home of TWiki.CSLab. This is a web-based collaboration area for ...
Line: 8 to 8
Changed:
< <
> >
Line: 1 to 1
Added:
> > Welcome to the home of TWiki.CSLab. This is a web-based collaboration area for ...
Maintenance of the CSLab web
Notes: