Difference: WebHome (1 vs. 35)

Revision 35 (2014-07-07) - AnastasiosNanos

Line: 1 to 1
 

PDSG Research

Added:
>
>
Here is the new website presenting CSLab research.
 

Parallel, High-Performance Systems

We are interested in how advances in technology and architecture can lead to better performance in parallel, high-performance systems. The main focus of our work is to develop optimizations for applications, ranging from extremely parallel ones (e.g. SPMV) to inherently serial ones (e.g. Dijkstra's algorithm). These applications are studied on a variety of systems, such as PC clusters, multicore and multithreaded platforms, as well as emerging architectures like GPGPUs and the Cell B/E. At the same time, we also study the underlying hardware structures and try to improve their performance. In this context, current research efforts range from algorithms for partitioning resources (e.g. caches) between multiple cores to the development of efficient I/O and scheduling techniques.

Revision 34 (2010-01-20) - IoannisKonstantinou

Line: 1 to 1
 

PDSG Research

Parallel, High-Performance Systems

Revision 33 (2010-01-19) - IoannisKonstantinou

Line: 1 to 1
 

PDSG Research

Parallel, High-Performance Systems

Line: 47 to 47
 
Changed:
<
<
  • Large Scale Data Management
>
>
 

List of conferences

Revision 32 (2010-01-19) - IoannisKonstantinou

Line: 1 to 1
 

PDSG Research

Parallel, High-Performance Systems

Line: 47 to 47
 
Changed:
<
<
  • Large Scale Data Processing
>
>
  • Large Scale Data Management
 

List of conferences

Revision 31 (2010-01-15) - ChristinaBoumpouka

Line: 1 to 1
 

PDSG Research

Parallel, High-Performance Systems

Line: 47 to 47
 
Added:
>
>
  • Large Scale Data Processing
 

List of conferences

Revision 30 (2009-10-07) - VasileiosKarakasis

Line: 1 to 1
 

PDSG Research

Parallel, High-Performance Systems

Line: 47 to 47
 
Added:
>
>

List of conferences

Here is a list of conferences that are of interest to the Distributed Systems group at CSLab.

 

PDSG People

Academic Staff

Research Staff

PhD Students

Revision 29 (2009-10-06) - VasileiosKarakasis

Line: 1 to 1
 

PDSG Research

Parallel, High-Performance Systems

Line: 25 to 25
 
Added:
>
>

List of conferences

Here is a list of conferences that are of interest to the Parallel and High-Performance Systems group at CSLab.

 

Acknowledgements

Research in these areas is performed using Simics, running under an academic site license kindly provided by Virtutech. Intel Hellas has generously contributed to the infrastructure of our group.

Revision 28 (2009-07-08) - KonstantinosNikas

Line: 1 to 1
 

PDSG Research

Parallel, High-Performance Systems

Line: 11 to 11
 
    • Graph Algorithms

  • Parallelizing for Multicore Platforms
Changed:
<
<
    • Transactional Memory
    • Helper Threading
>
>
 

Added:
>
>
 
  • Interconnection Networks

Revision 27 (2009-07-03) - KaterinaDoka

Line: 1 to 1
 

PDSG Research

Parallel, High-Performance Systems

Line: 40 to 40
 
Added:
>
>
 

Revision 26 (2009-03-10) - KonstantinosNikas

Line: 1 to 1
 

PDSG Research

Parallel, High-Performance Systems

Line: 24 to 24
 
Added:
>
>

Acknowledgements

Research in these areas is performed using Simics, running under an academic site license kindly provided by Virtutech. Intel Hellas has generously contributed to the infrastructure of our group.

 

Distributed Systems

Revision 25 (2009-02-20) - KonstantinosNikas

Line: 1 to 1
Changed:
<
<

PDSG Research

>
>

PDSG Research

 
Changed:
<
<
Research Areas | Activities / Projects | Bibliography Portal
  • Computer Architecture: SMT processors, Caches for CMPs
  • Emerging Architectures: Multicore Systems, SMT processors, Transactional Memory
  • High Performance Computing: SPMV, Stencil Computations
  • High Performance Systems and Interconnects: MemBUS, gmblock
  • Virtualization: Virtualization
  • Grid Computing: GridTorrent, GREDIA, Grid4All, GridNews
  • P2P Networks and Distributed Systems: HiPPIS, PASSION Project, XOROS
 
>
>

Parallel, High-Performance Systems

 
Added:
>
>
We are interested in how advances in technology and architecture can lead to better performance in parallel, high-performance systems. The main focus of our work is to develop optimizations for applications, ranging from extremely parallel ones (e.g. SPMV) to inherently serial ones (e.g. Dijkstra's algorithm). These applications are studied on a variety of systems, such as PC clusters, multicore and multithreaded platforms, as well as emerging architectures like GPGPUs and the Cell B/E. At the same time, we also study the underlying hardware structures and try to improve their performance. In this context, current research efforts range from algorithms for partitioning resources (e.g. caches) between multiple cores to the development of efficient I/O and scheduling techniques.
 
Added:
>
>

Distributed Systems


 

PDSG People

Academic Staff

Research Staff

PhD Students

Revision 23 (2009-01-24) - KonstantinosNikas

Line: 1 to 1
 

PDSG Research

Added:
>
>

Research Areas

Activities / Projects

Bibliography Portal

 
Changed:
<
<

Research

Courses

CSLab

<-- Old version, knikas @ 24/01/2009>

Research Areas

Bibliography Portal

Activities / Projects

-->

People

>
>

PDSG People

Academic Staff

Research Staff

PhD Students

Revision 22 (2008-10-15) - VasileiosKarakasis

Line: 1 to 1
 

PDSG Research

Line: 22 to 22
 
Changed:
<
<
  • Vassilios Karakasis
>
>
  • Vasileios Karakasis
 

Revision 20 (2008-03-14) - IoannisKonstantinou

Line: 1 to 1
 

PDSG Research

Line: 20 to 20
 
Changed:
<
<
  • Ioannis Konstantinou
>
>
 

Revision 18 (2008-03-12) - NikosAnastopoulos

Line: 1 to 1
 

CSLab Research

Line: 23 to 23
 
Added:
>
>

Revision 17 (2008-03-07) - KorniliosKourtis

Line: 1 to 1
 

CSLab Research

Deleted:
<
<
The main objective of this activity is to combine the advantages of distributed-memory architectures (scalability) with those of the shared-memory programming paradigm (easy and secure programming). Other relevant activities are the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.
 
Deleted:
<
<
In this context, our research has focused on applying transformations to nested for-loops, in order to execute them efficiently on Non-Uniform Memory Access (NUMA) machines, such as the SCI clusters of the laboratory. In particular, we apply a transformation called tiling, or supernode transformation, in order to minimize the effect of communication latency on the total parallel execution time of the algorithms. The tiling method groups neighboring computation points of the nested loop into blocks called tiles or supernodes, thus increasing the computation grain and decreasing both the communication volume and frequency. Applying these tiling techniques, we have developed a tool which accepts C-like nested loops and partitions them into groups/tiles with small inter-communication requirements. The tool automatically generates efficient message-passing code (using MPI) to be executed on SMPs or clusters. Future work includes comparisons of certain variations of tiling (shape, size, etc.) and code generation techniques, based on experimental results obtained from applying the tool on an SCI cluster.
 
Changed:
<
<
In addition, we explore several methods, such as overlapping communication with computation, to further reduce the total execution time of the transformed code. The targeted communication platforms include SCI, GM message passing over the Myrinet interconnect, etc.
>
>

Research

Courses

CSLab

<-- Old version, knikas @ 24/01/2009>

Research Areas

Bibliography Portal

Activities / Projects

-->

People

Revision 16 (2008-03-04) - ArisSotiropoulos

Line: 1 to 1
 

CSLab Research

The main objective of this activity is to combine the advantages of distributed-memory architectures (scalability) with those of the shared-memory programming paradigm (easy and secure programming). Other relevant activities are the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.

Revision 15 (2008-03-04) - ArisSotiropoulos

Line: 1 to 1
 

CSLab Research

The main objective of this activity is to combine the advantages of distributed-memory architectures (scalability) with those of the shared-memory programming paradigm (easy and secure programming). Other relevant activities are the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.

Revision 14 (2008-03-03) - GiorgosVerigakis

Line: 1 to 1
 

CSLab Research

The main objective of this activity is to combine the advantages of distributed-memory architectures (scalability) with those of the shared-memory programming paradigm (easy and secure programming). Other relevant activities are the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.

In this context, our research has focused on applying transformations to nested for-loops, in order to execute them efficiently on Non-Uniform Memory Access (NUMA) machines, such as the SCI clusters of the laboratory. In particular, we apply a transformation called tiling, or supernode transformation, in order to minimize the effect of communication latency on the total parallel execution time of the algorithms. The tiling method groups neighboring computation points of the nested loop into blocks called tiles or supernodes, thus increasing the computation grain and decreasing both the communication volume and frequency. Applying these tiling techniques, we have developed a tool which accepts C-like nested loops and partitions them into groups/tiles with small inter-communication requirements. The tool automatically generates efficient message-passing code (using MPI) to be executed on SMPs or clusters. Future work includes comparisons of certain variations of tiling (shape, size, etc.) and code generation techniques, based on experimental results obtained from applying the tool on an SCI cluster.
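The supernode grouping described above can be illustrated with a toy 2-D iteration space in C. This is only a minimal sketch: the grid size, tile size, and the per-point work are illustrative placeholders, not output of the group's actual tool.

```c
#define N 8
#define TILE 4

/* Tiling/supernode sketch: instead of sweeping the N x N iteration
   space point by point, iterate over TILE x TILE blocks (tiles) and
   then over the points inside each tile. With a larger computation
   grain, communication happens once per tile boundary instead of
   once per point, reducing both its volume and its frequency. */
double tiled_fill(double a[N][N]) {
    for (int ii = 0; ii < N; ii += TILE)              /* tile rows */
        for (int jj = 0; jj < N; jj += TILE)          /* tile columns */
            for (int i = ii; i < ii + TILE; i++)      /* points inside */
                for (int j = jj; j < jj + TILE; j++)  /* the tile */
                    a[i][j] = i + j;  /* placeholder per-point work */

    /* Sanity check: every point is still visited exactly once. */
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}
```

The traversal order changes, but the set of visited points does not, which is why the transformation is legal whenever the tile shape respects the loop's data dependences.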

In addition, we explore several methods, such as overlapping communication with computation, to further reduce the total execution time of the transformed code. The targeted communication platforms include SCI, GM message passing over the Myrinet interconnect, etc.
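The overlap of communication and computation typically follows a software-pipelining pattern across tiles; a minimal sketch in plain C is shown below. Here `send_tile`, `compute_tile`, and `wait_tile` are hypothetical stand-ins that only record the schedule; in the real scheme `send_tile` would issue a non-blocking transfer (e.g. MPI_Isend over SCI or GM) and `wait_tile` the matching MPI_Wait.

```c
#define NTILES 4

/* Event log: records the order of pipeline steps so the schedule
   can be inspected. op: 0 = send, 1 = compute, 2 = wait. */
static int nevents = 0;
static int events[3 * NTILES][2];

static void send_tile(int t)    { events[nevents][0] = 0; events[nevents++][1] = t; }
static void compute_tile(int t) { events[nevents][0] = 1; events[nevents++][1] = t; }
static void wait_tile(int t)    { events[nevents][0] = 2; events[nevents++][1] = t; }

/* Software-pipelined schedule: while tile t is (conceptually) in
   flight on the network, the CPU computes tile t+1, hiding the
   communication latency behind useful work. */
int pipelined_tiles(void) {
    compute_tile(0);                       /* prologue: first tile */
    for (int t = 0; t < NTILES; t++) {
        send_tile(t);                      /* initiate transfer of tile t */
        if (t + 1 < NTILES)
            compute_tile(t + 1);           /* overlap with next tile's work */
        wait_tile(t);                      /* complete transfer of tile t */
    }
    return nevents;                        /* 3 * NTILES events in total */
}
```

The key property is that each tile's compute step sits between the initiation and the completion of the previous tile's transfer, so the transfer cost is paid only when it exceeds the compute time.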

Deleted:
<
<

Research Areas

Revision 13 (2008-03-03) - GiorgosVerigakis

Line: 1 to 1
 

CSLab Research

The main objective of this activity is to combine the advantages of distributed-memory architectures (scalability) with those of the shared-memory programming paradigm (easy and secure programming). Other relevant activities are the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.
Line: 7 to 7
 In addition, we explore several methods, such as overlapping communication with computation, to further reduce the total execution time of the transformed code. The targeted communication platforms include SCI, GM message passing over the Myrinet interconnect, etc.

Research Areas

Changed:
<
<
  • Computer Architecture
>
>
 
  • Parallel Systems
  • Distributed Systems
  • P2P Networks

Revision 12 (2008-03-03) - GiorgosVerigakis

Line: 1 to 1
Changed:
<
<

Welcome to the CSLab web

This is the CSLab collaboration wiki, implemented with TWiki. Everyone can use it to collaborate with others. Several sections have already been created, but feel free to add or change anything you want.
>
>

CSLab Research

The main objective of this activity is to combine the advantages of distributed-memory architectures (scalability) with those of the shared-memory programming paradigm (easy and secure programming). Other relevant activities are the automatic extraction of parallelism and the subsequent mapping of algorithms onto different types of parallel architectures.
 
Added:
>
>
In this context, our research has focused on applying transformations to nested for-loops, in order to execute them efficiently on Non-Uniform Memory Access (NUMA) machines, such as the SCI clusters of the laboratory. In particular, we apply a transformation called tiling, or supernode transformation, in order to minimize the effect of communication latency on the total parallel execution time of the algorithms. The tiling method groups neighboring computation points of the nested loop into blocks called tiles or supernodes, thus increasing the computation grain and decreasing both the communication volume and frequency. Applying these tiling techniques, we have developed a tool which accepts C-like nested loops and partitions them into groups/tiles with small inter-communication requirements. The tool automatically generates efficient message-passing code (using MPI) to be executed on SMPs or clusters. Future work includes comparisons of certain variations of tiling (shape, size, etc.) and code generation techniques, based on experimental results obtained from applying the tool on an SCI cluster.
 
Changed:
<
<

Research Interests

>
>
In addition, we explore several methods, such as overlapping communication with computation, to further reduce the total execution time of the transformed code. The targeted communication platforms include SCI, GM message passing over the Myrinet interconnect, etc.

Research Areas

 
Deleted:
<
<
 
Changed:
<
<

Users

Projects

Lab Management

>
>
  • Distributed Systems
  • P2P Networks

Revision 11 (2008-02-21) - ArisSotiropoulos

Line: 1 to 1
 

Welcome to the CSLab web

Added:
>
>
This is the CSLab collaboration wiki, implemented with TWiki. Everyone can use it to collaborate with others. Several sections have already been created, but feel free to add or change anything you want.
 
Deleted:
<
<
Hello World! Hey, kkourt, great job! Test Yo, this is ArisSotiropoulos, aka sotirop!
 
Changed:
<
<

CSLab Web Utilities

>
>

Research Interests

 
Added:
>
>

Users

Projects

Lab Management

Revision 10 (2008-02-21) - ArisSotiropoulos

Line: 1 to 1
 

Welcome to the CSLab web

Hello World! Hey, kkourt, great job! Test

Added:
>
>
Yo, this is ArisSotiropoulos, aka sotirop!
 

CSLab Web Utilities

Revision 9 (2008-02-21) - VangelisKoukis

Line: 1 to 1
 

Welcome to the CSLab web

Hello World!

Changed:
<
<
>
>
Hey, kkourt, great job! Test
 

CSLab Web Utilities

Revision 8 (2008-02-20) - KorniliosKourtis

Line: 1 to 1
 

Welcome to the CSLab web

Hello World!

Revision 7 (2008-02-20) - KorniliosKourtis

Line: 1 to 1
 

Welcome to the CSLab web

Changed:
<
<

Available Information

  • ...
  • ...
  • ...
>
>
Hello World!
 

CSLab Web Utilities

Revision 6 (2005-03-28) - TWikiContributor

Line: 1 to 1
 

Welcome to the CSLab web

Available Information

Revision 5 (2005-03-28) - TWikiContributor

Line: 1 to 1
Changed:
<
<
Welcome to the home of TWiki.CSLab. This is a web-based collaboration area for ...
>
>

Welcome to the CSLab web

 
Changed:
<
<
>
>

Available Information

  • ...
  • ...
  • ...
 
Changed:
<
<

Site Tools of the CSLab Web

>
>

CSLab Web Utilities

 
Deleted:
<
<

No permission to view TWiki.WebSiteTools

Notes:

No permission to view TWiki.YouAreHere

No permission to view TWiki.SiteMap

Revision 4 (2002-04-14) - PeterThoeny

Line: 1 to 1
 Welcome to the home of TWiki.CSLab. This is a web-based collaboration area for ...

Changed:
<
<

Maintenance of the CSLab web

  •    (More options in WebSearch)
  • WebChanges: Find out recent modifications to the TWiki.CSLab web.
  • WebIndex: Display all TWiki.CSLab topics in alphabetical order. See also the faster WebTopicList
  • WebNotify: Subscribe to be automatically notified when something changes in the TWiki.CSLab web.
  • WebStatistics: View access statistics of the TWiki.CSLab web.
  • WebPreferences: Preferences of the TWiki.CSLab web.
>
>

Site Tools of the CSLab Web

Warning
Can't INCLUDE TWiki.WebSiteTools repeatedly, topic is already included.
  Notes:
Changed:
<
<
  • You are currently in the TWiki.CSLab web. The color code for this web is a (SPECIFY COLOR) background, so you know where you are.
  • If you are not familiar with the TWiki collaboration tool, please visit WelcomeGuest in the TWiki.TWiki web first.
>
>

Warning
Can't INCLUDE TWiki.YouAreHere repeatedly, topic is already included.
 

Warning
Can't INCLUDE TWiki.SiteMap repeatedly, topic is already included.

Revision 3 (2002-04-07) - PeterThoeny

Line: 1 to 1
 Welcome to the home of TWiki.CSLab. This is a web-based collaboration area for ...

Line: 18 to 18
 
  • You are currently in the TWiki.CSLab web. The color code for this web is a (SPECIFY COLOR) background, so you know where you are.
  • If you are not familiar with the TWiki collaboration tool, please visit WelcomeGuest in the TWiki.TWiki web first.
Changed:
<
<

No permission to view TWiki.TWikiWebsTable

>
>

Warning
Can't INCLUDE TWiki.SiteMap repeatedly, topic is already included.

Revision 2 (2001-11-24) - PeterThoeny

Line: 1 to 1
 Welcome to the home of TWiki.CSLab. This is a web-based collaboration area for ...

Line: 8 to 8
 
  •    (More options in WebSearch)
  • WebChanges: Find out recent modifications to the TWiki.CSLab web.
Changed:
<
<
  • WebIndex: Display all TWiki.CSLab topics in alphabetical order.
>
>
 
  • WebNotify: Subscribe to be automatically notified when something changes in the TWiki.CSLab web.
  • WebStatistics: View access statistics of the TWiki.CSLab web.
  • WebPreferences: Preferences of the TWiki.CSLab web.

Revision 1 (2001-08-08) - PeterThoeny

Line: 1 to 1
Added:
>
>
Welcome to the home of TWiki.CSLab. This is a web-based collaboration area for ...

Maintenance of the CSLab web

  •    (More options in WebSearch)
  • WebChanges: Find out recent modifications to the TWiki.CSLab web.
  • WebIndex: Display all TWiki.CSLab topics in alphabetical order.
  • WebNotify: Subscribe to be automatically notified when something changes in the TWiki.CSLab web.
  • WebStatistics: View access statistics of the TWiki.CSLab web.
  • WebPreferences: Preferences of the TWiki.CSLab web.

Notes:

  • You are currently in the TWiki.CSLab web. The color code for this web is a (SPECIFY COLOR) background, so you know where you are.
  • If you are not familiar with the TWiki collaboration tool, please visit WelcomeGuest in the TWiki.TWiki web first.

Warning
Can't INCLUDE TWiki.TWikiWebsTable repeatedly, topic is already included.
 