You can find my full CV here.

Research Interests

My research interests lie in the fields of Computer Systems, Virtualization, Operating Systems, High Performance Computing, and Interconnects.

My diploma thesis, conducted at the Computing Systems Laboratory under the supervision of Professor Nectarios Koziris, focused on integrating HPC interconnect semantics into virtualized environments, using a simple, lightweight RDMA protocol over Ethernet and the Xen hypervisor's split driver model. [abstract]

Work Experience

I worked for ~3 years as a (part-time) system administrator at the Computing Center of the School of Electrical and Computer Engineering of the National Technical University of Athens.

My duties involved maintaining, testing, and deploying various servers and services, including web servers, RDBMS, DNS, email servers, LDAP, Kerberos, and OpenVPN, as well as network maintenance.

I briefly worked as a site reliability engineer at DPG Web Development, where I was responsible for the design, deployment, and maintenance of an infrastructure of Linux servers hosting high-traffic web portals.

For the past ~5 years, I have worked as a cloud/systems engineer at GRNET on its IaaS cloud project (Synnefo / ~okeanos). I also joined the GRNET Network Operations Center servers team as a systems engineer.

Free Software Community Contributions

Diploma Thesis Abstract

The objective of this study is to analyze and evaluate the behavior of modern HPC cluster interconnects in virtualized environments. This work builds on previous research conducted at the Computing Systems Laboratory: we evaluate a simple interconnect based on an RDMA mechanism over programmable 10GbE interfaces, as well as a modified implementation that integrates this interconnect into the Xen virtualization platform.

Both the native and the virtualized implementations of the protocol are thoroughly evaluated in order to identify and eliminate possible bottlenecks, both in hardware and in the protocol's implementation. To gain further insight into the implications of software overheads, we port the virtualized implementation of the protocol to the host's kernel. Specifically, to profile and instrument the various phases of a network packet's lifecycle, we implement the interconnect's protocol using Xen's split driver model. This instrumentation yields some interesting results: a significant amount of time is spent in the frontend-backend communication mechanism; moreover, for large messages, the time spent copying pages across domains is non-negligible. Using simple optimizations, we were able to amortize these overheads and thus reduce the total time spent in the software stack.
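As a simplified illustration (not the thesis code, and not Xen's actual shared-memory ring API), the split driver model's frontend-backend exchange can be modeled as a ring of request/response slots where every cross-domain notification has a fixed cost; batching requests per notification is the kind of simple optimization that amortizes this overhead:

```python
from collections import deque

class SharedRing:
    """Toy model of a Xen-style split-driver ring: the frontend (guest)
    pushes requests, the backend (driver domain) consumes them and
    pushes responses. Real Xen rings live in a shared memory page and
    use event channels; here a plain counter stands in for the
    per-notification cost the text highlights."""

    def __init__(self):
        self.requests = deque()
        self.responses = deque()
        self.notifications = 0  # each cross-domain "kick" is expensive

    def frontend_send(self, batch):
        # Batching several requests per notification amortizes the
        # frontend-backend signalling overhead.
        self.requests.extend(batch)
        self.notifications += 1

    def backend_process(self):
        # Drain all pending requests, then notify the frontend once.
        while self.requests:
            req = self.requests.popleft()
            self.responses.append(("done", req))
        self.notifications += 1

ring = SharedRing()
ring.frontend_send(["tx-page-1", "tx-page-2", "tx-page-3"])
ring.backend_process()
print(len(ring.responses), ring.notifications)  # 3 2
```

Three requests complete with only two notifications; sending each request with its own kick would have cost four.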

Compared to Xen's generic Ethernet interface, our approach reduces the CPU overhead of protocol processing by transferring data directly from the VM's memory to the network. To achieve this, we register pages prior to communication, a common approach in HPC cluster interconnects. Preliminary results using simple micro-benchmarks show that the kernel-level implementation sustains 681 MiB/s for large messages while limiting the privileged guest's CPU utilization to 34%. In terms of latency, our approach achieves 28 μs, versus 70 μs in the TCP/IP case.
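The pre-registration idea above can be sketched as a cache of registered buffers: pinning and translating a page is paid once, and later transfers from the same buffer reuse the registration. The cost figures below are hypothetical, chosen only to show the amortization effect, and the class is an illustrative model, not the thesis implementation:

```python
class RegistrationCache:
    """Toy model of memory pre-registration, the technique the text
    borrows from HPC interconnects: the one-time cost of pinning and
    translating a buffer is cached, so repeated transfers from the
    same buffer pay only the per-message cost."""

    REGISTER_COST_US = 50   # hypothetical one-time pin/translate cost
    TRANSFER_COST_US = 5    # hypothetical per-message cost afterwards

    def __init__(self):
        self.registered = set()

    def send(self, buffer_id):
        # Register on first use; every later send reuses the entry.
        cost = self.TRANSFER_COST_US
        if buffer_id not in self.registered:
            self.registered.add(buffer_id)
            cost += self.REGISTER_COST_US
        return cost

cache = RegistrationCache()
costs = [cache.send("vm-buffer-0") for _ in range(4)]
print(costs)  # [55, 5, 5, 5]
```

Only the first message pays the registration cost; over a long-lived communication buffer the average per-message cost approaches the bare transfer cost, which is why registration-before-communication pays off for RDMA-style interconnects.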