Working on SPICE at Red Hat

On June 3rd, 2019, I joined the Red Hat SPICE team, working remotely from my home in the Grenoble area, France.

With teammates spread across Italy, the UK, Poland, Israel, Brazil and the US, I will work on SPICE, Red Hat's solution for remote virtual desktops: you run virtual machines on a powerful server, and you access them transparently over the LAN or the Internet.

SPICE offers features such as:

  • USB redirection (plug your mouse/keyboard/USB stick into your computer, and it shows up as plugged into the VM),
  • file drag-and-drop, to seamlessly transfer files from your computer to the VM, as well as shared directories,
  • shared clipboard for transparent copy-and-paste.

Happy to join Red Hat!

Public release of my Qemu-snapshot work

in Code | code

On 2022-01-17, I found out that Virtual Open Systems had released, back in July 2019, the work I did over roughly two years (May 2017-May 2019) as part of the ExaNoDe European research project:

Virtual Open Systems developed a QEMU extension for virtual machine periodic checkpointing. A repository including all the changes is available at this address. The code is released under the GNU GPLv2. Virtual Open Systems is working on a companion page on its website that explains how to compile and reproduce the periodic checkpointing of an ARMv8 virtual machine. The page will be reachable from this address.

Unfortunately, they squashed all the commits together, making the work hard to share for review and/or rebase … And it has never been submitted upstream …

  • Commit on the Virtual Open Systems GitLab instance
  • Copy of the commit in my GitHub account, with minor improvements (split of the original UFFD commits, markdown formatting of the squashed commit message).

exanode qemu Virtual Open Systems

VOSYS September Newsletter

in virtualization | newsletter

Virtual Open Systems' September newsletter was published today, with two articles about my work:

Mixed-critical virtualization: VOSySmcs, mixed-criticality graphic support

VOSySmcs consists of a full-fledged software stack supporting the new generation of virtual car cockpits, where the In-Vehicle Infotainment (IVI) system and the digital instrument cluster are consolidated and interact on a single platform. Indeed, traditional gauges and lamps are being replaced by digital screens, offering opportunities for new functions and interactivity: vehicle information, entertainment, navigation, camera/video and device connectivity are being combined into displays. However, these pieces of information do not all have the same level of criticality, and the consolidation of mixed-critical applications represents a real challenge.

In this context, VOSySmcs includes mixed-criticality graphic support that enables the integration of safety-critical and non-critical information on a single display, while providing rendering guarantees for the safety-critical output. In addition, VOSySmcs supports GPU virtualization, in order to provide hardware acceleration to the virtual machines running in the non-critical partition (Linux, Android, etc.).


Computation acceleration: OpenCL inside VMs and containers

As part of the ExaNoDe H2020 research project, Virtual Open Systems develops a software API remoting solution for OpenCL. OpenCL is an open standard maintained by the Khronos Group, used for offloading computation tasks to accelerators such as GPUs and FPGAs.

Software API remoting is a para-virtualization technique that allows accessing a host native library from inside a virtual machine. It operates by intercepting API function calls from the application in the guest system and forwarding them to a helper process on the host, through shared memory pages. API remoting for containers can be achieved similarly, by replacing the host-to-VM communication layer (based on Virtio) with Linux inter-process communication mechanisms.

To comply with the high performance requirements of OpenCL usage, it is important to reduce as much as possible the overhead of the API remoting layer. Hence, the work has focused on passing the data buffers (which may account for several gigabytes of memory) with zero copies, thanks to guest physical page lookup and remapping.


GPU virtualization solutions for HPC @ Compas'18

in Presentation | intro


On the 4th of July, I was in Toulouse, France, to present our work on GPU virtualization solutions for HPC at COMPAS, the French conference on parallelism, architectures and systems. The presentation was about OpenCL accelerator API remoting for HPC computing and GPU hardware-assisted pass-through; the poster was about virtual machine live and incremental checkpointing.

OpenCL API remoting and Qemu live and incremental checkpointing are part of our ExaNoDe activities.

GPU virtualization solutions for HPC

Kevin Pouget, Alvise Rigo, Daniel Raho (Virtual Open Systems)

  • OpenCL API Remoting
  • GPU Hardware-Assisted Pass-through

ExaNode/ExaNeSt @ eXdci'18 ExascaleHPC workshop

in Presentation | talk


Today, my colleague Radoslav Dimitrov is in Ljubljana, Slovenia, at the eXdci European HPC Summit Week, to present our work on virtualization at the second ExascaleHPC joint workshop between the ExaNoDe, ExaNeSt, ECOSCALE and EuroEXA projects.

The talk is entitled:

Virtualization technologies in modern HPC systems

It presents two aspects of our virtualization work:

  • Software switches
  • API Remoting in OpenCL and MPI

VOSYS March Newsletter

in virtualization | newsletter

Virtual Open Systems' March newsletter was published today, with an article about my work:

Checkpointing for HPC: High performance live checkpointing

At the 2018 HiPEAC ExascaleHPC workshop, organized in the context of the ExaNoDe EC project, Virtual Open Systems presented the progress of its implementation of live and incremental checkpointing for Qemu-KVM.

The live aspect of this work reduces the virtual machine (VM) downtime to a few milliseconds, while the RAM is copied to disk in the background; the incremental aspect allows checkpointing only the pages actually modified since the previous checkpoint. Periodic virtual machine checkpointing improves the reliability of HPC and cloud-computing environments, as it prevents the loss of volatile data in case of hardware failure. The live aspect makes it virtually transparent to the user, whose VM keeps running unaltered during the checkpointing. The incremental aspect further reduces the checkpoint's impact on the system, as only part of the RAM is saved, and also reduces the footprint of the checkpoints on disk.

The challenges behind both aspects of the checkpointing are related to tracking and handling the memory pages being modified by the guest system. Virtual Open Systems developed a novel approach to track these changes in Qemu, which guarantees the consistency of every checkpoint, regardless of the activity of the guest system.

Qemu checkpointing

ExaNoDe @ DSD 2017

in Paper

Today, I presented the ExaNoDe position paper at the Euromicro DSD conference in Vienna. VOSYS leads the dissemination work package of ExaNoDe and coordinated the writing of the paper.


The paper is entitled Paving the way towards a highly energy-efficient and highly integrated compute node for the Exascale revolution: the ExaNode approach:

Power consumption and high compute density are the key factors to be considered when building a compute node for the upcoming Exascale revolution. Current architectural design and manufacturing technologies are not able to provide the requested level of density and power efficiency to realise an operational Exascale machine. A disruptive change in the hardware design and integration process is needed in order to cope with the requirements of this forthcoming computing target. This paper presents the ExaNoDe H2020 research project aiming to design a highly energy efficient and highly integrated heterogeneous compute node targeting Exascale level computing, mixing low-power processors, heterogeneous co-processors and using advanced hardware integration technologies with the novel UNIMEM Global Address Space memory system.
