Quick note: libvirt log files

Today I wanted to start my VMs and work but bumped into a rather nasty issue: libvirt refused to launch my VMs and kept saying:

libvirtError: Unable to read from monitor: Connection reset by peer

Google didn’t help much; I’m guessing this error is way too generic. However, I bumped into a post that suggested checking libvirt’s logs and gave the complete path to them.

As someone with a bit of a background as a system administrator, I’m quite embarrassed I didn’t think of it earlier. I can’t stress enough the importance of log files. Especially when you bump into an error like the one above, they can give you precious insight into the problem.

And sure enough, the log file said:

Virtio-9p Failed to initialize fs-driver with id:fsdev-fs0 and export path:/home/tina/staging

I moved the staging directory somewhere else last night and forgot about it. 🙂 Moving it back to this location (alternatively, I could’ve edited the path used by the VM) solved the problem.

By the way, libvirt keeps its logs in

/var/log/libvirt/qemu/<image name>.log
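So the next time a VM refuses to start, the quickest first step is to look at the tail of its log. A minimal sketch (the VM name "debian-vm" is just an example; substitute your own):

```shell
# Show the last lines of the per-VM QEMU log kept by libvirt.
# "debian-vm" is a placeholder for your actual VM name.
sudo tail -n 50 /var/log/libvirt/qemu/debian-vm.log
```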


Working environment setup: QEMU and KVM

A little background story first, since I haven’t posted anything recently.

During this internship I will be working on a USB-over-IP sharing system, called USB/IP (http://usbip.sourceforge.net/). You can attach a USB device on one machine and access it remotely over a network. But if you’ve used network device sharing solutions such as NFS, you’ve probably noticed that you can only perform generic operations such as read and write, but can’t format, mount or unmount a remote device. This is where USB/IP comes in: you can really use a USB device as if it were locally attached.

In order for this to be possible, you need kernel-level support. Indeed, USB/IP consists of several kernel modules. Currently, these modules reside in the staging directory of the Linux kernel tree (i.e. mostly functional, but not quite up to the kernel’s standards). One of the goals of my internship is to clean up these drivers.

I began by reading a paper about the architecture and protocol used by USB/IP and yesterday I thought about setting up my work environment. For this, I needed 2 machines.

But before I got started, I thought: what would be an efficient way to develop code for this project? Use my physical machine and a virtual one? Not really a good idea for kernel development: things might go bad, your kernel will panic and everything will halt. Well then, use two virtual machines? That’s a better idea, but where do you actually write the code? On one of the VMs, or on the physical machine, transferring it afterwards?

So I asked my mentor, Andy Grover, what would be a suitable workflow for this project. In response, he wrote an interesting article on his blog about it. If you’re interested in kernel development, do give it a read. 🙂

Ok, after this quite lengthy introduction, I finally reached the subject announced in the title: QEMU and KVM.

I must admit: although I had heard about QEMU and KVM in the past and roughly knew what they do, I had never used either of them before today. Therefore, I turned to their manuals.

Creating a virtual machine in QEMU was a rather painfully long process. I bumped into a Debian image refusing to boot. The reason was that the manual was written quite a while ago: in the provided example, the machine was allocated 256 MB of RAM, which led me to use 512 MB for my tests. You can probably guess that any self-respecting operating system today requires a bit more.

When I finally managed to get over this, it complained that my processor was not 64-bit capable. Mhm. Turns out QEMU uses a different executable for 64-bit virtual machines.
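For the record, the 64-bit executable is qemu-system-x86_64. A minimal sketch of creating and booting a VM with it; the image size, RAM amount, and file names are just examples:

```shell
# Create a 10 GB qcow2 disk image for the guest (names/sizes are examples).
qemu-img create -f qcow2 debian.qcow2 10G

# Boot the installer from a CD image, using the 64-bit system emulator.
qemu-system-x86_64 -m 512 -hda debian.qcow2 -cdrom debian-installer.iso -boot d
```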

Fast forward a few hours (until Debian finally installed all its packages), and I noticed that booting a virtual machine in QEMU took literally minutes. Yes, I am not kidding. But what really made me give up on this solution was the way networking worked; basically, you had to do some voodoo magic combined with Aztec incantations to get Internet access, which was pretty crucial for my project. As much as I like tinkering, this was a bit more than I wished for.

And so I got to QEMU combined with KVM. The relationship between these two is a bit confusing. In a nutshell, KVM provides kernel-level virtualization infrastructure and greatly boosts QEMU’s performance when your VMs use the same architecture as the physical machine.
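In practice, using KVM from QEMU is a small change; a sketch, assuming the same example image as before and a CPU with hardware virtualization support:

```shell
# Check that the CPU advertises hardware virtualization
# (Intel VT-x shows up as "vmx", AMD-V as "svm"); non-zero output is good.
egrep -c '(vmx|svm)' /proc/cpuinfo

# Boot the same example image, this time with KVM acceleration enabled.
qemu-system-x86_64 -enable-kvm -m 512 -hda debian.qcow2
```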

From this point on, it was pretty much smooth sailing. I managed to create 2 Debian VMs and boot them with kernel 3.11.

Another cool thing I learned about today is VirtFS which lets you share folders between the host and the guests. Luckily, setting it up for my VMs was not an issue.
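A sketch of how such a VirtFS (9p) share can be wired up; the mount tag, mount point, and the extra QEMU options are illustrative, though the exported path and fsdev id match the ones from the libvirt log at the top of this post:

```shell
# Host side: export a host directory to the guest over virtio-9p.
qemu-system-x86_64 -enable-kvm -m 512 -hda debian.qcow2 \
    -virtfs local,id=fsdev-fs0,path=/home/tina/staging,mount_tag=staging,security_model=mapped

# Guest side: mount the shared folder by its mount tag.
sudo mkdir -p /mnt/staging
sudo mount -t 9p -o trans=virtio staging /mnt/staging
```

Note that if the host directory in path= doesn’t exist, the fs driver fails to initialize and the VM won’t start, which is exactly the error from the first part of this post.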

At last, the VMs (the names are from World of Warcraft – I’m an intermittent player 🙂 ).