LinuxCon 2014

As part of my internship, I had a travel allowance for attending a conference or workshop so I chose to attend one of the largest Linux conferences: LinuxCon Europe, which took place in Düsseldorf, Germany.

I landed in Düsseldorf on Friday, along with my friends. We were a pretty large group of Romanians, with 4 OPW interns (I believe we were the group with the largest number of women in the whole conference 🙂 ). The weekend was reserved for exploring Düsseldorf and all it has to offer. I am not going to lie – German beer is as good as they say!

On Monday, the conference began. I attended the opening keynotes and thought the new project the Linux Foundation announced, Dronecode, was really cool! Building a community around an open source platform for drones will bring them closer to end users and better prepare them for specific tasks, as dictated by the community’s needs.

After the keynotes I went to the first talk: Enhancing Real-Time Capabilities with the PRU, by Ron Birkett from Texas Instruments. I am interested in embedded systems and currently working on a microkernel for such systems, so this talk looked like it could give some insight into how real-time characteristics can be achieved. The PRU (Programmable Realtime Unit) in question is a piece of hardware composed of RISC cores, memory and some other components. Its aim is to eliminate all sources of non-determinism in a system. As a side note, it’s worth saying that real-time characteristics are not really about executing a task extremely fast, but about being able to guarantee a deadline for the execution of that task.

One source of non-determinism is caches: the needed block of memory might or might not be in the cache when it is needed, so you can’t guarantee a certain task will get executed in a given amount of time. The PRU keeps its own memory logic to deal with this. Similarly, interrupts are a source of non-determinism, as you can’t predict when they arrive. That’s why the PRU doesn’t use interrupts, but polling. These are only two of the tricks a PRU does. As one of my colleagues puts it, this is a very orthodox way of obtaining real-time characteristics and a source of inspiration for our future endeavours into real-time operating systems.

The next presentation I wanted to attend was 12 Lessons Learnt in Boot Time Reduction but, apparently, many other people wanted to see it too, so the room filled up before I could get in. That wasn’t much of a problem, as I wanted to go around the booths as well.

One of my objectives for LinuxCon was to find internship opportunities for next summer. Unfortunately, this didn’t go as I expected. Some of the booths were just for showing things off. The people at company booths told me they were interested in hiring, but were looking for full-time employees and knew little about internship programs at their companies. On the bright side, I did get some contacts and I am going to enquire about internships. I found this situation rather peculiar but, on the other hand, it is an event targeted towards professionals and not so much towards newbies.

Later in the day there was the Kernel Internship Report presentation, where OPW interns presented their work. I had exactly 5 minutes to speak about a three-month internship, so I had to carefully choose the most relevant parts. In the end, I think all the interns did a great job presenting their results.

Both Wednesday and Thursday were full, with lots of talks to attend and people to talk to. It was nice getting to hear people speak whose names I’ve seen many times on the kernel mailing list or in tech articles. After LinuxCon, I feel inspired and motivated to continue working on open source software. And I plan on attending next year as well – looking forward to the event in Dublin!


The many identities of a USB device

After working for 2 months (whoa, 2 months already!?) on USB/IP, I can say I’ve become familiar with the way USB devices are seen by an operating system. Well, that belief was shattered today.

A problem of USB/IP at this point is that both the client and the server can use the shared device at the same time. If, for example, you share a USB mass storage device and both involved parties mount it and read data from it, there shouldn’t be an issue. But what if they both decide to write to the device?

Obviously, concurrent access without any supervising entity is not desirable. To avoid it, the server would have to be prevented from using the shared device (since the client intends to use the device in the first place).

Thanks to Sarah Sharp and Alan Stern, I found out about a mechanism called port claiming. While a port is claimed, whenever you plug in a USB device or change the configuration of an existing one, usbcore will prevent kernel drivers from binding to the device’s interfaces. No interface drivers, no usability – sounds good enough.

This mechanism is intended to be used from userspace, by various applications that want custom configurations for a device. So an ioctl() call is used, having as arguments the usbfs file corresponding to the USB hub the device will be connected to, and the port number. Sounds simple enough, right? Wrong (at least for me).

After some research on how to enable this usbfs, I found out it was deprecated and the files it contained were moved to sysfs and /dev. What I knew was that I was looking for *some* file that would somehow correspond to a hub. sysfs didn’t help much – I couldn’t find anything like that. On the other hand, /dev/bus/usb contained folders of char devices, and the structure resembled the USB tree architecture. (I had read that the files in usbfs look like /proc/bus/usb/BBB/DDD – where BBB and DDD are numbers – and, indeed, /dev/bus/usb looked the same. However, my brain was helpful enough to make this connection a long time after.)

Alan confirmed that this was, indeed, the file I was looking for. The plan is to claim the port from the driver (hence, from kernel space), but I wanted to see how this worked from a userspace program that called the above-mentioned ioctl(). And this is where the fun began.

I should mention that my laptop has 2 physical ports: one has another 4-port USB hub attached (with keyboard, mouse and headphones), the other has a small mass storage device. The plan was to claim the port the storage device was using.

lsusb provided the following output (the first physical port is omitted):

Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 021: ID 058f:6387 Alcor Micro Corp. Flash Drive
Bus 001 Device 003: ID 5986:0364 Acer, Inc

My first thought was that the hub was device 001:001. Now I had to determine which port the storage device was connected to. Could it be that 021? Removing and plugging the device back in incremented that number, so it couldn’t be. This is the first identity: how the device is seen on the logical USB bus. It doesn’t matter which physical port it is plugged into.

Then I thought about the bus IDs in /sys/bus/usb/devices. My device had bus ID 1-1.2, so port 2 must be the one I was looking for. However, after claiming port 2 on hub /dev/bus/usb/001/001, drivers would still bind to my device just fine. Something was not quite right.

A more graphical display of the devices associated with the physical port in question:

[screenshot of the device tree for that port]

Hm, there are 2 hubs. Oh wait, the first one must be the virtual one created by the USB subsystem. So the “real” hub I was looking for must be 001:002. Et voilà, the port was indeed claimed.

This is the second identity of a USB device: a string of hub ports. It stays the same as long as the device is plugged into the same physical port.

Now that I think about it, things seem obvious. However, the documentation on this is really not newbie-friendly. Maybe that’s why every time I figure something out I feel like I’ve accomplished something. 🙂

USB Core & friends

It’s been a while since I last wrote something, and I thought it was about time I did it again. 🙂

The reason I haven’t been writing recently is that I’ve been progressing very slowly. I managed to have a non-trivial patch accepted (yey!) that adds functionality for an option that was left out of USB/IP in the last major refactoring.

The second patch I’ve been working on converts the usbip-host driver from an interface driver into a device driver. To put things into context, the USB/IP architecture looks pretty much like this:

The driver in question is the Stub Driver in the image. As can be seen, the USB architecture is a layered one, going from the host controller driver (HCD), to the USB core driver and, finally, interface drivers (PDD – Per Device Driver).

At this point it is worth mentioning how a USB device is organized. Each USB device comes with several configurations (which specify the power and bandwidth requirements). Only one configuration can be active at a given time, and the USB core driver decides which one to use when the device is plugged in.

Within each configuration there are multiple interfaces. For example, in the case of USB speakers, you may have one interface that deals with the audio data and another one that manages the buttons. And within each interface there are multiple endpoints. You can think of endpoints as sinks and sources – each one either receives or transmits data. When you write something to your USB mass storage device, you are actually sending data to an endpoint. Endpoints can be used for data transfer or control purposes. There is a special endpoint, endpoint 0, which is not part of any interface and is used for device management.

The stub driver used to manage only an interface. Since USB/IP exports a whole device, not just an interface, it makes sense for this driver to be a device driver. So, the driver moves a layer down the hierarchy. And this is where things get tricky.

I began by implementing the callbacks required by struct usb_device_driver – nothing special here. However, binding my device to this new driver took me a while to get working. Why? dmesg was silent, there were no errors thrown, but the device still wouldn’t bind.

Long story short, after quite a bit of googling and studying the way devices and drivers interact, I found out that every USB device is bound at the core level to a driver imaginatively called usb. Unbinding the device from this one and binding it to the stub fixed things.

After this adventure I have managed to successfully bind the device at the core level and make it visible to the other host, ready for export.

In an ideal world, the patch would be ready to submit. Except that when I actually try attaching the device from another host, I get a cute stack trace on the server side. It seems that it receives a weird endpoint from the client. I am still tracking down the bug; perhaps the HCD needs to be modified as well.

And now, back to reading the USB spec and debugging. 🙂

Quick note: libvirt log files

Today I wanted to start my VMs and get to work, but I bumped into a rather nasty issue: libvirt refused to launch my VMs and kept saying:

libvirtError: Unable to read from monitor: Connection reset by peer

Google didn’t help much, I’m guessing because this error is way too generic. However, I bumped into a post that suggested checking libvirt’s logs and gave the complete path to them.

As a person with a bit of a background in system administration, I’m quite embarrassed I didn’t think of it before. I can’t stress enough the importance of log files. Especially when you bump into an error like the one above, the log files can give you precious insight into the problem.

And sure enough, the log file said:

Virtio-9p Failed to initialize fs-driver with id:fsdev-fs0 and export path:/home/tina/staging

I had moved the staging directory somewhere else last night and forgotten about it. 🙂 Putting it back in that location (alternatively, I could’ve edited the path used by the VM) solved the problem.

By the way, libvirt keeps its logs in

/var/log/libvirt/qemu/<image name>.log

Working environment setup: QEMU and KVM

A little background story first, since I haven’t posted anything recently.

During this internship I will be working on a USB-over-IP sharing system called USB/IP. You can attach a USB device on one machine and access it remotely over a network. But if you’ve used network device sharing solutions such as NFS, you’ve probably noticed that you can only perform generic operations such as read and write, but can’t format, mount or unmount a remote device. This is where USB/IP comes in: you can really use a USB device as if it were locally attached.

In order for this to be possible, you need kernel-level support. Indeed, USB/IP consists of several kernel modules. Currently, these modules reside in the staging directory of the Linux kernel tree (i.e. mostly functional, but not quite up to the standards). One of the goals of my internship is to clean up these drivers.

I began by reading a paper about the architecture and protocol used by USB/IP and yesterday I thought about setting up my work environment. For this, I needed 2 machines.

But before I got started, I thought: which would be an efficient way to develop code for this project? Use my physical machine and a virtual one? Not really a good idea for kernel development: things might go bad, your kernel will panic and everything will halt. Well then, use two virtual machines? It’s a better idea, but where do you actually write code? On one of the VMs or on the physical machine and transfer it afterwards?

So I asked my mentor, Andy Grover, which would be a suitable workflow for this project. In response, he wrote an interesting article on his blog about this. If you’re interested in kernel development, do give it a read. 🙂

OK, after this quite lengthy introduction, I finally reached the subject I announced in the title: QEMU and KVM.

I must admit: although I have heard about QEMU and KVM in the past and briefly knew what they were doing, I have never used any of them before today. Therefore, I turned to their manuals.

Creating a virtual machine in QEMU was a rather painfully long process. I bumped into a Debian image refusing to boot – the reason was that the manual was written quite a while ago, and in the provided example the machine was allocated 256 MB of RAM. You can probably guess that any self-respecting operating system today requires a bit more; bumping it to 512 MB for my tests fixed the boot.

When I finally managed to get past this, it complained that my processor was not 64-bit capable. Mhm. Turns out QEMU uses a different executable (qemu-system-x86_64) for 64-bit virtual machines.

Fast forward a few hours (until Debian finally installed all its packages), and I noticed that booting a virtual machine in QEMU took literally minutes. Yes, I am not kidding. But what really made me give up on this solution was the way networking worked: basically, you had to do some voodoo magic combined with Aztec incantations to get Internet access, which was pretty crucial for my project. As much as I like tinkering, this was a bit more than I wished for.

And so I got to QEMU combined with KVM. The relationship between these two is a bit confusing. In a nutshell, KVM offers kernel-level virtualization infrastructure and greatly boosts QEMU’s performance when your VMs have the same architecture as the physical machine.

From this point on, it was pretty much smooth sailing. I managed to create 2 Debian VMs and boot them with kernel 3.11.

Another cool thing I learned about today is VirtFS, which lets you share folders between the host and the guests. Luckily, setting it up for my VMs was not an issue.

At last, the VMs (the names are from World of Warcraft – I’m an intermittent player 🙂 ).



It has been quite a while since I’ve last blogged, feels nice to be back. 🙂

The main purpose of this blog is to track my progress on the project I will soon begin working on: driver staging code cleanup in the Linux kernel, with Andy Grover as my mentor. This is part of the OPW, organized by the GNOME Foundation.

Besides this, I will probably write about some other technical subjects or problems I bump into, as long as they’re related to FLOSS.

Overall, this blog’s contents will be technical. Hope you’ll find cool things here.