Friday, July 5, 2013

Running a Stage Prop Using a BeagleBone Black

As long-term readers of this blog might remember, I am involved with the theatre and have built electronics-driven stage props before. Last time, however, I used an Arduino to drive an H-bridge IC for a stepper motor, so it was pretty much low-level electrical engineering, with a bit of Arduino coding.

This semester, however, I'm involved in a play with the Provisional Players, a student theatre group at the University of Tübingen. The play is Around the World in 80 Days, our own adaptation of the novel by Jules Verne. As the protagonists travel around the world, their progress is tracked by moving dots on a world map. The map is fairly large, about 2 meters wide, and we're using about 150 LEDs for the red dots. Naturally, driving 150 LEDs directly from a single microcontroller would require 150 outputs, so that was out of the question. Phileas Fogg and Passepartout only travel in one direction, though, so pretty soon we had the idea of using shift registers to turn on the LEDs instead. To control the shift registers, we wanted a straightforward interface, because we'll only have one technician handling lights, sound and the world map during the show, and we wanted to make her life as easy as possible. The prototype set-up that I just got to work (with a couple of days to spare, go me!) looks like this:

At the beginning of the project it wasn't clear how we would drive the LEDs, so having 60 GPIO pins available sounded like a smart idea. I decided to use a BeagleBone Black (BBB) that I had lying around after almost demoing it at the SambaXP conference in May. As I didn't build the circuits around the shift registers, I won't dwell on that topic. What I want to talk about a bit more is the software side of things. The task of the system is to run the LEDs that illustrate the protagonists' travels around the world.
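The shift-register approach can be sketched in a few lines. The sketch below is Python with a stand-in GPIO class, purely for illustration: the real project drives the BeagleBone's pins from bonescript/node.js, the pin name is hypothetical, and I'm assuming the register's serial data input is held high so that each clock pulse lights one more LED along the route.

```python
class FakeGPIO:
    """Stand-in for a real GPIO pin: records and prints what it would do."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def write(self, value):
        self.log.append(value)
        print("%s -> %d" % (self.name, value))

def advance(clock_pin, steps=1):
    """With the shift register's data input held high, each clock pulse
    shifts one more '1' into the register, lighting the next LED."""
    for _ in range(steps):
        clock_pin.write(1)
        clock_pin.write(0)

clock = FakeGPIO("P8_12")  # hypothetical BeagleBone pin name
advance(clock, steps=3)    # light the next three LEDs along the route
```

The point of the design: one clock line per route segment replaces 150 individually addressed outputs.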

Fogg's trip around the world, courtesy of Wikipedia.

The play has many, many scenes, and unfortunately the map is unlikely to be visible from the tech booth. So the interface not only needed to allow turning on the appropriate LEDs, but also to provide visual feedback to the technician driving it. Also, experience with the software running our stage lights has taught us that the "go back one light change" feature is used more often than we'd like to admit, so a "back" button for the map would be great as well. As there's a network connection for the BBB, I decided to go for a web interface. The BBB already ships with some demo web pages, some of which demo a JavaScript library called bonescript. That was what I wanted to use as well.

However, on closer inspection it turned out that bonescript was originally designed as a node.js module: running it from the browser sometimes just didn't work right, and getting it integrated into a website that looked the way I wanted wasn't trivial either. A look at the bonescript page showed that the main use case seemed to be running from the Cloud9 web-based IDE that also ships on the BBB image. For me, that turned out to be even less useful. Half of the time my program refused to start up at all, and if it did, most of the time it refused to terminate, forcing me to ssh into the BBB and kill the stray node.js job from the command line. Clearly, that wasn't a viable solution for our show.

But of course, bonescript being a node.js module meant that I could simply write a node.js webapp that calls the appropriate bonescript functions to trigger the hardware actions. And that's just what I did. The code is available on GitHub if you want to have a look, but basically it works like this:
  • A node.js application provides an HTTP server with a handful of endpoints to advance the LEDs and to report which travel destinations are available. The HTTP server also serves the UI files.
  • A separate module keeps the list of destinations, along with the pin that needs to be signaled to advance the shift register and how many signals need to be sent per destination.
  • A fakebonescript library prints to stdout instead of talking to real hardware, so I could develop on my laptop without having the BBB available all the time.
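The fake-hardware trick translates to any language. In Python, the equivalent of fakebonescript might look like the sketch below; the real library is a node.js module, and the class here only mirrors bonescript's pinMode/digitalWrite call names for illustration:

```python
class FakeBoard:
    """Development stand-in for the hardware library: same call names as
    bonescript (pinMode, digitalWrite), but it only prints what it would do."""
    HIGH, LOW = 1, 0

    def pinMode(self, pin, mode):
        print("pinMode(%s, %s)" % (pin, mode))

    def digitalWrite(self, pin, value):
        print("digitalWrite(%s, %s)" % (pin, value))

def get_board(use_fake):
    """Return the fake board for laptop development; a real hardware
    backend would be plugged in here when running on the BeagleBone."""
    if use_fake:
        return FakeBoard()
    raise NotImplementedError("real hardware backend goes here")

board = get_board(use_fake=True)
board.pinMode("P8_12", "OUTPUT")        # hypothetical pin name
board.digitalWrite("P8_12", board.HIGH)
```

Because the rest of the application only ever talks to the board object, swapping the fake for the real thing is a one-line change.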
On the user interface side of things, I use KnockoutJS to keep track of the world map state and to handle user interface updates. KnockoutJS also comes with a simple templating system that lets me build the UI for the travel destinations dynamically. For layout, I use Twitter's Bootstrap framework. This allowed me to build a simple UI with all the required "travel", "back" and "clear" buttons in less than 100 lines of custom HTML and JavaScript.

For a total of slightly under 350 lines of code I had to write myself, I've got a BeagleBone Black running ~150 LEDs that light up to show the route Phileas Fogg and his servant Passepartout travel around the world. The total hardware cost is less than 100 EUR, even counting the BBB, which will be reusable for other projects. I'm still amazed how nice and easy it is to do really cool electronics projects these days.

Thursday, March 21, 2013

Model vs. Reality

My twitter feed currently is ablaze with discussions about sexism in tech and science. As a member of the social group that goes through life with difficulty set to "easy", of course none of this has happened to me. I don't know what it feels like to go to a conference where people constantly comment on my looks or gender instead of my work. As I lack the experience, and I also lack solid data, I don't want to write about that in this post.

However, as a computational biologist, I do have some experience with model vs. reality clashes, and I believe that might be the reason why people on the internet are surprised about the existence of female scientists or engineers. People also tend to get upset when they realize their mental model doesn't match reality, which might explain some of the emotional upset males show in the discussions I mentioned in the first paragraph.

Now, instead of taking the easy way out and blaming this on internet stupidity, I want to put another theory out there. People often don't get statistics. In my day-to-day work, I frequently run into publications where there is at best a loose correlation between the data and the model supposed to explain said data. And I believe that's what is happening here as well.

So by the time-proven method of pulling data out of my hat, I propose that when you graph the "ability to do science" against the sex chromosome layout of individuals, you'll get something similar to this:

So far, so good. The problem is that despite "ability to do science" not clustering for any sane definition of a cluster, the mental model of many people seems to look like this:

If you have settled for a given model, there is quite some inertia to stay with your chosen model, even if the data doesn't back it up. If reality dares to come up with conflicting data, blame reality! The ripples of a lot of mental models running into reality hard are currently washing over my twitter feed. The inertia of sticking with your model makes it hard to realize it, but in the end when reality and your model disagree, it is easier to change your model. In my example graph, a lot of lines I could put in there would likely have a similar quadratic error. To me, this is a warning sign that my model probably is bad. In the example, the conclusion should be that not only "women are bad at science, men are great" (the red line) is wrong, but also every other attempt at constructing a linear correlation between the parameters. "Ability to do science" and "sex chromosome layout" are orthogonal characteristics*. Also, why is there an arrow on the x axis, when we're looking at discrete parameters?

* According to my hat, of course.

Monday, May 14, 2012

Playing with POSIX pipes in Python

Recently I was faced with an external program, called from my script, that only writes its output to a file, not to stdout. Since I had to call this program many times in parallel, I decided to fake up its output files via POSIX FIFOs (named pipes).
Unfortunately, the Python API around FIFOs stays pretty close to the POSIX API, so it feels a bit un-Pythonic. The following post illustrates my approach to getting around this limitation.

Workload

In order to simulate my workload, I came up with the following simple script called pipetest.py that takes an output file name and then writes some text into that file.
#!/usr/bin/env python

import sys

def main():
    pipename = sys.argv[1]
    with open(pipename, 'w') as p:
        p.write("Ceci n'est pas une pipe!\n")

if __name__ == "__main__":
    main()

The Code

In my test, this "file" will be a FIFO created by my wrapper code. The implementation of the wrapper code is as follows; I will go over it in detail further down this post:

#!/usr/bin/env python

import tempfile
import os
from os import path
import shutil
import subprocess

class TemporaryPipe(object):
    def __init__(self, pipename="pipe"):
        self.pipename = pipename
        self.tempdir = None

    def __enter__(self):
        self.tempdir = tempfile.mkdtemp()
        pipe_path = path.join(self.tempdir, self.pipename)
        os.mkfifo(pipe_path)
        return pipe_path

    def __exit__(self, type, value, traceback):
        if self.tempdir is not None:
            shutil.rmtree(self.tempdir)

def call_helper():
    with TemporaryPipe() as p:
        script = "./pipetest.py"
        subprocess.Popen(script + " " + p, shell=True)
        with open(p, 'r') as r:
            text = r.read()
        return text.strip()

def main():
    call_helper()

if __name__ == "__main__":
    main()

Code in Detail

So let's look at the code in more detail. The code relies on a bunch of modules from the Python standard library and works with Python 2.6 and up.

  • tempfile is used to get a temporary directory for me to create the FIFO in.
  • os has the os.mkfifo() call.
  • os.path handles the path crunching required.
  • shutil is used to remove the temporary directory after use.
  • subprocess is used to run the workload script.

TemporaryPipe class

Next comes the nifty part, a context manager object handling the creation and removal of the temporary FIFO pipe. Let's look at the class in detail.

class TemporaryPipe(object):
    def __init__(self, pipename="pipe"):
        self.pipename = pipename
        self.tempdir = None
The class definition and the constructor don't really hide anything interesting, though it's worth noting that self.tempdir is set to None. That will make the clean-up easier further down.

__enter__

    def __enter__(self):
        self.tempdir = tempfile.mkdtemp()
        pipe_path = path.join(self.tempdir, self.pipename)
        os.mkfifo(pipe_path)
        return pipe_path
The __enter__(self) function is the set-up code for the context manager. Here, a temporary directory is created. Afterwards, os.mkfifo() creates the FIFO. Finally, the pipe's path is returned.

__exit__

    def __exit__(self, type, value, traceback):
        if self.tempdir is not None:
            shutil.rmtree(self.tempdir)
The __exit__(self, type, value, traceback) function is always called when the context manager's block is exited. Thus, it's the ideal place to run the clean-up, in our case removing the temporary directory and the pipe contained within it. shutil.rmtree() takes care of this just fine. If mkdtemp() failed, we don't have to bother, of course. Our clean-up doesn't require any extra knowledge of the things we're cleaning up, so we're free to ignore all those parameters.

The call_helper Function

def call_helper():
    with TemporaryPipe() as p:
        script = "./pipetest.py"
        subprocess.Popen(script + " " + p, shell=True)
        with open(p, 'r') as r:
            text = r.read()
        return text.strip()
Because TemporaryPipe is a context manager, it's usable from a with statement. This means that inside the with TemporaryPipe() as p block, there is a temporary directory containing a FIFO. Because __enter__() returns the pipe's path, that path is assigned to p within the block.
subprocess.Popen() is now used to run the workload script, going via a shell to evaluate the shebang line. This probably isn't the smartest idea performance-wise, but this is proof-of-concept code after all.
After the workload script has been started, another with statement opens a new block using the pipe's path, opening the FIFO for reading. The text is read out and the newline stripped. Finally, the return statement returns the text that was read, and also causes the pipe's context manager to call __exit__() to clean up.
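As an aside, the shell can be avoided entirely by passing Popen() an argument list; the kernel then honors the script's shebang line on its own. A minimal sketch follows; it runs sys.executable -c instead of ./pipetest.py only so it is self-contained:

```python
import subprocess
import sys

# Passing a list instead of a string skips the shell; if the first element
# were a script path like "./pipetest.py", the kernel would read its
# shebang line directly when executing it.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('no shell needed')"],
    stdout=subprocess.PIPE)
out, _ = proc.communicate()
print(out.decode().strip())  # → no shell needed
```

This also sidesteps any quoting issues with the pipe path.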

Conclusions

I'm pretty content with the way the call_helper() function reads. The complexity of setting up and then cleaning up the FIFO is hidden away in the TemporaryPipe class. I spent a bit of time coming up with this, so I thought I'd share this solution with other people. Now I just need to add this to my utility library and write tests for it.
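For completeness: since Python 2.5, the same behavior can be had with less boilerplate via contextlib.contextmanager. A sketch of TemporaryPipe as a generator function, with the same semantics (assuming a POSIX system, since os.mkfifo() doesn't exist elsewhere):

```python
import contextlib
import os
import shutil
import tempfile
from os import path

@contextlib.contextmanager
def temporary_pipe(pipename="pipe"):
    """Create a FIFO in a fresh temporary directory; remove both on exit."""
    tempdir = tempfile.mkdtemp()
    try:
        pipe_path = path.join(tempdir, pipename)
        os.mkfifo(pipe_path)
        yield pipe_path
    finally:
        shutil.rmtree(tempdir)
```

The try/finally takes the place of __exit__(), so the directory is removed even if the with block raises.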

Saturday, March 31, 2012

Samba4 DNS sprint, day 5 summary

Another long and only partially successful day is behind me, and my allocated time for this sprint is over. I say "partially successful" because I did not manage to get GSS-TSIG working. This is mostly due to the fact that I don't understand how to hook it up to GENSEC/gss on the Samba side; the API is a bit confusing to the uninitiated. What I did get done was to reach a point where incoming TKEY messages are parsed, checked, and pretty much handled correctly. We currently bail out of there with a BADKEY error, pretending the client's key didn't work. If someone with a reasonable grasp of GENSEC could explain what I need to do there to get the GSSAPI blob from the client authenticated, I would expect GSS-TSIG to be very, very close.

Because it's the end of the week let me take a look at the high and low points of this sprint over the week:

  • High point: On Tuesday morning, I finally got forwarding sorted out. Ever since Tuesday, all DNS requests on my dev machine were handled by my local samba server.
  • Low point: I wasted most of Tuesday trying to debug my HMAC-MD5 signing code. Debugging crypto is hard, because the only debug tool available is "stare at the code and think very hard". This might be the weapon of choice of the kernel community, but certainly not my preferred way of doing things.
  • High point: On Wednesday morning, I managed to fix signing of TSIG requests.
  • Low point: This got me to work on TSIG some more instead of moving on to GSS-TSIG, and ultimately failed because signing of TSIG replies doesn't work correctly yet; another day wasted.
  • Low point: After reading up on TKEY and GSS-TSIG, I realized that I didn't really understand what I had to do in Samba to get this sorted out. This ended up being a major stumbling block, in fact I'm still stuck there.
  • High point: During my attempts to find a useful test for TKEY, I set up a Win7 client for my domain, and after a tiny fix to get PTR records handled in the update code, that machine would correctly register its forward and reverse records (without crypto, but also without complaining), and was perfectly happy using samba's DNS service for its needs.

So to sum up, forwarding turned out to be a neater feature than I initially expected it to be, and allows me to run samba as my main name server for the local network. On the negative side, all that fancy crypto stuff isn't working yet. I do feel that none of these is really far off anymore. Maybe another pair or two of eyes would help there. I've updated the Samba Wiki DNS page to reflect the current status.

Friday, March 30, 2012

Samba DNS sprint, day 4 summary.

I'm still a bit stuck with TKEY/TSIG, unfortunately. While looking at the GSS-TSIG implementation we have in libaddns, I realized that I could simplify my time handling. That ended up fixing my TSIG issues from yesterday; that is, I can now correctly generate the client/request side of an HMAC-MD5 TSIG. The server side still seems broken: at least, I can't get dig to accept my reply signature, and when I query BIND, the server's reply differs from what I would calculate for it. Oh well.

I've looked at plain TKEY, but for now it doesn't really seem worth the effort, so I've decided to work on GSS-TSIG directly instead. I don't really know how to deal with the GENSEC side of this, though, so it's a bit hard to keep the momentum going. I'm beginning to fear that I won't get this implemented this week; not because any part of it is particularly hard, but because there are tons of little things that all take a couple of minutes. And of course, sitting in front of the computer alone, lone-ranger style, isn't the most fun way to develop software.

For tomorrow, I hope to get a bit more done than today. I'll be working on a little gss-tsig test utility based on libaddns that I can use to test my server implementation. That should at least allow me to figure out what's going on at specific steps. I still might need some help on the Gensec side.

Thursday, March 29, 2012

Samba DNS sprint, day 3 summary

Some progress on the TSIG front, but I'm stuck on the exact signing method for a packet. For some reason, dig and I disagree on what the HMAC-MD5 of a specific query should be. The RFC is a bit vague, and the BIND code in that area might as well be in assembler. (Ok, it's C, but their coding conventions differ so much from ours that I'd probably have to spend a week getting my brain to adjust.)
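For reference, the HMAC-MD5 primitive itself is trivial; the vague part is the exact byte layout of what gets signed (per RFC 2845, the wire-format DNS message followed by a block of TSIG variables). A Python sketch of just the primitive, with placeholder inputs:

```python
import hashlib
import hmac

# Placeholder inputs: for real TSIG the message would be the wire-format
# DNS message plus the TSIG variable block defined in RFC 2845.
key = b"shared-secret"
message = b"dns-message-bytes"
mac = hmac.new(key, message, hashlib.md5).digest()
print(len(mac))  # MD5 produces a 16-byte MAC
```

Getting this one-liner right was never the problem; agreeing with dig on what bytes go into message was.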

So I'm not continuing on hmac-md5 support, but will instead look at GSS-TSIG directly today. That's the must-have feature, and the whole week would be wasted if I didn't get that in.

TL;DR: HMAC-MD5-TSIG stupid, working on GSS-TSIG now.

Tuesday, March 27, 2012

Samba4 DNS sprint, day 2 summary

I actually spent my time working out some smaller kinks in the DNS server that I ran into while using it as the only DNS server on my development machine. I also started with restructuring my dns processing code a bit so I can handle TSIGs in a sensible way. I've got dig set up to send TSIGs with an all-0 hmac key, so for tomorrow I should be ready to go.

Oh, and I pushed my dns forwarder work to master, and it passed autobuild. Life is good.