NNDB’s Not a Database

My latest project is called NNDB.

I’ve worked with databases for quite a long time now, and for a while I’ve been thinking about how they work under the hood. I know very little about it, but I thought I could learn a bit by trying to implement something similar myself.

I’m interested in how queries work against joined tables, how to implement indices and so on.

I’ve also been feeling that I want to do some C++ as an open source project. I do it all day at work, and for some problems it feels like the right tool for the job.

NNDB is sort-of like an in-memory database, but it works with C++ types for its columns, instead of a fixed set like varchar, int etc. You can put your own value-typed classes in the columns, and all values are type-checked at compile time.

It’s always struck me as strange that with a traditional code+SQL setup you have to keep your SQL in sync with your code manually. Of course, there are lots of trendy Object-Relational-Mapping thingies that solve that problem, but I felt it could be approached from another direction: instead of generating code to match your data, or generating SQL to match your code, why not specify your data structure in code?

In NNDB you define a table something like this:

typedef nndb::Values< unsigned long, std::string,
    std::string, MyDate > PersonValues;

class PersonTable : public nndb::Table< PersonValues >
{
public:
    enum Columns { id, first_name, second_name, date_of_birth };
};

Actually, defining your own class is unnecessary, but it’s nice to have an enum to name your columns, and making a class gives you a nice place to put it.

To insert a row you do something like this:

PersonTable person_table;
person_table.Insert( PersonValues( 0,
    "Andy", "Balaam", MyDate( 12000000 ) ) );

You can do simple queries with WHERE and ORDER BY clauses, and I’m working on indexes.

After that will come JOINs, and anything else that takes my fancy.

I don’t anticipate NNDB being useful to anyone – it’s really for me to understand why things are as they are in the world of databases. However, you never know – it may turn out to be a fast and convenient way to store data in the C++ world. I think some of the applications that use databases don’t really need the kind of concurrent multi-user network-accessible features they have, but really just want to search, join and store reliably, and NNDB might one day grow into something that can find a niche.

To explore more, check out the complete example.

Talk in code

Last week we had an extended discussion at work about how we were going to implement a specific feature.

This discussion hijacked our entire Scrum sprint planning meeting (yes, I know, we should have time-boxed it). It was painful, but the guy who was going to implement it (yes, I know, we should all collectively own our tasks) needed the discussion: otherwise it wasn’t going to get implemented. It certainly wasn’t going to get broken into short tasks until we knew how we were going to do it.

Anyway, asides aside, I came out of that discussion bruised but triumphant. We had a plan not only on how to write the code, but also how to test it. I believe the key thing that slowly led the discussion from a FUD-throwing contest into a constructive dialogue was the fact that we began to talk in code.

There are two facets to this principle:

1. Show me the code

As Linus once said, “Talk is cheap. Show me the code.”

If you disagree at all about how the thing you’re building will work, open up the source files in question. Write example code – modify the existing methods or sketch a new one. Outline the classes you will need. Code is inherently unambiguous. White board diagrams and hand-waving are not.

Why wouldn’t you do this? Fear you might be wrong? Perhaps you should have phrased your argument a little less strongly?

Is this slower than drawing boxes on a whiteboard? Not if you include time spent resolving the confusion caused by the ambiguities inherent in line drawings.

Does UML make whiteboards less ambiguous? Yes, if all your developers can be bothered to learn it. But why learn a new language when you can communicate using the language you all speak all day – code?

2. Create a formal language to describe the problem

If your problem is sufficiently complex, you may want to codify the problem into a formal (text-based) language.

In last week’s discussion we were constantly bouncing back and forth between different corner cases until we started writing them down in a formal language.

The language I chose was an adaptation of a domain-specific language I wrote to test a different part of our program. I would love to turn the cases we wrote down in that meeting into real tests that run after every build (in fact I am working on it) but their immediate value was to turn very confusing “what-if”s into concrete cases we could discuss.

Before we started using the formal language, the conversations went something like this:

Developer: “If we implement it like that, this bad thing will happen.”

Manager: “That’s fine – it’s a corner case that we can tidy up later if we need it.”

Developer: (Muttering) “He clearly doesn’t understand what I mean.”


After we started using the formal language they went something like this:

Developer: “If we implement it like that, this bad thing will happen.”

Me: “Write it down, I tell you.”

Developer: (Typing) “See, this will happen!”

Manager: “That’s fine – it’s a corner case that we can tidy up later if we need it.”

Developer: (Muttering) “Flipping managers.”


The conversation progresses if all parties believe the others understand what they are saying. It is not disagreement that paralyses conversations – it is misunderstanding.

To avoid misunderstanding, talk in code – preferably a real programming language, but if that’s too verbose, a text-based code that is unambiguous and understood by everyone involved.

Note on whiteboards

You can’t copy and paste them, and you can’t (easily) keep what you did with them, and you can’t use them to communicate over long distances.

And don’t even try and suggest an electronic whiteboard. In a few years they may solve all of the above problems, but not now. They fail the “can I draw stuff?” test at the moment.

Even when electronic whiteboards solve those problems, they won’t solve the fact that lines and boxes are more ambiguous and less detailed than code in text form.

If you all know and like UML, that makes your diagrams less ambiguous, but still they often don’t allow enough detail: why bother?

An actual difficult bug fixed

Of course, I am bound to get a bug report as soon as I have posted this telling me my fix breaks everything, but for the moment I am chuffed that I found, tested, and fixed a genuinely difficult bug.

I am particularly proud because I wrote an automated test to ensure it can never happen again, and I used that test to make the debugging process much easier than it otherwise would have been. The code that reads, processes and stores listings in FreeGuide is a spider’s web of interfaces and helper classes (because of the arguably over-engineered plugins framework used for all the moving parts), and tracking this down with plain old-fashioned debugging would have been a huge job.

Anyway, I bet you are dying to hear what the bug was, aren’t you?

When you have a programme already stored in FreeGuide, and then a new one comes along that overlaps it, the old programme is deleted. For example if we start off with:

... 19:00 Top Gear ................... 20:00 Charlie and Lola .............

but then later download listings again and get these programmes:

... 19:00 Pocoyo .. 19:15 Round the World ...... 

Then Top Gear will disappear, and be replaced by the 2 new programmes. In fact, any old programme that overlaps any new incoming programme will be automatically deleted.

At least, that is what is supposed to happen. In fact, the real situation is a little more complex because the programmes are stored in separate files (.ser files) for different days and times. There are 4 files for each day, named things like “day-2008-09-15-A.ser”, where the suffixes A, B, C and D indicate which part of each day a file is for.

So imagine what happens when the first set of programmes comes in looking like this:

19:00 Programme 1A ......... 21:15 Programme 1B .. 21:30 Programme 1C ........... 22:00

and then the second comes in like this:

19:00 Programme 2A......................................... 21:45 Programme 2C .. 22:00

So obviously the 3 old programmes should be completely deleted, and the 2 new ones should be what you see.

But you don’t. In fact what you see is programme 1B and programme 2C, with gaps before 1B and between the two. Weird huh?

“Why?” I hear you ask. Well, it’s simple when you consider how the programmes are split into files.

Programme 1A goes into file day-2008-09-14-D.ser, and programmes 1B and 1C go into day-2008-09-15-A.ser.

[Side note: this is true in this case because the bug reporter is in the GMT -0400 timezone and the file boundaries are quarters of a day in GMT.]

Then, when the new programmes come along, 2A goes into 14-D – wiping out 1A, and 2C goes into 15-A – wiping out 1C but not 1B.

Then, when the files get read back in again later, 2A is read from 14-D, but then 1B is read from 15-A, wiping out 2A, and finally 2C is read in as well from 15-A, so we end up with 1B and 2C.

How to fix it? Well, what I did was leave everything as it is, and then do the final read in the reverse order. This means we read in 1B and 2C, but then we read 2A later, and it wipes out 1B, leaving 2A and 2C as we would expect.

Neat fix eh? It works because this kind of wrongness in the .ser files will only exist when a programme hanging off the end should have wiped out something in a later file. Because programmes are classified into files by their start time, they can only hang off the end of a file, not the beginning, so reading the files in backwards will always read the hanging-over file last, wiping out anything which should have been wiped out earlier.

There is a little bug/feature remaining, but it only applies when you get some really weird listings from your provider. If you had a programme like 1A (19:00 – 21:15), and downloaded new listings, which ONLY contained a programme overlapping it, but falling into a later file (so maybe it starts at 21:00), and didn’t contain any programme starting at 19:00, then the backwards reading would mean you would never see your new programme because it would be wiped out by 1A.

This is a very unusual case though, since normally if you get a new programme at 21:00, you will also get new programmes leading up to it, if only to reflect the fact that 1A is now a different length. So this is really a theoretical bug, which explains why I’ve decided not to fix it…

Anyway, by the time I’d fiddled with my test for this to get the bug to trigger (which took a long time – working out which bits to fake out and which to test at all was tricky), the actual fix was easily implemented (1 line of code I chose to break out into 3), and then validated in a single click.

Just in case I hadn’t mentioned it, I love tests.

FreeGuide 0.10.8

I am still working slowly on moving FreeGuide forward. Somehow it seems my itches for FreeGuide are all about making it less annoying for people who are trying it the first time. I guess this is motivated by my desire for world domination.

Anyway, we are one small step closer to my mum being able to use FreeGuide – when the “Choose channels” step (i.e. the XMLTV grabber configuration) goes wrong, you can now see a real genuine error message, and hopefully figure out what went wrong.

Actually, it always used to work that way but the error-catching got refactored away at some point. Anyway, I am slowly taking the ground back…

As I do more and more test-driven development at work I am becoming completely addicted. For this FreeGuide code I wrote a couple of unit tests but they are not within a proper framework, and can’t be launched easily as a test suite. I am considering JUnit.

I also want to set up some component-level tests e.g. for downloading listings for each country and checking everything works as expected. It’s brilliant fun having tests in place, but when you have as little time as I have for FreeGuide at the moment, it’s difficult to decide to spend a long time working on a test framework when I could be fixing a “real user problem” or adding a cool new feature.

But I’ve got the testing bug badly, so watch this space.

Templated test code?

At work at the moment, as part of an initiative to get with the 21st century, we are waking up to testing our code.

Thus, I am writing a lot of unit tests for old code, which can be soul-destroyingly repetitive and very pointless-feeling (even though really I do see a great value in the end result – tested code is refactorable code).

Often, tests have a lot in common with each other, so it feels right to reduce code repetition, and factor things into functions etc. The Right Way of doing this is to leave your tests as straightforward as possible, with preferably no code branches at all, just declarative statements.

Contemplating writing unit tests for the same method on 20+ very similar classes, using a template function “feels” right, for normal code values of “feel”. However, for test code, maybe it’s wrong?

My question is: is it ok to write a test function like this?:

void test_all_thingies()
{
    test_One_Thingy< Thingy1 >();
    test_One_Thingy< Thingy2 >();
    // ...one line per Thingy class
}

template< class T >
void test_One_Thingy()
{
    T thingy;
    TEST_ASSERT( thingy.isSomething() );
}

Worse still, is this ok?

void test_all_thingies()
{
    test_One_Thingy< Thingy1 >( "Thingy1 expected output" );
    test_One_Thingy< Thingy2 >( "Thingy2 expected output" );
    test_One_Thingy< Thingy3 >( "Thingy3 expected output" );
    test_One_Thingy< Thingy4 >( "Thingy4 expected output" );
}

template< class T >
void test_One_Thingy( std::string expected_output )
{
    T thingy;
    TEST_ASSERT( thingy.getOutput() == expected_output );
}

Reasons for: otherwise I’m going to be writing huge amounts of copy-pasted code (unless someone can suggest a better way?).

Reasons against: how clear is it going to be which class failed the test when it fails?

Update: fixed unescaped diagonal brackets.