Analog literals

I love this: C++ Multi-Dimensional Analog Literals.

I quote:

Have you ever felt that integer literals like “4” don’t convey the true size of the value they denote? If so, use an analog integer literal instead:

unsigned int b = I---------I;

It goes on to explain that you can use 2- and 3-dimensional “analog literals”. Genius. Read the article. Try to read the code :)

Isn’t C++ … erm … powerful?

You’ll notice that there are 9 dashes used to denote 4. This is because the trick relies on operator--. I’m sure the original author did this in his/her sleep and thought it was too trivial to post (or posted it before?), but I thought: if we can use operator! instead, can’t we create analog literals that use the same number of symbols as the number we want?

The answer is yes, and it’s pretty simple:

notliterals.h:

#ifndef NOTLITERALS_H
#define NOTLITERALS_H

class NotLiteral
{
public:
	NotLiteral( unsigned int ival )
	: val_( ival )
	{
	}

	// Each unary ! adds one, so the literal has as many
	// symbols as the value it denotes.
	NotLiteral operator!() const
	{
		return NotLiteral( val_ + 1 );
	}

	operator unsigned int() const
	{
		return val_;
	}

	unsigned int val_;
};

const NotLiteral NL( 0 );

#endif // NOTLITERALS_H

test_notliterals.cpp:

#include "notliterals.h"
#include <cassert>

int main()
{
	assert( !!!!NL == 4 );
	assert( !!NL == 2 );

	assert( !!!!!!!!!!!!!!!NL == 15 );
}

With this simpler form, it’s almost believable that there might be some kind of useful application?

Extending this to 3 dimensions is left as an exercise for the reader. For 2 dimensions, if you just want the area (not the width and height), how about this?:

	assert( !!!
	        !!!
	        !!!NL == 9 );

Update: By the way, if you don’t like all the emphasis! of! using! exclamation! marks! you can do the same thing with the unary one’s complement operator, ~. Just replace “!” everywhere above with “~” and you’re done. Unfortunately, you can’t do the same with - or + because the parser recognises “--” as the decrement operator instead of seeing that it is clearly two calls to the unary negation operator.
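The same trick ports to other languages too. Here’s a quick Python sketch of the ~ version (my own illustration, not from the article): Python dispatches each unary ~ to __invert__, so we can count symbols the same way.

```python
# Python port of the analog-literal trick (my illustration, not from
# the article): each unary ~ adds one, so the literal has exactly as
# many symbols as the value it denotes.
class NotLiteral:
    def __init__( self, val ):
        self.val = val

    def __invert__( self ):
        # Called once per unary ~ applied to the object.
        return NotLiteral( self.val + 1 )

    def __eq__( self, other ):
        # Allow direct comparison with plain integers.
        return self.val == other

NL = NotLiteral( 0 )

assert ~~~~NL == 4
assert ~~NL == 2
```

Python rejects `--NL` and `!!NL` at the syntax level, so ~ is the only single-character operator that works here.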

Separate regular expressions, or one more complex one?

I have asked myself this question several times, so I thought it was about time I did a test and found an answer.

If the user of your program can supply you with a list of regular expressions to match against some text, should you combine those expressions into one big one, or treat them separately?

In my case I need an OR relationship, so combining them just means putting a pipe symbol between them.*

So: one expression made by ORing, or looping through several – which is better? There’s only one way to find out:

import re, sys

line_with_match_foo = "This line contains foo."
line_with_match_baz = "This line contains baz."
line_without_match = "This line does not contain it."

re_strings = ( "foo", "bar1", "bar2", "baz", "bar3", "bar4", )

piped_re = re.compile( "|".join( re_strings ) )

separate_res = list( re.compile( r ) for r in re_strings )

NUM_ITERATIONS = 1000000

def piped( line ):
    for i in range( NUM_ITERATIONS ):
        if piped_re.search( line ):
            print( "match!" ) # do something

def separate( line ):
    for i in range( NUM_ITERATIONS ):
        for s in separate_res:
            if s.search( line ):
                print( "match!" ) # do something
                break # stop looping because we matched

arg = sys.argv[1]

if arg == "--piped-nomatch":
    piped( line_without_match )
elif arg == "--piped-match-begin":
    piped( line_with_match_foo )
elif arg == "--piped-match-middle":
    piped( line_with_match_baz )
elif arg == "--separate-nomatch":
    separate( line_without_match )
elif arg == "--separate-match-begin":
    separate( line_with_match_foo )
elif arg == "--separate-match-middle":
    separate( line_with_match_baz )

And here are the results:

$ time python re_timings.py --piped-nomatch > /dev/null

real    0m0.987s
user    0m0.943s
sys     0m0.032s
$ time python re_timings.py --separate-nomatch > /dev/null

real    0m3.695s
user    0m3.641s
sys     0m0.037s

So when no regular expressions match, the combined expression is about 3.7 times faster.

$ time python re_timings.py --piped-match-middle > /dev/null

real    0m1.900s
user    0m1.858s
sys     0m0.033s
$ time python re_timings.py --separate-match-middle > /dev/null

real    0m3.543s
user    0m3.439s
sys     0m0.042s

And when an expression near the middle of the list matches, the combined expression is about 1.9 times faster.

$ time python re_timings.py --piped-match-begin > /dev/null

real    0m1.847s
user    0m1.797s
sys     0m0.035s
$ time python re_timings.py --separate-match-begin > /dev/null

real    0m1.649s
user    0m1.597s
sys     0m0.032s

But in the (presumably much rarer) case where all lines match the first expression in the list, the separate expressions are marginally faster.

A clear win for combining the expressions, unless you think it’s likely that most lines will match expressions early in the list.

Note also that if you combine the expressions, performance stays similar wherever the matching expression sits in the list (whereas with separate expressions the list order matters a lot), so there is probably no need for you or your user to second-guess what order to put the expressions in, which makes life easier for everyone.

I would guess the results would be similar in other programming languages. I certainly found it to be similar in C# on .NET when I tried it a while ago.

By combining the expressions we ask the regular expression engine to do the heavy lifting for us, and it is specifically designed to be good at that job.

Open questions:

1. Have I made a mistake that makes these results invalid?

2. * Can arbitrary regular expressions be ORed together simply by concatenating them with a pipe symbol in between?

3. Can we do something similar if the problem requires us to AND expressions?
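On open questions 2 and 3, here is a sketch of plausible answers (my own, not from the article). A bare "|".join() can change meaning if a sub-expression contains its own "|" or relies on anchors, so wrapping each one in a non-capturing group first keeps them independent; for AND, lookaheads let every expression match the same string.

```python
import re

# Sketch (my own, not from the article): safely ORing arbitrary
# user-supplied sub-expressions by wrapping each in a non-capturing
# group, so embedded "|" and anchors keep their local meaning.
patterns = [ "foo|food", "^bar", "baz+" ]  # hypothetical user patterns
safe_or = re.compile( "|".join( "(?:%s)" % p for p in patterns ) )

assert safe_or.search( "some bar here" ) is None  # "^bar" keeps its anchor
assert safe_or.search( "bar here" ) is not None

# ANDing via lookaheads: the string must contain a match for every
# pattern, in any order, checked from the start of the string.
and_re = re.compile( "".join( "(?=.*(?:%s))" % p for p in [ "foo", "baz" ] ) )

assert and_re.match( "baz then foo" ) is not None
assert and_re.match( "only foo" ) is None
```

The lookahead trick hands the AND case to the engine in a single pass too, though I haven’t benchmarked whether it beats looping over separate expressions.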

Talk in code

Last week we had an extended discussion at work about how we were going to implement a specific feature.

This discussion hijacked our entire Scrum sprint planning meeting (yes, I know, we should have time-boxed it). It was painful, but the guy who was going to implement it (yes, I know, we should all collectively own our tasks) needed the discussion: otherwise it wasn’t going to get implemented. It certainly wasn’t going to get broken into short tasks until we knew how we were going to do it.

Anyway, asides aside, I came out of that discussion bruised but triumphant. We had a plan not only on how to write the code, but also how to test it. I believe the key thing that slowly led the discussion from a FUD-throwing contest into a constructive dialogue was the fact that we began to talk in code.

There are two facets to this principle:

1. Show me the code

As Linus once said, “Talk is cheap. Show me the code.”

If you disagree at all about how the thing you’re building will work, open up the source files in question. Write example code: modify the existing methods or sketch a new one. Outline the classes you will need. Code is inherently unambiguous; whiteboard diagrams and hand-waving are not.

Why wouldn’t you do this? Fear you might be wrong? Perhaps you should have phrased your argument a little less strongly?

Is this slower than drawing boxes on a whiteboard? Not if you include time spent resolving the confusion caused by the ambiguities inherent in line drawings.

Does UML make whiteboards less ambiguous? Yes, if all your developers can be bothered to learn it. But why learn a new language when you can communicate using the language you all speak all day – code?

2. Create a formal language to describe the problem

If your problem is sufficiently complex, you may want to codify the problem into a formal (text-based) language.

In last week’s discussion we were constantly bouncing back and forth between different corner cases until we started writing them down in a formal language.

The language I chose was an adaptation of a domain-specific language I wrote to test a different part of our program. I would love to turn the cases we wrote down in that meeting into real tests that run after every build (in fact I am working on it), but their immediate value was to turn very confusing “what-if”s into concrete cases we could discuss.

Before we started using the formal language, the conversations went something like this:

Developer: “If we implement it like that, this bad thing will happen.”

Manager: “That’s fine – it’s a corner case that we can tidy up later if we need it.”

Developer: (Muttering) “He clearly doesn’t understand what I mean.”

Repeat

After we started using the formal language they went something like this:

Developer: “If we implement it like that, this bad thing will happen.”

Me: “Write it down, I tell you.”

Developer: (Typing) “See, this will happen!”

Manager: “That’s fine – it’s a corner case that we can tidy up later if we need it.”

Developer: (Muttering) “Flipping managers.”

Summary

The conversation progresses if all parties believe the others understand what they are saying. It is not disagreement that paralyses conversations – it is misunderstanding.

To avoid misunderstanding, talk in code – preferably a real programming language, but if that’s too verbose, a text-based code that is unambiguous and understood by everyone involved.

Note on whiteboards

You can’t copy and paste them, and you can’t (easily) keep what you did with them, and you can’t use them to communicate over long distances.

And don’t even try to suggest an electronic whiteboard. In a few years they may solve all of the above problems, but not now. They fail the “can I draw stuff?” test at the moment.

Even when electronic whiteboards solve those problems, they won’t solve the fact that lines and boxes are more ambiguous and less detailed than code in text form.

If you all know and like UML, that makes your diagrams less ambiguous, but still they often don’t allow enough detail: why bother?