How to ask technical questions in person

In a healthy team performing a technical task, there will be a lot of questions. Those questions will sometimes be asked by those with less technical knowledge, but (in a healthy team) there will be plenty of questions going back the other way too. Questions that come up in the normal course of your work are the main way knowledge is going to be transferred amongst your team members. They are to be encouraged, and you should never feel like a nuisance, or a technical wimp, because you’re asking them.

It’s easy to imagine, when you’re asking someone a question, that whether you get a good answer depends mainly on the person you’re asking.

In fact, an awful lot depends on you.

To get a good answer to a question, you need to prepare, to manage the Zone, and above all, to make the person you are asking feel clever.

What follows are some tips on how to ask a technical person a technical question when you’re in the same room as them. Some of them may also apply to asking on the phone, or even via instant messaging.

1. Prepare

You’re about to ask someone a question.

Breathe.

Think about what you are hoping to find out. This might sound obvious, but when you’re in the thick of trying to solve a problem, you may find you’re actually quite confused about what you’re doing. If you can, you want to avoid wasting the other person’s time with your confusion. Even worse, you might confuse them, and then they won’t feel clever.

This doesn’t need to take long – it may take half a second, but just let it cross your mind – what do I want to find out?

Think about the other person: what are they doing right now? Is it related? What do they know about? From what angle will they be approaching your question? Again, this will probably take almost no time at all, but it will help you ask your question well.

Now you need to zoom out. You are deep into a problem, so deep in fact that you’re stuck on a very specific part of it. This part of it is burning into your consciousness, obscuring the wider picture of what you’re doing. The person you’re asking is going to need that wider picture, so remind yourself what it is.

Once you’ve zoomed out, you will need to engage with the Zone.

2. Manage the Zone

The biggest problem you have when asking a technical person a technical question is that they may well be in the Zone. The Zone is a state of intense concentration in which you get really good work done (Rands elaborates in A Nerd in a Cave). The thing about the Zone is that it is difficult (sometimes even painful) to come out of.

If the person you’re asking is in the Zone, they will need to be coaxed out. When you start talking, even if they turn and look at you and say something encouraging, they are somewhere far away, solving a problem. Your problem is an unwelcome intrusion.

The first lesson about the Zone is: don’t say much at first.

When you start talking the other person doesn’t know whether you’re going to offer them a cup of tea, talk politics, or ask them how to implement quicksort. They haven’t disengaged at all from the problem they are working on. You need to let them know you are going to ask them a technical question, and start the painful process of pulling them out of their problem, and into yours.

Often the best thing to do is pick a single word or phrase that lets them know the general area you are talking about: for example, “import”, or “the C++ standard”, or “garbage collection”. If you say something like that, and watch their eyes, you can almost see the glaze of the Zone lifting.

If you can see they’re still elsewhere, give them a second – maybe they’ll even say something like, “I just want to finish this line.” If they do, let them – step back, take the pressure off – they will be much better able to answer you when the thing they need to remember is written down.

Now your job is to zoom them in on your problem, at their pace. It’s very important to watch and listen to the other person while you do this. Give them a problem area, and check they understood what you meant. Now give them closer context – maybe the directory you’re talking about, or the DLL you’re working on. Now tell them what class you’re editing, or what file.

As soon as you can see it’s needed, be quick to show them the exact thing you’re doing on your screen or paper – that way they can add any extra context they need themselves. Be very wary of drawing vague diagrams on paper or a whiteboard – they confuse more often than they enlighten – much better to show them the real code or document you are working on. They can handle it.

All the time, watch how they are reacting – are they remembering what you’re talking about, or completely in the dark? You may need to choose a different way into the problem. Remember, at this stage it’s your job to explain what you mean, not their job to guess.

If you do this well, they’re slightly in the Zone again, but this time focussed on your problem area. Now, and only now, hit them with your question. They’ll know exactly what you’re talking about, and feel really clever, especially if they know the answer.

There are a couple of other things you need to know about the Zone. As we know, it can be painful to be pulled out of the Zone. That means that the person you’re talking to, no matter how nice they are, and how much they like you, may feel a distinct feeling of irritation when you start talking. To make matters worse, they were concentrating, so probably frowning already. Even if they don’t feel any irritation at all, they may look like they do. What can you do about this? Not a lot, except don’t take it personally – it’s normally momentary, and experienced technical people will recognise it’s a false feeling, almost a physical reaction, and put it to one side immediately.

Finally, understanding the Zone explains why preparation is so important. It’s very likely you are in the Zone yourself at the beginning of this process. You are submerged in a difficult problem, and quite possibly deeply irritated by your inability to solve it. To ask a question effectively (and thus get a good answer) you need to pull yourself out of the Zone and back into human interaction mode. This includes ensuring your annoyance is fully dissipated, or you’re going to say something you shouldn’t.

Throughout all this you need to make it your mission to make the other person feel clever.

3. Make them feel clever

Why would you want to make the person you’re asking feel clever?

I certainly don’t mean flatter them. If they’re deep in their work they don’t want you to waste their time telling them how good they are at something.

Think about it this way: how likely is it that you’ll get an answer if you’ve made them feel stupid? I don’t mean you won’t get an answer because they’re too annoyed with you to bother – I mean they’re feeling stupid because you’ve confused them, and so they have no idea what the answer is.

Bear in mind that it’s really easy to confuse someone. In your technical problem area there are thousands, or possibly even millions, of tiny micro-contexts that make sense within themselves, but sound like gibberish if you’re outside that context.

Turning to someone and saying something like, “How do I Frobnish the Pernicator?” without first reminding them that Pernicators get created every time the TunableBeadnest is RePreppered is very hard work for them: they need to ask you a series of questions before they can work out what you’re asking. Making them do this work makes them feel like they aren’t clever enough to hold the entire knowledge of your arena in the front of their mind simultaneously. Of course, no-one is able to do this, but everyone feels like they ought to be. Don’t make them feel stupid.

Even worse is to say something like, “How do I call this method?” In this case, even if they had total front-of-brain memory of the entire arena, they still couldn’t answer without interrogating you about what you meant.

Your goal should be that the first thing they have to say in the whole interaction is the answer itself, or at least a question that shows they have grasped what you’re asking – and makes them feel clever.

Of course, making them feel clever also makes them more inclined to talk to you next time you have a question, but that’s merely a side benefit. The real reason you want to make them feel clever is that when they feel like that, it means you have given them the information they need, in the order they want, with timing that works for them, so that they can give you a great answer.

Anatomy of an interpreter: the Parser

Posts in this series: Lexer, Parser, Evaluator

Subs has reached version 1.3.4, which means that it can successfully run all the tests from chapter 1 of SICP. This is very exciting.

Last time I explained a bit about the Lexer, which takes in a stream of characters and emits a stream of tokens: individual elements of code such as a bracket, a keyword or a symbol.

Generally, parsers emit some kind of tree structure – they understand the raw tokens as a hierarchical structure which (conceptually, at least) will be executed from the bottom up, with each branch-point in the tree being an operation of some kind.

Our parser takes in a stream of tokens, and emits a stream of parsed trees.

Parsing Scheme is very easy because (apart from a couple of special cases I haven’t implemented yet) there is essentially one rule: start with an open bracket, see a list of things, then find a close bracket. Of course, one of the “things” you see may itself be another bracketed list, so after parsing you get a tree structure of nested lists.

The parser in Subs looks like this:

class Parser
{
public:
    Parser( ILexer& lexer );
    std::auto_ptr<Value> NextValue();
private:
    ILexer& lexer_;
};

We supply a Lexer in the constructor, which we know will provide us with tokens when we need them via its NextToken() method. The Parser’s NextValue() method returns a pointer to a Value, which is the base class for all the “things” in the Subs interpreter.
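
As a quick illustration, here is a minimal sketch of how the Parser might be driven. It assumes the concrete Lexer (covered in the Lexer post in this series) implements the ILexer interface, and that NextValue() returns a null pointer when the input runs out – the real Subs code may signal the end of input differently.

#include <istream>
#include <memory>

// Hypothetical driver loop: parse every top-level expression in a stream.
void parse_all( std::istream& instream )
{
    Lexer lexer( instream );    // constructed from any std::istream
    Parser parser( lexer );

    for(;;)
    {
        std::auto_ptr<Value> value( parser.NextValue() );
        if( !value.get() )
        {
            break;              // assumed end-of-input signal
        }
        // ... hand the tree over to the evaluator ...
    }
}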

There are lots of types of things that inherit from the Value class, but the “parse tree” (the output of the parser) will only consist of a very small subset of them:

  • CombinationValue
  • DecimalValue
  • IntegerValue
  • StringValue
  • SymbolValue

The CombinationValue class forms the tree structure. Its declaration looks like this:

class CombinationValue : public Value, public std::vector<Value*>
{
    // ...
};

It is simply a list of other Values.
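
To make that concrete, here is roughly the structure the parser builds for the expression (+ 1 (* 2 3)). The constructors shown here are assumed to take the literal value directly, which may not exactly match the real Subs classes – this is just an illustration of the nesting.

// Illustrative only: assumed constructors for the leaf Value classes.
CombinationValue* inner = new CombinationValue;   // the (* 2 3) list
inner->push_back( new SymbolValue( "*" ) );
inner->push_back( new IntegerValue( 2 ) );
inner->push_back( new IntegerValue( 3 ) );

CombinationValue* outer = new CombinationValue;   // the outer (+ 1 ...) list
outer->push_back( new SymbolValue( "+" ) );
outer->push_back( new IntegerValue( 1 ) );
outer->push_back( inner );   // a nested list is just a nested CombinationValue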

Note that it “owns” those Values in the sense that it deletes them when it is deleted. I have recently taken the plunge and made Subs depend on BOOST, so it’s on my TODO list to make containers like this use the BOOST smart containers to manage that job for me.

DecimalValue, IntegerValue and StringValue are relatively self-explanatory: they contain numbers and strings that were found as literals in the source code.

SymbolValue is essentially everything else – if the code that recognises the type of a token can’t recognise it as a bracket, a number or a string, we assume it is a symbol, and tuck it away in a SymbolValue to be understood later.

The core of the Parser looks like this (with some error-checking removed):

std::auto_ptr<Value> next_value( ILexer& lexer, Token token )
{
    if( token.Name() == "(" )
    {
        std::auto_ptr<CombinationValue> ret( new CombinationValue );
        while( true )
        {
            token = lexer.NextToken();
            if( token.Name() == ")" )
            {
                break;
            }
            // Recursive call
            ret->push_back( next_value( lexer, token ).release() );
        }
        return std::auto_ptr<Value>( ret.release() );
    }
    else
    {
        return ValueFactory::CreateValue( token );
    }
}

(Full code here: Parser.cpp) It’s a simple recursive function that creates a CombinationValue whenever it finds a bracket, and otherwise uses a ValueFactory to create an individual value.

Side note: the wisdom of using recursion could certainly be questioned, since it limits the depth of bracketing we can handle to the size of the C++ stack. The only other way to get the same result would be to keep our own manual stack of unfinished combinations, and it seems perverse to re-implement a language feature like that. What might well be more interesting would be to consider whether we can evaluate parts of the tree as we go, without parsing it all at once. That might make the whole setup scale rather better, but would most likely be quite complex. The implementation presented here will work fine for almost any imaginable program – remember, it would take not just code whose execution is deeply nested, but code whose textual expression has thousands of levels of nested brackets, before the parser would fail.
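
For the curious, here is a rough sketch of what that manual-stack alternative might look like. It is not part of Subs – error handling is omitted and ownership is simplified – it just shows that the explicit stack does exactly the job the call stack does in next_value above.

#include <stack>

// Sketch of a non-recursive parser: an explicit stack of unfinished
// CombinationValues replaces the C++ call stack used by next_value.
Value* next_value_iterative( ILexer& lexer, Token token )
{
    std::stack<CombinationValue*> unfinished;

    for(;;)
    {
        if( token.Name() == "(" )
        {
            unfinished.push( new CombinationValue );
        }
        else if( token.Name() == ")" )
        {
            CombinationValue* combo = unfinished.top();
            unfinished.pop();
            if( unfinished.empty() )
            {
                return combo;                      // the outermost list is complete
            }
            unfinished.top()->push_back( combo );  // attach to its parent list
        }
        else
        {
            Value* value = ValueFactory::CreateValue( token ).release();
            if( unfinished.empty() )
            {
                return value;                      // a bare atom outside any list
            }
            unfinished.top()->push_back( value );
        }
        token = lexer.NextToken();
    }
}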

The ValueFactory uses some basic rules such as “starts and ends with a quote” or “consists of only numbers and a decimal point” to recognise what type of Value to create, and if no rules match it defaults to a SymbolValue.
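
In code, those rules might look something like this. This is a sketch of the approach rather than the real ValueFactory, and the Value constructors are assumed to take the token’s text.

#include <cctype>
#include <memory>
#include <string>

// Illustrative classification rules, in the spirit of ValueFactory.
std::auto_ptr<Value> create_value( const std::string& name )
{
    // "starts and ends with a quote" => a string literal
    if( name.size() >= 2 && name[0] == '"' && name[name.size() - 1] == '"' )
    {
        return std::auto_ptr<Value>( new StringValue( name ) );
    }

    // "consists of only numbers and (at most one) decimal point" => a number
    bool is_number = !name.empty();
    bool seen_point = false;
    unsigned int num_digits = 0;
    for( std::string::size_type i = 0; i < name.size(); ++i )
    {
        if( name[i] == '.' && !seen_point )
        {
            seen_point = true;
        }
        else if( std::isdigit( static_cast<unsigned char>( name[i] ) ) )
        {
            ++num_digits;
        }
        else
        {
            is_number = false;
        }
    }
    if( is_number && num_digits > 0 )
    {
        if( seen_point )
        {
            return std::auto_ptr<Value>( new DecimalValue( name ) );
        }
        return std::auto_ptr<Value>( new IntegerValue( name ) );
    }

    // No rule matched: assume it is a symbol, to be understood later.
    return std::auto_ptr<Value>( new SymbolValue( name ) );
}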

When we have completed a bracketed expression, we return a complete tree of numbers, strings and symbols, ready to be evaluated – which you can think of as expanding the tree we already have into the full expression of itself, and then reducing it back down to an answer.

Next time, the Evaluator and the famous eval-apply loop.

Anatomy of an interpreter: the Lexer

Posts in this series: Lexer, Parser, Evaluator

I have been having a lot of fun recently writing my Scheme interpreter Subs. I have never implemented a full programming language before, so I am learning fast (mostly through mistakes) and wanted to write down some of the stuff I am discovering.

Note: if you want to learn more about what Scheme is I recommend Scheme (Wikipedia) and the book SICP, which is the inspiration for all this.

I am writing everything from scratch, just because it’s fun (certainly not because I think it is in any way better to do it that way…). As we will see, that gives me opportunities to do things differently from the way such things are normally done. So far, every time I have found I’ve deviated from the normal way, I have quickly discovered why I was wrong, and had to learn the true path.

Text-based programming languages, whether interpreted or compiled, need a lexer. A lexer takes in characters and spits out “tokens”, which are groups of characters that represent a single thing, such as a bracket, a variable name or a number. (Those tokens are then passed on to the parser, which I will cover in a different post.)

Scheme (like other Lisp variants) is fairly easy to lex because it doesn’t have much syntax – you just need to be able to understand round brackets, numbers and strings, plus a couple of special cases that I won’t go into because I haven’t actually implemented them yet. (Mind you, I haven’t implemented strings yet either…)

When I started Subs I took my normal approach of doing whatever I wanted without any research or even much thought, and wrote something that I called a lexer, but which was really something else. It took in a stream of characters, read it one “word” at a time (using whitespace as separators), broke up each word if it contained bracket characters, and emitted a tree structure with each branch representing a bracketed list. It just seemed sensible, while I was watching the brackets flow by, to understand them and create a tree structure.

However, for a number of reasons, that approach turned out to be wrong.

First, reading a “word” at a time made things much harder than simply stepping through each character. It made my initial implementation slightly faster, but as soon as I realised I cared about white space (e.g. keeping track of what line we are on) it had to go. Once it had gone, it also turned out to be easier to deal with unusual code layout – for example “a(b” should be lexed as 3 tokens, but would have been handed to us as a single word.

Second, and more importantly, creating a tree structure at this point was a waste of time. Creating tree structures is normally the job of a parser, and mixing these responsibilities gave me some pointless inefficiency: the lexer emitted a tree of tokens, which the parser then translated into another tree (of fully-understood code objects). It turned out that walking the tree of tokens and copying that structure in the parser was at least as hard as just taking in a flat stream of tokens and constructing the tree just once.

So, I re-wrote Lexer into something that is starting to become worthy of the name. The most interesting parts of its signature look like this:

class Lexer
{
public:
    Lexer( std::istream& instream );
    Token NextToken();
};

It takes in a reference to a stream, which will provide the characters, and when NextToken is called, it reads enough characters to determine what the next token will be, and returns it.
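
For example, a tiny driving loop might look like this. It assumes an empty token name signals the end of the input, which may not be how the real Subs code reports it.

#include <iostream>
#include <sstream>

int main()
{
    std::istringstream code( "(define x 42)" );
    Lexer lexer( code );

    for(;;)
    {
        Token token = lexer.NextToken();
        if( token.Name().empty() )
        {
            break;                        // assumed end-of-input signal
        }
        std::cout << token.Name() << std::endl;
    }
    // Expected output, one token per line: ( define x 42 )
    return 0;
}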

Side note: Subs is written using Test-Driven Development. I re-implemented the Lexer and Parser from scratch (naming the new classes NewLexer and NewParser until they were ready), modified the code that used them to use the new interfaces, ran the tests, and immediately knew that the new classes worked as well as the old ones. That level of confidence is incredibly freeing. I can’t imagine how I would ever have convinced myself the new classes were ready had I not had that safety net of 100s of tests that ensure the interpreter correctly responds to each type of input.

Currently the Token class it returns is pretty much just a string, with some information attached about where that string was in the original text. In researching this article I realised that most lexers attach more information than that to their tokens – they record each token’s basic type, such as integer, decimal, string, bracket or symbol. At the moment in Subs this work is done in the parser (so for the lexer each token is just a string), but I can see why it is helpful to do it in the lexer, because for most types we have the information anyway. For example, in order to recognise that “foo (bar” (where the double-quotes are in the real code) is a single token, we must understand that it is a string. Since we know it at this stage, we might as well record it in the Token object so we don’t have to work it out again later. When Subs supports strings, I will probably add a “type” field to the token and move this work from the parser.
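
If I do, the result might look something like this – a self-contained sketch with illustrative names, not the real Subs Token class.

#include <string>

// A sketch of a Token that remembers its type as well as its text.
struct Token
{
    enum Type { eBracket, eInteger, eDecimal, eString, eSymbol };

    std::string name;     // the characters making up the token
    Type type;            // recognised during lexing, e.g. eString for "foo (bar"
    unsigned int line;    // where the token appeared in the original text
    unsigned int column;
};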

On a more general programming point, following on from comments I made in a previous post, it is worth noting that the structure of the lexer (and, as we will see later, the parser) uses a technique called “streams”. What this means is that we write functions like NextToken that process a small part of the total problem, and return their answer. If we chain together functions like that (for example the parser’s equivalent function calls NextToken whenever it needs a new token) we can process arbitrarily large input without using lots of memory or slowing down. The lexer is able to process any number of tokens using a very small amount of memory, and will only fall over if it encounters a single token that is ridiculously large.

The stream style can be very useful for some problems, not only because it can be efficient with memory, but also because it can help break problems into neat, small pieces that are easier to implement correctly. It is also useful for writing code in a functional style, because it lets us avoid internal state by pushing it out into the input and output instead of keeping it in our program – for example, we could easily implement NextToken to take in the stream it should read from, and avoid any member variables at all in Lexer. In this case that means that instead of reading, storing and then processing a whole program, our lexer can simply process a few characters and emit a token without knowing anything about the surrounding code or wider context. This makes it much easier to test, and (potentially) easier to do clever things with, like prove mathematically that it is correct (!)
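
As a sketch of that stateless variant (and definitely not the real Subs Lexer, which keeps the stream as a member), the whole job can be done by a free function that is handed the stream every time it is called:

#include <cctype>
#include <istream>
#include <string>

// Reads and returns the text of the next token; an empty string means
// the input has run out.  No state survives between calls.
std::string next_token_text( std::istream& instream )
{
    std::string text;
    char c;
    while( instream.get( c ) )
    {
        if( std::isspace( static_cast<unsigned char>( c ) ) )
        {
            if( !text.empty() )
            {
                break;                     // whitespace ends the current token
            }
        }
        else if( c == '(' || c == ')' )
        {
            if( text.empty() )
            {
                text += c;                 // a bracket is a token on its own
            }
            else
            {
                instream.putback( c );     // finish the current token first
            }
            break;
        }
        else
        {
            text += c;                     // part of a number, symbol, etc.
        }
    }
    return text;
}

Called repeatedly on a stream containing “a(b”, this returns “a”, then “(”, then “b”, then an empty string – the three tokens mentioned earlier.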

Next time, the parser.

Subs Scheme Lisp Interpreter

Why would you write a Lisp interpreter?

I find that question difficult to answer, but the joy of open source is that I don’t have to answer it.

Subs is a Scheme interpreter written in C++, and growing out of the excitement I have felt while reading Structure and Interpretation of Computer Programs.

Subs is very incomplete, and will probably never be otherwise, but it is exciting how quickly you can write a functional Lisp interpreter. I plan to go through SICP section by section and ensure all the examples work in Subs. So far I am failing on the last example in section 1.1.1 (because I don’t support new-lines inside statements yet!), but in reality most of the work needed to support quite a lot of the language is already done (NOT including mutable variables… yuck).

One possible explanation, if such a thing were necessary, is that one day I want to write a new programming language that has all the simplicity and metaprogramming capability of Lisp, with the native performance and deployability of C++, and the syntactic elegance of Python. I’ve got a lot to learn first.

Scalable Graph Coverage

If you’re interested in dealing with large directed graphs of dependent objects and want some tips on how to process them in a way that scales in terms of memory usage, you may be interested in the article I wrote for Overload, which is a journal of the ACCU (an organisation promoting “professionalism in programming”).

The article is Scalable Graph Coverage, or, in my original title, “Comparing scalable algorithms for walking all nodes of a dependency graph”.

It contains lots of code, written in C++, using the BOOST Graph Library. The code demonstrates some of the algorithms available for choosing batches of nodes to process together, so as to reduce the number of nodes that are loaded several times without running out of memory.