Lose the Source Luke?

We were writing a new service to distribute financial pricing data around the trading floor as a companion to our new desktop pricing tool. The service's plugin architecture allowed us to write modular components that could tap into the event streams for various purposes, e.g. providing gateways to 3rd party data streams.

Linking New to Old

One of the first plugins we wrote allowed us to publish pricing data to a much older in-house data service which had been sat running in the server room for some years as part of the contributions system. This meant we could eventually phase that out and switch over to the new platform once we had parity with it.

The plugin was a doddle to write and we quickly had pricing data flowing from the new service out to a test instance of the old service which we intended to leave running in the background for soak testing. As it was an in-house tool there was no installer and my colleague had a copy of the binaries lying around on his machine [1]. Also he was one of the original developers so knew exactly what he was doing to set it up.

A Curious Error Message

Everything seemed to be working fine at first, but as the data volumes grew we noticed that the data feed would eventually hang after a few days. In the beginning we were developing the core of the new service so quickly that it was constantly being upgraded, but now that the pace was slowing down the new service stayed alive for much longer. Given how mature the old service was we assumed the issue lay with the new one. Also there was a curious message in the old service's log about “an invalid transaction ID” just before the feed stopped.

While debugging the new plugin code my colleague remembered that the Transaction ID was the message sequence number included in every message to allow for ordering and re-transmission when running over UDP. Its data type was a 16-bit unsigned integer, so it dawned on us that we had probably messed up handling the wrap-around of the Transaction ID.
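The wrap-around itself is simple to get right once you know it's there. A minimal sketch of 16-bit serial-number handling (not the actual protocol code, which is long gone) might look like this:

```python
SEQ_MOD = 1 << 16  # a 16-bit unsigned Transaction ID wraps at 65536

def next_seq(seq):
    """Advance the Transaction ID, wrapping 65535 back round to 0."""
    return (seq + 1) % SEQ_MOD

def seq_newer(a, b):
    """True if sequence number a is 'after' b, allowing for wrap-around.

    This is serial-number arithmetic: a is newer if it is less than half
    the sequence space ahead of b, so 0 correctly compares as newer
    than 65535.
    """
    return a != b and (a - b) % SEQ_MOD < SEQ_MOD // 2
```

Get this wrong (e.g. by comparing with a plain `a > b`) and everything works until the counter first wraps, at which point the receiver sees what looks like an invalid, out-of-order ID and the feed stalls, which matches the failure mode we saw.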

Use the Source Luke

Given how long ago he last worked on the old service he couldn’t quite remember what the protocol was for resetting the Transaction ID so we decided to go and look at the old service source code to see how it handled it. Despite being at the company for a few years myself this all pre-dated me so I left my colleague to do the rummaging.

Not long after my colleague came back over to my desk and asked if I might know where the source code was. Like so many programmers in a small company I was a part-time sysadmin and generally looked after some of the servers we used for development duties, such as the one where our Visual SourceSafe repository lived, which contained all the projects we’d worked on since I joined.

The VCS Upgrade

When I first started at the company there were only a couple of programmers not working on the mainframe, and they had written their own version control system. It was very Heath Robinson and used exclusive file locks to side-step the problem of concurrent changes. Having used a few VCS tools by then, such as PVCS, Star Versions, and Visual SourceSafe, I suggested we move to a 3rd party VCS product, since we would need more optimistic concurrency controls once more people joined the team. Given the MSDN licenses we already had, along with my own experience, Visual SourceSafe (VSS) seemed like a natural choice back then [2].

Around the same time the existing development server was getting a bit long in the tooth, so the company forked out for a brand new server and I set up the new VSS repository on that; all my code went in there along with all the subsequent projects we started. None of the people that joined after me ever touched the old codebase or VCS as it was so mature it hadn’t needed changing in some time, and anyway the two original devs were still there to look after it.

The Office Move

A couple of years after I joined, the owners of the lovely building the company had been renting for the last few decades decided they wanted to gut and renovate it as the area in London where we were based was getting a big makeover. Hence we were forced to move to new premises about half a mile away. The new premises were nice and modern and I no longer had the vent from the portable air-conditioning machine from one of the small server rooms pumping out hot air right behind my desk [3].

When moving day came I made sure the new server with all our stuff on it got safely transported to the new office’s server room so that we were ready to go again on Monday morning. As we stood staring around the empty office floor my colleague pointed to the old development server, which had lain dormant in the corner, and asked me (rhetorically) whether we should even bother taking it with us. As far as I was concerned everything I’d ever needed had always been on the new server, so I didn’t know what was left that we’d still need.

My colleague agreed and so we left the server to be chucked in the skip when the bulldozers came.

Dormant, But Not Redundant

It turned out their original home-grown version control system had a few projects in it, including the old data service. Luckily one of the original developers who worked on the contributions side still had an up-to-date copy of the code he looked after, and my colleague found a local copy of the code for one of the other services, though he had no idea how up-to-date it was. Sadly nobody had even a partial copy of the source to the data service we were interested in, but we were going to replace that anyway, so in the end the loss was far less significant than we had originally feared.

In retrospect I can’t believe we didn’t even just take the hard disk with us. The server was a classic tower so it took up a fair bit of room, which was still somewhat at a premium in the new office, whereas the disk could probably have sat in a desk drawer or even been fitted as an extra drive in the new midi-sized development server.

 

[1] +1 for xcopy deployment which made setting up development and test instances a piece of cake.

[2] There are a lot of stories of file corruption issues with VSS but in the 7 years I’d used it with small teams, even over a VPN, we only had one file corruption issue that we quickly restored from a backup.

[3] We were on the opposite side from the windows too so didn’t even get a cool breeze from those either.

 

Pair Programming Interviews

Let’s be honest, hiring people is hard and there are no perfect approaches. However it feels somewhat logical that if you’re hiring someone who will spend a significant amount of their time solving problems by writing software, then you should probably at least try and validate that they are up to the task. That doesn’t mean you don’t also look for ways to assess their suitability for the other aspects of software development that don’t involve programming, only that being able to solve a problem with code will encompass a fair part of what they’ll be doing on a day-to-day basis [1].

Early Computer Based Tests

The first time I was ever asked to write code on a computer as part of an interview was way back in the late ‘90s. Back then pair programming wasn’t much of a thing in the Enterprise circles I moved in and so the exercise was very hands-off. They left me in the boardroom with a computer (but no internet access) and gave me a choice of exercises. Someone popped in half way through to make sure I was alright but other than that I had no contact with anyone. At the end I chatted briefly with the interviewer about the task but it felt more like a box ticking affair than any real attempt to gain much of an insight into how I actually behaved as a programmer. (An exercise in separating “the wheat from the chaff”.)

I got the job and then watched from the other side of the table as other people went through the same process. In retrospect being asked to write code on an actual computer was still quite novel back then and therefore we probably didn’t explore it as much as we should have.

It was almost 15 years before I was asked to write code on a computer again as part of an interview. In between I had gone through the traditional pencil & paper exercises which I was struggling with more and more [2] as I adopted TDD and refactoring as my “stepwise refinement” process of choice.

My First Pair Programming Interview

Around 2013 an old friend in the ACCU, Ed Sykes, told me about a consultancy firm called Equal Experts who were looking to hire experienced freelance software developers. Part of their interview process was a simple kata done in a pair programming style. While I had done no formal pair programming up to that time [3], it was a core technique within the firm and so candidates were expected to be comfortable adopting the practice wherever practicable.

I was interviewed by Ed Sykes, who played a kind of Product Owner role, and Adam Straughan, who was more hands-on in the experience. They gave me the Roman Numerals kata (decimal to roman conversion), which I hadn’t done before, and an hour to solve it. I took a pretty conventional approach but didn’t quite solve the whole thing in the allotted time as I didn’t quite manage to get the special cases to fall out more naturally. Still, the interviewers must have got what they were after as once again I got the job. Naturally I got involved in the hiring process at Equal Experts too because I really liked the process I had gone through and I wanted to see what it was like on the other side of the keyboard. It seemed so natural that I wondered why more companies didn’t adopt something similar, irrespective of whether or not any pair programming was involved in the role.
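For anyone unfamiliar with the kata, one conventional greedy solution (not what I wrote in that hour) can be sketched in a few lines. The trick I failed to land at the time is that treating the subtractive pairs as first-class table entries makes the special cases fall out of the same loop:

```python
# Greedy decimal-to-roman conversion: walk the value table largest-first.
# The subtractive pairs (CM, CD, XC, XL, IX, IV) sit in the table as
# ordinary entries, so no separate special-case handling is needed.
VALUES = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(number):
    """Convert a positive decimal integer (1..3999) to Roman numerals."""
    digits = []
    for value, numeral in VALUES:
        count, number = divmod(number, value)
        digits.append(numeral * count)
    return "".join(digits)
```

For example, `to_roman(1994)` yields `"MCMXCIV"`. The whole design conversation in the interview tends to revolve around whether you discover that table-driven shape incrementally through the tests or jump straight to it.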

Whenever I got involved in hiring for the end client I also used the same technique. I tended to be a lone “technical” interviewer rather than having the luxury of the PO + Dev approach I was first exposed to, but it was still my preferred approach by a wide margin.

Pairing – Interactive Interviewing

On reflection what I liked most about this approach as a candidate, compared to the traditional one, is that it felt less like an exam, which I generally suck at, and more like what you’d really do on the job. Putting aside the current climate of living in a pandemic where many people are working at home by themselves, what stood out was that I had access to other people and was encouraged to ask questions rather than solve the problem entirely by myself. To wit, it felt like I was interviewing to be part of a team of people, not stuck in a booth and expected to work autonomously [4]. Instead of just leaving you to flounder, the interviewers would actively nudge you to help unblock the situation, just like they (hopefully) would do in the real world. Not everyone notices the same things, and as long as they aren’t holding the candidate’s hand the whole time that little nudge should be seen as a positive sign about taking feedback on board rather than a failure to solve the problem. It’s another small, but I feel hugely important, part of making the candidate feel comfortable.

The Pit of Success

We’ve all heard about those interviews where it’s less about the candidate and more about the interviewer trying to show how clever they are. It almost feels like the interviewer is going out of their way to make the interview as far removed from normal operating conditions as possible, as if the pressure of an interview were somehow akin to a production outage. If your goal is to get the best from the candidate, and it should be if you want the best chance of evaluating them fairly, then you need to make them feel as comfortable as possible. You only have a short period of time with them, so getting them into the right frame of mind should be uppermost in your mind.

One of the problems I faced in that early programming test was an unfamiliar computer. You have a choice of whether to try and adapt to the keyboard shortcuts you’re given or reconfigure the IDE to make it more natural. You might wonder if that’s part of the test which wastes yet more time and adds to the artificial nature of the setting. What about the toolset – can you use your preferred unit testing framework or shell? Even in the classic homogenous environment that is The Windows Enterprise there is often still room for personal preference, despite what some organisations might have you believe [5].

Asking the candidate to bring their own laptop overcomes all of these hurdles and gives them the opportunity to use their own choice of tools thereby allowing them to focus more on the problem and interaction with you and less on yak shaving. They should also have access to the Internet so they can google whatever they need to. It’s important to make this perfectly clear so they won’t feel penalised for “looking up the answer” to even simple things because we all do that for real, let alone under the pressure of an interview. Letting them get flustered because they can’t remember something seemingly trivial and then also worrying about how it’ll look if they google it won’t work in your favour. (Twitter is awash with people asking senior developers to point out that even they google the simple things sometimes and that you’re not expected to remember everything all the time.)

Unfortunately, simply because there are people out there who insist on interviewing in a way designed to trip up the candidate, I find I have to go overboard when discussing the set-up to reassure them that there really are no tricks – that the whole point of the exercise is to get an insight into how they work in practice. Similarly, reassuring the candidate that the problem is open-ended and that solving it all in the allotted time is not expected also helps to relax them, so they can concentrate on enjoying the process and feel comfortable with you stopping to discuss, say, their design choices instead of feeling the need to race to the end of yet another artificial deadline.

The Exercise

I guess it’s to be expected that if you set a programming exercise that you’d want the candidate to complete it; but for me the exercise is a means to a different end. I’m not interested in the problem itself, it’s the conversation we have that provides me with the confidence I need to decide if the candidate has potential. This implies that the problem cannot be overly cerebral as the intention is to code and chat at the same time.

While there are a number of popular katas out there, like the Roman Numerals conversion, I never really liked any of them. Consequently I came up with my own little problem based around command line parsing. For starters I felt this was a problem domain that was likely to be familiar to almost any candidate even if they’re more GUI oriented in practice. It’s also a problem that can be solved in a procedural, functional, or object-oriented way and may even, as the design evolves, be refactored from one style to the other, or even encompass aspects of multiple paradigms. (Many of the classic katas are very functional in nature.) There is also the potential to touch on I/O with the program usage and this allows the thorny subject of mocking and testability to be broached which I’ve found to be a rich seam of discussion with plenty of opinions.

(Even though the first iteration of the problem only requires supporting “-v” to print a version string I’ve had candidates create complex class hierarchies based around the Command design pattern despite making it clear that we’ll introduce new features in subsequent iterations.)
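To give a flavour of how small the opening requirement really is, a first iteration might be nothing more than the following (the version string and usage text here are entirely made up, and the real exercise leaves the output mechanism as one of those testability discussion points):

```python
# First pass at the command-line kata's opening requirement: support
# "-v" to print a version string. Deliberately no framework and no
# Command class hierarchy -- just enough to pass the first test, leaving
# room for the design to evolve as later iterations add features.
VERSION = "1.0.0"  # hypothetical version string for the exercise

def run(args):
    """Return the program's output for the given argument list."""
    if args == ["-v"]:
        return VERSION
    return "usage: tool [-v]"  # placeholder usage message
```

Whether a candidate starts here and refactors towards structure as the requirements grow, or reaches straight for an abstraction, is precisely the kind of choice that fuels the conversation.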

Mechanics

Aside from how a candidate solves a problem from a design standpoint I’m also interested in the actual mechanics of how they program. I don’t mean whether they can touch type or not – I personally can’t so that would be a poor indicator :o) – no, I mean how they use the tools. For example I find it interesting what they use the keyboard or mouse for, what keyboard shortcuts they use, how they select and move text, whether they use snippets or prefer the editor not to interfere. While I don’t think any of the candidate’s choices says anything significant about their ability to solve the problem, it does provide an interesting avenue for conversation.

It’s probably a very weak indicator, but programmers are often an opinionated bunch and one area they can be highly opinionated about is the tools they use. Some people love to talk about what they find useful – in essence what they feel improves or hinders their productivity. This in turn raises the question of what they believe “productivity” is in a software development context.

Reflection

What much of this observation and conversation boils down to is not about whether they do things the same way I do – on the contrary I really hope they don’t as diversity is important – it’s about the “reflective” nature of the person. How much of what they do is through conscious choice and how much is simply the result of doing things by rote.

In my experience the better programmers I have worked with tend to be more aware of how they work. While many actions may fall into the realm of unconscious competence when “in the zone”, they can usually explain their rationale because they’re still (subconsciously) evaluating it in the background in case a better approach is suitable.

(Naturally this implies the people I tend to interview are, or purport to be, experienced programmers where that level of experience is assumed to be over 10 years. I’m not sure what you can expect to take away from this post when hiring those just starting out on their journey.)

An Imperfect Process

Right back at the start I said that interviewing is an imperfect process and while I think pairing with someone is an excellent way to get a window into their character and abilities, so much still comes down to a gut feeling and therefore a subjective assessment.

I once paired with someone in an interview and while I felt they were probably technically competent I felt just a tinge of uneasiness about them personally. Ultimately the final question was “would I be happy to work with this person?” and so I said “yes” because I felt I would be nit-picking to say “no”. As it happens I did end up working with this person and a couple of months into the contract I had to have an awkward conversation with my other two colleagues to see if they felt the same way I did about this team mate. They did and the team mate was “swapped out” after a long conversation with the account manager.

What caused us to find working with this person unpleasant wasn’t something we felt could easily and quickly be rectified. They had a general air of negativity about them and had a habit of making disparaging, sweeping remarks which showed they looked down on database administrators and other non-programming roles. They also lacked an attention to detail causing the rest of us to dot their I’s and cross their T’s. Even after bringing this up directly it didn’t get any better; they really just wanted to get on and write new code and leave the other tasks like reviewing, documenting, deploying, etc. to other people.

I doubt there is anything you can do in an hour of pairing to unearth these kinds of undesirable traits [6] to a level where you can adequately assess them, which is why the gut still has a role to play. (I suspect it was my many years of experience in the industry working with different people that originally set my spider senses tingling.)

Epilogue

The hiring question I may find myself putting to the client is whether they would prefer to accidentally let a good candidate slip away because the interview let them (the candidate) down or accidentally hire a less suitable candidate that appeared to “walk-the-walk” as well as “talk-the-talk” and potentially become a liability. Since doing pairing interviews this question has come up very rarely with a candidate as it’s been much clearer from the pairing experience what their abilities and attitude are.

 

[1] This doesn’t just apply to hiring individuals but can also work for whole teams, see “Choosing a Supplier: The Hackathon”.

[2] See “Afterwood – The Interview” for more on how much I dislike the pen & paper approach to coding interviews.

[3] My first experience was in a Cyber Dojo evening back in September 2010 that Jon Jagger ran at Skills Matter in London. I wrote it up for the ACCU: “Jon Jagger’s Coding Dojo”.

[4] Being a long-time freelancer this mode of operation is not unexpected as you are often hired into an organisation specifically for your expertise; your contributions outside of “coding” are far less clear. Some like the feedback on how the delivery process is working while others do not and just want you to write code.

[5] My In The Toolbox article “Getting Personal” takes a look at the boundary between team conventions and personal freedom for choices in tooling and approach.

[6] I’m not saying this person could not have improved if given the right guidance, they probably could have and I hope they actually have by now; they just weren’t right for this particular environment which needed a little more sensitivity and rigour.


When Mocks Became Production Services

We were a brand new team of 5 (PM + devs) tasked with building a calculation engine. The team was just one part of a larger programme that encompassed over a dozen projects in total. The intention was for those other teams to build some of the services that ours would depend on.

Our development process was somewhat DSDM-like in nature, i.e. iterative. We built a skeleton based around a command-line calculator and fleshed it out from there [1]. This skeleton naturally included vague interfaces for some of the services that we knew we’d need and that we believed would be fulfilled by some of the other teams.

Fleshing Out the Skeleton

Time marched on. Our calculator was now being parallelised and we were trying to build out the distributed nature of the system. Ideally we would like to have been integrating with the other teams long ago but the programme RAG status wasn’t good. Every other team apart from us was at “red” and therefore well behind schedule.

To compensate for the lack of collaboration and integration with the other services we needed we resorted to building our own naïve mocks. We found other sources of the same data and built some noddy services that used the file-system in a dumb way to store and serve it up. We also added some simple steps to the overnight batch process to create a snapshot of the day’s data using these sources.

Programme Cuts

In the meantime we discovered that one of the services we were to depend on had now been cancelled and some initial testing with another gave serious doubts about its ability to deliver what we needed. Of course time was marching on and our release date was approaching fast. It was fast dawning on us that these simple test mocks we’d built may well have to become our production services.

One blessing that came out of building the simple mocks so early on was that we now had quite a bit of experience of how they would behave in production. Hence we managed to shore things up a bit by adding some simple caches and removing some unnecessary memory copying and serialization. The team behind the one remaining service we still needed to invoke had found a more performant way for us to at least bulk extract a copy of the day’s data, and so we retrofitted that into our batch preparation phase. (Ideally they’d have served it on demand but that just wasn’t there for the queries we needed.)

Release Day

The delivery date arrived. We were originally due to go live a week earlier but got pushed back because an important data migration got bumped and so we were bumped too. Hence we would have delivered on time and, somewhat unusually, our PM said we were well under budget [2].

So the mocks we had initially built just to keep the project moving along were now part of the production codebase. The naïve underlying persistence mechanism was now a production data store that needed high-availability and backing up.

The Price

Whilst the benefits of what we did (not that there was any other real choice in the end) were great, because we delivered a working system on time, there were a few problems due to the simplicity of the design.

The first one was down to the fact that we stored each data object in its own file on the file-system and each day added over a hundred-thousand new files. Although we had partitioned the data to avoid the obvious 400K files-per-folder limit in NTFS we didn’t anticipate running out of inodes on the volume when it quickly migrated from a simple Windows server file share to a Unix style DFS. The calculation engine was also using the same share to persist checkpoint data and that added to the mess of small files. We limped along for some time through monitoring and zipping up old data [3].
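For illustration only – I don't recall the exact layout – partitioning a file-per-object store usually amounts to something like the sketch below, where a stable bucket derived from the object ID keeps any single folder's file count bounded (the names and scheme here are hypothetical, not our actual layout):

```python
import os

def object_path(root, business_date, object_id, fan_out=256):
    """Build the path for one data object in a partitioned file store.

    Hypothetical layout: <root>/<YYYY-MM-DD>/<bucket>/<id>.dat, where
    the bucket is derived from a stable hash of the ID so the day's
    files spread evenly across `fan_out` sub-folders instead of piling
    up in one directory.
    """
    # sum of the ID's bytes is a cheap hash that is stable across runs
    bucket = sum(object_id.encode()) % fan_out
    return os.path.join(root, business_date, "%03d" % bucket,
                        object_id + ".dat")
```

The catch, of course, is that bounding files-per-folder does nothing for the total file count on the volume, which is exactly the inode problem we hit once the checkpoint data piled in too.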

The other problem we hit was that using the file-system directly meant that the implementation details became exposed. Naturally we had carefully set ACLs on the folders to ensure that only the environment had write access and our special support group had read access. However one day I noticed by accident that someone had granted read access to another group and it then transpired that they were building something on top of our naïve store.

Clearly we never intended this to happen and I’ve said more about this incident previously in “The File-System Is An Implementation Detail”. Suffice to say that an arms race then developed as we fought to remove access for everyone outside our team whilst others got wind of it [4]. I can’t remember whether it happened in the end or not, but I had put together a scheduled task that would use CACLS to list the permissions and fail if there were any we didn’t expect.
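The guts of that audit amount to a set difference between the accounts actually granted access and an allowlist. A sketch of the idea (with hypothetical group names, and with the shelling-out to CACLS and parsing of its output elided) might be:

```python
# Permissions audit sketch: flag any account granted access to the data
# store that isn't on the expected allowlist. In the real scheduled task
# the actual account list came from parsing CACLS output; here it's
# simply passed in. The group names below are made up.
EXPECTED = {"DOMAIN\\CalcEngineSvc", "DOMAIN\\CalcSupport"}

def unexpected_accounts(actual_accounts):
    """Return, sorted, any accounts with access that we didn't expect."""
    return sorted(set(actual_accounts) - EXPECTED)
```

The scheduled task would then fail (and alert us) whenever `unexpected_accounts` returned a non-empty list, turning the silent granting of access into a visible event.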

I guess we were a victim of our own success. If you were happy with data from the previous close of business (COB), which many of the batch systems were, you could easily get it from us because the layout was obvious.

Epilogue

I have no idea whether the original versions of these services are still running to this day but I wouldn’t be surprised if they are. There was a spike around looking into a NoSQL database to alleviate the inode problem, but I suspect the ease with which the data store could be directly queried and manipulated would have created too much inertia.

Am I glad we put what were essentially our mock services into production? Definitely. Given the choice between not delivering, delivering much later, and delivering on time with a less than perfect system that does what’s important – I’ll take the last one every time. In retrospect I wish we had delivered sooner and not waited for a load of other stuff we built as the MVP was probably far smaller.

The main thing I learned out of the experience was a reminder not to be afraid of doing the simplest thing that could work. If you get the architecture right each of the pieces can evolve to meet the ever changing requirements and data volumes [5].

What we did here fell under the traditional banner of Technical Debt – making a conscious decision to deliver a sub-optimal solution now so it can start delivering value sooner. It was the right call.

 

[1] Nowadays you’d probably look to include a slice through the build pipeline and deployment process up front too but we didn’t get any hardware until a couple of months in.

[2] We didn’t build half of what we set out to, e.g. the “dashboard” was a PowerShell generated HTML page and the work queue involved doing non-blocking polling on a database table.

[3] For regulatory reasons we needed to keep the exact inputs we had used and couldn’t guarantee on being able to retrieve them later from the various upstream sources.

[4] Why was permission granted without questioning anyone in the team that owned and supported it? I never did find out, but apparently it wasn’t the first time it had happened.

[5] Within reason of course. This system was unlikely to grow by more than an order of magnitude in the next few years.
