Useful collection of Qt debug visualizers for Visual Studio

I had to reinstall VS2010 at work and, because I clearly didn’t think this all the way through, forgot to save my autoexp.dat file before removing the old installation. And of course I didn’t realise what had happened until I had to dig deeper into some Qt GUI code that wasn’t quite working as expected, and was presented with the raw data instead of nicely formatted Qt types. Fortunately a quick Google search led me to the page Human Machine Teaming Lab | Knowledge / Qt, which contains a very comprehensive set of visualisers.
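To give a flavour of what such visualisers look like: autoexp.dat takes one-line [AutoExpand] rules of the form type=&lt;member,format&gt;. The entries below are an illustrative sketch assuming Qt 4’s internal d-pointer layout, not verbatim copies from that collection:

```
; Illustrative autoexp.dat [AutoExpand] entries for Qt 4 types
QString=<d->data,su>      ; show the internal UTF-16 buffer as a Unicode string
QByteArray=<d->data,s>    ; show the internal buffer as a narrow string
```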

Power series for PCA

The book says estimate the value of the eigenvector, then iterate.
But my vector cycled between (1, 1)^T and (-1, 1)^T, which is a bit of a problem.
Oh for precise instructions.
I'll report back when I find a suitable estimate for the starting value.
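For reference, here is a minimal sketch of the power method being described, with a made-up 2x2 matrix and starting vector. Note that cycling between (1, 1)^T and (-1, 1)^T can also happen when the two eigenvalues have equal magnitude, in which case no starting estimate will converge, not only when the starting vector is unlucky:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Made-up symmetric matrix; eigenvalues 3 and 1, so iteration converges
    // towards the dominant eigenvector (1, 1)/sqrt(2).
    const double A[2][2] = { { 2.0, 1.0 }, { 1.0, 2.0 } };
    double v[2] = { 1.0, 0.3 }; // deliberately asymmetric starting estimate

    for (int i = 0; i < 50; ++i) {
        const double w0 = A[0][0] * v[0] + A[0][1] * v[1];
        const double w1 = A[1][0] * v[0] + A[1][1] * v[1];
        const double norm = std::sqrt(w0 * w0 + w1 * w1);
        v[0] = w0 / norm; // normalize every step so the iterate cannot blow up
        v[1] = w1 / norm;
    }
    std::printf("dominant eigenvector estimate: (%f, %f)\n", v[0], v[1]);
    return 0;
}
```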

The Champion, the Chief and the Manager

Successful product development projects are often characterized by having an enthusiastic product champion with solid domain knowledge, a visible and proud chief engineer, and a clever and supportive project manager. And of course, most important of all, a group of exceptional developers. From an organizational point of view it makes sense to require that all projects clearly identify these three roles:

The Champion: The product champion is a person who dreams about the product, has a vision of how it can be used, and can answer questions about what is important and what is less important. The product champion is required to have deep and solid domain knowledge and will often play the role of a customer proxy in the project. This position can only be held by a person who is deeply devoted and has a true passion for the product to be created. The product champion is the main interface between the project and the customer/users. (Sometimes also known as: Product Manager, Product Owner, Customer Proxy…)

The Chief: The chief engineer is a technical expert who has a vision of the complete solution and is always ready to defend this vision. At any time, the chief engineer should be able, and willing, to stand up and proudly describe the solution and explain how everything fits together. He/she should feel responsible for the technological decisions that the exceptional developers make, but also make sure that the solution supports the business strategy. The chief engineer is the main communication channel between this project and other projects. (Sometimes also known as: System Architect, Tech Lead, Shusa, …)

The Manager: The project manager is a person who leads a team to success by managing the resources on a project in an effective and sensible way. He/she will be responsible for actively discovering and removing impediments. The project manager is the main interface between the project and corporate management. (Sometimes also known as: Scrum Master, Team Leader, …)

Of course, for very small projects these three roles can be filled by one person, but for projects of some size there should be three people filling them: one product champion, one chief engineer and one project manager. These three people must work together as a team, forming an all-round defence (aka kringvern) around the project while being available to the developers at any time. Their task is to “protect” and “promote” the project to the outside world so that the exceptional developers can focus on doing the job.

I believe that identifying these three roles is the only thing an organization needs to impose in order to increase the chance of success. Then the team of exceptional developers, together with their servants, decides everything else, including which methodology and technology to use.

Visual Studio 2010 SP1 has been released

For those who are using Visual Studio 2010, the service pack has now been officially released: Visual Studio 2010 Service Pack 1 General Availability - Visual C++ Team Blog - Site Home - MSDN Blogs. Edit: The download link doesn’t seem to work for me yet; given that it only went to General Availability today, it might be worth checking back a little later. Edit again - we have a general availability download link: http://www.

If your VS2010 C++ build is constantly rebuilding a project that hasn’t changed

Check if you’re seeing the following output in the build pane: InitializeBuildStatus: Creating ".unsuccessfulbuild" because "AlwaysCreate" was specified. I’ve just fixed a bunch of these errors in one of our solutions here, and all of them were caused by one of two issues:

1. The project file referenced files that were not present in the source tree
2. A custom build step was supposed to generate a file but didn’t, or the file ended up in the wrong place

In order to find out if there are missing files that trigger the perma-rebuild, you’ll also have to enable Visual Studio’s debug output as described in this stackoverflow answer.
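If you prefer digging from the command line instead, cranking up MSBuild’s verbosity shows why each target is considered out of date; /v:diag is a standard MSBuild switch, and the solution name below is a placeholder:

```
msbuild MySolution.sln /v:diag > build-diag.log
```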

How to view undecorated DLL-exported C++ symbols in Visual Studio 2010

Yes, it’s one of those “note to self” posts, but I keep forgetting how to do it. As the first step, you run dumpbin /EXPORTS on the DLL and redirect the output into a file, because the utility that unmangles the names (undname.exe) doesn’t appear to be able to take piped input via stdin. Then run undname &lt;file&gt;, with &lt;file&gt; being the file that contains the exported symbols. At least that way the symbols become mostly readable.
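Putting the two steps together (the DLL and file names below are placeholders):

```
dumpbin /EXPORTS mylib.dll > exports.txt
undname exports.txt
```

Both tools ship with Visual Studio, so the easiest place to run this is a Visual Studio command prompt.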

Boost.Log, preventing the ‘unhandled exception’ in Windows 7 when attempting to log to the event log

I recently ran into a requirement to retrofit a logging library onto an existing project. My first instinct was to throw Pantheios at it, as I’ve used it before and It Just Worked. Unfortunately in this case we needed the ability to log to more than two event sinks, and it looked like this was getting a little awkward with Pantheios, which prompted me to look at Boost.Log. After some digging through the documentation and the samples, I managed to get the logging going to the three event sinks we needed.
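The excerpt doesn’t show the final setup, but a minimal sketch of the multi-sink idea in Boost.Log looks roughly like this (the log file name is a placeholder, and the Windows event log sink - the one whose event source registration can throw on Windows 7 without sufficient rights - is only indicated in a comment):

```cpp
#include <boost/log/trivial.hpp>
#include <boost/log/utility/setup/console.hpp>
#include <boost/log/utility/setup/file.hpp>
#include <boost/log/utility/setup/common_attributes.hpp>
#include <iostream>

namespace logging = boost::log;
namespace keywords = boost::log::keywords;

int main()
{
    // Two of the sinks: the console and a plain text file.
    logging::add_console_log(std::clog);
    logging::add_file_log(keywords::file_name = "app.log"); // placeholder

    // The third sink in our case was the Windows event log, built around
    // sinks::simple_event_log_backend; registering its event source is the
    // step that can fail (and throw) without the right permissions.

    logging::add_common_attributes();
    BOOST_LOG_TRIVIAL(info) << "logging to multiple sinks";
    return 0;
}
```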

A couple of noteworthy links

It’s a bit of a link roundup from the past couple of months. Most of you have probably seen these already, as I’d think you’re reading the same blogs. C++ links: VS2010 SP1 Beta: What’s in it for C++ developers. While I’m not going to chance installing the beta on my main developer workstation, it looks like there are some interesting features in the service pack. I hope that the IDE stability has also been improved.

Sometimes, std::set just doesn’t cut it from a performance point of view

A piece of code I recently worked with required data structures that hold unique, sorted data elements. The requirement for the data being both sorted and unique came from it being fed into std::set_intersection(), so using an std::set seemed an obvious way of fulfilling these requirements. The code did fulfill all the requirements, but I found the performance somewhat wanting in this particular implementation (Visual Studio 2008 with the standard library implementation shipped by Microsoft).
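The excerpt stops before the fix, but the usual alternative is a sorted, deduplicated std::vector: it still satisfies the preconditions of std::set_intersection() while avoiding the per-node allocations and pointer chasing of std::set. A minimal sketch with made-up data, kept C++03-friendly to match the VS2008 setting:

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <iterator>
#include <vector>

// Sort and remove duplicates so the vector meets the "unique, sorted"
// precondition that std::set_intersection() relies on.
static void make_sorted_unique(std::vector<int>& v)
{
    std::sort(v.begin(), v.end());
    v.erase(std::unique(v.begin(), v.end()), v.end());
}

int main()
{
    const int raw_a[] = { 5, 1, 3, 3, 9 };
    const int raw_b[] = { 3, 9, 2, 9, 7 };
    std::vector<int> a(raw_a, raw_a + 5);
    std::vector<int> b(raw_b, raw_b + 5);

    make_sorted_unique(a);
    make_sorted_unique(b);

    std::vector<int> result;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(result));

    for (std::size_t i = 0; i < result.size(); ++i)
        std::cout << result[i] << ' ';   // prints: 3 9
    std::cout << '\n';
    return 0;
}
```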

Quick tip if you see ‘bad DLL or entry point msobj80.dll’ when building software with VS2008

Try stopping mspdbsrv.exe (the process that generates the .pdb files during a build) if it is still running. My understanding is that it’s supposed to shut down at the end of the compilation, but it seems that it can turn into a zombie process, and if that happens you can get the above error when linking your binaries. Anyway, I just ran into this issue and stopping the process via the Task Manager resolved it for me.
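If you would rather not reach for the Task Manager, the stock Windows taskkill utility does the same job from a command prompt:

```
taskkill /F /IM mspdbsrv.exe
```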

On combining #import and /MP in C++ builds with VS2010

I’m currently busy porting a large native C++ project from VS2008 to VS2010, and one of the issues I kept running into was build times. The VS2008 build uses a distributed build system; unfortunately the vendor doesn’t support VS2010 yet, so I couldn’t use the same infrastructure. In order to get a decent build speed, I started exploring MSBuild’s ability to build projects in parallel (which is fairly similar to VS2008’s ability to build projects in parallel) and the C++ compiler’s ability to make use of multiple processors/cores, aka the /MP switch.
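For reference, the two levels of parallelism live in different places. Project-level parallelism is MSBuild’s /m switch (the solution name below is a placeholder):

```
msbuild MySolution.sln /m
```

The /MP compiler switch is a per-project setting; in a .vcxproj it corresponds to the MultiProcessorCompilation element:

```xml
<ItemDefinitionGroup>
  <ClCompile>
    <MultiProcessorCompilation>true</MultiProcessorCompilation>
  </ClCompile>
</ItemDefinitionGroup>
```

The wrinkle the title hints at is that cl.exe refuses to combine #import with /MP (compiler error C2813), so translation units that use #import need special handling.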

Using CEDET-1.0 pre7 with Emacs 23.2

It’s been mentioned in several places that GNU Emacs versions sometime after 23.1.50 come with an integrated version of CEDET. While I think that’s a superb idea, it unfortunately managed to break my setup, which relies on a common set of emacs-lisp files that I keep under version control and distribute across the machines I work on. Those machines run different GNU-based Emacsen (pure GNU, Emacs/W32, Carbon Emacs etc), so I can’t rely on the default CEDET.

About

I am a software engineer and occasional development manager with over 25 years’ experience writing production code, mostly in C++. During that time I’ve worked on everything from Windows device drivers (back when people said you couldn’t write those in C++) to financial trading applications. I have an interest in programming languages in general and am a firm believer that you cannot call yourself an experienced software engineer if you aren’t able to write good code in multiple programming languages.

Welcome back to the new blog, almost the same as the old blog

The move from the UK to the other side of the Atlantic is almost complete; I’m just waiting for my household items - and more importantly, my computer books etc - to turn up. So it’s time to start blogging again in the next few weeks. Due to some server trouble in the UK, combined with the fact that I do like Serendipity as a blogging system but was never 100% happy with it, I’ve switched to using WordPress on a server here in the US.

Solid C++ Code by Example

Sometimes I see code that is perfectly OK according to the definition of the language but is still flawed because it breaks too many established idioms and conventions of the language. I just gave a 90-minute workshop about Solid C++ Code at the ACCU 2010 conference in Oxford.

When discussing solid code it is important to work on “real” problems, not just toy examples and coding katas, because those lack the complexity required to make discussions interesting. So, as preparation, I developed from scratch an NTLM Authentication Library (pal) that a client can use to do NTLM authentication when retrieving a protected webpage from an IIS server. Then I picked out a few files, the encoding and decoding of NTLM messages, and tried to make them as solid as possible after useful discussions with ACCU friends and some top coders within my company. Then I “doped” the code, injecting impurities and bad stuff, to produce these handouts.

At the ACCU talk/workshop the audience read through the “doped” code and came up with things that could be improved, while I did live coding (in Emacs, of course) fixing the issues as they popped up. With loads of solid C++ coders in the room, I think we found most of the issues worth caring about, and we ended up with something that can be considered solid C++, something that appears to have been developed by somebody who cares about high-quality code.

Here are the slides that I used to summarize our findings. Feel free to use these slides for whatever you want. Perhaps you would like to run a similar talk in your development team? Contact me if you want the complete source code for the authentication library, or if you want to discuss ideas for running a similar talk yourself. I plan to publish the code on github soon, so stay tuned.

UPDATE June 2010: The PAL library is now published on github. A much improved slide set is also available on slideshare.

Hard Work Does Not Pay Off

As a programmer, you’ll find that working hard often does not pay off. You might fool yourself and a few colleagues into believing that you are contributing a lot to a project by spending long hours at the office. But the truth is that by working less, you might achieve more – sometimes much more. If you are trying to be focused and “productive” for more than 30 hours a week, you are probably working too hard. You should consider reducing your workload to become more effective and get more done.

This statement may seem counterintuitive and even controversial, but it is a direct consequence of the fact that programming and software development as a whole involve a continuous learning process. As you work on a project, you will understand more of the problem domain and, hopefully, find more effective ways of reaching the goal. To avoid wasted work, you must allow time to observe the effects of what you are doing, reflect on the things that you see, and change your behavior accordingly.

Professional programming is usually not like running hard for a few kilometers, where the goal can be seen at the end of a paved road. Most software projects are more like a long orienteering marathon. In the dark. With only a sketchy map as guidance. If you just set off in one direction, running as fast as you can, you might impress some, but you are not likely to succeed. You need to keep a sustainable pace, and you need to adjust the course when you learn more about where you are and where you are heading.

In addition, you always need to learn more about software development in general and programming techniques in particular. You probably need to read books, go to conferences, communicate with other professionals, experiment with new implementation techniques, and learn about powerful tools that simplify your job. As a professional programmer, you must keep yourself updated in your field of expertise — just as brain surgeons and pilots are expected to keep themselves up to date in their own fields of expertise. You need to spend evenings, weekends, and holidays educating yourself; therefore, you cannot spend your evenings, weekends, and holidays working overtime on your current project. Do you really expect brain surgeons to perform surgery 60 hours a week, or pilots to fly 60 hours a week? Of course not: preparation and education are an essential part of their profession.

Be focused on the project, contribute as much as you can by finding smart solutions, improve your skills, reflect on what you are doing, and adapt your behavior. Avoid embarrassing yourself, and our profession, by behaving like a hamster in a cage spinning the wheel. As a professional programmer, you should know that trying to be focused and “productive” 60 hours a week is not a sensible thing to do. Act like a professional: prepare, effect, observe, reflect, and change.

[This is a reprint of a chapter that I wrote for the newly released O’Reilly book 97 Things Every Programmer Should Know]

Solving a Rubik’s cube in less than 60 seconds

A couple of months ago I bought a Rubik’s cube in a nearby shop, and after reading some guides on the net I learned how to solve it. A few hours later I could solve it in about 4 minutes all by myself. After a few days of practice I was down to about 2 minutes, but it was difficult to see how I could improve much further using the beginner’s method I started out with. My cube and my dexterity do not allow me to do more than about 2 moves per second, so I realized that I had to reduce the number of moves rather than speed up my fingers. After reading several websites about speedsolving techniques I set myself a tough goal: to become a sub-60 cuber. I was determined to study and practice the art of solving the cube until I could solve a Rubik’s cube in less than 60 seconds on average.

I can now often solve it in less than 60 seconds, but I am not yet consistent enough to call myself a sub-60 cuber. I am very close, though; give me a few more weeks (or months) and I will get there. While playing with the cube on the bus, at work, at home, in the pub, basically everywhere, all the time, I sometimes meet other geeks who want to learn how to solve the cube fast as well. So I thought I should write up a guide about how to get started.

If you do not know how to solve the cube you need to study one of a billion guides that are available on the net. Here is a beginner solution by Leyan Lo that I recommend. Once you can solve the cube without referring to a guide, you can start to read more advanced stuff. The ultimate guide is written by Jessica Fridrich, but it is not easy to read. I found CubeFreak by Shotaro Makisumi to be the most useful site out there.

After studying these sites, as well as hundreds of other sites and watching plenty of youtube videos, I have ended up with a simplified Fridrich method with a four-look last layer. Here is what I do to solve it in less than 60 seconds:

1. Solve the extended cross ~5 sec (always a white cross)
2. Solve the first two layers (F2L) ~30 sec (keep cross on bottom)
3. Orient the last layer edges ~5 sec (1 out of 3 algorithms)
4. Orient the last layer corners ~5 sec (1 out of 7 algorithms)
5. Permute the last layer corners ~5 sec (1 out of 2 algorithms)
6. Permute the last layer edges ~5 sec (1 out of 4 algorithms)

My current focus is to improve the F2L step as I am still struggling to get under 30 seconds, but I am confident that with some more practice I will manage to get closer to 20 seconds and then I can label myself a sub-60 cuber.

For further inspiration, here is a video of a sub-120 cuber and a sub-10 cuber.

Happy cubing!

The homebuilt NAS/home server, revisited

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog. I’ve blogged about building my own NAS/home server before; see here, here, here and here. After a few months, I think it might be time for an interim update.

Building a new home NAS/home server, part IV

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog. I’ve done some more performance testing, and while I’m not 100% happy with the results, I decided to keep using FreeBSD with zfs on the server for the time being.

Building a new home NAS/home server, part III

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog. Unfortunately the excitement from seeing OpenSolaris’s disk performance died down pretty quickly when I noticed that putting some decent load on the network interface resulted in the network card locking up after a little while.

Reblog: Building a new home NAS/home server, part II

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul of these posts, so I wanted to consolidate all the articles on the same blog. The good news is that the hardware seems to be behaving itself for a while now and everything appears to Just Work. FreeBSD makes things easy for me in this case as I’m very familiar with it, so I only spent a few hours getting everything set up.

Reblog: Building a new home NAS/home server, Part I

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog. Up to now I’ve mostly been using recycled workstations as my home mail, SVN and storage server. There is nothing really wrong with that, as most workstations are fast enough, but I’m running into disk space issues again after I started backing up all the important machines onto my server.

The joy of using outdated C++ compiler versions

Thud, thud, thud… The sound of the developer’s head banging on the desk late at night. What happened? Well, I had a requirement to make use of some smart pointers to handle a somewhat complicated resource management issue that was mostly being ignored in the current implementation, mainly on the grounds of it being slightly too complicated to handle successfully using manual pointer management. The result - not entirely unexpected - was a not-so-nice memory leak.
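The excerpt doesn’t show the eventual fix, but on a compiler too old to have std::shared_ptr the usual route is boost::shared_ptr (or std::tr1::shared_ptr) with a custom deleter. A minimal sketch, with a made-up resource type and functions:

```cpp
#include <boost/shared_ptr.hpp>
#include <cstdio>

// A made-up C-style resource standing in for whatever was being leaked.
struct Resource { int id; };

Resource* acquire_resource() { return new Resource(); }
void release_resource(Resource* r) { std::puts("resource released"); delete r; }

int main()
{
    // The deleter is stored in the shared_ptr and runs exactly once, when
    // the last owner goes out of scope - no manual delete calls to forget.
    boost::shared_ptr<Resource> res(acquire_resource(), release_resource);
    boost::shared_ptr<Resource> alias = res; // shared ownership also works
    return 0;
}
```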