Exit code crimes

I've seen a few exit code crimes recently, so I thought I'd list them.

1. Catch something specific

if __name__ == "__main__":
    try:
        run(None)
        sys.exit(0)
    except ValueError:
        sys.exit(1)

So, what does this do if another type of error is raised? Why catch one specific type? The rest of the code logs information - why not log the error too? How does this make troubleshooting easy, or even possible?
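
For contrast, here's a minimal sketch of a less criminal wrapper (assuming the same run function, and that the standard logging module has been configured): catch broadly, log what actually happened, and still exit non-zero.

import logging
import sys

if __name__ == "__main__":
    try:
        run(None)
    except Exception:
        logging.exception("run failed")  # records the traceback, not just the fact of failure
        sys.exit(1)
    sys.exit(0)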

2. Throw something specific

The pattern above is of course fine when the run function works like this:

def run(config):
    try:
        do_work(config)  # some hypothetical workload
    except Exception:
        raise ValueError

3. Tell no-one

Make the last line in your script

exit(0)

regardless of whatever just happened. The script could otherwise have returned the exit code of whatever it called, so typing the extra line is, first, effort; second, it achieves nothing; and finally, it means no-one will notice if something went wrong - or at least not at the time. Perhaps whoever did that thought a shell script won't exit without the keyword exit at the end.



There are others - I'll add them as I find them.

Setting up the Ghost blogging system on FreeBSD

Ah, a meta blogging post. Sorry, I try to keep these to a minimum… For those who haven’t been caught up in the hype yet, Ghost is a new blogging system that is much more minimal than WordPress and the other more popular systems. It’s designed to be much smaller and faster (plus it uses a lot of cool tools like node.js, handlebars etc). I recently tried to set up the 0.

With Emacs on Windows, make sure you know where your $HOME is

The Gnu Emacs for Windows distribution appears to be pretty good at inferring where a reasonable place for $HOME is, straight out of the box. In my case, said reasonable place was %USERPROFILE%/AppData/Roaming, which was an entirely acceptable default. That is, until several other tools entered the picture and disagreed with Emacs. We’ve recently switched to using git at work and the git ecosystem needed to have some idea of where its home was.

I know, never install an OS the day it was released

Throwing caution to the wind this morning, I’m having an updatefest only a few hours after the software was released: while I’ve grown accustomed to using Win8 - the Surface RT helped - from using the 8.1 prerelease it really looked like 8.1 would be an improvement, so I bit the bullet and installed the upgrades as soon as they were available. Well, as soon as I got up the day they were available, that is.

My first variadic template


//main.cpp

#include <iostream>

// Base case: ends the recursion and supplies the newline
void print()
{
  std::cout << '\n';
}

// Recursive case: print the head of the pack, then recurse on the tail
template<typename H, typename... T>
void print(H h, T... t)
{
  std::cout << h << ' ';
  print(t...);
}

int main()
{
  print(1);
  print(1, "h");
}


To build: g++ -std=c++0x main.cpp -o noodle
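
If I've followed the recursion correctly, running ./noodle should print each argument followed by a space, with the base case supplying the newline:

1 
1 h 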

Writing: The Ethical Programmer

The latest C Vu magazine from ACCU is out now. It contains my latest Becoming a Better Programmer column. This month it's called The Ethical Programmer; the first instalment of a two-part series on ethics and the modern programmer. Gripping stuff.

This month, I look at our attitudes, at legal issues, and discuss software licenses.

To make the world a better place, you can enjoy a picture of some old rope, and a chicken. I also throw in some bad puns.

The Minimum Viable Wiki

Documentation is usually considered a necessary evil. The truth is probably closer to it being necessary and evil. For most of us it's one of those things we feel we should have more of (or any at all) but never have the time. And how much documentation do we need anyway?

Most project documentation is worse than useless. Why? For the same reason that most comments in code are useless-to-dangerous - only more so. Documentation, when written, is rarely kept up-to-date. It is rarely complete. It was often never "true" to start with.

In short, the docs are just not reliable.

Oh, there are exceptions, of course. API docs generated from source code, for example. They will be consistent, right? Perhaps. Assuming they are kept maintained within the source. It is still possible for them to lie.

If we are Agile we don't do documentation, do we? After all the manifesto states:

Working software over comprehensive documentation

Of course it doesn't say we don't value documentation - just that we value working software more. And even then it talks about comprehensive documentation.

There is the idea of "Just Enough Documentation". I hear people talk about this and see it referred to, along with some personal interpretation. But I have yet to see a definitive write-up of this idea (please let me know if you know of one). Perhaps this is meant to be deliberately ironic? In any case, my understanding is that this relates to the sort of documents that are often seen as deliverables of a project - for example a requirements doc.

I'm not going to talk so much about those, although the principle I'll describe may often be applied. I want to concentrate on the internal, developer-oriented, documentation that describes things such as how a project is organised, what bits do what, what to expect in certain situations etc. This is often captured in a wiki that is specific to a team.

This sort of documentation is often seen as unnecessary in an Agile team. This is where face-to-face communication between team members is sufficient, isn't it?

That view seems to be borne out on projects that do have a wiki. The wiki is rarely complete or up-to-date. It quickly falls out of use altogether, or is approached with trepidation as it is as likely to mislead as inform. If it is kept up-to-date it is likely due to a Mandate From Above, which leads to more time invested in its maintenance than is worth it.

So is there a way this can be made to work? Can it ever be worth it? Why would we need it at all?

Well, that first depends on your project and team. Maybe the conversations alone really are sufficient. My experience is that this reaches a limit pretty quickly. We need something. How many times have you, or someone on your team, wasted a day looking for something or going down a wrong path, only for someone else to then say, "oh, that's over here", or, "that's done differently for x reason"? Yes, the conversation was had - but time was already wasted.

So the problem is knowing when, and whom, to ask - especially for things you would have thought were "obvious".

In terms of the Four Stages of Competence, this is about moving from Unconscious Incompetence (you don't know what you don't know) to Conscious Incompetence (you at least know what things you don't know). This is much more valuable than it sounds at first! For example, if you know that adding a setting to a config file dumps the information you need to a log - even if you can't remember the details - you won't spend six hours writing a whole chunk of ad-hoc logging code to do the same thing. Instead you'll remember, "there was something about being able to enable logging in a config file". Now you know to ask the question, and finding the answer should be quick and accurate.

Having a full specification, perhaps along with examples, of every attribute that could go in the config file might sound better. And if that config file was meant for end-user customisation you'd probably need that. But if it's some internal thing then any attempt at such documentation is likely to be incomplete and/or out of date - if it's written at all.

But a line that says, "logging is configurable", somewhere that people will read leaves out those noisy, unstable, details and simply informs you of what is possible. You now know what you don't know (i.e. how to enable logging), whereas before you didn't even know it was possible. Details can be asked for.

So my recommendation for maintaining team information on a wiki is to stick to the briefest possible comment that moves the reader from Unconscious to Conscious Incompetence. Unless you really need it, avoid the level of detail that will leave the wiki unmaintained and incomplete.

Visual Studio 2012 C++ Variadic Template support in the runtime library

As VS2012’s C++ compiler doesn’t support “true” variadic templates, the new runtime library classes that use variadic templates are implemented using macro magic behind the scenes. In order to get the “variadic” templates to accept more than the default of five parameters, you’ll have to set _VARIADIC_MAX to the desired maximum number of parameters (between five and ten). For more information, see the “faux variadics” section of this blog post on MSDN.
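
For example, something like this minimal sketch (the value must be between five and ten, and the macro must be seen before any standard library header is included - typically via the project settings or the first line of the file):

// define before including any standard headers
#define _VARIADIC_MAX 10

#include <tuple>

int main()
{
    // with the default of 5, this six-type tuple would not compile under VS2012
    std::tuple<int, int, int, int, int, int> t(1, 2, 3, 4, 5, 6);
}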

Capturing lvalue references in C++11 lambdas

Recently the question "what is the type of an lvalue reference when captured by reference in a C++11 lambda?" was asked. It turns out that it's a reference to whatever the original reference referred to. This is just like taking a reference to an existing reference, e.g.

int foo = 7;
int& rfoo = foo;
int& rfoo1 = rfoo;
int& rfoo2 = rfoo1;

All the references refer to foo directly, rather than forming a chain rfoo2->rfoo1->rfoo->foo, meaning the following code

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 << ", rfoo2:" << rfoo2 
          << '\n';
++foo;

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 << ", rfoo2:" << rfoo2 
          << '\n';

std::cout << "&foo:" << &foo << ", &rfoo:" << &rfoo 
          << ", &rfoo1:" << &rfoo1 << ", &rfoo2:" << &rfoo2 
          << '\n';

Which gives:

foo:7, rfoo:7, rfoo1:7, rfoo2:7
foo:8, rfoo:8, rfoo1:8, rfoo2:8
&foo:00D3FB0C, &rfoo:00D3FB0C, &rfoo1:00D3FB0C, &rfoo2:00D3FB0C

I.e. all the references are aliases for the original foo: the same value is displayed, including when the original is modified, and the address of each variable is the same - that of foo.

There is nothing surprising here - it's just basic C++ - but it's a long time since I've thought about it, which is why, with lambdas, l-value, r-value and universal references, I sometimes do a double take on what was once obvious.

The same happens with lambda capture but it's a slightly more interesting story. Take the following example:

int foo = 99;
int& rfoo = foo;
int& rfoo1 = foo;

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 
          << '\n';

std::cout << "&foo:" << &foo << ", &rfoo:" << &rfoo 
          << ", &rfoo1:" << &rfoo1 
          << '\n';

auto l = [foo, rfoo, &rfoo1]()
{
    std::cout << "foo:" << foo << '\n';
    std::cout << "rfoo:" << rfoo << '\n';
    std::cout << "rfoo1:" << rfoo1 << '\n';

    std::cout << "&foo:" << &foo << ", &rfoo:" 
              << &rfoo << ", &rfoo1:" << &rfoo1 
              << '\n';
};

foo = 100;

l();

Which gives:

foo:99, rfoo:99, rfoo1:99
&foo:00D3FB0C, &rfoo:00D3FB0C, &rfoo1:00D3FB0C
foo:99
rfoo:99
rfoo1:100
&foo:00D3FAE0, &rfoo:00D3FAE4, &rfoo1:00D3FB0C

To begin with, it behaves as per the first example: foo, rfoo and rfoo1 all give the same value, as rfoo and rfoo1 are effectively aliases for foo - shown when displaying their addresses, which are all the same.

However, when these same variables are captured it's a different story. The capture of foo is no surprise: it's by value, so it displays the captured value of 99 despite the original foo being changed to 100 before the lambda is invoked. Its address is that of a new variable - a member of the lambda.

It starts to get interesting with the capture of rfoo. When the lambda is invoked this too displays 99, the original captured value. Also, its address is not that of the original foo. It seems that the reference itself has not been captured, but rather what it refers to - in this case an int with the value of 99. It appears to have been magically dereferenced as part of the capture.

This is the correct behaviour and when thought about becomes somewhat obvious. It's just like assigning a variable from a reference, e.g.

int foo = 7;
int& rfoo = foo;
int bar = rfoo;

bar doesn't become an int&, and rfoo is "magically" dereferenced - except in this scenario there is nothing magical at all; it's just how assignment works. If int were replaced with auto, e.g.

auto bar = rfoo;

then it would be expected that bar is an int, as auto strips CV- and reference-qualifiers.
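
This is easy to verify with a couple of static_asserts - a quick sketch, compiled as C++11:

#include <type_traits>

int main()
{
    int foo = 7;
    int& rfoo = foo;
    auto bar = rfoo; // bar is a brand new int initialised from foo

    static_assert( std::is_same<decltype(bar), int>::value,
                   "auto strips the reference qualifier" );
    static_assert( !std::is_same<decltype(bar), int&>::value,
                   "bar is not a reference" );
}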

Finally, there is rfoo1. This too looks odd, as it appears to be taking a reference to a reference. As seen in the first example this is perfectly fine: there is no such thing as a reference to a reference here - each new reference is simply another alias for the original variable.

This is pretty much what's happening here. It's irrelevant that the target of the capture is a reference: in the end, capture by reference is capture by reference of the underlying variable, i.e. what rfoo1 refers to - in this case foo - not rfoo1 itself. This is demonstrated twofold: rfoo1 within the lambda displays the updated value of foo, and the address of rfoo1 within the lambda is that of foo outside it.

This is as per the C++11 standard, section 5.1.2 (Lambda expressions), paragraph 14:

An entity is captured by copy if it is implicitly captured and the capture-default is = or if it is explicitly
captured with a capture that does not include an &. For each entity captured by copy, an unnamed nonstatic
data member is declared in the closure type. The declaration order of these members is unspecified.
The type of such a data member is the type of the corresponding captured entity if the entity is not a
reference to an object, or the referenced type otherwise. [ Note: If the captured entity is a reference to a
function, the corresponding data member is also a reference to a function. —end note ]

The crucial part is "or the referenced type otherwise": for a reference captured by copy, the type of the captured data member is the type referred to - i.e. the reference aspect has been removed. (Note: I haven't experimented with references to functions.)

Finally, a vivid example showing that a reference captured by value involves a dereference.

class Bar
{
private:
    int mValue;

public:
    Bar(const Bar&) : mValue(9999)
    {
    }

public:
    Bar(const int value) : mValue(value) {}
    int GetValue() const { return mValue; }
    void SetValue(const int value) { mValue = value; }
};

Bar bar(1);
Bar& rbar = bar;
Bar& rbar1 = bar;

std::cout << "&bar:" << &bar << ", &rbar:" << &rbar << ", &rbar1:" << &rbar1 << '\n';

auto l2 = [bar, rbar, &rbar1]()
{
    std::cout << "bar:" << bar.GetValue() << '\n';
    std::cout << "rbar:" << rbar.GetValue() << '\n';
    std::cout << "rbar1:" << rbar1.GetValue() << '\n';

    std::cout << "&bar:" << &bar << ", &rbar:" << &rbar << ", &rbar1:" << &rbar1 << '\n';
};

bar.SetValue(2);

l2();

The class Bar provides a crude copy-constructor that sets the stored value to 9999. The output below is similar to the previous example's, in that the addresses of bar and rbar in the lambda differ from that of the original bar, showing they're copies, whilst rbar1's is the same. Secondly, the value of mValue is shown as 9999 for the first two captured variables, meaning they were copy-constructed.

&bar:00D3FB0C, &rbar:00D3FB0C, &rbar1:00D3FB0C
bar:9999
rbar:9999
rbar1:2
&bar:00D3FAE0, &rbar:00D3FAE4, &rbar1:00D3FB0C

Making the copy-constructor private (by commenting out the first, seemingly unnecessary, 'public:') prevents compilation.

1>------ Build started: Project: References, Configuration: Debug Win32 ------
1>  main.cpp
1>c:\users\pete\desktop\references\references\main.cpp(85): error C2248: 'Bar::Bar' : cannot access private member declared in class 'Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(59) : see declaration of 'Bar::Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(54) : see declaration of 'Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(59) : see declaration of 'Bar::Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(54) : see declaration of 'Bar'

Writing this post has clarified the situation for me; I hope it helps you as well.

The sample code is available here.

How I learned about delete-selection-mode

One thing I really like about stackoverflow.com is that you end up learning as much answering questions on there as you do by asking them. For example, when I saw this question I was sure there would be a way to delete a region by simply starting to type after selecting the region, but I didn’t know how. However, given that this is Emacs, I seriously doubted that the person asking the question would be the first to want this particular feature.

Repost – how to get rid of those pesky ^M characters using Emacs

I had another of these annoying mixed-mode DOS/Unix text files that suffered from being edited in text editors that didn’t agree on which line-ending mode they should use. Unfortunately Emacs defaults to Unix text mode in this case, so I had an already ugly file that wasn’t exactly prettified by random ^M characters all over the place. As I also don’t have the cygwin tools on the machine I was seeing this problem on, I couldn’t just run unix2dos or dos2unix over the file and be done with it, but at least I had Emacs on that machine.

A couple of useful Emacs modes

This is a repost from my old blog - I’m moving some of my older articles over as nobody knows how long the machine that hosts that blog will still be around. highlight-changes-mode – as the name implies, it highlights changes that you make to a file. I do find it useful for the typical scenario of checking out a file, making a couple of smaller changes to it and then having to diff it to work out what you actually changed.

Injecting Singletons in Objective-C Unit Tests

I've promised to write this up a few times now. As I've just given another talk that covers it I thought it was time to make good on that promise.

The topic is the use of singletons in UIKit (and AppKit) and how that makes code using them hard to test. These APIs are riddled with singletons and you can't really avoid them. In case you need convincing that singletons are problematic take this contrived function:

NSString* makeWidget() {
    NSString* colour = 
        [[NSUserDefaults standardUserDefaults] stringForKey: @"defaultColour"];
    return [colour stringByAppendingString: @"Widget"];
}

NSUserDefaults is a singleton - the sole instance of which is returned when you call standardUserDefaults.

A perturbing problem

Now consider how we might test this code. Obviously in an example this trivial there are various ways we could change the code to make the problem go away. Consider this a scaled down example of a problem that may be deeper in the code - perhaps a legacy code-base (or even some third party library!).

A naive test might set the "defaultColour" key in NSUserDefaults prior to calling makeWidget(). The problem with that is that the environment is left in a changed state after the test. Subsequent tests may now pick up a different value if they use NSUserDefaults. Worse: NSUserDefaults is backed by persistent storage that can potentially leave your whole user account in a changed state!

So, at the very least, we should restore the prior value at the end of the test. This leads to further problems: if the test fails, or an exception is otherwise thrown, the clean-up would not be called. So we'd need to wrap it in a @try-@finally too. Then, can we be sure we know what value to restore it to? It's probably nil - but if it's not, the environment is still left in a changed state. So we should capture the prior value first and hold it in a variable.

Now what if you need to set more than one value? Or you change the keys used? We're starting to do a lot of bookkeeping just to compensate for the fact that a singleton is being used. Not only is it ugly but it's increasingly error prone.

Better if we can avoid this in the first place. If we have the option, prefer to pass dependencies in rather than have the code reach out to these Dependency Singularities. In our example: either pass in the default colour or, failing that, pass in NSUserDefaults.

NSString* makeWidget( NSUserDefaults* defaults ) {
    NSString* colour = [defaults stringForKey: @"defaultColour"];
    return [colour stringByAppendingString: @"Widget"];
}

At first this doesn't seem to buy us much. We still need an instance of NSUserDefaults. Even if we alloc-init one we'll get a copy of the global one. That's better, but we'd still be dependent on the environment and have to take steps to compensate. And in other cases we may not even have that option.

If you can't make it - fake it!

We might not be able to create completely fresh instances of NSUserDefaults - but we can create instances of a stand-in class. Due to Objective-C's dynamic nature we don't even need to subclass - and we only have to implement the methods that are actually called - in this case stringForKey:. We could do that with a Mock Object. Or we can build our own Fake. Let's assume you've written a Fake called FakeUserDefaults, which contains an NSMutableDictionary, a means to populate it (perhaps via an initialiser) and an implementation of stringForKey: that looks the key up in the dictionary. Now we can test like this:

TEST_CASE() {
    id defaults =
        [[FakeUserDefaults alloc] initWithValue: @"Red" 
                                         forKey: @"defaultColour"];
    REQUIRE_THAT( makeWidget( defaults ), StartsWith( @"Red" ) );    
}

Great. That seems to tick all the boxes. We have complete control of the default value and we haven't perturbed our environment. No clean-up is required at the end of the test (not even memory, if we're using ARC).

That assumes you have the freedom to change the code under test, of course. If makeWidget() was buried deep in some legacy code, for example, it may not be feasible to make such a change (yet). Even if we can make the change, it can be useful to put the test in first to watch your back while you change it. If we need to leave the call to [NSUserDefaults standardUserDefaults] baked into the code under test, for whatever reason, what else can we do?

To catch a singleton we must think like a singleton

What we'd like is that, when standardUserDefaults is called on NSUserDefaults deep in the bowels of the code under test, it returns an instance of our fake class instead - but only while we're testing. Again, due to Objective-C's dynamic nature we can achieve this. But it starts to get messier. It involves gritty low-level functions from objc/runtime.h. Can we package that away somewhere?

Of course we can! Enter TBCSingletonInjector. I've uploaded the code to GitHub, but there's actually not much to it. It exposes one public (class) method:

+(void) injectSingleton: (id) injectedSingleton
              intoClass: (Class) originalClass
            forSelector: (SEL)originalSelector
              withBlock: (void (^)(void) ) code;

The usage is best explained by example:

TEST_CASE() {
    id defaults =
        [[FakeUserDefaults alloc] initWithValue:@"Red" forKey:@"defaultColour"];

    [TBCSingletonInjector injectSingleton: defaults
                                intoClass: [NSUserDefaults class]
                              forSelector: @selector(standardUserDefaults)
                                withBlock: ^ {
            REQUIRE_THAT( makeWidget(), StartsWith( @"Red" ) );
        } ];
}

Magic! How does it work? It uses a technique known as "method swizzling" (Rubyists and Pythonists know it as "monkey patching"). In short, we replace a singleton accessor method (such as standardUserDefaults) with one we control (actually another, not otherwise exposed, class method of TBCSingletonInjector). More specifically, we swap the two implementations, so we can swap them back again when we're done. Then we call the code block - all within a @try-@finally - so no matter what happens we always restore everything to its previous state.

What does the method we swap in do? It returns a global variable.

Wait, what? I thought globals and singletons were basically the same thing? Aren't we out of the frying pan into the fire?

In the war against singletons we must fight them with singletons! Well, it's not all bad: this global is only in our test code and we have full control over it. It gets set to our "injected" singleton instance (and set back to nil at the end). It's not perfect - we can only use this implementation to handle one singleton at a time. I've not yet needed to handle more than one, but I daresay the implementation could be extended to handle it.

Keep it clean

Since we've hand rolled our own fake class here (FakeUserDefaults) we can tidy things up further if we encapsulate the use of the singleton injector within it. Just adding a method like this should do the trick:

-(void) use:(void (^)(void) ) code
{
    [TBCSingletonInjector injectSingleton: self
                                intoClass: [NSUserDefaults class]
                              forSelector: @selector(standardUserDefaults)
                                withBlock: code ];
}

Now the test code becomes:

    FakeUserDefaults* defs = 
        [[FakeUserDefaults alloc] initWithValue: @"Red" 
                                         forKey: @"defaultColour"];
    [defs use:^{
            REQUIRE_THAT( makeWidget(), StartsWith( @"Red" ) );
         }];

Or, if you prefer, even:

    [[[FakeUserDefaults alloc] initWithValue: @"Red" 
                                      forKey: @"defaultColour"]
    	use:^{
            REQUIRE_THAT( makeWidget(), StartsWith( @"Red" ) );
         }];

Not too bad, really. But, still, prefer to avoid the singletons in the first place if you have the option.

Mocking a monster

Rather than hand rolling a Fake you might prefer to use a Mock object too. I've found OCMock does the job well enough. I'm sure other mocking frameworks would do so at least as well. I prefer to use mocks when I want to test the behaviour, though. In this context that might equate to testing that some code under test sets a value in a singleton (e.g. sets a key in NSUserDefaults). The Singleton Injector works just as well for that, of course.

So there we have it. When you really have to deal with the beast you now have some tools to do so. If you do, please consider it a stop-gap until you're able to replace the singularity with something better behaved.

If we consider software creation a craft, Is it time to ‘bring our own tools’?

If you look at really productive programmers - like the top 10-20% - there are usually a couple of characteristics that they share. Aptitude and in-depth understanding of both the system they are working on and the technologies involved is obviously one very important factor. Another factor that tends to be overlooked is that these programmers are also masters of their tools in the same way that a master craftsman - say, a carpenter - is also a master of their tools.

Effective C++11/14

Work sent me on Scott Meyers' latest course at Developer Focus. He gave us these guidelines:
  1. Prefer auto to explicit type declarations - remember that auto + {expr} => std::initializer_list
  2. Prefer nullptr to 0 and NULL
  3. Prefer scoped enums to unscoped enums
  4. Distinguish () and {} when creating objects - the latter disallows narrowing, which might be good
  5. Declare functions noexcept whenever possible - esp. swap and move
  6. Make const member functions threadsafe - make them bitwise const or internally synchronised
  7. Distinguish "universal references" from r-value references - "universal references" is his phrase; just note that if you see type&&, it might not be an r-value reference
  8. Avoid overloading on && - an r-value ref overload alongside an l-value ref one is typically ok, but overloading on a universal ref might be trouble 'cos it's greedy
  9. Pass and return r-value refs via std::move, universal refs via std::forward - and allow RVO to happen as before in C++98/03
  10. Understand reference collapsing - see Stack Overflow
  11. Assume that move operations are not present, not cheap and not used
  12. Make std::thread unjoinable on all paths - even if there's an exception
  13. Use std::launch::async with std::async if asynchronicity is essential - but is it really essential?
  14. Be aware of varying thread handle destructor behaviour
  15. Create tasks, not threads
  16. Consider void futures for one-shot event communication
  17. Pass parameterless functions to std::async, std::thread and std::call_once - the arguments are unconditionally copied, as with std::bind. Use a lambda instead
  18. Use native handles to transcend the C++11/14 API - if you need to configure your thread; but don't use a thread, so you won't need to
  19. Prefer lambdas to std::bind - inlining is possible
  20. Beware default captures in member functions - [=] captures the this pointer, so member variables accessed via this->variable can dangle and are effectively "by ref", i.e. behave like [&]; C++14 will add stuff to help (see the sketch after this list)
  21. Use std::make_shared and std::make_unique whenever possible
  22. Keep abreast of standardisation
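
As an illustration of guideline 20, here's a minimal sketch - my own hypothetical Widget, not an example from the course notes - of how a default by-copy capture copies the this pointer and can therefore dangle:

#include <functional>

class Widget
{
public:
    std::function<int()> makeGetter()
    {
        return [=] { return value; }; // copies this, not value!
    }
private:
    int value = 42;
};

int main()
{
    std::function<int()> get;
    {
        Widget w;
        get = w.makeGetter();
    } // w destroyed here; the lambda's captured this pointer now dangles

    // calling get() at this point would be undefined behaviour
}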

Speaking: Running Effective Rehearsals

I'll be speaking at The Worship Collective conference in Cambridge, UK on June 29th. This is an awesome event for musicians and worship leaders.

I'm leading a seminar entitled Running Effective Rehearsals. Obviously, this is a really practical subject, but I promise it'll be fun too. Hopefully there will be some practical wisdom to apply, and some encouraging advice to take away.

How to enable (hack) git-p4 in msysgit for Windows

The default installation of msysgit (aka the official git client for Windows) is unfortunately built without python support. There are understandable reasons as to why this is, starting with “where the heck do I find the various python versions on Windows”. For me the problem was that I needed git-p4 to extract some code history out of a Perforce repository and, guess what, git-p4 is written in Python. The only solution for me was to find a way to make this work, short of throwing Linux in a VM just to get a git import going.

I guess that’s one way of mounting an SSD

The perils of buying a used computer - yes, I am too cheap or just not rich enough to buy a new Mac Pro - are that sometimes you find you’ve inherited “interesting” fixes. Like this SSD mount: yes, that’s electrical tape and no, I don’t agree with this special mounting method. At least they did put some electrical tape between the case of the SSD and the case of the DVD drive.

Hello concurrent world


Section 1.4.1 of Anthony Williams' "C++ Concurrency in Action: Practical Multithreading" gives a simple "Hello world" program using C++11's std::thread.

#include <iostream>
#include <thread>

void hello()
{
    std::cout << "Hello concurrent world!\n";
}

int main()
{
    std::thread t(hello);
    t.join();
}


After Matthew Wilson restarted his series in Overload with "Quality Matters #7: Exceptions: the story so far" (http://accu.org/var/uploads/journals/Overload114.pdf, page 10ff), I had a nagging feeling I should put some exception handling round this.

First question: what happens if we make hello throw an exception? For example, what would this do?

std::thread t_trouble( []{ throw std::runtime_error("Oops"); } ); // std::exception has no string constructor in portable C++, so use runtime_error

It calls std::terminate, which by default calls abort. The thread function mustn't let exceptions escape. Also, main should probably catch some exceptions; for example, maybe there aren't enough resources to start the thread yet.

#include <iostream>
#include <thread>

void hello()
{
    try
    {
        std::cout << "Hello concurrent world!\n";
    }
    catch(const std::exception& e)
    {
        //erm... what to do with it?
    }
}

int main()
{
    try
    {
        std::thread t(hello); //can I pass parameters? Nico says I can to async (page 964)
        t.join(); //Nico says we can do a t.detach and when main exits it will get stopped
    }
    catch(const std::system_error& e) //pulled in by thread I presume
    {
        if(e.code() == std::errc::resource_unavailable_try_again)
        {
            std::cout << "Try again\n";
        }
    }
    catch(const std::exception& e)
    {
        std::cout << e.what() << '\n';
    }
}

Right, so now we are ignoring any exceptions that get thrown.
What should I do with any exceptions I get in a function that's sent to a thread? I could use std::exception_ptr, and std::rethrow_exception when a client tries to get the result. It might be better if I read all of Anthony's book (esp Chapter 8) and use std::packaged_task instead.
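
As a first stab at the std::exception_ptr idea, here's a minimal sketch (my own experiment, not code from the book): the thread function captures the in-flight exception and the caller rethrows it after the join.

#include <exception>
#include <iostream>
#include <stdexcept>
#include <thread>

int main()
{
    std::exception_ptr error; // written by the thread, read after the join
    std::thread t( [&error]
    {
        try
        {
            throw std::runtime_error("Oops");
        }
        catch(...)
        {
            error = std::current_exception(); // capture for the caller
        }
    } );
    t.join();

    if(error)
    {
        try
        {
            std::rethrow_exception(error);
        }
        catch(const std::exception& e)
        {
            std::cout << e.what() << '\n'; // prints "Oops"
        }
    }
}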

accu-general (http://accu.org/index.php/mailinglists) helpfully told me to read all the chapters in the book concurrently.

About a month without Google Reader

As a bit of an RSS junkie - see previous post - I had to go look for alternatives to Google Reader. I’ve been a feedly user on and off for a few years but I was never that taken with it. It does seem to mostly do what it says on the tin, and having various tablet apps available for feedly is a good thing, but it tends to run into a few issues with high-volume feeds (craigslist feeds, I’m looking at you).

Visual Lint and Windows Driver Kit (WDK) projects

We have recently been working with Don Burn on PC-lint analysis of Windows Driver Kit (WDK) projects, and he has written an interesting article on the subject titled "Another Look at Lint" in the March-April 2013 issue of the NT Insider.

Within the article you will find the following rather complimentary passage:

Finally the ultimate tool for using PC-lint with the WDK is Riverblade's Visual Lint. This is a third party tool providing an integrated package that works inside VS2012. The tool is an add-on to PC-lint which you must still purchase. The capabilities include background analysis of the project, coded display listings that - like Visual Studio - clicking on the error takes you to the line to edit and provides easy lookup of the description of the errors. The latest version of Visual Lint (4.0.2.198) is required for use with the WDK. The tool has a minor bug that if there are two subprojects with the same name, such as filter in the Toaster sample, one needs to be renamed for analysis to work. A fix is in the works.

To use Visual Lint with the WDK choose LintLdx.lnt as the standard lint configuration file for the tool. There is a 30-day free trial of Visual Lint available so if you are considering PC-lint, take a look at what Visual Lint can add to the experience. I expect to be using it for much of my work.

Our thanks to Don Burn for his patience while we worked through the issues raised by the analysis of WDK projects. As a postscript, a fix for the bug he refers to above has already been checked in and should become available in the next public Visual Lint build (most likely 4.0.3.200).

Of course I have to post something about Google Reader, too

The demise of Google Reader viewed from a slightly different perspective. I find the analysis from someone who isn’t a proto-geek but rather an investment professional interesting, mainly because it contains insights that someone like me - who doesn’t spend the whole day looking at companies and trying to figure out what they are doing, as opposed to what they say they are doing - would, and in this case did, miss.

If you’re using boost::variant, you need to have a look at Boost 1.53

I was profiling some code a while ago that makes extensive use of boost::variant and one of the lessons from the profiler run was that boost variants appear to be fairly expensive to construct and copy. As of 1.53, variants support rvalue constructors and rvalue assignment operators. My initial measurements suggest that when used with types that are “move enabled”, there is a benefit in upgrading to this version of boost variant, both in performance and memory consumption.
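
For illustration, a minimal sketch of the kind of code that should benefit (my own example, assuming Boost 1.53 or later): std::string is move-enabled, so the second construction can steal the buffer rather than copy it.

#include <boost/variant.hpp>
#include <string>
#include <utility>

int main()
{
    boost::variant<int, std::string> v( std::string(1000, 'x') );

    // rvalue construction - new in Boost 1.53 - moves the string instead of copying it
    boost::variant<int, std::string> w( std::move(v) );
}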

Improving the Emacs integration in Windows

I was trying to make Windows a little more Emacs-friendly (or was it the other way around?). First step was to enable the emacs server in my .emacs so I could make use of Emacs for quick and dirty editing tasks that require an editor better than Notepad but where the average Emacs startup time was just a little too long to make Emacs a viable alternative. A typical example would be to use Emacs as the editor for commit messages in Mercurial.

The stuff you find when you’re moving home

Happy New Year to all readers. I’ve been blogging even less recently as we’ve just moved house, but unpacking all the boxes meant that I came across one of my favourite magazines: German readers of this blog (if there are any) might recognise the magazine - it’s the first issue of “c’t”, a magazine that is still going strong almost thirty years later. The issue above is dated November/December 1983 and has moved house (and continents) with me a fair number of times.

PC-lint compiler options files for Visual Studio 2012

Since Visual Studio 2012 was released we have been using the compiler options files for Visual Studio 2010 (co-msc100.lnt and co-msc100.h) when analysing code for it, as a set for VS2012 was not available. For the most part that works fine, however it does mean that code which checks _MSC_VER etc. to invoke conditional behaviour may not analyse correctly in some cases.

We have now submitted dedicated PC-lint 9.0 compiler options files for Visual Studio 2012 (co-msc110.lnt and co-msc110.h) with the correct values for these symbols as given by the Visual Studio 2012 RTM. The co-msc110.lnt file also includes some suppression directives for Visual Studio 2012 unit tests.

In due course they will become available in new PC-lint 9.0 installations and via the PC-lint 9.0 patch page; however, for the time being Gimpel have made them available from the PC-lint 9.0 beta page at http://gimpel.com/html/90beta/.

Incidentally, the current PC-lint 9.0 beta version (9.00i2) looks like it contains some fairly significant changes to MISRA, PCH and function call side effect analysis - so that is a pretty good indication of what to expect in the next public version (PC-lint 9.00j). Details of the changes in PC-lint 9.00i2 are available at http://gimpel.com/html/90beta/fixes.txt.

Windows 8 Pro on an early 2009 iMac 21.5 (Core 2 Duo)

A couple of weeks back I thought I'd have a go at writing a Windows Store App. To do this requires Windows 8. At the time I was running Windows 7 Home Premium on an early 2009 iMac 21.5 (Core 2 Duo). This had been installed using Boot Camp, including installing the Boot Camp assistant and the drivers supplied by Apple.

To upgrade to Windows 8 I wanted to avoid re-installing all my apps and data etc. so I went with an in-place upgrade. This all seemed to work properly and soon I was running Windows 8 and could access the Windows Store App templates from Visual Studio. However, soon after, Windows 8 kept crashing - well, freezing. It got to the point that after every reboot I'd be lucky to get 5 minutes of uptime between each freeze.

Given that Apple haven't provided Windows 8 drivers yet this wasn't exactly a surprise. I decided to try and work around this by rebooting into OS X and using VMWare Fusion to access the Boot Camp partition. Whilst rebooting into OS X I managed to corrupt the Windows installation. I use a non-Apple wireless keyboard (as I need insert, delete, home & end plus easily accessible cursor keys for VS development), so holding down Alt to select the OS to boot into didn't work. When I realized it was going back into Windows I just turned the machine off. After a couple of times the Windows installation was toast! To get back to the point of trying Fusion I had to do a fresh Windows install - in this case a minimal Windows 7 installation: just enough to allow the download of Windows 8. I then installed Windows 8 using the preserve nothing option.

Having now gone through the steps I wanted to avoid, I decided to give the new installation a go via direct boot, i.e. no Fusion. That was two weeks ago. Since then I've re-installed all the apps and my personal data and (fingers crossed) haven't had a single crash. As the freezes were usually happening during some graphical operation, e.g. a status bar updating, I assumed the fault probably lay with the video drivers, so I didn't install the Boot Camp assistant and in particular the Windows 7 drivers from the OS X disc. Well, I did install one. After a while I noticed I wasn't getting any sound even though all the audio drivers and hardware claimed they were happy. Eventually I installed the Cirrus Logic driver, which made the speakers work. I haven't gone anywhere near the NVIDIA drivers.

So, the whole point of this post: for those who run Windows via Boot Camp on early iMacs and want to run Windows 8, a fresh install (or maybe uninstalling the Boot Camp supplied drivers prior to the upgrade) is probably the way to go.

How to make a self-signed SSL certificate work with Windows RT’s Mail App on a Microsoft Surface RT

Long title, I know… I was trying to get Windows RT's Mail App to access the email on my own server. The server uses IMAPS with a self-signed certificate, as I only want SSL for encryption and don't really need it for authentication purposes as well. As long as it is the correct self-signed certificate I'm happy. The Mail app however rejects certificates that weren't signed by a trusted authority and doesn't offer an obvious exception mechanism (like Thunderbird or Apple Mail do) that circumvents the need for a trusted certificate.

I don’t want to see another ‘using namespace xxx;’ in a header file ever again

There, I’ve said it. No tiptoeing around. As a senior developer/team lead, I get involved in hiring new team members and in certain cases also help out other teams with interviewing people. As part of the interview process, candidates are usually asked to write code, so I review a lot of code submissions. One trend I noticed with recent C++ code submissions is that the first like I encounter in any header file is

That’s another warranty voided, then

Last night I did something I was adamant I wasn't going to do, namely rooting my Android phone and installing CyanogenMod on it. Normally I don't like messing with (smart)phones - they're tools in the pipe wrench sense to me, and they should hopefully not require much in the way of care & feeding apart from charging and the odd app or OS update. Of course, the odd OS update can already be a problem, as no official updates have been available for this phone (a Motorola Droid) for a while, and between the provider-installed bloatware that couldn't be uninstalled and the usual cruft that seems to accumulate on computers over time, the phone was really sluggish, often unresponsive and pretty much permanently complained about running out of memory.

A(nother) tool post

I generally don’t post that much about the tools I use as they’re pretty standard fare and most of the time, your success as a programmer depends more on your skills than on your tools. Mastery of your tools will make you a better software engineer, but if you put the tools first, you end up with the cart before the horse. I guess people have noticed that I use Emacs a lot :).

Specifying the directory to create SQL CE databases when using Entity Framework

In the last few posts I've been describing how to create instances of SQL CE in order to perform automated Integration Testing using NUnit, accessing the dB using Entity Framework. I covered creating the dB using both Entity Framework and the SQL CE classes. In particular I wanted control over the directory the dB was created in, but I didn't want to tie it to a specific location; rather, I wanted it to use the current working directory.

Using the Entity Framework's DbContext constructor that takes the name of a connection string or database name, it's suddenly very easy to end up NOT creating the dB you expected where you expected it to be. This post shows how to avoid these pitfalls. Generally speaking the use of the DbContext constructor that takes a Connection String should be avoided unless the name of a connection string from the .config file is being specified.

Example 1 - Using the SqlCeEngine class
// Requires: using System.Data.SqlServerCe;
const string DB_NAME = "test1.sdf";
const string DB_PATH = @".\" + DB_NAME; // Use ".\" for CWD or a specific path
const string CONNECTION_STRING = "data source=" + DB_PATH;

using (var eng = new SqlCeEngine(CONNECTION_STRING))
{
    eng.CreateDatabase();
}

using (var conn = new SqlCeConnection(CONNECTION_STRING))
{
    conn.Open(); // do stuff with db...
}

The important thing to note is that the constructor for SqlCeEngine that takes an argument requires a Connection String, i.e. a string containing "data source=...". Just specifying the dB path is not sufficient. To create the dB in a specific directory, include the absolute or relative path; to use the current working directory (e.g. bin\debug) just use ".\".

Example 2 - Using DbContext (doesn't work)
using (var ctx = new DbContext("test2.sdf"))
{
    ctx.Database.Create();
}

This code appears to work but doesn't create an instance of an SQL CE dB as desired.  Instead it creates a localDB instance in the user's home directory.  In my case: C:\Users\Pete\._test.sdf.mdf (& corresponding log file).  This is not really surprising as Entity Framework had no way of knowing that a SQL CE dB should be created.

Example 3 - Using DbContext (does work)
Database.DefaultConnectionFactory =
    new SqlCeConnectionFactory(
        "System.Data.SqlServerCe.4.0",
        @".\", "");

using (var ctx = new DbContext("test2.sdf"))
{
    ctx.Database.Create();
    // do stuff with ctx...
}

The difference between this example and the last is that the default type of dB that EF should create has been changed. As shown, this is done by installing a different default connection factory.

The second parameter to SqlCeConnectionFactory is the directory that the dB should be created in. Just like the first example, specifying ".\" means the current working directory, and specifying an absolute path to a directory will lead to the dB being created there.

NOTE: As per the post Integration Testing with NUnit and Entity Framework, be aware that creating a dB using the Entity Framework results in the additional table '__MigrationHistory' being created, which EF uses to keep the model and dB synchronized.

NOTE 2: Whereas SqlCeEngine is a SQL CE class from the System.Data.SqlServerCe assembly, SqlCeConnectionFactory appears to be part of the System.Data.Entity assembly, which is part of the Entity Framework.


In the above example the string passed to DbContext can be a name (of a connection string from the .config file) or a connection string. In this case passing the name of the dB, i.e. test2.sdf, is more or less equivalent to passing "data source=test2.sdf": if the '.sdf' suffix is omitted in the "data source" form then the resulting dB is called test2, whereas if just test2 is passed then the resulting dB will be called test2.sdf.
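
To make that concrete (a sketch of my own, assuming the SqlCeConnectionFactory from Example 3 is still installed):

using (var ctx = new DbContext("test2"))
{
    ctx.Database.Create(); // creates .\test2.sdf
}

using (var ctx = new DbContext("data source=test2"))
{
    ctx.Database.Create(); // creates .\test2 (no .sdf suffix added)
}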

Example 4 - Using DbContext and the .config file
using (var ctx = new DbContext("test5"))
{
    ctx.Database.Create();
}

App or Web .config
<connectionStrings>
  <add name="test5"
       providerName="System.Data.SqlServerCe.4.0"
       connectionString="Data Source=test5.sdf"/>
</connectionStrings>

This time no factory is specified, but the argument to DbContext is the name of a Connection String in the .config file. As can be seen this contains similar information to that passed to the factory, enabling EF to create a dB of the correct type.

To use instances of these databases, rather than calling the create method on the context, just use the context directly - or, more likely in the case of EF, a derived context - which brings us to one last example.

Example 5 - Using a derived context and .config file
public class TestCtx : DbContext
{
}

using (var ctx = new TestCtx())
{
    ctx.Database.Create();
}

App or Web .config
<connectionStrings>
  <add name="TestCtx"
       providerName="System.Data.SqlServerCe.4.0"
       connectionString="Data Source=test6.sdf"/>
</connectionStrings>

If a derived context is used - which will almost certainly be the case - then when an instance of it is created and the dB created, EF will look for a Connection String in the .config file that has the same name as the context and take the information from there.

Integration Testing with NUnit and Entity Framework

This post gives a quick introduction to creating SQL CE dBs for performing Integration Tests using NUnit.

In the previous post Using NUnit and Entity Framework DbContext to programmatically create SQL Server CE databases and specify the database directory, a basic way was shown of how to create a new dB (using Entity Framework's DbContext) programmatically. This was used to generate a new dB for a test hosted by NUnit.

The subsequent post Generating a SQL Server CE database schema from a SQL Server database using Entity Framework showed how to generate a SQL CE dB schema from an existing SQL Server database.

This post ties the previous ones together. As mentioned in the first post, the reason for this is an attempt at what amounts to Integration Testing using NUnit. I'm currently building a Repository and Unit Of Work abstraction on top of Entity Framework which will allow the isolation of the dB code (in fact it will isolate and abstract away most forms of data storage). This means any business logic can be tested with a test-double that implements the Repository and UnitOfWork interfaces, which is straightforward Unit Testing. The Integration Testing is to verify that the Repository and Unit Of Work implementations work correctly.

The rest of the post isn't focused on these two patterns, though it may mention them. Instead it documents my further experience of using NUnit to write tests that interact with the dB via Entity Framework. The premise for this is that a dB already exists.

As such the approach to using Entity Framework is a hybrid of Database First and Code First, in that the dB schema exists and needs to be maintained outside of EF, and also that EF should not generate model classes, i.e. allowing the use of Code First POCOs. This is possible as the POCOs can be defined, a connection made to the dB, and then the two conflated via an EF DbContext. It then seems that EF creates the model on the fly (internally compiles it), and as long as the POCO types map to the dB types then it all works as if by magic!

The advantage of doing it this way is that the existing dB is SQL Express based, but for the Integration Testing a new dB can be created when needed, potentially one per test. In order to keep the test dBs isolated from the real dB, SQL Server Compact Edition (SQL Server CE V4) was used. Therefore the requirement was for the EF code to be able to work with SQL Express and SQL CE, with the primary definition of the schema taken from SQL Express. It's not possible to use exactly the same schema as SQL CE only has a subset of the data-types provided by SQL Server. However, the process described in the post Generating a SQL Server CE database schema from a SQL Server database using Entity Framework showed how to create semantically equivalent SQL.


From this point onwards it's assumed that an SQL file to create the dB has been generated. Now create a new C# class library project and, using NuGet, add Entity Framework, NUnit and SQL CE 4.0. All my work has been with EF 4.3.1. Following this, drag the Model1.edmx.sqlce file from the project used to generate it into the new project. You may wish to rename it, e.g. to test.sqlce.


Creating the database

The post Generating a SQL Server CE database schema from a SQL Server database using Entity Framework showed how to create a new CE dB per-test using the EF DbContext to do the hard work. A different approach is now taken, as the problem with creating a dB using DbContext is that in addition to creating any specified tables and indices etc. it also creates an additional table called '__MigrationHistory' which contains a description of the EF model used to create the dB. The description of the problem caused by this is delayed until the "Why DbContext is no longer used to create the database" section; suffice to say for the present that using the new mechanism avoids the creation of this table.

The code below is the beginnings of a test class. It is assumed all the tests need a fresh copy of the dB, hence the creation is performed in the Setup method. All this code does is create a SQL CE dB and then create the schema.

[TestFixture]
public class SimpleTests
{
    const string DB_NAME = "test.sdf";
    const string DB_PATH = @".\" + DB_NAME;
    const string CONNECTION_STRING = "data source=" + DB_PATH;

    [SetUp]
    public void Setup()
    {
        DeleteDb();

        // Create an empty SQL CE dB...
        using (var eng = new SqlCeEngine(CONNECTION_STRING))
            eng.CreateDatabase();

        // ...then execute the generated schema script against it.
        using (var conn = new SqlCeConnection(CONNECTION_STRING))
        {
            conn.Open();

            string sql = ReadSQLFromFile(@"C:\Users\Pete\work\Jub\EFTests\Test.sqlce");

            // SQL CE cannot execute batch scripts, so split the script on
            // the 'GO' separators and execute each statement individually.
            string[] sqlCmds = sql.Split(new string[] { "GO" }, int.MaxValue, StringSplitOptions.RemoveEmptyEntries);

            foreach (string sqlCmd in sqlCmds)
            {
                try
                {
                    var cmd = conn.CreateCommand();
                    cmd.CommandText = sqlCmd;
                    cmd.ExecuteNonQuery();
                }
                catch (Exception e)
                {
                    Console.Error.WriteLine("{0}:{1}", e.Message, sqlCmd);
                    throw;
                }
            }
        }
    }

    public void DeleteDb()
    {
        if (File.Exists(DB_PATH))
            File.Delete(DB_PATH);
    }

    private string ReadSQLFromFile(string sqlFilePath)
    {
        using (TextReader r = new StreamReader(sqlFilePath))
        {
            return r.ReadToEnd();
        }
    }
}
The dB file (test.sdf) will be created in the current working directory. As the test assembly is located in <project>\bin\debug, which is where the NUnit test runner picks up the DLL from, this is where it is created. If a specific directory is required then the '.\' can be replaced with the required path.

The Setup method is marked with NUnit's SetUp attribute, meaning it will be invoked on a per-test basis, creating a new dB instance for each test. The DeleteDb method could be marked with the [TearDown] attribute, but at the moment any previous dB is deleted before creating a new one. It would be fine to do both as a belt and braces approach. The reason I didn't make it the TearDown method is so that I could inspect the dB following a test if needed.

SQL CE does not support batch execution of SQL scripts, which is where it gets interesting as the SQL generated previously is in batch form. The code reads the entire file into a string and determines each individual statement by splitting the string on the 'GO' command that separates each SQL command.

To help understand the SQL, the following is the diagram of the dB I'm working with. All fields are strings except for the Ids, which are numeric.

Each of these commands is then executed. The previously generated SQL (the SQL for the dB I'm working with is below) will not work completely out of the box. The ALTER and DROP statements at the beginning don't apply as the schema is being applied to an empty dB; these should be removed. Interestingly the schema generation step for my dB seems to miss out a 'GO' between the penultimate and ultimate statements - I had to add one by hand. Finally, the comments at the end prove to be a problem as there is no terminating 'GO'; removing these fixes the problem. In the code above the exception handler re-throws the exception after writing out the details, so for everything to proceed the SQL needs modifying until it executes perfectly. If the re-throw is removed then the code will tolerate individual command failures, which in this context really just amount to warnings.

NOTE: In the original post the statements to be removed were highlighted in red and added text in blue; the changes are those described in the preceding paragraph.

-- --------------------------------------------------
-- Entity Designer DDL Script for SQL Server Compact Edition
-- --------------------------------------------------
-- Date Created: 07/29/2012 12:28:35
-- Generated from EDMX file: C:\Users\Pete\work\Jub\DummyWebApplicationToGenerateSQLServerCE4Script\Model1.edmx
-- --------------------------------------------------


-- --------------------------------------------------
-- Dropping existing FOREIGN KEY constraints
-- NOTE: if the constraint does not exist, an ignorable error will be reported.
-- --------------------------------------------------

    ALTER TABLE [RepComments] DROP CONSTRAINT [FK_RepComments_Reps];
GO

-- --------------------------------------------------
-- Dropping existing tables
-- NOTE: if the table does not exist, an ignorable error will be reported.
-- --------------------------------------------------

    DROP TABLE [RepComments];
GO
    DROP TABLE [Reps];
GO
    DROP TABLE [Roads];
GO

-- --------------------------------------------------
-- Creating all tables
-- --------------------------------------------------

-- Creating table 'RepComments'
CREATE TABLE [RepComments] (
    [CommentId] int IDENTITY(1,1) NOT NULL,
    [RepId] int  NOT NULL,
    [Comment] ntext  NOT NULL
);
GO

-- Creating table 'Reps'
CREATE TABLE [Reps] (
    [RepId] int IDENTITY(1,1) NOT NULL,
    [RepName] nvarchar(50)  NOT NULL,
    [RoadName] nvarchar(256)  NOT NULL,
    [HouseNumberOrName] nvarchar(50)  NOT NULL,
    [ContactTelNumber] nvarchar(20)  NOT NULL,
    [Email] nvarchar(50)  NULL
);
GO

-- Creating table 'Roads'
CREATE TABLE [Roads] (
    [Name] nvarchar(256)  NOT NULL
);
GO

-- --------------------------------------------------
-- Creating all PRIMARY KEY constraints
-- --------------------------------------------------

-- Creating primary key on [CommentId] in table 'RepComments'
ALTER TABLE [RepComments]
ADD CONSTRAINT [PK_RepComments]
    PRIMARY KEY ([CommentId] );
GO

-- Creating primary key on [RepId] in table 'Reps'
ALTER TABLE [Reps]
ADD CONSTRAINT [PK_Reps]
    PRIMARY KEY ([RepId] );
GO

-- Creating primary key on [Name] in table 'Roads'
ALTER TABLE [Roads]
ADD CONSTRAINT [PK_Roads]
    PRIMARY KEY ([Name] );
GO

-- --------------------------------------------------
-- Creating all FOREIGN KEY constraints
-- --------------------------------------------------

-- Creating foreign key on [RepId] in table 'RepComments'
ALTER TABLE [RepComments]
ADD CONSTRAINT [FK_RepComments_Reps]
    FOREIGN KEY ([RepId])
    REFERENCES [Reps]
        ([RepId])
    ON DELETE NO ACTION ON UPDATE NO ACTION;
GO
-- Creating non-clustered index for FOREIGN KEY 'FK_RepComments_Reps'
CREATE INDEX [IX_FK_RepComments_Reps]
ON [RepComments]
    ([RepId]);
GO

-- --------------------------------------------------
-- Script has ended
-- --------------------------------------------------

Getting the SQL into a state where it will run flawlessly is a little bit of a hassle, but given the number of times it will be used subsequently it's not a big job - well, for a small dB anyway. To verify that your dB has been created as needed, a quick and easy way to test is to comment out the call to DeleteDb() and, after a test has run, open the dB using Server Explorer within VS.



Using the dB in a test

Now that a fresh dB will be created for each test it's time to look at a simple test:

[Test]
public void TestOne()
{
    using (var conn = new SqlCeConnection(CONNECTION_STRING))
    using (var ctx = new TestCtx(conn))
    {
        ctx.Roads.Add(new Road() { Name = "Test" });
        ctx.SaveChanges();
        Assert.That(ctx.Roads.Count(), Is.EqualTo(1));
    }
}
Road in this case is defined as:

class Road
{
    [Key]
    public string Name { get; set; }
}
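
TestCtx itself isn't shown here, but a minimal sketch of what the test assumes might look like this (my assumption, not code from the original post; the DbSet property is what lets EF discover the Road type):

// Requires: using System.Data.Common; using System.Data.Entity;
class TestCtx : DbContext
{
    public TestCtx(DbConnection conn)
        : base(conn, false) // false: the caller owns (and disposes) the connection
    {
    }

    public DbSet<Road> Roads { get; set; }
}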

The first thing to note is that EF is not used to form the connection to the dB; instead one is made using the SQL CE specific classes. Attempting to get EF to connect to a specific dB instance when not referring to a named connection string in the .config file is a bit of an art (I may write another entry about this). However, EF is quite happy to work with an existing connection. This makes for a good separation of responsibilities in the code, where EF manages the interactions with the dB but the control of the connection is elsewhere.

NOTE: It is likely that each test will require a connection and a context, hence it might make more sense to move the creation of the SqlCeConnection and the context (TestCtx in this case) to a SetUp method and, as these resources need disposing of, add a TearDown method to do that. TestCtx could also be modified to pass true to the DbContext constructor to give ownership of the connection to the context, so that the connection is disposed of when the context is disposed. A sketch of that refactoring follows.
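
The refactoring might look something like this (hypothetical, not from the original post; the dB creation from the earlier Setup method would be folded in as well):

private SqlCeConnection _conn;
private TestCtx _ctx;

[SetUp]
public void CreateContext()
{
    // ...create the dB and schema as before, then:
    _conn = new SqlCeConnection(CONNECTION_STRING);
    _ctx = new TestCtx(_conn);
}

[TearDown]
public void DisposeContext()
{
    _ctx.Dispose();
    _conn.Dispose(); // unnecessary if TestCtx is given ownership of the connection
}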

I would have preferred to avoid having to define a specific derived context and instead use DbContext directly, e.g.
[Test]
public void TestTwo()
{
    using (var conn = new SqlCeConnection(CONNECTION_STRING))
    using (var ctx = new DbContext(conn, false))
    {
        ctx.Set<Road>().Add(new Road() { Name = "Test" });
        ctx.SaveChanges();
        Assert.That(ctx.Set<Road>().Count(), Is.EqualTo(1));
    }
}

However when SaveChanges() is called the following exception is thrown:

System.InvalidOperationException : The entity type Road is not part of the model for the current context.

This is because EF knows nothing about the Road type. When a derived context is created for the first time, I think EF performs reflection on any properties that expose DbSet; these are the types that form the Model. Another option is to create the model explicitly, optionally compile it, and then pass it to an instance of DbContext - but the derived context route involves a lot less code.
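
For reference, a sketch of that explicit-model alternative (my assumption of how it would look with EF 4.3's DbModelBuilder; not code from the original post):

// Requires: using System.Data.Entity; using System.Data.Entity.Infrastructure;
var builder = new DbModelBuilder();
builder.Entity<Road>(); // register each POCO that forms part of the model

using (var conn = new SqlCeConnection(CONNECTION_STRING))
{
    // Build the model against the connection's provider and compile it.
    DbCompiledModel model = builder.Build(conn).Compile();

    using (var ctx = new DbContext(conn, model, false))
    {
        ctx.Set<Road>().Add(new Road() { Name = "Test" });
        ctx.SaveChanges();
    }
}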

That's it. The final section is just a footnote about the move away from using EF to create the dB.

Why DbContext is no longer used to create the database

As mentioned, creating the dB using:
using (var ctx = new DbContext("bar.sdf"))
{
    ctx.Database.Create();
    // create schema etc.
}
causes the '__MigrationHistory' table to be created. Assuming this method was used, later on when TestCtx was used to open the dB and perform an operation, the following exception would be thrown:

System.InvalidOperationException : The model backing the 'DbContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269).
This is because the context used to create the dB was a raw DbContext (as per the previous post) whereas the dB was subsequently accessed via TestCtx. If the context used to create the dB is also changed to TestCtx then this problem goes away.
However, given that the original dB is not intended to be created nor maintained (via Code First Migrations) by EF, using the non-EF approach to dB creation removes EF from the picture completely.

Visual Studio 2012 theme support

One of the unexpected (and I would suggest from the comments, unwelcome) changes sprung on developers in the Visual Studio 2012 Beta back in February was the Metroification of the development environment.

However, eye candy (and eyesores!) come and go, and within that change is a more fundamental one - direct support for themes within the Visual Studio IDE. The Visual Studio 2012 Beta and RC include two themes - light (i.e. grey) and dark. Whilst the latter has an obvious appeal within the developer community (we all know devs who prefer green text on a black background) the former hasn't exactly been welcomed, to say the least.

Personally, rather than develop custom theme support for each tool individually I wish they'd just add a "dark" theme to Windows instead and respect the theme settings of the operating system. Obviously my view just isn't "cool" enough for the Visual Studio UX team, but I digress...

Although a campaign to retain the existing Visual Studio 2010 theme has been running on the UserVoice site since the beta arrived (see Add some color to Visual Studio 11 and Leave VS 2010 theme (and the theme editor extension) as an option) Microsoft have not indicated what - if any - changes will be made to the Visual Studio 2012 themes at RTM.

Our working assumption therefore has to be that the themes in the RTM will be broadly comparable with those in the RC (i.e. light and dark). We will find out whether that assumption is correct later this month, of course.

With that in mind, we have been working on theme support in the development branch for Visual Lint for some time now, and things are now beginning to come together:

Visual Lint running with the Visual Studio 2012 RC dark theme

Visual Lint running with the Visual Studio 2012 RC light theme

As Visual Lint uses standard Win32 controls for most of the UI (which for the most part do not support custom text/background colours), to get this far we have had to write custom painted WTL checkbox, radio button, combobox and header controls in addition to the usual WM_CTLCOLORxxxx voodoo. Other UI elements such as menus, scrollbars, command buttons etc. haven't yet been looked at, but hopefully will be in due course (there seems to be some indication in the MSDN blogs that scrollbars will be auto-themed by the RTM, but we'll see).

Within the displays themselves, the text and background colours of each item are checked for adequate contrast, and the text colour adjusted (by tweaking the luminance) automatically if need be.
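
As an illustration of the idea (a hypothetical sketch of my own, not Visual Lint's actual code):

#include <windows.h>
#include <cmath>

// Perceived luminance of a COLORREF in the range 0..255 (Rec. 601 weights).
static double Luminance(COLORREF c)
{
    return 0.299 * GetRValue(c) + 0.587 * GetGValue(c) + 0.114 * GetBValue(c);
}

static BYTE Blend(BYTE from, BYTE to, double t)
{
    return static_cast<BYTE>(from + (to - from) * t);
}

// If text and background are too close in luminance, move the text colour
// towards white (on a dark background) or black (on a light one).
COLORREF EnsureReadableText(COLORREF text, COLORREF background)
{
    const double minContrast = 96.0; // illustrative threshold

    if (std::fabs(Luminance(text) - Luminance(background)) >= minContrast)
        return text; // contrast is already adequate

    COLORREF target = (Luminance(background) < 128.0) ? RGB(255, 255, 255)
                                                      : RGB(0, 0, 0);
    const double t = 0.75; // how far to shift towards the target
    return RGB(Blend(GetRValue(text), GetRValue(target), t),
               Blend(GetGValue(text), GetGValue(target), t),
               Blend(GetBValue(text), GetBValue(target), t));
}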

Although the Visual Studio interfaces expose the colours used in the active theme (via IVsUIShell2::GetVSSysColorEx()), they do not seem to provide any way of detecting if the theme has changed (or indeed, finding out which theme is actually running at the time). Our workaround for this is simply to reload the colour scheme whenever the "Tools|Options" command has been executed. We don't really care which theme is running after all - just what colour values it uses, and where.

Indeed, one of the first things we did while working on this was to dump all of the colour values used by the VS2012 RC light & dark themes, as well as the default VS2010 theme, into spreadsheets so we could use them for testing without firing up a host instance of the IDE (developing add-ins may be fun, but it is also much slower than working on your own executable).

Finally, it is a little known fact that the Visual Studio IDE has had colour scheme support internally for some time, so the scheme we have designed will also work with Visual Studio 2010 if you have the theme editor extension installed:

Visual Lint running with Visual Studio 2010 with a modified 'Expression' theme

Needless to say, all of this is proving to be a major task, and it has therefore diverted significant resources from other things we should really have been working on this summer. As a consolation, the theme code we're developing is generic (albeit only on Windows), so can also be used with Eclipse 4.0 (I note that themes are coming to that IDE as well) when the time comes.

Another obvious benefit is of course that there's potentially at least one new CodeProject article (want a themed XP button with a custom background colour? We know how to do it now) in all of this once the dust settles and the inevitable bugs have crawled away. It's about time I wrote a new one, anyway.

Once Visual Lint theme support is complete, we'll obviously also take a look at ResOrg. Beyond that, I think a new article is a foregone conclusion, once we've cleaned the code up a bit and built a good enough demo project...

Generating a SQL Server CE database schema from a SQL Server database using Entity Framework

In a previous entry I described how to programmatically create (and destroy) a SQL CE DB for integration testing using NUnit. Since getting that working I ran into a couple of other problems which I've more or less solved, so I thought I'd write those up. To begin with, though, this is a prequel post describing how to obtain the SQL script used to create the SQL CE DB.

If you happen to be working exclusively with CE then you'll already have your schema file. In my case I'm using SQL Express and, as this is experimental work, I created my DB by hand. However, using the EF it's pretty easy to obtain the schema and have the EF wizard generate the CE schema. This is important as there are differences in the dialect of SQL used by SQL Express and SQL CE, and it's easier to have a tool handle those, though it doesn't handle all of them.

The basic flow is to generate an EF model (EDMX) file from the existing SQL Express database and then use the 'Generate database from model' functionality. It is at this point that the target SQL DB can be chosen, i.e. SQL Server, SQL Server CE or some others.

To create a model requires adding a 'New Item' of type 'ADO.Net Entity Data Model' to a VS project, so first a new dummy project needs creating. This is where it gets a little complicated, as not any type of project will do. I'm working with CE 4 and require a schema for that version of the DB (creating one for 3.5 works, but I like to keep things as close to ideal as possible). Due to this constraint it is necessary to choose a Web type project, as for some reason the VS2010 integration provided by EF only supports the generation of CE 4 DBs for Web projects. If a simple C# Windows Console project is selected then you're limited to CE 3.5. Thus the simplest project type is the 'ASP.Net Empty Web Application', as shown below.


Having done this, next add a new item of type ADO.Net Entity Data Model as below. NOTE: The project will have to reference the Entity Framework assemblies.  The easiest way to do this (& the one most people are probably using) is to use the NuGet package.


Then follow the wizard.


Selecting "Generate from database".


Choose your SQL Express (or SQL Server) DB, but uncheck "Save entity connection settings in Web.Config as:" - as we're converting to SQL CE, we want to minimize anything related to other types of SQL Server.


Finally, select the SQL elements you require. In this example only the existing tables were selected. As this is generating the EF model from an existing database, no SQL file is generated - just the model, whose diagram is shown below.


The next phase is to generate the SQL from the model (which was generated from the hand-crafted DB), making sure the SQL that's generated is compliant with SQL CE.

To generate the schema, right-click and select "Generate Database from Model..." (the counterpart of the wizard used earlier).


This brings up the "Generate Database" wizard, which is very similar to the "Entity Data Model" wizard previously used to create the model. From here choose the "New Connection" option, which pops up another set of dialogs. On the first, choose "Microsoft SQL Server Compact 4.0" as the type of data source.

Clicking on Continue then leads to the next dialog, where you need to create a DB.



OK-ing this leads back to the "Generate Database" wizard.


This time check the "Save entity connection settings in Web.Config" checkbox. This information will be useful later (to be covered in a different post). Clicking "Next", the SQL is generated and presented in the wizard.


The SQL can be copied and pasted directly from here, or pressing "Finish" will save it to the file indicated at the top of the dialog box. This file is added to the project. The following prompt will appear when "Finish" is pressed.
 

This doesn't really matter as this is a throwaway project, but having the updated schemas may be useful, so go with "Yes".

The SQL can now be used to configure an empty SQL CE 4.0 database. The easiest way is to open the SQL file, right-click, and select the "Execute SQL" menu item.


This brings up the SQL Server dialog, from which, if "New Database" is selected, a CE 4 one can be specified.


Having specified a location and pressed "Ok", the SQL script is executed. As can be seen below, this is not without errors. However, this isn't anything to worry about: the errors are down to dropping tables and indices that don't exist yet, as it's a newly created DB. Performing the same steps again, but skipping the creation of the DB file as it already exists, sees the SQL script execute flawlessly.



The final picture shows the newly created database in VS2010's Server Explorer demonstrating that the tables were indeed created.


The basis for this post is my experimentation on using NUnit to programmatically test some DB-based functionality. If a single instance of a database suffices for all your tests, you can execute the SQL by hand as above and follow these steps. In my case I want a fresh database per test, so I need to automate the running of the SQL script combined with the creation and destruction of the underlying database. The creation and deletion aspect was covered in a previous post, but the next step will have to wait until a later one.

I guess the feedback actually did work

After all the brouhaha over Visual Studio 2012 not being able to build executables for Windows XP, it looks like Microsoft has reconsidered: http://blogs.msdn.com/b/vcblog/archive/2012/06/15/10320645.aspx Pity that we’ll have to wait for the update but at least those of us who still have clients that are exclusively XP can use a modern compiler…

If you want to remove a (C++) project from a Visual Studio 2010 solution

… make sure that you have removed all dependencies on the project that you are about to remove before you remove the project from the solution. If you don't, the projects that still depend on the removed project will retain those dependencies, but the dependencies will have become invisible, and the only way to rid yourself of the "phantom dependencies" is to edit the affected vcxproj files with a text editor and remove the dependency entries manually.

Flashmob daily scrum

I think our team is too big to hold a daily scrum meeting, so I turned to a couple of people near me on Wednesday and asked "What did you do yesterday? What are you doing today? What's holding you up?"
I answered as well.
The next day, I did the same again with a different group of people, announcing "Flash-mob scrum" as we started.
Today I rounded up a couple of people from previous days and we "flash-mob scrummed" with two new people. I'm hoping it might just work.
This was done in a spirit of TCC, larking about, but based on previous practice, which is vital for TCC. The team seem to be talking to each other a bit more too.

Introducing VisualLintGui

If you have been following me (@annajayne) on Twitter, you may have noticed me talking about something called "VisualLintGui".

This is actually the second of two projects (the first being VisualLintConsole - the command line version of Visual Lint) we got underway after the release of Visual Lint 3.0.

Now that VisualLintConsole is out in the wild, we have turned our attention to VisualLintGui. This is, as the name suggests, a standalone Visual Lint application with a graphical user interface - basically a text editor focused on code analysis:

VisualLintGui - the standalone Visual Lint application.

Although it has been fully functional in terms of analysis functions for quite some time, until recently we were not able to devote a great deal of time to the details of its user interface. That has now changed, and since February VisualLintGui has gained many essential capabilities including a syntax colouring editor with analysis issue markers, MDI tabs, Find/Replace and Source/Header flip to name but a handful of the more obvious recent changes.

VisualLintGui is currently capable of analysing projects for Visual Studio, Visual C++, Eclipse, CodeGear C++ and AVR Studio 5.0, but it can potentially analyse a far wider variety of codebases than that.

Indeed, one of the reasons we have been keen to develop it is to provide a way to support embedded IDEs for which developing a Visual Lint plug-in is not a viable proposition. As such we expect to add support for further project and workspace file formats as and when our customers need them.

VisualLintGui currently resides in our Visual Lint development branch, but given the recent pace of development on it we are likely to look at porting it back into Visual Lint 3.5 in the not too distant future.

In the meantime we will have a development build on our stand at the ACCU Conference next week, so if you are going please do come and take a look.

Hannametoden – how to solve Rubik's Cube (as shown on TV2)

Here is a simple description of how to solve Rubik's Cube (PDF). I wrote it as a textbook for my daughter Hanna when she was 8 years old – hence the name Hannametoden ("the Hanna method"). It is a simplified version of a method used by the best cubers in the world (CFOP / Fridrich). She spent a couple of days learning to solve the cube on her own from this "recipe". We visited "God Morgen Norge" on TV2 on 17 February 2012, where among other things this method was presented (article).

English summary: this is a very simple description of how to solve the Rubik's cube. I wrote it for my then 8 year old daughter – hence the name of the method. It is a simplified version and a strict subset of the method used by the best cubers in the world. It is in Norwegian, but since it is a visual guide you might enjoy it anyway. Click the PDF link above.

Why I still use a separate editor

There is a lot that modern IDEs do well, but uncluttered writing space isn't one of them. Once you add the various views of your project, the debug window, the source control window and various other important panes, you're left with a tiny viewport into your code. The visual clutter can be disabled of course, but you'll get it back sooner or later - when you switch back to debug mode or build mode, for example.

Halfway through GoingNative 2012

It’s almost time to go back for the second day, but before I do I’d like to suggest that if you haven’t had a chance to attend in person or watch the livecast, you see if you can find the videos online. My understanding is that they should be available - I’m writing this on my phone so I can’t be bothered to look at the moment, but I’ll check later.

ResOrg 2.0 has been released

It's done. After a rather extended incubation period ResOrg 2.0.0.15 (the first public ResOrg 2.0 build) was uploaded earlier this morning, and the ResOrg product pages updated to match.

If you have used ResOrg 1.x before, you will notice that the user interface of ResOrg 2.0 is subtly different from its predecessor - notably in the Visual Studio plug-in (which now of course supports Visual Studio 2008 and 2010...).

In particular, the old (and rather limited) "ResOrg.NET Explorer" toolwindow has been replaced by a much more useful "Symbol Files Display" which is also available in the standalone application.

If you are using Visual Studio 2010, it might interest you to know that ResOrg 2.0 can automatically update Ribbon Designer (.mfcribbon-ms) files when an ID referenced in a ribbon resource is renumbered.

I won't include any screenshots in this post as a couple of good ones were included in the previous post, however if you are reading this post in your RSS reader you can find them in the blogpost ResOrg 2.0 update.

Moving to a multi-VHD Windows installation to separate work and personal data

I had been thinking about setting myself up with a way to work from home in a disconnected fashion. Most of the places I’ve worked at in the past required me to remote into the work desktop, which is a good idea if both sides have 100% uptime on their network connections and no issues with adverse weather. In reality this meant that the connection tended to be unstable precisely when the weather dictated that one really, really wanted to work from home on a particular day - because snowfall was horizontal, for example.

Mocking in C++

ACCU London's July 2011 talk was about mocking in C++, given by Ed Sykes and hosted by 7 City.

Ed talked about MockItNow and Hippomocks. He pointed out, as has been said many times before, that Mocks aren't Stubs.

I can no longer remember all the details so will have to try these out for myself to see how they work.
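
From memory, a Hippomocks expectation looks roughly like this - the interface is invented for illustration:

#include "hippomocks.h"

struct IWarehouse
{
    virtual ~IWarehouse() {}
    virtual bool Remove(const char* item, int count) = 0;
};

void TestRemove()
{
    MockRepository mocks;
    IWarehouse* warehouse = mocks.Mock<IWarehouse>();

    // Expect exactly one call to Remove, and stub its return value
    mocks.ExpectCall(warehouse, IWarehouse::Remove).Return(true);

    warehouse->Remove("cola", 50);
    // Expectations are verified when 'mocks' goes out of scope
}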

Many thanks to Ed for a great talk, though.

Another good reason to keep source file sizes small

Merging a file between SCM branches that is several thousand lines in size and has significant changes in both branches is a good way to have an unpleasant day, even if the SCM that’s being used has good support for cross-branch merging. Yes, I know, ideally one tries to make sure that two branches don’t diverge that far but that’s not always possible, especially if there are significant changes to the design that affect the merge.

Deep C (and C++)

Programming is hard. Programming correct C and C++ is particularly hard. Indeed, both in C and certainly in C++, it is uncommon to see a screenful containing only well defined and conforming code. Why do professional programmers write code like this? Because most programmers do not have a deep understanding of the language they are using. While they sometimes know that certain things are undefined or unspecified, they often do not know why it is so. In these slides we will study small code snippets in C and C++, and use them to discuss the fundamental building blocks, limitations and underlying design philosophies of these wonderful but dangerous programming languages.

Jon Jagger and I just released a slide deck to discuss the fundamentals of C and C++ (slideshare, pdf).
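
A classic example of the kind of snippet the deck is built around (this one is mine, not from the slides):

#include <cstdio>

int main()
{
    int i = 3;
    i = ++i + i++;           // undefined behaviour: i modified twice between sequence points
    std::printf("%d\n", i);  // compilers may legitimately print different values - or worse
    return 0;
}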

Visual Lint and Atmel AVR Studio 5

From our perspective one of the more intriguing embedded environments to appear recently is Atmel's AVR Studio 5.

When I first saw a screenshot of this IDE (it was mentioned in a post in the CodeProject Lounge) it was immediately obvious that this was some sort of Visual Studio derivative.

In fact, although it uses GCC toolchains, the environment is based on the Visual Studio 2010 isolated shell (which incidentally is something we briefly considered using ourselves for a future standalone GUI version of Visual Lint, but decided against because of its complexity and the size of the download).

It obviously occurred to us then that as a Visual Studio derivative, it shouldn't be too difficult to get Visual Lint running within it. The first step was obviously to install the IDE in a VM (XP SP3 - doesn't XP look a bit old these days...?) and experiment with some projects.

AVR Studio 5 codebases use the Visual Studio 2010 solution file format (albeit rebadged as a .avrsln file) and a new MSBuild-based project file format (.avrgccproj), so the first thing we obviously had to do was implement parsers for these files (something that will also benefit LintProject Pro, of course). Once that was done, we turned our attention to getting Visual Lint to load within the IDE itself.

This turned out to be fairly straightforward. Although AVR Studio 5 does not seem to support COM add-in registration in HKEY_LOCAL_MACHINE (which is how the Visual Lint add-in registers in Visual Studio), the corresponding registration in HKEY_CURRENT_USER\Software\Atmel\AVRStudio\5.0\AddIns does work. Although this is problematical from an installation point of view (see my previous post on the Visual Studio 11 Developer Preview) it is not a showstopper by any means.
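
For reference, a hypothetical sketch of that registration done from code with the Win32 registry API - everything below the AddIns path (the ProgID and the value name) is invented for illustration:

#include <windows.h>

bool RegisterAddInForCurrentUser()
{
    HKEY key = 0;
    const wchar_t* path =
        L"Software\\Atmel\\AVRStudio\\5.0\\AddIns\\MyAddIn.Connect";

    if (RegCreateKeyExW(HKEY_CURRENT_USER, path, 0, 0, 0,
                        KEY_SET_VALUE, 0, &key, 0) != ERROR_SUCCESS)
        return false;

    DWORD loadBehavior = 3;    // by add-in convention, 3 = load at startup
    RegSetValueExW(key, L"LoadBehavior", 0, REG_DWORD,
                   reinterpret_cast<const BYTE*>(&loadBehavior),
                   sizeof(loadBehavior));
    RegCloseKey(key);
    return true;
}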

With manual add-in registration in place, Visual Lint loaded within the IDE. Although a few minor tweaks were needed to work around issues such as AVR reporting itself as "Visual Studio Express Edition, version 1.0" (which caused the version detection code in Visual Lint to default to 16 colour command bitmaps!) those were easily addressed.

As a result, we now have AVR Studio 5 running with a development build of Visual Lint:

Visual Lint running within AVR Studio 5: Visual Lint Status View.
Visual Lint running within AVR Studio 5: Analysis Status and Results Displays.

Although we still have quite a bit to do (not least the code editor markers and installer) before AVR Studio 5 can become a supported host environment for Visual Lint this is a very promising start. Needless to say, beta testers are welcome.

Useful collection of Qt debug visualizers for Visual Studio

I had to reinstall VS2010 at work and, because I clearly didn’t think this all the way through, forgot to save my autoexp.dat file before removing the old installation. And of course I didn’t realise what had happened until I had to dig deeper into some Qt GUI code that wasn’t quite working as expected, and of course I was presented with the raw data. Fortunately a quick search on Google led me to the page Human Machine Teaming Lab | Knowledge / Qt, which contains a very comprehensive set of visualisers.

Power series for PCA

The book says estimate the value of the eigenvector, then iterate.
But my vector cycled as (1, 1)^T, (-1, 1)^T, (1, 1)^T, which is a bit of a problem.
Oh for precise instructions.
I'll report back when I find a suitable estimate for the starting value.
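
For context, the iteration in question is presumably the standard power method (my notation, not the book's):

v_{k+1} = \frac{A v_k}{\lVert A v_k \rVert}

A period-2 cycle like this is what you get when the starting vector is an equal mix of two eigendirections whose eigenvalues have the same magnitude but opposite signs - the iterate then flips between them forever, which is why the starting estimate genuinely matters.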

The Champion, the Chief and the Manager

Successful product development projects are often characterized by having an enthusiastic product champion with solid domain knowledge, a visible and proud chief engineer, and a clever and supportive project manager. And of course, the most important thing, a group of exceptional developers. From an organizational point of view it makes sense to require that all projects should clearly identify these three roles:

The Champion: The product champion is a person that dreams about the product, has a vision about how it can be used and can answer questions about what is important and what is less important. The product champion is required to have a deep and solid domain knowledge and will often play the role of a customer proxy in the project. This position can only be held by a person that is deeply devoted and has a true passion for the product to be created. The product champion is the main interface between the project and the customer/users. (Sometimes also known as: Product Manager, Project Owner, Customer Proxy…)

The Chief: The chief engineer is a technical expert that has a vision of the complete solution and is always ready to defend this vision. At any time, the chief engineer should be able, and willing to stand up to proudly describe the solution and explain how everything fits together. He/she should feel responsible for technological decisions that the exceptional developers do, but also make sure that the solution is supporting the business strategy. The chief engineer is the main communication channel between this project and other projects. (Sometimes also known as: System Architect, Tech Lead, Shusa, …)

The Manager: The project manager is a person that leads a team to success by managing the resources on a project in an effective and sensible way. He/she will be responsible for actively discovering and removing impediments. The project manager is the main interface between the project and corporate management. (Sometimes also known as: Scrum Master, Team Leader, …)

Of course, for very small projects these three roles can be fulfilled by one person, but for projects of some size there should be three people filling these three roles: one product champion, one chief engineer and one project manager. These three people must work together as a team, forming an all-round defence (aka kringvern) around the project, while being available to the developers at any time. Their task is to “protect” and “promote” the project to the outside world so that the exceptional developers can focus on doing the job.

I believe that identifying these three roles is the only thing an organization needs to impose in order to increase the chance of success. Then the team of exceptional developers together with their servants decide everything else, including which methodology and technology to use.

Visual Studio 2010 SP1 has been released

For those who are using Visual Studio 2010, the service pack has now been officially released: Visual Studio 2010 Service Pack 1 General Availability - Visual C++ Team Blog - Site Home - MSDN Blogs Edit: The download link doesn’t seem to work for me yet; given that it’s only gone General Availability today it might be worth checking back a little later. Edit again - we have a general availability download link: http://www.

If your VS2010 C++ build is constantly rebuilding a project that hasn’t changed

Check if you’re seeing the following output in the build pane: InitializeBuildStatus: Creating ".unsuccessfulbuild" because "AlwaysCreate" was specified. I’ve just fixed a bunch of these errors in one of our solutions here, and all of them were caused by one of two issues: either the project file referenced files that were not present in the source tree, or a custom build step was supposed to generate a file but didn’t (or the file ended up in the wrong place). In order to find out if there are missing files that trigger the perma-rebuild, you’ll also have to enable Visual Studio’s debug output as described in this stackoverflow answer.

How to view undecorated DLL-exported C++ symbols in Visual Studio 2010

Yes, it’s one of those “note to self” posts, but I keep forgetting how to do it. As the first step, you run dumpbin /EXPORTS on the DLL and redirect the output into a file, because the utility that unmangles the names (undname.exe) doesn’t appear to be able to take piped input via stdin. Then run undname <file>, with <file> being the file that contains the exported symbols. At least that way the symbols become mostly readable.
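
In full - with a hypothetical mylib.dll standing in for the real thing - that would be:

dumpbin /EXPORTS mylib.dll > exports.txt
undname exports.txt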

Boost.Log, preventing the ‘unhandled exception’ in Windows 7 when attempting to log to the event log

I recently ran into a requirement for retrofitting a logging library to an existing project. My first instinct was to throw Pantheios at it, as I’ve used it before and It Just Worked. Unfortunately in this case we needed the ability to log to more than two event sinks, and it looked like this was getting a little awkward with Pantheios, which prompted me to look at Boost.Log. After some digging through the documentation and the samples, I managed to get the logging going to the three event sinks we needed.
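
For anyone trying the same thing, a minimal sketch of registering more than one sink with Boost.Log looks like this (console and file sinks shown - the Windows event log sink is configured separately via the event log backend, and the file name here is made up):

#include <iostream>

#include <boost/log/trivial.hpp>
#include <boost/log/utility/setup/common_attributes.hpp>
#include <boost/log/utility/setup/console.hpp>
#include <boost/log/utility/setup/file.hpp>

int main()
{
    namespace logging = boost::log;

    logging::add_console_log(std::clog);   // sink 1: console
    logging::add_file_log("app.log");      // sink 2: file
    logging::add_common_attributes();      // timestamps etc.

    BOOST_LOG_TRIVIAL(info) << "logged to both sinks";
    return 0;
}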

A couple of noteworthy links

It’s a bit of a link roundup from the past couple of months. Most of you will probably have seen these already, as I’d think you’re reading the same blogs. C++ links: VS2010 SP1 Beta: What’s in it for C++ developers. While I’m not going to chance installing the beta on my main developer workstation, it looks like there are some interesting features in the service pack. I hope that the IDE stability has also been improved.

Sometimes, std::set just doesn’t cut it from a performance point of view

A piece of code I recently worked with required data structures that hold unique, sorted data elements. The requirement for the data being both sorted and unique came from it being fed into std::set_intersection() so using an std::set seemed to be an obvious way of fulfilling these requirements. The code did fulfill all the requirements but I found the performance somewhat wanting in this particular implementation (Visual Studio 2008 with the standard library implementation shipped by Microsoft).
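
For comparison, the usual alternative that meets the same sorted/unique requirements is a plain std::vector with the invariant established up front - a sketch:

#include <algorithm>
#include <iterator>
#include <vector>

std::vector<int> Intersect(std::vector<int> a, std::vector<int> b)
{
    // Establish the sorted/unique invariant once per input
    std::sort(a.begin(), a.end());
    a.erase(std::unique(a.begin(), a.end()), a.end());
    std::sort(b.begin(), b.end());
    b.erase(std::unique(b.begin(), b.end()), b.end());

    std::vector<int> result;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(result));
    return result;
}

The contiguous storage tends to beat the node-based std::set on both locality and allocation count, which is usually where the time goes.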

Quick tip if you see ‘bad DLL or entry point msobj80.dll’ when building software with VS2008

Try stopping mspdbsrv.exe (the process that generates the .pdb files during a build) if it is still running. My understanding is that it’s supposed to shut down at the end of the compilation, but it seems it can turn into a zombie process; if that happens, you can get the above error when linking your binaries. Anyway, I just ran into this issue and stopping the process via the Task Manager resolved it for me.
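
If you’d rather not reach for Task Manager, the equivalent from a command prompt is:

taskkill /F /IM mspdbsrv.exe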