Windows 8 Pro on an early 2009 iMac 21.5 (Core 2 Duo)

A couple of weeks back I thought I'd have a go at writing a Windows Store App.  To do this requires Windows 8.  At the time I was running Windows 7 Home Premium on an early 2009 iMac 21.5 (Core 2 Duo).  This had been installed using Boot Camp, including installing the Boot Camp Assistant and the drivers supplied by Apple.

To upgrade to Windows 8 I wanted to avoid re-installing all my apps and data etc., so I went with an in-place upgrade.  This all seemed to work properly and soon I was running Windows 8 and could access the Windows Store App templates from Visual Studio.  However, soon after, Windows 8 kept crashing, or rather freezing.  It got to the point that after every reboot I'd be lucky to get 5 minutes of uptime between each freeze.

Given that Apple haven't provided Windows 8 drivers yet this wasn't exactly a surprise.  I decided to try and work around this by rebooting into OS X and using VMware Fusion to access the Boot Camp partition.  Whilst rebooting into OS X I managed to corrupt the Windows installation.  I use a non-Apple wireless keyboard (as I need the insert, delete, home & end keys plus the easily accessible cursor keys for VS development) so holding down Alt to select the OS to boot into didn't work.  When I realized it was going back into Windows I just turned the machine off.  After a couple of times the Windows installation was toast!  To get back to the point of trying Fusion I had to do a fresh Windows install: in this case a minimal Windows 7 installation, just enough to allow the download of Windows 8.  I then installed Windows 8 using the preserve-nothing option.

Having now gone through the steps I wanted to avoid, I decided to give the new installation a go via direct boot, i.e. no Fusion.  That was two weeks ago.  Since then I've re-installed all the apps and my personal data and (fingers crossed) haven't had a single crash.  As the freezes were usually happening during some graphical operation, e.g. a status bar updating, I assumed the fault probably lay with the video drivers, so I didn't install the Boot Camp Assistant and in particular the Windows 7 drivers from the OS X disc.  Well, I did install one.  After a while I noticed I wasn't getting any sound even though all the audio drivers and hardware claimed they were happy.  Eventually I installed the Cirrus Logic driver, which made the speakers work.  I haven't gone anywhere near the NVIDIA drivers.

So, the whole point of this post: for those who run Windows via Boot Camp on early iMacs and want to run Windows 8, a fresh install (or maybe uninstalling the Boot Camp-supplied drivers prior to the upgrade) is probably the way to go.

How to make a self-signed SSL certificate work with Windows RT’s Mail App on a Microsoft Surface RT

Long title, I know… I was trying to get Windows RT’s Mail App to access the email on my own server. The server uses IMAPS with a self-signed certificate as I only want SSL for encryption and don’t really need it for authentication purposes as well. As long as it is the correct self-signed certificate I’m happy. The Mail app however rejects certificates that weren’t signed by a trusted authority and doesn’t offer an obvious exception mechanism (like Thunderbird or Apple Mail) that circumvents the need for a trusted certificate.

I don’t want to see another ‘using namespace xxx;’ in a header file ever again

There, I’ve said it. No tiptoeing around. As a senior developer/team lead, I get involved in hiring new team members and in certain cases also help out other teams with interviewing people. As part of the interview process, candidates are usually asked to write code, so I review a lot of code submissions. One trend I noticed with recent C++ code submissions is that the first line I encounter in any header file is a ‘using namespace xxx;’ directive.

That’s another warranty voided, then

Last night I did something I was adamant I wasn’t going to do, namely rooting my Android phone and installing CyanogenMod on it. Normally I don’t like messing with (smart)phones - they’re tools in the pipe wrench sense to me, they should hopefully not require much in the way of care & feeding apart from charging and the odd app or OS update. Of course, the odd OS update can already be a problem as no official updates have been available for this phone (a Motorola Droid) for a while, and between the provider-installed bloatware that couldn’t be uninstalled and the usual cruft that seems to accumulate on computers over time, the phone was really sluggish, often unresponsive and pretty much permanently complained about running out of memory.

A(nother) tool post

I generally don’t post that much about the tools I use as they’re pretty standard fare and most of the time, your success as a programmer depends more on your skills than on your tools. Mastery of your tools will make you a better software engineer, but if you put the tools first, you end up with the cart before the horse. I guess people have noticed that I use Emacs a lot :).

Specifying the directory to create SQL CE databases when using Entity Framework

In the last few posts I've been describing how to create instances of SQL CE in order to perform automated Integration Testing using NUnit, accessing the dB using Entity Framework.  I covered creating the dB using both Entity Framework and the SQL CE classes.  In particular I wanted control over the directory the dB was created in, but rather than tying it to a specific location I wanted it to use the current working directory.

Using the Entity Framework's DbContext constructor that takes the name of a connection string or database name, it's suddenly very easy to end up NOT creating the dB you expected where you expected it to be.  This post shows how to avoid that.  Generally speaking the use of the DbContext constructor that takes a Connection String should be avoided unless the name of a connection string from the .config file is being specified.

Example 1 - Using the SqlCeEngine class
const string DB_NAME = "test1.sdf";
const string DB_PATH = @".\" + DB_NAME; // Use ".\" for CWD or a specific path
const string CONNECTION_STRING = "data source=" + DB_PATH;

using (var eng = new SqlCeEngine(CONNECTION_STRING))
{
    eng.CreateDatabase();
}

using (var conn = new SqlCeConnection(CONNECTION_STRING))
{
    conn.Open(); // do stuff with db...
}

The important thing to note is that the constructor for SqlCeEngine that takes an argument requires a Connection String, i.e. a string containing "data source=...".  Just specifying the dB path is not sufficient.  To specify a specific directory, include the absolute or relative path.  To use the current working directory, e.g. bin\debug, just use ".\".
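For example, a minimal sketch (the C:\temp directory here is hypothetical):

// Relative path: the dB is created relative to the current working directory.
const string CWD_CONNECTION_STRING = @"data source=.\test1.sdf";

// Absolute path: the dB is created in C:\temp regardless of the CWD.
const string ABS_CONNECTION_STRING = @"data source=C:\temp\test1.sdf";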

Example 2 - Using DbContext (doesn't work)
using (var ctx = new DbContext("test2.sdf"))
{
    ctx.Database.Create();
}

This code appears to work but doesn't create an instance of a SQL CE dB as desired.  Instead it creates a LocalDB instance in the user's home directory.  In my case: C:\Users\Pete\._test.sdf.mdf (& corresponding log file).  This is not really surprising as Entity Framework had no way of knowing that a SQL CE dB should be created.

Example 3 - Using DbContext (does work)
Database.DefaultConnectionFactory =
    new SqlCeConnectionFactory(
        "System.Data.SqlServerCe.4.0",
        @".\", "");

using (var ctx = new DbContext("test2.sdf"))
{
    ctx.Database.Create();
    // do stuff with ctx...
}

The difference between the last example and this one is that the default type of dB EF should create has been changed.  As shown, this is done by installing a different connection factory.

The second parameter to SqlCeConnectionFactory is the directory that the dB should be created in.  Just like the first example, specifying ".\" means the current working directory, and specifying an absolute path to a directory will lead to the dB being created there.

NOTE: As per the post Integration Testing with NUnit and Entity Framework, be aware that creating a dB using the Entity Framework results in the additional table '__MigrationHistory' being created, which EF uses to keep the model and dB synchronized.

NOTE1: Whereas SqlCeEngine is a SQL CE class from the System.Data.SqlServerCe assembly, SqlCeConnectionFactory appears to be part of the System.Data.Entity assembly which is part of the Entity Framework.


In the above example the string passed to DbContext can be a name (of a connection string from the .config file) or a connection string.  In this case passing the name of the dB, i.e. test2.sdf, is more or less equivalent to passing "data source=test2.sdf".  If the '.sdf' suffix is omitted when using "data source" then the resultant dB is called test2, but if just test2 is passed then the resulting dB will be called test2.sdf.
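To illustrate the naming behaviour just described, a sketch (my reading of the above; it assumes the SqlCeConnectionFactory from Example 3 has been installed):

// Passing a plain name: EF appends the missing '.sdf', creating test2.sdf.
using (var ctx = new DbContext("test2"))
{
    ctx.Database.Create();
}

// Passing a connection string without the suffix: the file is just 'test2'.
using (var ctx = new DbContext("data source=test2"))
{
    ctx.Database.Create();
}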

Example 4 - Using DbContext and the .config file
using (var ctx = new DbContext("test5"))
{
    ctx.Database.Create();
}

App or Web .config
<connectionStrings>
  <add name="test5"
       providerName="System.Data.SqlServerCe.4.0"
       connectionString="Data Source=test5.sdf"/>
</connectionStrings>

This time no factory is specified, but the argument to DbContext is the name of a Connection String in the .config file.  As can be seen, this contains similar information to that given to the factory, enabling EF to create a dB of the correct type.

To use instances of these databases, rather than calling the create method on the context, just use the context directly, or more likely in the case of EF a derived context, which brings us to one last example.

Example 5 - Using a derived context and .config file
public class TestCtx : DbContext
{
}

using (var ctx = new TestCtx())
{
    ctx.Database.Create();
}

App or Web .config
<connectionStrings>
  <add name="TestCtx"
       providerName="System.Data.SqlServerCe.4.0"
       connectionString="Data Source=test6.sdf"/>
</connectionStrings>

If a derived context is used, which will almost certainly be the case, then when an instance of it is created and the dB created, EF will look for a Connection String in the .config file with the same name as the context and take the information from there.

Integration Testing with NUnit and Entity Framework

This post gives a quick introduction to creating SQL CE dBs for performing Integration Tests using NUnit.

In the previous post Using NUnit and Entity Framework DbContext to programmatically create SQL Server CE databases and specify the database directory a basic way of creating a new dB (using Entity Framework's DbContext) programmatically was shown.  This was used to generate a new dB for a test hosted by NUnit.

The subsequent post Generating a SQL Server CE database schema from a SQL Server database using Entity Framework showed how to generate a SQL CE dB schema from an existing SQL Server database.

This post ties the previous ones together.  As mentioned in the first post, the reason for this is an attempt at what amounts to Integration Testing using NUnit.  I'm currently building a Repository and Unit Of Work abstraction on top of Entity Framework which will allow the isolation of the dB code (in fact it will isolate and abstract away most forms of data storage).  This means any business logic can be tested with a test-double that implements the Repository and UnitOfWork interfaces, which is straightforward Unit Testing.  The Integration Testing is to verify that the Repository and Unit Of Work implementations work correctly.
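As a rough sketch of the shape of those abstractions (illustrative names only, not the actual interfaces from my project):

using System;
using System.Linq;

public interface IRepository<T> where T : class
{
    void Add(T entity);
    void Remove(T entity);
    IQueryable<T> Query();
}

public interface IUnitOfWork : IDisposable
{
    IRepository<T> Repository<T>() where T : class;
    void Commit(); // persist all pending changes, e.g. via DbContext.SaveChanges()
}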

The rest of the post isn't focused on these two patterns, though it may mention them.  Instead it documents my further experience of using NUnit to write tests that interact with a dB via Entity Framework.  The premise for this is that a dB already exists.

As such the approach to using Entity Framework is a hybrid of Database First and Code First: the dB schema exists and needs to be maintained outside of EF, and EF should not generate model classes, i.e. allowing the use of Code First POCOs.  This is possible as the POCOs can be defined, a connection made to the dB, and then the two conflated via an EF DbContext.  It seems that EF creates the model on the fly (internally compiles it), and as long as the POCO types map to the dB types then it all works as if by magic!

The advantage of doing it this way is that the existing dB is SQL Express based, but for the Integration Testing a new dB can be created when needed, potentially one per test.  In order to keep the test dBs isolated from the real dB, SQL Server Compact Edition (SQL Server CE V4) was used.  Therefore the requirement was for the EF code to be able to work with SQL Express and SQL CE, with the primary definition of the schema taken from SQL Express.  It's not possible to use exactly the same schema as SQL CE only supports a subset of the data-types provided by SQL Server.  However, the process described in the post Generating a SQL Server CE database schema from a SQL Server database using Entity Framework showed how to create semantically equivalent SQL.


From this point onwards it's assumed that an SQL file to create the dB has been generated.  Now create a new C# class library project and using NuGet add Entity Framework, NUnit and SQL CE 4.0.  All my work has been with EF 4.3.1.  Following this, drag the Model1.edmx.sqlce file from the project used to generate it into the new project.  You may wish to rename it, e.g. to test.sqlce.


Creating the database

The post Generating a SQL Server CE database schema from a SQL Server database using Entity Framework showed how to create a new CE dB per-test using the EF DbContext to do the hard work.  A different approach is now taken, as the problem with creating a dB using DbContext is that in addition to creating any specified tables and indices etc. it also creates an additional table called '__MigrationHistory' which contains a description of the EF model used to create the dB.  The description of the problem this causes is delayed until the "Why DbContext is no longer used to create the database" section; suffice to say, for the present, the new mechanism avoids the creation of this table.

The code below is the beginnings of a test class.  It is assumed all the tests need a fresh copy of the dB, hence the creation is performed in the Setup method.  All this code does is create a SQL CE dB and then create the schema.

using System;
using System.IO;
using System.Data.SqlServerCe;
using NUnit.Framework;

[TestFixture]
public class SimpleTests
{
    const string DB_NAME = "test.sdf";
    const string DB_PATH = @".\" + DB_NAME;
    const string CONNECTION_STRING = "data source=" + DB_PATH;

    [SetUp]
    public void Setup()
    {
        DeleteDb();

        using (var eng = new SqlCeEngine(CONNECTION_STRING))
            eng.CreateDatabase();

        using (var conn = new SqlCeConnection(CONNECTION_STRING))
        {
            conn.Open();
            string sql = ReadSQLFromFile(@"C:\Users\Pete\work\Jub\EFTests\Test.sqlce");
            // SQL CE cannot execute batch scripts, so split the script on the
            // 'GO' separators and execute each statement individually.
            string[] sqlCmds = sql.Split(new string[] { "GO" }, int.MaxValue, StringSplitOptions.RemoveEmptyEntries);
            foreach (string sqlCmd in sqlCmds)
                try
                {
                    var cmd = conn.CreateCommand();
                    cmd.CommandText = sqlCmd;
                    cmd.ExecuteNonQuery();
                }
                catch (Exception e)
                {
                    Console.Error.WriteLine("{0}:{1}", e.Message, sqlCmd);
                    throw;
                }
        }
    }

    public void DeleteDb()
    {
        if (File.Exists(DB_PATH))
            File.Delete(DB_PATH);
    }

    private string ReadSQLFromFile(string sqlFilePath)
    {
        using (TextReader r = new StreamReader(sqlFilePath))
        {
            return r.ReadToEnd();
        }
    }
}
The dB file (test.sdf) will be created in the current working directory.  As the test assembly is located in <project>\bin\debug, which is where the NUnit test runner picks up the DLL from, this is where it is created.  If a specific directory is required then the '.\' can be replaced with the required path.

The Setup method is marked with NUnit's SetUp attribute meaning it will be invoked on a per-test basis, creating a new dB instance for each test.  The DeleteDb method could be marked with the [TearDown] attribute, but at the moment any previous dB is deleted before creating a new one.  It would be fine to do both as a belt and braces approach; the reason I didn't make it the TearDown method is so that I could inspect the dB following a test if needed.
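For the belt and braces version the change is minimal (a sketch; DeleteDb is the method from the fixture above):

[TearDown]
public void TearDown()
{
    // Comment this out temporarily to inspect the dB after a test run.
    DeleteDb();
}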

SQL CE does not support batch execution of SQL scripts, which is where it gets interesting as the SQL generated previously is in batch form.  The code reads the entire file into a string and determines each individual statement by splitting the string on the 'GO' command that separates each SQL command.

To help understand the SQL, the following is the diagram of the dB I'm working with.  All fields are strings except for the Ids, which are numeric.
Each of these commands is then executed.  The previously generated SQL (the SQL for the dB I'm working with is below) will not work completely out of the box.  The ALTER and DROP statements at the beginning don't apply as the schema is being applied to an empty dB, so these should be removed.  Interestingly, the schema generation step for my dB seems to miss out a 'GO' between the penultimate and final statements; I had to add one by hand.  Finally, the comments at the end prove to be a problem as there is no terminating 'GO'; removing them fixes the problem.  In the code above the exception handler re-throws the exception after writing out the details, so for everything to proceed the SQL needs modifying until it executes perfectly.  If the re-throw is removed then the code will tolerate individual command failures, which in this context really just amount to warnings.

NOTE: Text highlighted in red has been removed and text in blue added.

-- --------------------------------------------------
-- Entity Designer DDL Script for SQL Server Compact Edition
-- --------------------------------------------------
-- Date Created: 07/29/2012 12:28:35
-- Generated from EDMX file: C:\Users\Pete\work\Jub\DummyWebApplicationToGenerateSQLServerCE4Script\Model1.edmx
-- --------------------------------------------------


-- --------------------------------------------------
-- Dropping existing FOREIGN KEY constraints
-- NOTE: if the constraint does not exist, an ignorable error will be reported.
-- --------------------------------------------------

    ALTER TABLE [RepComments] DROP CONSTRAINT [FK_RepComments_Reps];
GO

-- --------------------------------------------------
-- Dropping existing tables
-- NOTE: if the table does not exist, an ignorable error will be reported.
-- --------------------------------------------------

    DROP TABLE [RepComments];
GO
    DROP TABLE [Reps];
GO
    DROP TABLE [Roads];
GO

-- --------------------------------------------------
-- Creating all tables
-- --------------------------------------------------

-- Creating table 'RepComments'
CREATE TABLE [RepComments] (
    [CommentId] int IDENTITY(1,1) NOT NULL,
    [RepId] int  NOT NULL,
    [Comment] ntext  NOT NULL
);
GO

-- Creating table 'Reps'
CREATE TABLE [Reps] (
    [RepId] int IDENTITY(1,1) NOT NULL,
    [RepName] nvarchar(50)  NOT NULL,
    [RoadName] nvarchar(256)  NOT NULL,
    [HouseNumberOrName] nvarchar(50)  NOT NULL,
    [ContactTelNumber] nvarchar(20)  NOT NULL,
    [Email] nvarchar(50)  NULL
);
GO

-- Creating table 'Roads'
CREATE TABLE [Roads] (
    [Name] nvarchar(256)  NOT NULL
);
GO

-- --------------------------------------------------
-- Creating all PRIMARY KEY constraints
-- --------------------------------------------------

-- Creating primary key on [CommentId] in table 'RepComments'
ALTER TABLE [RepComments]
ADD CONSTRAINT [PK_RepComments]
    PRIMARY KEY ([CommentId] );
GO

-- Creating primary key on [RepId] in table 'Reps'
ALTER TABLE [Reps]
ADD CONSTRAINT [PK_Reps]
    PRIMARY KEY ([RepId] );
GO

-- Creating primary key on [Name] in table 'Roads'
ALTER TABLE [Roads]
ADD CONSTRAINT [PK_Roads]
    PRIMARY KEY ([Name] );
GO

-- --------------------------------------------------
-- Creating all FOREIGN KEY constraints
-- --------------------------------------------------

-- Creating foreign key on [RepId] in table 'RepComments'
ALTER TABLE [RepComments]
ADD CONSTRAINT [FK_RepComments_Reps]
    FOREIGN KEY ([RepId])
    REFERENCES [Reps]
        ([RepId])
    ON DELETE NO ACTION ON UPDATE NO ACTION;
GO
-- Creating non-clustered index for FOREIGN KEY 'FK_RepComments_Reps'
CREATE INDEX [IX_FK_RepComments_Reps]
ON [RepComments]
    ([RepId]);
GO

-- --------------------------------------------------
-- Script has ended
-- --------------------------------------------------

Getting the SQL into a state where it will run flawlessly is a little bit of a hassle, but given the number of times it will be used subsequently it's not a big job, well for a small dB anyway.  To verify that your dB has been created as needed, a quick and easy way to check is to comment out the call to DeleteDb() and, after a test has run, open the dB using Server Explorer within VS.



Using the dB in a test

Now that a fresh dB will be created for each test it's time to look at a simple test:

[Test]
public void TestOne()
{
    using (var conn = new SqlCeConnection(CONNECTION_STRING))
    using (var ctx = new TestCtx(conn))
    {
        ctx.Roads.Add(new Road() { Name = "Test" });
        ctx.SaveChanges();
        Assert.That(ctx.Roads.Count(), Is.EqualTo(1)); // actual value first, expected in the constraint
    }
}
Road in this case is defined as:

class Road
{
    [Key]
    public string Name { get; set; }
}

The first thing to note is that EF is not used to form the connection to the dB; instead one is made using the SQL CE specific classes.  Attempting to get EF to connect to a specific dB instance when not referring to a named connection string in the .config file is a bit of an art (I may write another entry about this).  However, EF is quite happy to work with an existing connection.  This makes for a good separation of responsibilities in the code, where EF manages the interactions with the dB but the control of the connection is elsewhere.

NOTE: It is likely that each test will require a connection and a context, hence it might make more sense to move the creation of the SqlCeConnection and the context (TestCtx in this case) to a SetUp method and, as these resources need disposing of, add a TearDown method to do that.  TestCtx could also be modified to pass true to the DbContext constructor to give ownership of the connection to the context so that it will dispose of it when the context is disposed of, as the sketch below shows.
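A sketch of that refactoring (it assumes the standard DbContext(DbConnection, bool contextOwnsConnection) constructor overload; Road is as defined above):

public class TestCtx : DbContext
{
    // Passing true gives the context ownership of the connection, so
    // disposing the context also disposes the connection.
    public TestCtx(SqlCeConnection conn)
        : base(conn, true)
    {
    }

    public DbSet<Road> Roads { get; set; }
}

private TestCtx ctx;

[SetUp]
public void CreateContext()
{
    ctx = new TestCtx(new SqlCeConnection(CONNECTION_STRING));
}

[TearDown]
public void DisposeContext()
{
    ctx.Dispose(); // also closes the connection as the context owns it
}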

I would have preferred to avoid having to define a specific derived context and instead use DbContext directly, e.g.
[Test]
public void TestTwo()
{
    using (var conn = new SqlCeConnection(CONNECTION_STRING))
    using (var ctx = new DbContext(conn, false)) // false: the context does not own the connection
    {
        ctx.Set<Road>().Add(new Road() { Name = "Test" });
        ctx.SaveChanges();
        Assert.That(ctx.Set<Road>().Count(), Is.EqualTo(1));
    }
}

However when SaveChanges() is called the following exception is thrown:

System.InvalidOperationException : The entity type Road is not part of the model for the current context.

This is because EF knows nothing about the Road type.  When a derived context is created for the first time, I think EF performs reflection on any properties that expose DbSet; these are the types that form the Model.  Another option is to create the model, optionally compile it, and then pass it to an instance of DbContext, though the derived-context route involves a lot less code.
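For completeness, a sketch of that model-building alternative (it assumes EF 4.3's DbModelBuilder and the DbContext constructor taking a DbCompiledModel, plus the using directives from the fixture; this is not code from my actual tests):

[Test]
public void TestThree()
{
    // Register Road with the model builder so that DbContext knows about it
    // without needing a derived context exposing DbSet<Road>.
    var builder = new DbModelBuilder();
    builder.Entity<Road>();

    using (var conn = new SqlCeConnection(CONNECTION_STRING))
    {
        DbCompiledModel model = builder.Build(conn).Compile();
        using (var ctx = new DbContext(conn, model, false))
        {
            ctx.Set<Road>().Add(new Road() { Name = "Test" });
            ctx.SaveChanges();
            Assert.That(ctx.Set<Road>().Count(), Is.EqualTo(1));
        }
    }
}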

That's it.  The final section is just a footnote about the move away from using EF to create the dB.

Why DbContext is no longer used to create the database

As mentioned, creating the dB using:

using (var ctx = new DbContext("bar.sdf"))
{
    ctx.Database.Create();
    // create schema etc.
}
causes the '__MigrationHistory' table to be created.  Assuming this method was used, later on when TestCtx was used to open the dB and perform an operation, the following exception would be thrown:

System.InvalidOperationException : The model backing the 'DbContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269).
This is because the context used to create the model was a raw DbContext (as per the previous post) whereas the dB was accessed via TestCtx.  If the context used to create the dB is also changed to TestCtx then this problem goes away.
However, given that the original dB is not intended to be created nor maintained (code migrations) by EF, using the non-context/EF approach to dB creation completely removes EF from the picture.

Visual Studio 2012 theme support

One of the unexpected (and I would suggest from the comments, unwelcome) changes sprung on developers in the Visual Studio 2012 Beta back in February was the "Metroification" of the development environment.

However, eye candy (and eyesores!) come and go, and within that change is a more fundamental one - direct support for themes within the Visual Studio IDE. The Visual Studio 2012 Beta and RC include two themes - light (i.e. grey) and dark. Whilst the latter has an obvious appeal within the developer community (we all know devs who prefer green text on a black background) the former hasn't exactly been welcomed, to say the least.

Personally, rather than develop custom theme support for each tool individually I wish they'd just add a "dark" theme to Windows instead and respect the theme settings of the operating system. Obviously my view just isn't "cool" enough for the Visual Studio UX team, but I digress...

Although a campaign to retain the existing Visual Studio 2010 theme has been running on the UserVoice site since the beta arrived (see the UserVoice posts Add some color to Visual Studio 11 and Leave VS 2010 theme (and the theme editor extension) as an option) Microsoft have not indicated what - if any - changes will be made to the Visual Studio 2012 themes at RTM.

Our working assumption therefore has to be that the themes in the RTM will be broadly comparable with those in the RC (i.e. light and dark). We will find out whether that assumption is correct later this month, of course.

With that in mind, we have been working on theme support in the development branch for Visual Lint for some time now, and things are now beginning to come together:

Visual Lint running with the Visual Studio 2012 RC dark theme

Visual Lint running with the Visual Studio 2012 RC light theme

As Visual Lint uses standard Win32 controls for most of the UI (which for the most part do not support custom text/background colours), to get this far we have had to write custom painted WTL checkbox, radio button, combobox and header controls in addition to the usual WM_CTLCOLORxxxx voodoo. Other UI elements such as menus, scrollbars, command buttons etc. haven't yet been looked at, but hopefully will be in due course (there seems to be some indication in the MSDN blogs that scrollbars will be auto-themed by the RTM, but we'll see).

Within the displays themselves, the text and background colours of each item are checked for adequate contrast, and the text colour adjusted (by tweaking the luminance) automatically if need be.
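The general idea is simple enough to sketch (an illustration of the approach only, not Visual Lint's actual code; the 0.3 threshold is an arbitrary choice):

using System;
using System.Drawing;

static class ContrastHelper
{
    // Perceived luminance, 0.0 (black) to 1.0 (white).
    static double Luminance(Color c)
    {
        return (0.299 * c.R + 0.587 * c.G + 0.114 * c.B) / 255.0;
    }

    // If the text colour is too close in luminance to the background,
    // push it towards black or white until the contrast is adequate.
    public static Color EnsureContrast(Color text, Color background)
    {
        const double minDelta = 0.3;
        if (Math.Abs(Luminance(text) - Luminance(background)) >= minDelta)
            return text;
        return Luminance(background) < 0.5 ? Color.White : Color.Black;
    }
}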

Although the Visual Studio interfaces expose the colours used in the active theme (via IVsUIShell2::GetVSSysColorEx()), they do not seem to provide any way of detecting if the theme has changed (or indeed, finding out which theme is actually running at the time). Our workaround for this is simply to reload the colour scheme whenever the "Tools|Options" command has been executed. We don't really care which theme is running after all - just what colour values it uses, and where.

Indeed, one of the first things we did while working on this was to dump all of the colour values used by the Visual Studio 2012 RC light & dark themes, as well as the default Visual Studio 2010 theme, into spreadsheets so we could use them for testing without firing up a host instance of the IDE (developing add-ins may be fun, but it is also much slower than working on your own executable).

Finally, it is a little known fact that the Visual Studio IDE has had colour scheme support internally for some time, so the scheme we have designed will also work with Visual Studio 2010 if you have the theme editor extension installed:

Visual Lint running with Visual Studio 2010 with a modified "Expression" theme

Needless to say, all of this is proving to be a major task, and it has therefore diverted significant resources from other things we should really have been working on this summer. As a consolation, the theme code we're developing is generic (albeit only on Windows), so can also be used with Eclipse 4.0 (I note that themes are coming to that IDE as well) when the time comes.

Another obvious benefit is of course that there's potentially at least one new CodeProject article (want a themed XP button with a custom background colour? We know how to do it now) in all of this once the dust settles and the inevitable bugs have crawled away. It's about time I wrote a new one, anyway.

Once Visual Lint theme support is complete, we'll obviously also take a look at ResOrg. Beyond that, I think a new article is a foregone conclusion, once we've cleaned the code up a bit and built a good enough demo project.

Visual Studio 2012 theme support

One of the unexpected (and I would suggest from the comments, unwelcome) changes sprung on developers in the Visual Studio 2012 Beta back in February was the Metroification of the development environment.

However, eye candy (and eyesores!) come and go, and within that change is a more fundamental one - direct support for themes within the Visual Studio IDE. The Visual Studio 2012 Beta and RC include two themes - light (i.e. grey) and dark. Whilst the latter has an obvious appeal within the developer community (we all know devs who prefer green text on a black background) the former hasn't exactly been welcomed, to say the least.

Personally, rather than develop custom theme support for each tool individually I wish they'd just add a "dark" theme to Windows instead and respect the theme settings of the operating system. Obviously my view just isn't "cool" enough for the Visual Studio UX team, but I digress...

Although a campaign to retain the existing Visual Studio 2010 theme has been running on the UserVoice site since the beta arrived (see Add some color to Visual Studio 11 and Leave VS 2010 theme (and the theme editor extension) as an option) Microsoft have not indicated what - if any - changes will be made to the Visual Studio 2012 themes at RTM.

Our working assumption therefore has to be that the themes in the RTM will be broadly comparable with those in the RC (i.e. light and dark). We will find out whether that assumption is correct later this month, of course.

With that in mind, we have been working on theme support in the development branch for Visual Lint for some time now, and things are now beginning to come together:

Visual Lint running with the Visual Studio 2012 RC dark theme

Visual Lint running with the Visual Studio 2012 RC light theme

As Visual Lint uses standard Win32 controls for most of the UI (which for the most part do not support custom text/background colours), to get this far we have had to write custom painted WTL checkbox, radio button, combobox and header controls in addition to the usual WM_CTLCOLORxxxx voodoo. Other UI elements such as menus, scrollbars, command buttons etc. yet haven't yet been looked at, but hopefully will be in due course (there seems to be some indication in the MSDN blogs that scollbars will be auto-themed by the RTM, but we'll see).

Within the displays themselves, the text and background colours of each item are checked for adequate contrast, and the text colour adjusted (by tweaking the luminance) automatically if need be.

Although the Visual Studio interfaces expose the colours used in the active theme (via IVsUIShell2::GetVSSysColorEx() ), they do not seem to provide any way of detecting if the theme has changed (or indeed, finding out which theme is actually running at the time). Our workaround for this is simply to reload the colour scheme whenever the "Tools|Options" command has been executed. We don't really care which theme is running after all - just what colour values it uses, and where.

Indeed, one of the first things we did while working on this was to dump all of the colour values used by the VS2012 RC light & dark themes, as well as the default VS2010 theme, into spreadsheets so we could use them for testing without firing up a host instance of the IDE (developing add-ins may be fun, but it is also much slower than working on your own executable).

Finally, it is a little known fact that the Visual Studio IDE has had colour scheme support internally for some time, so the scheme we have designed will also work with Visual Studio 2010 if you have the theme editor extension installed:

Visual Lint running with Visual Studio 2010 with a modified 'Expression' theme

Needless to say, all of this is proving to be a major task, and it has therefore diverted significant resources from other things we should really have been working on this summer. As a consolation, the theme code we're developing is generic (albeit only on Windows), so can also be used with Eclipse 4.0 (I note that themes are coming to that IDE as well) when the time comes.

Another obvious benefit is of course that there's potentially at least one new CodeProject article (want a themed XP button with a custom background colour? We know how to do it now) in all of this once the dust settles and the inevitable bugs have crawled away. It's about time I wrote a new one, anyway.

Once Visual Lint theme support is complete, we'll obviously also take a look at ResOrg. Beyond that, I think a new article is a foregone conclusion, once we've cleaned the code up a bit and built a good enough demo project...

Visual Studio 2012 theme support

One of the unexpected (and I would suggest from the comments, unwelcome) changes sprung on developers in the Visual Studio 2012 Beta back in February was the Metroification of the development environment.

However, eye candy (and eyesores!) come and go, and within that change is a more fundamental one - direct support for themes within the Visual Studio IDE. The Visual Studio 2012 Beta and RC include two themes - light (i.e. grey) and dark. Whilst the latter has an obvious appeal within the developer community (we all know devs who prefer green text on a black background) the former hasn't exactly been welcomed, to say the least.

Personally, rather than develop custom theme support for each tool individually I wish they'd just add a "dark" theme to Windows instead and respect the theme settings of the operating system. Obviously my view just isn't "cool" enough for the Visual Studio UX team, but I digress...

Although a campaign to retain the existing Visual Studio 2010 theme has been running on the UserVoice site since the beta arrived (see Add some color to Visual Studio 11 and Leave VS 2010 theme (and the theme editor extension) as an option) Microsoft have not indicated what - if any - changes will be made to the Visual Studio 2012 themes at RTM.

Our working assumption therefore has to be that the themes in the RTM will be broadly comparable with those in the RC (i.e. light and dark). We will find out whether that assumption is correct later this month, of course.

With that in mind, we have been working on theme support in the development branch for Visual Lint for some time now, and things are now beginning to come together:

Visual Lint running with the Visual Studio 2012 RC dark theme

Visual Lint running with the Visual Studio 2012 RC light theme

As Visual Lint uses standard Win32 controls for most of the UI (which for the most part do not support custom text/background colours), to get this far we have had to write custom painted WTL checkbox, radio button, combobox and header controls in addition to the usual WM_CTLCOLORxxxx voodoo. Other UI elements such as menus, scrollbars, command buttons etc. yet haven't yet been looked at, but hopefully will be in due course (there seems to be some indication in the MSDN blogs that scollbars will be auto-themed by the RTM, but we'll see).

Within the displays themselves, the text and background colours of each item are checked for adequate contrast, and the text colour adjusted (by tweaking the luminance) automatically if need be.

Although the Visual Studio interfaces expose the colours used in the active theme (via IVsUIShell2::GetVSSysColorEx() ), they do not seem to provide any way of detecting if the theme has changed (or indeed, finding out which theme is actually running at the time). Our workaround for this is simply to reload the colour scheme whenever the "Tools|Options" command has been executed. We don't really care which theme is running after all - just what colour values it uses, and where.

Indeed, one of the first things we did while working on this was to dump all of the colour values used by the VS2012 RC light & dark themes, as well as the default VS2010 theme, into spreadsheets so we could use them for testing without firing up a host instance of the IDE (developing add-ins may be fun, but it is also much slower than working on your own executable).

Finally, it is a little known fact that the Visual Studio IDE has had colour scheme support internally for some time, so the scheme we have designed will also work with Visual Studio 2010 if you have the theme editor extension installed:

Visual Lint running with Visual Studio 2010 with a modified 'Expression' theme

Needless to say, all of this is proving to be a major task, and it has therefore diverted significant resources from other things we should really have been working on this summer. As a consolation, the theme code we're developing is generic (albeit only on Windows), so can also be used with Eclipse 4.0 (I note that themes are coming to that IDE as well) when the time comes.

Another obvious benefit is of course that there's potentially at least one new CodeProject article (want a themed XP button with a custom background colour? We know how to do it now) in all of this once the dust settles and the inevitable bugs have crawled away. It's about time I wrote a new one, anyway.

Once Visual Lint theme support is complete, we'll obviously also take a look at ResOrg. Beyond that, I think a new article is a foregone conclusion, once we've cleaned the code up a bit and built a good enough demo project...

Generating a SQL Server CE database schema from a SQL Server database using Entity Framework

In a previous entry I described how to programmatically create (& destroy) a SQL CE database for integration testing using NUnit.  Since getting that working I ran into a couple of other problems which I've more or less solved, so I thought I'd write those up.  To begin with, though, this is a prequel post describing how to obtain the SQL script to create the SQL CE database.

If you happen to be working exclusively with CE then you'll already have your schema file.  In my case I'm using SQL Express, and as this is experimental work I created my database by hand.  However, using EF it's pretty easy to obtain the schema and have the EF wizard generate the CE schema.  This is important as there are differences between the dialects of SQL used by SQL Express and SQL CE, and it's easier to have a tool handle those, though it doesn't handle all of them.

The basic flow is to generate an EF model (EDMX) file from the existing SQL Express database and then use the 'Generate database from model' functionality.  It is at this point that the target SQL database type can be chosen, i.e. SQL Server, SQL Server CE or others.

To create a model requires adding a 'New Item' of type 'ADO.Net Entity Data Model' to a VS project, so first a new dummy project needs creating.  This is where it gets a little complicated, as not any type of project will do.  I'm working with CE 4 and require a schema for that version of the database (creating one for 3.5 works, but I like to keep things as close to ideal as possible).  Due to this constraint it is necessary to choose a Web type project, as for some reason the VS2010 integration provided by EF only supports the generation of CE 4 databases for Web projects.  If a simple C# Windows Console project is selected then you're limited to CE 3.5.  Thus the simplest project type is the 'ASP.Net Empty Web Application' as shown below.


Having done this, next add a new item of type 'ADO.Net Entity Data Model' as below.  NOTE: the project will have to reference the Entity Framework assemblies.  The easiest way to do this (and the one most people are probably using) is to use the NuGet package.


Then follow the wizard.


Selecting "Generate from database".


Choose your SQL Express (or SQL Server) database, but uncheck "Save entity connection settings in Web.Config as:", as we're converting to SQL CE and so want to minimize anything related to other types of SQL Server.


Finally, select the SQL elements you require.  In this example only the existing tables were selected.  As this is generating the EF model from an existing database, no SQL file is generated, just the model, for which the diagram is shown, i.e.


The next phase is to generate the SQL from the model (which was generated from the hand-crafted database), making sure the SQL that's generated is compliant with SQL CE.

To generate the schema, right-click the model diagram and select "Generate Database from Model..."


This brings up the "Generate database" wizard which is very similar to the previously used "Entity Data Model" wizard used to create the model.  From here choose the "New Connection" option which pops up another set of dialogs.  On the first choose the type of data source as "Microsoft SQL Server Compact 4.0".

Clicking on continue then leads to the next dialog, where you need to create a database.



Ok-ing this leads back to the "Generate database wizard".


This time check the "Save entity connection settings in Web.Config" checkbox.  This information will be useful later (to be covered in a different post).  Clicking "Next", the SQL is generated and presented in the wizard.


This can be copied & pasted directly from here or pressing "Finish" will save the SQL to the file indicated at the top of the dialog box.  This file is added to the project.  The following prompt will appear when "Finish" is pressed.
 

This doesn't really matter as this is a throwaway project, but having the updated schemas may be useful, so go with "Yes".

The SQL can now be used to configure an empty SQL CE 4.0 database.  The easiest way is to open the SQL file, then right-click and select the "Execute SQL" menu item.


This brings up the SQL Server connection dialog, from which, if "New Database" is selected, a CE 4 one can be specified.


Having specified a location and pressed "Ok", the SQL script is executed.  As can be seen below, this is not without errors.  However, this isn't anything to worry about, as the errors relate to dropping tables and indices that don't yet exist because it's a newly created database.  Performing the same steps again, but skipping the creation of the database file as it already exists, sees the SQL script execute flawlessly.



The final picture shows the newly created database in VS2010's Server Explorer demonstrating that the tables were indeed created.


The basis for this post is my experimentation on using NUnit to programmatically test some database-based functionality.  If a single instance of a database suffices for all your tests, then you can execute the SQL by hand as above and follow these steps.  In my case I want a fresh database per test, so I need to automate the running of the SQL script combined with the creation and destruction of the underlying database.  The creation and deletion aspects were covered in a previous post, but the next step will have to wait until a later one.

I guess the feedback actually did work

After all the brouhaha over Visual Studio 2012 not being able to build executables for Windows XP, it looks like Microsoft has reconsidered: http://blogs.msdn.com/b/vcblog/archive/2012/06/15/10320645.aspx Pity that we’ll have to wait for the update but at least those of us who still have clients that are exclusively XP can use a modern compiler…

If you want to remove a (C++) project from a Visual Studio 2010 solution

… make sure that you have removed all dependencies on the project that you are about to remove before you remove it from the solution. If you don’t, the projects that still depend on it will retain those dependencies, but the dependencies will have become invisible, and the only way to rid yourself of the “phantom dependencies” is to edit the actual vcxproj files with a text editor and remove the dependency entries manually.

Flashmob daily scrum

I think our team is too big to hold a daily scrum meeting, so I turned to a couple of people near me on Wednesday and asked "What did you do yesterday? What are you doing today? What's holding you up?"
I answered as well.
The next day, I did the same again with a different group of people, announcing "Flash-mob scrum" as we started.
Today I rounded up a couple of people from previous days and we "flash-mob scrummed" with two new people. I'm hoping it might just work.
This was done in a spirit of TCC, larking about, but based on previous practice, which is vital for TCC. The team seem to be talking to each other a bit more too.

Introducing VisualLintGui

If you have been following me (@annajayne) on Twitter, you may have noticed me talking about something called "VisualLintGui".

This is actually the second of two projects (the first being VisualLintConsole - the command line version of Visual Lint) we got underway after the release of Visual Lint 3.0.

Now that VisualLintConsole is out in the wild, we have turned our attention to VisualLintGui. This is, as the name suggests, a standalone Visual Lint application with a graphical user interface - basically a text editor focused on code analysis:

VisualLintGui - the standalone Visual Lint application.

Although it has been fully functional in terms of analysis functions for quite some time, until recently we were not able to devote a great deal of time to the details of its user interface. That has now changed, and since February VisualLintGui has gained many essential capabilities including a syntax colouring editor with analysis issue markers, MDI tabs, Find/Replace and Source/Header flip to name but a handful of the more obvious recent changes.

VisualLintGui is currently capable of analysing projects for Visual Studio, Visual C++, Eclipse, CodeGear C++ and AVR Studio 5.0, but it can potentially analyse a far wider variety of codebases than that.

Indeed, one of the reasons we have been keen to develop it is to provide a way to support embedded IDEs for which developing a Visual Lint plug-in is not a viable proposition. As such we expect to add support for further project and workspace file formats as and when our customers need them.

VisualLintGui currently resides in our Visual Lint development branch, but given the recent pace of development on it we are likely to look at porting it back into Visual Lint 3.5 in the not too distant future.

In the meantime we will have a development build on our stand at the ACCU Conference next week, so if you are going please do come and take a look.

Hannametoden – how to solve Rubik’s cube (as shown on TV2)

Here is a simple guide to solving Rubik’s cube (PDF). I wrote it as a textbook for my daughter Hanna when she was 8 years old – hence the name Hannametoden (“the Hanna method”). It is a simplified version of a method used by the best cubers in the world (CFOP / Fridrich). She spent a couple of days learning to solve the cube on her own from this “recipe”. We visited “God Morgen Norge” on TV2 on 17 February 2012, where among other things this method was presented (article).

English summary: this is a very simple description of how to solve the Rubik’s cube. I wrote it for my then 8 year old daughter – hence the name of the method. It is a simplified version and a strict subset of the method used by the best cubers in the world. The guide itself is in Norwegian, but since it is visual you might enjoy it anyway. Click the PDF link above.

Why I still use a separate editor

There is a lot that modern IDEs do well, but providing uncluttered writing space isn’t one of them. Once you add the various views of your project, the debug window, the source control window and various other important panes, you’re left with a tiny viewport into your code. The visual clutter can be disabled of course, but you’ll get it back sooner or later - when you switch back to debug mode or build mode, for example.

Halfway through GoingNative 2012

It’s almost time to go back for the second day, but before I do I’d like to suggest that if you haven’t had a chance to attend in person or watch the livecast, see if you can find the videos online. My understanding is that they should be available - I’m writing this on my phone so I can’t be bothered to look at the moment, but I’ll check later.

ResOrg 2.0 has been released

It's done. After a rather extended incubation period ResOrg 2.0.0.15 (the first public ResOrg 2.0 build) was uploaded earlier this morning, and the ResOrg product pages updated to match.

If you have used ResOrg 1.x before, you will notice that the user interface of ResOrg 2.0 is subtly different from its predecessor - notably in the Visual Studio plug-in (which now of course supports Visual Studio 2008 and 2010...).

In particular, the old (and rather limited) "ResOrg.NET Explorer" toolwindow has been replaced by a much more useful "Symbol Files Display" which is also available in the standalone application.

If you are using Visual Studio 2010, it might interest you to know that ResOrg 2.0 can automatically update Ribbon Designer (.mfcribbon-ms) files when an ID referenced in a ribbon resource is renumbered.

I won't include any screenshots in this post as a couple of good ones were included in the previous post, however if you are reading this post in your RSS reader you can find them in the blogpost ResOrg 2.0 update.

Moving to a multi-VHD Windows installation to separate work and personal data

I had been thinking about setting myself up with a way to work from home in a disconnected fashion. Most of the places I’ve worked at in the past required me to remote into the work desktop, which is a good idea if both sides have 100% uptime on their network connection and no issues with adverse weather. In reality this meant the connections tended to be unstable precisely when the weather dictated that one really, really wanted to work from home, for example because snowfall was horizontal.

Mocking in C++

ACCU London's July 2011 talk was about mocking in C++, given by Ed Sykes and hosted by 7 City.

Ed talked about MockItNow and Hippomocks. He pointed out, as has been said many times before, that Mocks aren't Stubs.

I can no longer remember all the details so will have to try these out for myself to see how they work.
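
For anyone who wasn't there, the HippoMocks style looks roughly like this (a sketch based on the framework's documented usage rather than on the talk itself; IWarehouse is a made-up interface):

    #include "hippomocks.h"

    // A made-up interface to demonstrate the style.
    struct IWarehouse
    {
        virtual ~IWarehouse() {}
        virtual bool Remove(int quantity) = 0;
    };

    void WarehouseTest()
    {
        MockRepository mocks;
        IWarehouse* warehouse = mocks.Mock<IWarehouse>();
        mocks.ExpectCall(warehouse, IWarehouse::Remove).With(50).Return(true);

        // The code under test would make this call; the expectation is
        // verified when 'mocks' goes out of scope at the end of the test.
        warehouse->Remove(50);
    }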

Many thanks to Ed for a great talk, though.

Another good reason to keep source file sizes small

Merging a file between SCM branches that is several thousand lines in size and has significant changes in both branches is a good way to have an unpleasant day, even if the SCM that’s being used has good support for cross-branch merging. Yes, I know, ideally one tries to make sure that two branches don’t diverge that far but that’s not always possible, especially if there are significant changes to the design that affect the merge.

Deep C (and C++)

Programming is hard. Programming correct C and C++ is particularly hard. Indeed, both in C and certainly in C++, it is uncommon to see a screenful containing only well defined and conforming code. Why do professional programmers write code like this? Because most programmers do not have a deep understanding of the language they are using. While they sometimes know that certain things are undefined or unspecified, they often do not know why it is so. In these slides we will study small code snippets in C and C++, and use them to discuss the fundamental building blocks, limitations and underlying design philosophies of these wonderful but dangerous programming languages.

Jon Jagger and I just released a slide deck to discuss the fundamentals of C and C++ (slideshare, pdf).
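
To give a flavour of the kind of snippet the deck is built around (my own illustration, not one taken from the slides):

    #include <cstdio>

    int main()
    {
        int i = 1;
        // Undefined behaviour (in both C and C++): i is modified twice
        // without an intervening sequence point, so any output at all
        // would be a "conforming" result.
        std::printf("%d\n", i++ + i++);
    }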

Visual Lint and Atmel AVR Studio 5

From our perspective one of the more intriguing embedded environments to appear recently is Atmel's AVR Studio 5.

When I first saw a screenshot of this IDE (it was mentioned in a post in the CodeProject Lounge) it was immediately obvious that this was some sort of Visual Studio derivative.

In fact, although it uses GCC toolchains, the environment is based on the Visual Studio 2010 isolated shell (which incidentally is something we briefly considered using ourselves for a future standalone GUI version of Visual Lint, but decided against because of its complexity and the size of the download).

It obviously occurred to us then that as a Visual Studio derivative, it shouldn't be too difficult to get Visual Lint running within it. The first step was obviously to install the IDE in a VM (XP SP3 - doesn't XP look a bit old these days...?) and experiment with some projects.

AVR Studio 5 codebases use the Visual Studio 2010 solution file format (albeit rebadged as a .avrsln file) and a new MSBuild based project file format (.avrgccproj), so the first thing we had to do was implement parsers for these files (something that will also benefit LintProject Pro, of course). Once that was done, we turned our attention to getting Visual Lint to load within the IDE itself.

This turned out to be fairly straightforward. Although AVR Studio 5 does not seem to support COM add-in registration in HKEY_LOCAL_MACHINE (which is how the Visual Lint add-in registers in Visual Studio), the corresponding registration in HKEY_CURRENT_USER\Software\Atmel\AVRStudio\5.0\AddIns does work. Although this is problematical from an installation point of view (see my previous post on the Visual Studio 11 Developer Preview) it is not a showstopper by any means.
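
For illustration, the manual registration amounts to creating a key along these lines (a sketch using ATL; the ProgID is a placeholder and the value names are assumed to follow the standard Visual Studio COM add-in scheme):

    #include <atlbase.h>

    // Register a COM add-in with AVR Studio 5 for the current user.
    // "MyAddIn.Connect" is a placeholder ProgID.
    bool RegisterAddIn()
    {
        CRegKey key;
        LONG result = key.Create(HKEY_CURRENT_USER,
            _T("Software\\Atmel\\AVRStudio\\5.0\\AddIns\\MyAddIn.Connect"));
        if (result != ERROR_SUCCESS)
            return false;

        key.SetStringValue(_T("FriendlyName"), _T("My add-in"));
        key.SetStringValue(_T("Description"), _T("An example add-in"));
        key.SetDWORDValue(_T("LoadBehavior"), 3);   // 3 = load at startup

        return true;
    }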

With manual add-in registration in place, Visual Lint loaded within the IDE. Although a few minor tweaks were needed to work around issues such as AVR reporting itself as "Visual Studio Express Edition, version 1.0" (which caused the version detection code in Visual Lint to default to 16 colour command bitmaps!) those were easily addressed.

As a result, we now have AVR Studio 5 running with a development build of Visual Lint:

Visual Lint running within AVR Studio 5: Visual Lint Status View. Visual Lint running within AVR Studio 5: Analysis Status and Results Displays.

Although we still have quite a bit to do (not least the code editor markers and installer) before AVR Studio 5 can become a supported host environment for Visual Lint this is a very promising start. Needless to say, beta testers are welcome.

Useful collection of Qt debug visualizers for Visual Studio

I had to reinstall VS2010 at work and because I clearly didn’t think this all the way through, forgot to save my autoexp.dat file before removing the old installation. And of course I didn’t realise what had happened until I had to dig deeper into some Qt GUI code that wasn’t quite working as expected, and of course I was prompted with the raw data. Fortunately a quick search on Google led me to this page Human Machine Teaming Lab | Knowledge / Qt that contains a very comprehensive set of visualisers.

Power series for PCA

The book says estimate the value of the eigenvector, then iterate.
But my vector cycled as transpose(1,1), transpose(-1,1), transpose(1,1), which is a bit of a problem.
Oh for precise instructions.
I'll report back when I find a suitable estimate for the starting value.
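
For reference, a minimal power iteration sketch (my own illustration with a hard-coded symmetric 2x2 matrix, not the book's code) looks like this; normalizing each step, and remembering that a negative dominant eigenvalue flips the vector's sign on every iteration, helps explain apparent cycling:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Example symmetric matrix; its eigenvectors are (1,1) and (-1,1).
        const double A[2][2] = { { 2.0, 1.0 },
                                 { 1.0, 2.0 } };
        double x[2] = { 1.0, 0.5 };   // a deliberately asymmetric starting estimate

        for (int i = 0; i < 50; ++i)
        {
            double y[2] = { A[0][0] * x[0] + A[0][1] * x[1],
                            A[1][0] * x[0] + A[1][1] * x[1] };
            double norm = std::sqrt(y[0] * y[0] + y[1] * y[1]);
            x[0] = y[0] / norm;       // normalize to stop the values growing
            x[1] = y[1] / norm;
        }
        std::printf("dominant eigenvector ~ (%f, %f)\n", x[0], x[1]);
        return 0;
    }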

The Champion, the Chief and the Manager

Successful product development projects are often characterized by having an enthusiastic product champion with solid domain knowledge, a visible and proud chief engineer, and a clever and supportive project manager. And of course, the most important thing, a group of exceptional developers. From an organizational point of view it makes sense to require that all projects should clearly identify these three roles:

The Champion: The product champion is a person that dreams about the product, has a vision about how it can be used and can answer questions about what is important and what is less important. The product champion is required to have a deep and solid domain knowledge and will often play the role of a customer proxy in the project. This position can only be held by a person that is deeply devoted and has a true passion for the product to be created. The product champion is the main interface between the project and the customer/users. (Sometimes also known as: Product Manager, Project Owner, Customer Proxy…)

The Chief: The chief engineer is a technical expert that has a vision of the complete solution and is always ready to defend this vision. At any time, the chief engineer should be able, and willing, to stand up and proudly describe the solution and explain how everything fits together. He/she should feel responsible for the technological decisions that the exceptional developers make, but also make sure that the solution supports the business strategy. The chief engineer is the main communication channel between this project and other projects. (Sometimes also known as: System Architect, Tech Lead, Shusa, …)

The Manager: The project manager is a person that leads a team to success by managing the resources on a project in an effective and sensible way. He/she will be responsible for actively discovering and removing impediments. The project manager is the main interface between the project and corporate management. (Sometimes also known as: Scrum Master, Team Leader, …)

Of course, for very small projects these three roles can be fulfilled by one person, but for projects of some size there should be three people filling these three roles: one product champion, one chief engineer and one project manager. These three people must work together as a team, forming an all-round defence (aka kringvern) around the project, while being available to the developers at any time. Their task is to “protect” and “promote” the project to the outside world so that the exceptional developers can focus on doing the job.

I believe that identifying these three roles is the only thing an organization needs to impose in order to increase the chance of success. Then the team of exceptional developers together with their servants decide everything else, including which methodology and technology to use.

Visual Studio 2010 SP1 has been released

For those who are using Visual Studio 2010, the service pack has now been officially released: Visual Studio 2010 Service Pack 1 General Availability - Visual C++ Team Blog - Site Home - MSDN Blogs. Edit: The download link doesn’t seem to work for me yet; given that it’s only gone General Availability today, it might be worth checking back a little later. Edit again - we have a general availability download link: http://www.

If your VS2010 C++ build is constantly rebuilding a project that hasn’t changed

Check if you’re seeing the following output in the build pane: InitializeBuildStatus: Creating ".unsuccessfulbuild" because "AlwaysCreate" was specified. I’ve just fixed a bunch of these errors in one of our solutions here, and all of them were caused by one of two issues: either the project file referenced files that were not present in the source tree, or a custom build step was supposed to generate a file but didn’t (or the file ended up in the wrong place). In order to find out if there are missing files that trigger the perma-rebuild, you’ll also have to enable Visual Studio’s debug output as described in this stackoverflow answer.

How to view undecorated DLL-exported C++ symbols in Visual Studio 2010

Yes, it’s one of those “note to self” posts, but I keep forgetting how to do it. As the first step, run dumpbin /EXPORTS on the DLL and redirect the output into a file, because the utility that unmangles the names (undname.exe) doesn’t appear to be able to take piped input via stdin. Then, run undname <file>, with <file> being the file that contains the exported symbols (for example, dumpbin /EXPORTS mylib.dll > exports.txt followed by undname exports.txt, where mylib.dll stands in for your own DLL). At least that way the symbols become mostly readable.

Boost.Log, preventing the ‘unhandled exception’ in Windows 7 when attempting to log to the event log

I recently ran into a requirement to retrofit a logging library onto an existing project. My first instinct was to throw Pantheios at it, as I’ve used it before and It Just Worked. Unfortunately in this case we needed the ability to log to more than two event sinks, and it looked like this was getting a little awkward with Pantheios, which prompted me to look at Boost.Log. After some digging through the documentation and the samples, I managed to get the logging going to the three event sinks we needed.
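
For context, a minimal multi-sink Boost.Log setup has this shape (a sketch only, and it omits the Windows event log sink that caused the trouble in the first place):

    #include <iostream>
    #include <boost/log/trivial.hpp>
    #include <boost/log/utility/setup/common_attributes.hpp>
    #include <boost/log/utility/setup/console.hpp>
    #include <boost/log/utility/setup/file.hpp>

    int main()
    {
        namespace logging = boost::log;

        logging::add_common_attributes();       // timestamps, thread ids etc.
        logging::add_console_log(std::clog);    // sink 1: console
        logging::add_file_log("app_%N.log");    // sink 2: file

        BOOST_LOG_TRIVIAL(info) << "logging to multiple sinks";
        return 0;
    }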

A couple of noteworthy links

It’s a bit of a link roundup from the past couple of months. Most of you probably saw these already, as I’d think you’re probably reading the same blogs. C++ links: VS2010 SP1 Beta: What’s in it for C++ developers. While I’m not going to chance installing the beta on my main developer workstation, it looks like there are some interesting features in the service pack. I hope that the IDE stability has also been improved.

Sometimes, std::set just doesn’t cut it from a performance point of view

A piece of code I recently worked with required data structures that hold unique, sorted data elements. The requirement for the data being both sorted and unique came from it being fed into std::set_intersection(), so using an std::set seemed an obvious way of fulfilling these requirements. The code did fulfill all the requirements, but I found the performance somewhat wanting in this particular implementation (Visual Studio 2008 with the standard library implementation shipped by Microsoft).
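
The usual alternative, sketched below on the assumption that the data arrives in bulk rather than trickling in one element at a time, is a sorted and deduplicated std::vector, which still satisfies std::set_intersection()'s preconditions but is typically cheaper to populate and iterate:

    #include <algorithm>
    #include <iterator>
    #include <vector>

    // Build a sorted, unique vector - a drop-in source of sorted, unique
    // elements without std::set's per-node allocation overhead.
    std::vector<int> make_sorted_unique(std::vector<int> values)
    {
        std::sort(values.begin(), values.end());
        values.erase(std::unique(values.begin(), values.end()), values.end());
        return values;
    }

    // The sorted vectors can be fed straight into std::set_intersection().
    std::vector<int> intersect(const std::vector<int>& a, const std::vector<int>& b)
    {
        std::vector<int> result;
        std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                              std::back_inserter(result));
        return result;
    }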

Quick tip if you see ‘bad DLL or entry point msobj80.dll’ when building software with VS2008

Try stopping mspdbsrv.exe (the process that generates the pdb files during a build) if it is still running. My understanding is that it’s supposed to shut down at the end of the compilation, but it seems that it can turn into a zombie process, and if the latter happens, you can get the above error when linking your binaries. Anyway, I just ran into this issue and stopping the process via the Task Manager resolved it for me.

On combining #import and /MP in C++ builds with VS2010

I’m currently busy porting a large native C++ project from VS2008 to VS2010, and one of the issues I kept running into was build times. The VS2008 build uses a distributed build system; unfortunately the vendor doesn’t support VS2010 yet, so I couldn’t use the same infrastructure. In order to get a decent build speed, I started exploring MSBuild’s ability to build projects in parallel (which is fairly similar to VS2008’s ability to build projects in parallel) and the C++ compiler’s ability to make use of multiple processors/cores, aka the /MP switch.

Using CEDET-1.0 pre7 with Emacs 23.2

It’s been mentioned in several places that GNU Emacs versions sometime after 23.1.50 do come with an integrated version of CEDET. While I think that’s a superb idea it unfortunately managed to break my setup, which relies on a common set of emacs-lisp files that I hold under version control and distribute across the machines I work on. Those machines have different versions of GNU-based Emacsen (pure GNU, Emacs/W32, Carbon Emacs etc) so I can’t rely on the default CEDET.

About

I am a software engineer and occasional development manager with over 25 years’ experience writing production code, mostly in C++. During that time I’ve worked on anything from Windows device drivers (back when people said you couldn’t write those in C++) to financial trading applications. I have an interest in programming languages in general and am a firm believer that you cannot call yourself an experienced software engineer if you aren’t able to write good code in multiple programming languages.

Welcome back to the new blog, almost the same as the old blog

The move to the other side of the Atlantic from the UK is almost complete, I’m just waiting for my household items - and more importantly, my computer books etc - to turn up. So it’s time to start blogging again in the next few weeks. Due to some server trouble in the UK, combined with the fact that I do like Serendipity as a blogging system but was never 100% happy with it, I’ve switched to using WordPress on a server here in the US.

Solid C++ Code by Example

Sometimes I see code that is perfectly OK according to the definition of the language but which is flawed because it breaks too many established idioms and conventions of the language. I just gave a 90 minute workshop about Solid C++ Code at the ACCU 2010 conference in Oxford.

When discussing solid code it is important to work on “real” problems, not just toy examples and coding katas, because they lack the required complexity to make discussions interesting. So, as preparation, I had developed from scratch an NTLM Authentication Library (pal) that can be used by a client to do NTLM authentication when retrieving a protected webpage on an IIS server. Then I picked out a few files, the encoding and decoding of NTLM messages, and tried to write them as solidly as possible after useful discussions with ACCU friends and some top coders within my company. Then I “doped” the code: I injected impurities and bad stuff into it to produce these handouts.

At the ACCU talk/workshop the audience read through the “doped” code and came up with things that could be improved, while I did online coding (in Emacs, of course) fixing the issues as they popped up. With loads of solid C++ coders in the room, I think we found most of the issues worth caring about, and we ended up with something that can be considered to be solid C++, something that appears to have been developed by somebody who cares about high quality code.

Here are the slides that I used to summarize our findings. Feel free to use these slides for whatever you want. Perhaps you would like to run a similar talk in your development team? Contact me if you want the complete source code for the authentication library, or if you want to discuss ideas for running a similar talk yourself. I plan to publish the code on github soon – so stay tuned.

UPDATE June 2010: The PAL library is now published on github. A much improved slide set is also available on slideshare.

Hard Work Does Not Pay Off

As a programmer, you’ll find that working hard often does not pay off. You might fool yourself and a few colleagues into believing that you are contributing a lot to a project by spending long hours at the office. But the truth is that by working less, you might achieve more – sometimes much more. If you are trying to be focused and “productive” for more than 30 hours a week, you are probably working too hard. You should consider reducing your workload to become more effective and get more done.

This statement may seem counterintuitive and even controversial, but it is a direct consequence of the fact that programming and software development as a whole involve a continuous learning process. As you work on a project, you will understand more of the problem domain and, hopefully, find more effective ways of reaching the goal. To avoid wasted work, you must allow time to observe the effects of what you are doing, reflect on the things that you see, and change your behavior accordingly.

Professional programming is usually not like running hard for a few kilometers, where the goal can be seen at the end of a paved road. Most software projects are more like a long orienteering marathon. In the dark. With only a sketchy map as guidance. If you just set off in one direction, running as fast as you can, you might impress some, but you are not likely to succeed. You need to keep a sustainable pace, and you need to adjust the course when you learn more about where you are and where you are heading.

In addition, you always need to learn more about software development in general and programming techniques in particular. You probably need to read books, go to conferences, communicate with other professionals, experiment with new implementation techniques, and learn about powerful tools that simplify your job. As a professional programmer, you must keep yourself updated in your field of expertise — just as brain surgeons and pilots are expected to keep themselves up to date in their own fields of expertise. You need to spend evenings, weekends, and holidays educating yourself; therefore, you cannot spend your evenings, weekends, and holidays working overtime on your current project. Do you really expect brain surgeons to perform surgery 60 hours a week, or pilots to fly 60 hours a week? Of course not: preparation and education are an essential part of their profession.

Be focused on the project, contribute as much as you can by finding smart solutions, improve your skills, reflect on what you are doing, and adapt your behavior. Avoid embarrassing yourself, and our profession, by behaving like a hamster in a cage spinning the wheel. As a professional programmer, you should know that trying to be focused and “productive” 60 hours a week is not a sensible thing to do. Act like a professional: prepare, effect, observe, reflect, and change.

[This is a reprint of a chapter that I wrote for the newly released O’Reilly book 97 Things Every Programmer Should Know]

Solving a Rubik’s cube in less than 60 seconds

A couple of months ago I bought a Rubik’s cube in a nearby shop, and after reading some guides on the net I learned how to solve it. A few hours later I could solve it in about 4 minutes all by myself. After a few days of practice I was down to about 2 minutes, but it was difficult to see how I could improve much further using the beginner’s method I had started out with. My cube and my dexterity do not allow me to do more than about 2 moves per second, so I realized that I had to reduce the number of moves rather than speed up my fingers. After reading several websites about speedsolving techniques I set myself a tough goal – to become a sub-60 cuber. I was determined to study and practice the art of solving the cube until I could solve a Rubik’s cube in less than 60 seconds on average.

I can now often solve it in less than 60 seconds, but I am not yet consistent enough to call myself a sub-60 cuber. I am very close, though; give me a few more weeks (or months) and I will get there. While playing with the cube on the bus, at work, at home, in the pub – basically everywhere, all the time – I sometimes meet other geeks who want to learn how to solve the cube fast as well, so I thought I should write up a guide about how to get started.

If you do not know how to solve the cube you need to study one of a billion guides that are available on the net. Here is a beginner solution by Leyan Lo that I recommend. Once you can solve the cube without referring to a guide, you can start to read more advanced stuff. The ultimate guide is written by Jessica Fridrich, but it is not easy to read. I found CubeFreak by Shotaro Makisumi to be the most useful site out there.

After studying these sites, as well as hundreds of other sites, and watching plenty of YouTube videos, I have ended up with a simplified Fridrich method with a four-look last layer. Here is what I do to solve it in less than 60 seconds:

1. Solve the extended cross ~5 sec (always a white cross)
2. Solve the first two layers (F2L) ~30 sec (keep cross on bottom)
3. Orient the last layer edges ~5 sec (1 out of 3 algorithms)
4. Orient the last layer corners ~5 sec (1 out of 7 algorithms)
5. Permute the last layer corners ~5 sec (1 out of 2 algorithms)
6. Permute the last layer edges ~5 sec (1 out of 4 algorithms)

My current focus is to improve the F2L step as I am still struggling to get under 30 seconds, but I am confident that with some more practice I will manage to get closer to 20 seconds and then I can label myself a sub-60 cuber.

For further inspiration, here is a video of a sub-120 cuber and a sub-10 cuber.

Happy cubing!

The homebuilt NAS/home server, revisited

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul so I wanted to consolidate all the articles on the same blog. I’ve blogged about building my own NAS/home server before, see here, here, here and here. After a few months, I think it might be time for an interim update.

Building a new home NAS/home server, part IV

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul so I wanted to consolidate all the articles on the same blog. I’ve done some more performance testing, and while I’m not 100% happy with the results, I decided to keep using FreeBSD with zfs on the server for the time being.

Building a new home NAS/home server, part III

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul so I wanted to consolidate all the articles on the same blog. Unfortunately the excitement from seeing OpenSolaris’s disk performance died down pretty quickly when I noticed that putting some decent load on the network interface caused the network card to lock up after a little while.

Reblog: Building a new home NAS/home server, part II

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul of these posts so I wanted to consolidate all the articles on the same blog. The good news is that the hardware has been behaving itself for a while now and everything appears to Just Work. FreeBSD makes things easy for me in this case as I’m very familiar with it, so I only spent a few hours getting everything set up.

Reblog: Building a new home NAS/home server, Part I

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul so I wanted to consolidate all the articles on the same blog. Up to now I’ve mostly been using recycled workstations as my home mail, SVN and storage server. There’s nothing really wrong with that, as most workstations are fast enough, but I’m running into disk space issues again after I started backing up all the important machines onto my server.

The joy of using outdated C++ compiler versions

Thud, thud, thud… The sound of the developer’s head banging on the desk late at night. What happened? Well, I had a requirement to make use of some smart pointers to handle a somewhat complicated resource management issue that was mostly being ignored in the current implementation, mainly on the grounds of it being slightly too complicated to handle successfully using manual pointer management. The result – not entirely unexpected – was a not-so-nice memory leak.
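
The technique itself is not rocket science; the pain was getting it past a compiler that predates TR1 and C++11. As a sketch of the general idea (assuming boost::shared_ptr is available – the resource in the real code was something else entirely), a custom deleter moves the cleanup out of every manual code path:

    // Sketch: tie a C-style resource to a shared_ptr with a custom
    // deleter, so the cleanup runs exactly once, on the last release,
    // no matter which code path lets go of the resource first.
    #include <cstdio>
    #include <stdexcept>
    #include <boost/shared_ptr.hpp>

    typedef boost::shared_ptr<std::FILE> file_ptr;

    file_ptr open_file(const char* name)
    {
        std::FILE* raw = std::fopen(name, "rb");
        if (!raw)
            throw std::runtime_error("cannot open file");
        return file_ptr(raw, std::fclose); // std::fclose is the deleter
    }

Copies of file_ptr share ownership, so the FILE is closed when the last copy goes out of scope – even if an exception unwinds the stack halfway through.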