Resizable UIWindow on iOS

I came across a Todo item in my inbox (I often email myself things to do) whilst tidying it up after Christmas, entitled "Make a resizable UIWindow". I can remember sending this to myself but not what I was reading or watching that prompted it, other than that I thought it must be possible to play around with the default style of the main UIWindow in which an iOS App displays most of its content.

Normally, when running an iOS App the main UIWindow is black and takes up the entire device's screen. It is possible to have more than one UIWindow but the usual case for this is when content is being displayed on a secondary display, e.g. AirPlay. On MacOS it is common to have many windows displayed on the Mac's desktop. It's unusual to think of an iOS device as having a desktop, as when using an App (or multiple Apps with Multi-tasking) the desktop/background is completely obscured.

Nonetheless, UIWindow is a subclass of UIView. This inherently means it has a frame, i.e. a position in its parent space plus a width & height. I wondered if these could be altered - they can!

I wrote a very simple single page App which contains a UIView that by default has the screen dimensions and is coloured blue; on top of this, roughly in the centre, is a red rectangle (a child UIView).

There is a set of 3 gesture recognisers (pinch, rotate & pan) that can be applied to either the Red Rect, the View or the underlying Window. Which one they're applied to is set using the Picker control hosted by the UIView.



The next picture shows all 3 components resized, rotated and moved around. Note how the Status Bar has almost disappeared, due to the black text colour now being on a black background (though the green battery indicator is still present) as no UIWindow is there.



By dragging the Window over the Status Bar it's revealed again.



Next, the Red Rectangle has been completely detached from its parent UIView. At this point touch events no longer seem to be sent to it.



Taking this further, by moving (via the pan gesture) the View off the Window, this too stops receiving input. At this point nothing can be done with the App.



That's not completely true. The App can be rotated. This has some bizarre results, probably not helped by there being no constraints.


The code for the App is available on GitHub, though the code for the ViewController is below.



The only slightly strange things happening here are:


  • The re-assignment of the Gesture Recognizers to the selected UIView
  • How each of the methods invoked by the Gesture Recognizers 'resets' the main parameter modified by the gesture after each invocation (as sketched below).
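
For illustration, here's a minimal sketch of that pattern (not the project's actual code; the handler name pinched is hypothetical). The pinch handler applies the reported scale to whichever view the recognizer is currently attached to, then resets the recognizer's scale so the next invocation delivers just the incremental change:

func pinched(recognizer: UIPinchGestureRecognizer)
{
    guard let view = recognizer.view else
    {
        return
    }

    // Apply the scale accumulated since the last invocation.
    view.transform = CGAffineTransformScale(view.transform, recognizer.scale, recognizer.scale)

    // 'Reset' the main parameter so the next invocation reports a delta.
    recognizer.scale = 1.0
}

As UIWindow is a UIView, the same handler works unchanged when the recognizers are reassigned to the Window.
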
If you run the project, note that as there are multiple Gesture Recognizers but support for multiple simultaneous gesture recognition hasn't been enabled, pinch & rotate in particular must be performed as discrete gestures. Also, rotation can be fiddly to get working, especially if the UI element has been pinched to a small size.

I'm not really aware of any practical application of this. It might be interesting to try to have 2 UIWindows displayed at the same time, though I'm not sure this is possible and it could easily be achieved with 2 UIViews. Perhaps to have 2 sub-apps, each in its own UIWindow, which can be swapped between.

Unit Testing with Mocha, a local instance of dynamoDB & Promises

I'm writing the backend for my current iOS App in JavaScript using node.js and AWS Lambda, along with DynamoDB.

My AWS Lambda code is mostly AWS Lambda agnostic except for the initial handler methods; this makes it fairly testable outside of AWS. However, it depends on DynamoDB. It's quite easy to write Unit Tests that run against a live version of DynamoDB but I wanted to run against a local instance, ideally an in-memory instance, so that it would be quick (not that running against a real instance is that slow) and so that I could have a clean dB each time.

NOTES

  • As these tests are running against a dB it might be more accurate to call them Integration Tests implemented using a Unit Testing framework but I'll refer to them as Unit Tests (UTs).
  • This is running on MacOS
  • I don't Unit Test the actual AWS Lambda function. Instead I export the underlying functions & objects that the AWS Lambda uses and Unit Test these.
  • I'm a JavaScript, Node & AWS n00b, so if you spot something wrong or bad please comment.
  • I don't like the callback pyramid so I use Promises where I can. I would use async and await but the latest version of Node that AWS Lambda supports doesn't support them :-(

In order to run this locally you'll need: Node, Mocha, the aws-sdk & sleep npm packages and a local copy of DynamoDB.

My goal was to have a clean dB for each individual Unit Test. The simplest way to achieve this, I thought, was to create an in-memory instance of dynamoDB and destroy it after each Unit Test.


This uses Node's child_process module to create an instance before each test, store the handle to the process in a local-ish variable and, after the test, just kill it. The important point here is that the '-inMemory' option is used, meaning that when the dB instance is killed and another re-started everything is effectively wiped without having to explicitly delete anything.

The problem I had with this approach is that in addition to creating the dB each time I also needed to create a table. Whilst the documentation for local dynamoDB says that one of the differences between the AWS hosted & the local versions is that CreateTable completes immediately, it seems that whilst the function does indeed complete immediately, the table isn't immediately available. This meant the UT using the table often failed with:

1) awsLambdaToTest The querying an empty db for a user Returns{}:
     ResourceNotFoundException: Cannot do operations on a non-existent table

I'm going to jump ahead and show the completed Unit Test file and explain the various things I had to do in order to get it working. This shows the tests.


before()/after() - creating/destroying dynamoDB


Rather than attempting to create & destroy the dB for each Unit Test I settled on creating it once per Unit Test file. This is handled in the before() & after() functions. Here the local instance of dynamoDB is spawned using the child_process package and a reference to the process retained. This is then used to kill it afterwards. The important point to note here is the use of the sleep package & function.

I found that when I had multiple test files, each with their own before() & after() functions that did the same as these, even though kill purported to have killed the process (I checked the killed flag) it seemed the process hadn't died immediately. This meant that the before() function in the next set of tests would successfully connect to the dying instance of dynamoDB. Then later, when any operation was performed, it would just hang until Mocha timed-out the Unit Test/before handler. I tried various ways to detect that the process was really dead but none worked, so I settled for a sleep.
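
A sketch of these hooks, assuming DynamoDB Local has been downloaded locally (the jar path is an assumption - use wherever yours is installed):

'use strict';

const childProcess = require('child_process');
const sleep = require('sleep');

let dynamoProcess; // handle retained so after() can kill it

before(function () {
    // Spawn a local, in-memory instance of dynamoDB on the default port (8000).
    dynamoProcess = childProcess.spawn('java',
        ['-Djava.library.path=./DynamoDBLocal_lib', '-jar', 'DynamoDBLocal.jar', '-inMemory']);

    // Give the process a moment to start accepting connections.
    sleep.sleep(1);
});

after(function () {
    // '-inMemory' means killing the process wipes everything.
    dynamoProcess.kill();

    // Sleep so the next test file's before() can't connect to the dying instance.
    sleep.sleep(1);
});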

beforeEach()/afterEach() - creating/destroying the table

Where possible I use Promises. Mocha handles promises quite simply for both hooks (the before*/after* functions) and Unit Tests. The key is to make sure to return the final promise (or pass in the done parameter & call it - though I don't use this mechanism).

Looking at the beforeEach() function, createTable() is called, which returns a promise (from the AWS.Request type that aws-sdk.DynamoDB.createTable() returns). This promise is then chained to by the waitFor method, which polls the dB for the state of the table. The returned promise will not complete until the table has been created and waitFor has completed.

I am not convinced that waitFor is needed. According to the AWS vs Local DynamoDB guide, for local instances tables are created immediately. I added this check as occasionally I was getting resource errors like the one earlier. However, I think the cause of that was that I forgot the return statement before the call to createTable(), meaning no Promise was returned to Mocha so it thought the beforeEach() function had completed. I have since removed this in my real Unit Tests and they all seem to work.
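
A sketch of these hooks, under the assumption of a hypothetical table called 'TestTable' with a single string hash key 'id' (the region is arbitrary for a local instance):

const AWS = require('aws-sdk');

const dynamoDB = new AWS.DynamoDB({ region: 'eu-west-1', endpoint: 'http://localhost:8000' });

beforeEach(function () {
    // NB: the return is essential - without it Mocha assumes the hook has completed.
    return dynamoDB.createTable({
        TableName: 'TestTable',
        AttributeDefinitions: [{ AttributeName: 'id', AttributeType: 'S' }],
        KeySchema: [{ AttributeName: 'id', KeyType: 'HASH' }],
        ProvisionedThroughput: { ReadCapacityUnits: 1, WriteCapacityUnits: 1 }
    }).promise().then(function () {
        // Possibly unnecessary for a local dB - wait until the table is usable.
        return dynamoDB.waitFor('tableExists', { TableName: 'TestTable' }).promise();
    });
});

afterEach(function () {
    // Drop the table so each test starts with a clean dB.
    return dynamoDB.deleteTable({ TableName: 'TestTable' }).promise();
});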

Unit Tests

That's really it. The hard part wasn't writing the UTs but getting a local instance of DynamoDB running with the table used by the functions under test in the correct state. Again, due to the functions being tested usually returning Promises themselves, it is necessary to return a Promise. The assertion(s) are made synchronously in a then continuation chained to the Promise returned from the function being tested, and the Promise from the whole chain is returned.

If an assertion fails then, even though it's within a continuation, Mocha detects this and the test fails. If the function under test throws then Mocha also catches this and the test fails.
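
For example, a test shaped like this (queryUser is a hypothetical function under test that returns a Promise):

const assert = require('assert');

describe('awsLambdaToTest', function () {
    it('querying an empty db for a user returns {}', function () {
        // Return the whole chain so Mocha waits on it and sees any failure.
        return queryUser('no-such-user').then(function (result) {
            assert.deepEqual(result, {});
        });
    });
});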

And finally

timeout

There's also the this.timeout(5000); at the top of the tests for this file. By default Mocha has a 2 second timeout for all tests and hooks. With the 1 second sleep used when starting the dB it's possible for a hook to time out, causing everything else to fail. The 5 second timeout protects against this.

localhost

When created, the local instance uses the default port of 8000 and is accessible via the localhost hostname alias. This is used to access the dB when creating the reference to it in before(). A local aws.config is also constructed in each of the functions that access the dB in the actual code under test below.
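
That is, something along these lines in each dB-accessing function (the region is arbitrary for a local instance):

const AWS = require('aws-sdk');

// Point the SDK at the local instance rather than the AWS hosted service.
const localConfig = { region: 'eu-west-1', endpoint: 'http://localhost:8000' };
const dB = new AWS.DynamoDB(localConfig);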

The actual code being tested


Debugging AWS Lambda functions locally using VS Code and lambda-local

I've just started using AWS Lambda with node.js. I was able to develop these locally using the lambda-local npm package, e.g. with node.js installed (via brew) and lambda-local installed (using npm), the following "hello, world" example is run as follows:

hellolambda.js

'use strict';

console.log('Loading function');

exports.handler = (event, context, callback) => {
    console.log('Received event:', JSON.stringify(event, null, 2));
    console.log('value1 =', event.key1);
    console.log('value2 =', event.key2);
    console.log('value3 =', event.key3);
    callback(null, event.key1);  // Echo back the first key value
    //callback('Something went wrong');

};

defaultevent.js

module.exports =
{
    "key1": "hello",
    "key2": "lambda",
    "key3": "node"
};

/usr/local/bin/lambda-local -l hellolambda.js -e defaultevent.js

Loading function
info: Logs
info: ------
info: START RequestId: d683128b-ac14-93c3-b2c1-5541f3bb3fda
Received event: {
  "key1": "hello",
  "key2": "lambda",
  "key3": "node"
}
value1 = hello
value2 = lambda
value3 = node
info: END
info: Message
info: ------
info: hello
info: -----

info: lambda-local successfully complete.

Rather than use bash and vi (I'm running on MacOS) I wanted to use some sort of IDE. VS Code seemed ideal as it's free and it also has builtin node.js debugging. Using it for editing is very simple: just open the folder containing the source, in this case ~/tmp/hellolambda.

However, switching to the debugging section and creating the default launch configuration, where VS Code will launch node with the specified file as the program, doesn't do much good.



This is because when running a lambda locally using lambda-local, the program that node needs to run is the lambda-local environment, which in turn launches the lambda function.

This can be simply configured by specifying the lambda-local script as the program (it's a node script) and then passing the lambda script and the event data as arguments using the args key (which isn't included when using the VS Code option to add a configuration). The original example above can be launched using the following configuration.
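
A launch.json along these lines should do it (a sketch: the configuration name is arbitrary and the program path matches the global npm install used above):

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Debug hellolambda",
            "type": "node",
            "request": "launch",
            "program": "/usr/local/bin/lambda-local",
            "args": [
                "-l", "${workspaceRoot}/hellolambda.js",
                "-e", "${workspaceRoot}/defaultevent.js"
            ],
            "cwd": "${workspaceRoot}"
        }
    ]
}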

In the output window at the bottom the results of executing the lambda are shown. Breakpoints can be set and hit.

It's important that each command line argument, i.e. the option and its value, is specified separately. Even though '-l' and its value are a pair, they are separate command line arguments (2 in total), whereas "-l ${workspaceRoot}/hellolambda.js" would be a single argument.

NOTE: The lambdas I'm writing also use AWS DynamoDB. Using a local instance of DynamoDB, along with installing the AWS SDK via npm, I've been able to successfully invoke local lambdas that use the local instance of the dB.

Swift’s defer statement is funkier than I thought

Swift 2.0 introduced the defer keyword. I've used this a little but only in a simple way, basically when I wanted to make sure some code would be executed regardless of where control left the function, e.g.

private func resetAfterError() throws
{
  defer
  {
    selectedIndex = 0
    isError = false
  }

  if /* condition */
  {
    // Do stuff
    return
  }

  if /* other condition */
  {
    // Do other stuff
    return
  }

  // Do default stuff
}

In my usage to date there has always been some code that should always be executed prior to the function's exit, and additionally only one such piece of code. Therefore I've always put the defer statement at the top of the function so that when reading it's pretty obvious.

I was aware that if there were multiple defer statements then they'd be executed in reverse order, but what I'd not given any thought to before was what happens if the defer statement isn't reached. In fact I'd just assumed it was more of a declaration that this code should always be executed on function exit, and as I put mine right at the start of the function this was effectively the case.

However, for some functions (probably most) you don't want this. You only want the deferred code executing if something else has happened. This is shown simply in the example from The Swift Programming Language book:

func processFile(filename: String) throws {
    if exists(filename) {
        let file = open(filename)
        defer {
            close(file)
        }
        while let line = try file.readline() {
            // Work with the file.
        }
        // close(file) is called here, at the end of the scope.
    }
}

In this, if the file is not opened then the deferred code should not be executed. Another very important usage is:

extension NSLock
{
  func synchronized<T>(@noescape closure: () throws -> T) rethrows -> T
  {
    self.lock()

    defer
    {
      self.unlock()
    }

    return try closure()
  }
}

If the lock is never obtained then it should never be unlocked. In this case that shouldn't happen, as self.lock() will not return until it obtains the lock, but if that line were replaced with a call that could fail to acquire the lock, e.g. self.tryLock(), then it would matter that the defer is only reached once the lock is held.
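
For instance, in a hypothetical variant (not from the original post) using tryLock(), the defer is only reached, and hence the unlock only scheduled, once the lock has actually been acquired:

extension NSLock
{
  func synchronizedIfAvailable<T>(@noescape closure: () throws -> T) rethrows -> T?
  {
    // If the lock can't be acquired, bail out before the defer is encountered,
    // so unlock() will never run for a lock this code doesn't hold.
    guard self.tryLock() else
    {
      return nil
    }

    defer
    {
      self.unlock()
    }

    return try closure()
  }
}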

This is how defer works: if the defer statement is never reached then the deferred code block will never be executed. This includes branches (if-statements etc.), as the following example demonstrates:

enum WhenToReturn
{
  case After0
  case After1
  case After2
}

func deferTest(whenToReturn: WhenToReturn, shouldBranch: Bool)
{
  print("Defer Test - whenToReturn:\(whenToReturn), shouldBranch:\(shouldBranch)")
  defer
  {
    print("defer 0")
  }
  print("0")
  if whenToReturn == WhenToReturn.After0
  {
    return
  }
  defer
  {
    print("defer 1")
  }
  print("1")
  if whenToReturn == WhenToReturn.After1
  {
    return
  }
  if shouldBranch
  {
    defer
    {
      print("shouldBranch")
    }
  }
  defer
  {
    print("defer 2")
  }

  print("3")
}

deferTest(WhenToReturn.After0, shouldBranch: false)
deferTest(WhenToReturn.After1, shouldBranch: true)
deferTest(WhenToReturn.After2, shouldBranch: false)
deferTest(WhenToReturn.After2, shouldBranch: true)

Results:

Defer Test - whenToReturn:After0, shouldBranch:false
0
defer 0

Defer Test - whenToReturn:After1, shouldBranch:true
0
1
defer 1
defer 0

Defer Test - whenToReturn:After2, shouldBranch:false
0
1
3
defer 2
defer 1
defer 0

Defer Test - whenToReturn:After2, shouldBranch:true
0
1
shouldBranch
3
defer 2
defer 1
defer 0

Program ended with exit code: 0


This shows that returning early and/or not branching results in defer statements not being encountered, hence the deferred code is not executed. This is no different to, say, a finally-block in C#. The reason for my initial confusion is that there is no additional context for a defer block as there is for a finally block, i.e. the presence of the try, e.g.

try
{
  // Try some stuff 
}
finally
{
  // Always do something having tried something regardless of whether it worked or not
}

Whereas the only and actual context of the defer block is its position.

The Perils of debugging with return statements in languages without semi-colon statement terminators, i.e. Swift

This is a pretty obvious post but perhaps writing it will stop me falling prey to this issue.

When I'm debugging and I know that some code executed in a function is not to blame but is noisy in terms of what it causes to happen etc., I'll often just prevent it from being executed in order to simplify the system, e.g.

func foo()
{
    /*
    f()
    g()
    h()
    // Do lots of other things...
    */
}

Sometimes, to be quicker, I just put in an early return statement instead, i.e.

func foo()
{
    return
    f()
    g()
    h()
    // Do lots of other things...
}

I must also go temporarily warning blind and ignore the following (Swift warns along the lines of "expression following 'return' is treated as an argument of the 'return'"):


The effect of this is that, rather than preventing everything after the return statement from executing as per the warning, the return statement takes f() as its argument and calls it, returning its value (though not executing the remaining functions). In this case, as foo() (and f(), though it's not shown) is void, that value is nothing. In fact if the return type of foo() didn't match that of f() this wouldn't compile.

The fix is easy. Just put a semi-colon after the return.

func foo()
{
    return;
    f()
    g()
    h()
    // Do lots of other things...
}

I use this 'technique' when I'm debugging C++, where it works fine for me. This is slightly interesting as C++ has the same semantics; the following C++ code has the same problem as the Swift, in that it also invokes f() as its return value.

void foo()
{
    return
    f();
    g();
    h();
}

I guess the reason it's not an issue with C++ (as much, or at all) is that my muscle memory or something else always wants to terminate lines with semi-colons, so the natural way to write the return is 'return;', whereas in Swift, without the semi-colon requirement, it's natural not to, hence this issue becomes slightly more prevalent.

OAuth authentication on tvOS

I've recently published an Apple TV (tvOS) App to view photos stored on Microsoft OneDrive.



Implementing this on tvOS rather than iOS presented one unique challenge. The OneDrive REST API requires OAuth2 authentication in order to obtain an OAuth token which is then used for all the other calls.

Normally (well based on my limited experience) OAuth within Apps is handled by using a UIWebView along with delegate code that performs the OAuth handshake (image linked from IBM).



tvOS does not contain any form of web view, i.e. no UIWebView and no WKWebView (not that it would be that much use due to the lack of hooks). As the actual authentication is performed within the UIWebView by the authenticating 3rd party (Microsoft in this case requiring the user logs in with their Microsoft Account credentials) there's not a lot that can be done without it.

However, both iOS and tvOS are generally logged into using an Apple Id, which is also used to log in to iCloud, and generally an Apple TV owned by the same person who owns another iOS device uses the same Apple Id. Therefore, what I did was to write a very simple iOS App that:
  1. Performs the OAuth Authentication handshake
  2. Stores the resulting OAuth token in the iCloud KeyValue Store






When written this is usually synchronized to iCloud very quickly. On the other side the tvOS app reads the iCloud KeyValue Store checking to see if the OAuth token exists.


If it does then it can continue as per any other App that has successfully performed the OAuth handshake.


I believe that iCloud Storage and the process of writing to and reading from iCloud is secure. This is important as following the handshake the OAuth token acts effectively as a password. Each token obtained from Microsoft is valid for one hour so after that the user needs to perform the authentication from the iOS device again.

It is possible to request an OAuth refresh token, which allows a client to update an expired token as long as access to OneDrive for the App has not been revoked. However, I prefer to err on the side of caution at the moment, and I only request read-only access to OneDrive as well.

For this to work the same user (Apple Id) needs to be logged into both the Apple TV and the iOS device, and additionally be signed into iCloud on those devices. From a programmatic perspective both Apps need the iCloud capability enabled, but only Key-value storage.

However, you'll notice that CloudKit has also been enabled. This is so that CKContainer methods can be called, in particular (well, only):

CKContainer.defaultContainer().accountStatusWithCompletionHandler

This is used to establish whether the user is currently signed in to iCloud, and to detect any changes (signing in & out, potentially as a different user) whilst the App is running.
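
Something along these lines (a minimal sketch using the Swift 2 era API):

import CloudKit

CKContainer.defaultContainer().accountStatusWithCompletionHandler
{
    (status, error) in

    // .Available means the user is signed in to iCloud on this device.
    if status == CKAccountStatus.Available
    {
        // Safe to read/write the iCloud Key-Value store.
    }
}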

Enabling this automatically creates an Entitlements file (named <AppName>.entitlements) and within it creates a key confusingly called 'iCloud Key-Value Store' with the default value of '$(TeamIdentifierPrefix)$(CFBundleIdentifier)' - this is not the key used to access the values you store BTW, but just the iCloud KV configuration. This will happen for both the iOS and tvOS Apps.

NOTE: The two collapsed keys are as a result of enabling CloudKit.


For both Apps to have access to the same iCloud Key-Value storage the results of expanding the '$(TeamIdentifierPrefix)$(CFBundleIdentifier)' macros need to be the same. For my App I've created a single project that has both an iOS and a tvOS App, so their CFBundleIdentifier is the same. The TeamIdentifierPrefix is taken from your Developer Apple Id.

The first part of the value has to be $(TeamIdentifierPrefix) as, in order to make the KV Storage secure, this value forms part of the signing process. If you replaced the whole value with, say, 'BOB' then it won't build properly.


As such it's possible for all your Apps (published with the same Apple Id) to share iCloud Key-Value Storage contents.

Reading & writing is very simple. I just use a single Key-Value pair to read, write (& where necessary delete) the token. This is accomplished using:

NSUbiquitousKeyValueStore.defaultStore().setDictionary(stuff.asDict(), forKey: "mykey")

to write and

let stuff = NSUbiquitousKeyValueStore.defaultStore().dictionaryRepresentation["mykey"] as? [String:AnyObject]

to read.

This example reads & writes a dictionary, as I needed to store a set of KV Pairs (a dictionary) as the value of a single KV-pair, but fundamental data types can be stored directly too.

When starting the App, according to the docs, it is important to call the synchronize method to initiate timely iCloud synchronization.
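
For example (a sketch; observing the change notification is the usual way for the tvOS App to spot the token arriving from the iOS device):

let store = NSUbiquitousKeyValueStore.defaultStore()

// Observe changes pushed from iCloud, e.g. the iOS App writing a new token.
// Retain 'observer' if you need to remove it later.
let observer = NSNotificationCenter.defaultCenter().addObserverForName(
    NSUbiquitousKeyValueStoreDidChangeExternallyNotification,
    object: store,
    queue: NSOperationQueue.mainQueue())
{
    _ in
    // Re-read "mykey" to see if the OAuth token has appeared or changed.
}

// Initiate timely synchronization with iCloud.
store.synchronize()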

When the tvOS based Apple TV was first released there were various articles about how to enable users to log in to their accounts for certain apps. These often involved similar configurations requiring the user to use an iOS device to input a string of numbers presented by the tvOS App. However, those solutions were usually for Apps that managed their own accounts. This solution is similar in that it solves the cumbersome entry problem, but it also enables the use of browser (UIWebView) based OAuth2 on a device that doesn't directly support it.

Git remote repos with OneDrive

I have various public git repositories on GitHub but I like to keep some source (usually my active App Store apps) private. Whilst it'd be nice to use GitHub private repositories, given my Apps are for fun, don't really make anything and don't require collaboration, the pricing is prohibitive.

However, I really like the idea of having an offsite copy of my repositories. As it happens I have an Office 365 Subscription which comes with 1TB of OneDrive space, and I use OneDrive on OSX to sync a bunch of folders. I could use a OneDrive synchronised folder for my work directory but I don't want OneDrive synchronising all the temporary build files etc. every time I build.

It turns out the ideal solution is to create a remote clone in a OneDrive synchronised folder. In fact I have a dedicated OneDrive folder called 'src' that contains clones of all of my git repos. Then, each time I commit to the local git repository and push, OneDrive performs the synchronisation. If it happens that the code I'm working on is public then adding a public GitHub remote too is a doddle.

Having created a local Git repo (though usually Xcode does this when starting a new project) it's easy to create the OneDrive clone:

  1. cd /Users/Pete/OneDrive/src
  2. git clone --bare file:////Users/Pete/Projects/<ProjectName>/.git <ProjectName>.git
As the remote clone is really just a backup-cum-master that will never be a working repository, I create a bare clone and suffix the directory name with '.git'. I think this is a fairly common convention.

At this point the source repository is cloned but the source is now a remote of the new repository, rather than the other way round. This is easy to fix:

cd-ing into <ProjectName>.git (my example Project is Photone) and running git remote -v gives:

~/OneDrive/src/Photone.git[23]git remote -v
origin file:////Users/Pete/Projects/PhotoneViewer/.git (fetch)
origin file:////Users/Pete/Projects/PhotoneViewer/.git (push)

then running:
  1. git remote remove origin
removes the relationship between the new remote and the original source. To implement the desired reverse relationship:


  1. cd /Users/Pete/Projects/<ProjectName>
  2. git remote add OneDrive file:////Users/Pete/OneDrive/src/<ProjectName>.git
I name my OneDrive remote repos 'OneDrive'. This helps if I have multiple remotes.

git remote (for my current project) now gives:


~/Projects/PhotoneViewer[49]git remote -v
OneDrive file://Users/Pete/OneDrive/src/Photone.git (fetch)
OneDrive file://Users/Pete/OneDrive/src/Photone.git (push)


From this point onwards I use SourceTree. However, if you use the command line then a couple of extra steps are required otherwise git complains. 

Firstly, when pushing, if you just want to do:

git push OneDrive

then you need to tell git that the OneDrive remote's master is the upstream branch. This is done by:

git push --set-upstream OneDrive master

Secondly, unless you've already set the push.default setting or just use 'git push --all' then you'll need to decide which option you want. The help from git describes these well:

warning: push.default is unset; its implicit value has changed in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the traditional behavior, use:

  git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

  git config --global push.default simple

When push.default is set to 'matching', git will push local branches
to the remote branches that already exist with the same name.

Since Git 2.0, Git defaults to the more conservative 'simple'
behavior, which only pushes the current branch to the corresponding
remote branch that 'git pull' uses to update the current branch.

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

If you're using the remote clone as a backup then perhaps the original, i.e. 'matching', behaviour is desirable. Whilst I use OneDrive, this configuration should work for any other file synchronisation service, e.g. iCloud or DropBox, or a standard mounted File System, e.g. Samba, NFS etc.

Swift Enums and Protocols

I'm trying to clear out my inbox before Christmas and I noticed an email to myself entitled 'Enum question. Add protocol to enum?'.

The short answer is yes. For the longer one, take the following protocol:

protocol Foo
{
    func f() -> Void
}

A simple enum can be created that implements it:

enum Test: Foo
{
    case One

    func f()
    {
        print("Hello")
    }
}

So the following short program:

let baz = Test.One

baz.f()

generates the output:

Hello
Program ended with exit code: 0

The protocol can also be implemented by extension so the following is equivalent and produces the same results:

extension Test: Foo
{
    func f()
    {
        print("Hello")
    }
}

It's not surprising that Swift's enum types support protocols. Off the top of my head I can't think of any clever reasons why you'd have enums implement a protocol, but given an enum is a first class type in Swift it makes perfect sense. In fact it's documented on page 424 of The Swift Programming Language.

Things I learnt from Swift Summit

I attended the first Swift Summit on the 21st of March; there were two days but I only went to the first. Here are some of the facts I learned:
  • Int is not a fundamental type as you would think of it in most languages. 
    • Instead it's a struct that conforms to SignedIntegerType, with the actual value being an instance of the really fundamental type Builtin.Word
    • Being a struct means it's a proper object, hence it has methods, the ability to be extended (see later) and can (& does) implement protocols
  • Int can be extended
    • As it's an object (see above) it's possible to write extension methods.
  • I have a far better understanding of what @autoclosure does now
    • It basically captures the expression as a closure rather than evaluating it immediately (see the sketch after this list)
  • nil is never actually treated as nil when used
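
A quick sketch of that capture behaviour (Swift 2 syntax; unless is a hypothetical function, not anything from the talks):

// 'condition' is not evaluated at the call site; @autoclosure wraps the
// expression in a closure that is only invoked where condition() is called.
func unless(@autoclosure condition: () -> Bool, action: () -> Void)
{
    if !condition()
    {
        action()
    }
}

let x = 10
unless(x > 5)
{
    print("x is small") // Never printed: condition() evaluates x > 5 to true
}
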
And now for some opinion about the day.
  • People seem to be struggling with error handling
    • A couple of talks presented code to avoid pyramids of doom in regard to making a call, checking for success/failure and if successful continuing.
  • People think they're doing Functional Programming
    • Just because Swift supports a Functional Programming style and some people use elements of FP, they assume Swift is mainly a Functional language and that they are doing Functional Programming.
    • Passing functions around as first-class objects does not make your program functional
  • Some people now hate Objective-C

A slight enhancement on Developing tvOS Apps with Swift

Apple announced tvOS yesterday. Downloading Xcode 7.1 Beta gets you the SDK and simulator for tvOS apps. The official documentation starts to run through how to create a basic app but it doesn't mention where to place and load the JS from, nor the TVML.

Fortunately, and very quickly, Jameson Quave put together a tutorial.

I followed the Apple docs but checked Jameson's tutorial to verify the missing declaration of

var appController: TVApplicationController?

from AppDelegate and also for the JS and then TVML loading. I don't understand, and the docs don't seem to say, where the JS & TVML should be loaded from. They seem to suggest it should be remote, i.e. not part of the App Bundle, but I don't know why. Anyhow, I thought I'd see if I could.

The following assumes you've got to the end of Jameson's tutorial.

Loading the JS file; that then loads the TVML is easy. Add main.js to your application and change the lines within application:didFinishLaunchingWithOptions in AppDelegate.swift from:

let jsFilePath = NSURL(string: "http://localhost:8000/main.js")
let javascriptURL = jsFilePath!
appControllerContext.javaScriptApplicationURL = javascriptURL

To

guard let jsUrl = NSBundle.mainBundle().URLForResource("main", withExtension: "js") else
{
    return false
}

// The bundle URL still needs assigning to the context
appControllerContext.javaScriptApplicationURL = jsUrl

This just loads the Javascript file (main.js) from the bundle instead. It's not a great improvement but it removes one dependency on the local web server.

I then tried to add hello.tvml to the bundle and modify main.js to create the Document from the TVML fetched via XMLHttpRequest. Unfortunately I couldn't create the Document in the JS. It seems that the normally available document object (I've not done JS in a long time so what do I know) isn't available and/or additional documents and elements can't be created.

An attempt to create one, i.e.

var otherDoc = Document()

gives

2015-09-10 21:53:50.213 tv1[55699:1483712] ITML <Error>: Document is not a function. (In 'Document()', 'Document' is an instance of IKDOMDocumentConstructor) - file:///Users/pete/Library/Developer/CoreSimulator/Devices/C2E7E5BD-1823-48BF-89E9-D3A499EE778A/data/Containers/Bundle/Application/F9C514E1-1A95-46A8-83D1-1BC96BC9A220/tv1.app/main.js - line:18:25

The objects mentioned in the TVJS documentation don't seem to be able to create one either.

Anyway, hopefully another small step. Full source on github.

I could be being dumb here: another look at the docs & samples suggests that writing apps via JS is just one way and that a more iOS-like app can be written. Perhaps this is similar to Windows Metro which had both a JS and .NET (C#) version of WinRT, & C++ for completeness.

Release news: Hungry Bunny & KeyChainItemCRUDKit

Not a technical post today, just a bit of news on the things I've been working on.

Firstly, my latest SpriteKit game written in Swift is now available on the App Store. It's called Hungry Bunny and is effectively an endless runner/skill test. It's free with ads.



Secondly, now that Hungry Bunny is complete I've returned to working on another project. This is nowhere near complete but I got to the point where I needed to securely store an OAuth2 token on iOS. I came across the Keychain API. However, the API for this is long winded and I only wanted to use it in a simple manner. Therefore I created a Swift framework to provide CRUD access to it along with a higher level interface where any type conforming to NSCoding can be saved, loaded & deleted.

This is available on github and also as my first ever CocoaPod. All the docs are in the README plus there's an example iOS program (Single View App) and the Unit Tests.

Drawing into bitmaps and saving as a PNG in Swift on OS X

Not an in depth post today. For a small iOS Swift/SpriteKit game I'm writing for fun I wanted a very basic grass sprite that could be scrolled to create a parallax effect. This amounts to an 800x400 bitmap containing sequential isosceles triangles with 40 pixel bases and random heights (of up to 400 pixels), coloured lawn green.

Initially I was creating an SKShapeNode containing the triangles described above, but when scrolling the redrawing of these hurt performance, especially when running on the iOS Simulator, hence the desire to use a sprite.

I had a go at creating these with Photoshop. Whilst switching to a sprite improved performance the look of the triangles drawn by hand wasn't as good as the randomly generated ones. Therefore I thought I'd generate the sprite.

It wasn't really practical to do this on iOS as the file was needed in Xcode so I thought I'd try experimenting with a command line OS X (Cocoa) program in Swift. A GUI would possibly be nice to preview the results (and re-generate if needed) and to select the save-to file location but this solution sufficed.

I'd not done any non-iOS Swift development and had never generated PNGs so a fair amount of Googling and StackOverflow-ing was needed. Whilst the results of these searches were very helpful I didn't come across anything showing a complete program to create a bitmap, draw into it and then save it, so the finished program is presented below. It's also available as a gist.


import Cocoa

// Save the bitmap as a PNG to the given file path
private func saveAsPNGWithName(fileName: String, bitMap: NSBitmapImageRep) -> Bool
{
    let props: [NSObject: AnyObject] = [:]
    let imageData = bitMap.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: props)

    return imageData!.writeToFile(fileName, atomically: false)
}

// Draw the sequential random-height triangles (the grass) into the bitmap
private func drawGrassIntoBitmap(bitmap: NSBitmapImageRep)
{
    let ctx = NSGraphicsContext(bitmapImageRep: bitmap)

    NSGraphicsContext.setCurrentContext(ctx)

    // Lawn green
    NSColor(red: 124 / 255, green: 252 / 255, blue: 0, alpha: 1.0).set()

    let path = NSBezierPath()

    path.moveToPoint(NSPoint(x: 0, y: 0))

    // One triangle per 40 pixels, with a random height of up to 400 pixels
    for i in stride(from: 0, through: SIZE.width, by: 40)
    {
        path.lineToPoint(NSPoint(x: CGFloat(i + 20), y: CGFloat(arc4random_uniform(400))))
        path.lineToPoint(NSPoint(x: i + 40, y: 0))
    }

    path.stroke()
    path.fill()
}

let SIZE = CGSize(width: 800, height: 400)

if Process.arguments.count != 2
{
    println("usage: grass <file>")
    exit(1)
}

let grass = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(SIZE.width), pixelsHigh: Int(SIZE.height), bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false, colorSpaceName: NSDeviceRGBColorSpace, bytesPerRow: 0, bitsPerPixel: 0)

drawGrassIntoBitmap(grass!)
saveAsPNGWithName(Process.arguments[1], grass!)
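
I ran this as a command line script; something like the following should work (the exact invocation and file names here are my assumption):

xcrun swift grass.swift grass.png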

When a Swift immutable collection isn’t or is at least immutable-ish

Take the following code:

struct Foo
{
    var value: Int
}

let foo = [Foo(value: 1), Foo(value: 2)]

By using 'let foo' an immutable array has been created: no element of this array can be replaced, nor can an element's contents be modified. Both statements below therefore result in compilation errors.

foo[0] = Foo(value: 777) // error!
foo[0].value = 8 // error!

If foo were declared var then both of the above statements would work.

This changes slightly when using reference types:

class Bar
{
    var value: Int = 0

    init(value: Int)
    {
        self.value = value
    }
}

let bar = [Bar(value: 1), Bar(value: 2)]

bar[0] = Bar(value: 777) // error!
bar[0].value = 8 // allowed

The first case of trying to replace an element fails as before but modifying the instance referenced by that element is permitted. This is because the immutable collection holds a reference to the instance of Bar. Modifying the instance does not change the reference so the collection remains unchanged.

If you're making an effort to make your code as immutable (const) as possible (I'm from a C++ background so I endeavour to make everything as const as I can) then this is a gotcha to look out for.

The only way to make Bar properly const is to provide a private setter for the member, assuming it's defined in its own source file (so this example doesn't work in a Playground), i.e.

class Baz
{
    private(set) var value: Int = 0

    init(value: Int)
    {
        self.value = value
    }

    func f()
    {
        value = 88
    }
}

Which now prevents value being assigned to.

let baz = [Baz(value: 1), Baz(value: 2)]
baz[0].value = 8 // error!


Even if all the properties have private setters there might be methods on the reference type that modify the instance, such as f() above. Just having an immutable collection of reference types is not sufficient to stop these mutating the instances referred to. Invoking f() on an element of the above collection will change value to 88, i.e.

 println("\(baz[0].value)") // Gives 1  
baz[0].f()
println("\(baz[0].value)") // Gives 88


This differs from C++ where, if a collection is const or is returned as a const reference (from a method), then not only is the container immutable but so are its contents. It would be good if Swift were to gain a mechanism to mark a collection and its contents completely immutable, other than having to use a value type.
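
To illustrate that value-type escape hatch, here's a minimal sketch of my own: converting the class to a struct means even mutating methods are rejected on the elements of a let collection.

struct Qux
{
    private(set) var value: Int

    init(value: Int)
    {
        self.value = value
    }

    mutating func f()
    {
        value = 88
    }
}

let quxs = [Qux(value: 1), Qux(value: 2)]
quxs[0].f() // error: cannot call a mutating method on an element of an immutable array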

Anyway, beware of making a collection of reference types immutable and assuming that both the collection and its contents cannot be modified!

Book review: Swift Essentials

Disclaimer


I was asked to review Swift Essentials by Alex Blewitt from Packt Publishing (I'd previously reviewed another book of theirs for the ACCU) and ideally publicise the review. In return I was given a free copy of the eBook and offered another free eBook of my choice upon publication of this review. However, no pressure was exerted and nor was any editorial control requested or given.

In short


It's a good book that provides a more than basic introduction to iOS development using Swift. My major criticism is the title; it does not do the book justice. The sub-title 'Get up and running lightning fast with this practical guide to building applications with Swift' is a far better description. This book is ideal if you're already able to program but would like to learn about iOS development with Swift rather than with Objective-C. It's more about learning iOS app development with Swift than about Swift itself.

In long


The offer to review this book came at a good time for me. I'd read quite a lot of the Swift Programming Language book, read many blog articles, attended most of the Swift London meet ups (which incidentally get a mention in the resources section of the book; so I may have inadvertently met the author) and published my first iOS App written entirely in Swift, a game called Boingy Splat. Therefore I was interested to see what someone else made of the language itself and of working with it to create iOS apps, and importantly to see parts and techniques I'd missed.

The book takes you from doing mainly pure Swift using the REPL on the command line, through introducing and using Playgrounds, to implementing a full app, albeit one that doesn't persist any data. Smaller apps demonstrating features that will be used in the final app are created along the way; these are small and snappy and quickly give you something concrete. There is a section on creating Playground packages and adding documentation that seems far from essential; interesting though.

Some of the first chapter introducing the fundamental types and constructs can get a bit dull if you've been there with other languages but it's information that needs presenting. Importantly as soon as possible the Swift idioms (iteration, containers, enums, let versus var etc.) are presented. This is the information that existing developers need.

The Playgrounds chapter introduces a little Cocoa Touch by way of displaying images. Other than the choice of the author's face as the content (ironic rather than egomaniacal I hope :-)) it's good to be already manipulating the OS with Swift rather than just 'doing' Swift.

From chapter 3 onwards (there are 7 in total) the focus switches to creating apps. The most important concepts (that will allow you to search for and understand more detailed documentation and blogs) are introduced, i.e. Table Views, Master-Detail, Storyboards (and creating purely programmatic views), Custom Views, MVC architecture and accessing remote data. No time is spent on app signing etc. which is an understandable omission but a mention and a reference could prove useful for those very new to iOS development.

The later chapters tend to introduce a new iOS feature and a new Swift feature, making use of one with the other. Even Unit Testing gets a mention when introducing classes, with mocking used as an example of sub-classing. Likewise full and practical use of Swift's enums and extension methods is made in the remote data chapter.

I was pleased to see Storyboards getting central billing for creating UIs as this is now more or less de rigueur and was always my issue with the Big Nerd Ranch book. The use of Segues is also covered along with how to create the UI using Interface Builder. In this chapter and the rest of the book a fair amount of time is spent showing how to pass data from one View to another. It's a shame the Singleton anti-pattern is presented as a way to share data in the final program, especially given the coupling this causes, which makes Unit Testing as advocated earlier in the book hard to accomplish.

As this is a new book it was good to see it tackle Auto Layout and not just from the Interface Builder perspective but go so far to explain and use the Visual Format Language. It's quite a feat to present this succinctly and not just cop out with a reference to the official documentation. It's this information that takes a somewhat scary area of iOS development and removes the fear.

The remote data chapter is good in that it talks about synchronous versus asynchronous calls and makes use of Swift Closures for the callbacks. It covers useful details such as having to update the UI on the main thread versus performing work on others and shows the mechanisms to achieve this. A small library of useful methods that can be used outside of the book is created in the process. Again, this isn't super detailed information but it sets new iOS developers on the right course, allowing common areas of pain and confusion to be avoided.

Other than the title I have only a couple of criticisms. Firstly, I kept finding references to things either just before they were presented or to unhighlighted aspects of the code. This suggested that, in order to keep the size of the book down, various pages had been chopped but less than perfect re-editing had occurred. I was able to reconcile all of these but it meant spending time flicking back through pages. Secondly, in a similar vein, whilst the book contains screenshots there are sections where some intermediate screenshots, or those of the final effect, would aid the description. It's arguable as to whether the persistence of data is an 'essential' but I think it would have been worth taking the page count to 250 (from the current 228) to cover basic persistence; 2nd edition?

Formatting


I read the eBook on iPad mini 2 using the Kindle reader app (mobi format); I haven't seen the paper version.

For technical eBooks, especially those read on anything smaller than a full size tablet, the formatting, especially of the code, is very important. The text is laid out well and most of the code is split up so that text and code highlight each other, which works well. There were a few times when I needed to switch to landscape mode to make the code more readable.

Towards the end, where a full program is presented in larger sections, this breaks down as there is little text on the same page. Different colours would have been useful for the comments and, given the use of inline lambdas (for asynchronous calls), some other formatting or again another colour would have helped separate these from the code consuming them; though I did notice some use of bold. The use of ANSI rather than K&R style braces (though the latter seems preferred by Apple and is more compact) may have helped readability.

Conclusion


When Swift was first released (June '14) a lot of people thought that learning Swift would make them iOS developers, whereas the real challenge (other than learning how to program) is to learn how to program against iOS. At that time learning just Swift by itself as a first language had no real benefit, especially as all the books and blog articles used Objective-C, plus the Cocoa Touch APIs were still rather Swift unfriendly.

Whilst knowing Objective-C is probably essential for a full-time iOS developer today and a basic knowledge will be useful for a long time to come, 8 months down the line learning iOS development just with Swift is extremely viable. This book will allow you to do this. It's not as comprehensive as the de facto book for learning iOS development, Big Nerd Ranch's iOS Programming, but it is more than sufficient: it uses the latest language, covers features introduced with Xcode 6 and demonstrates app development along the lines of how Apple sees it, by using Storyboards and Auto Layout. It's also significantly shorter!

Whilst it doesn't cover everything and what it does cover doesn't go into great detail it introduces and uses most of the key concepts of both Swift and iOS. To write an app you'll still need to read and refer to the official documentation and read blogs etc. but after reading this book there will not be much that is alien to you.

I'd recommend this book if you can already program and would like to learn how to develop for iOS. If you already know iOS but would like to learn Swift with familiar examples and see how to access the Cocoa APIs using Swift then this is also a good book. Even if you can't program and aspire to iOS development then this is a book that you could pick up and get along with pretty soon after you'd got to grips with the basics of programming.

I think it is possible for this to become the 'go to' book for starting iOS development.

Optional chaining with dictionaries (in Swift)

Whilst learning Swift and SpriteKit I came across an issue with the documentation for SKNode.userData. In an early Xcode beta it was specified as an implicitly unwrapped optional whereas now it is clearly an optional.

Therefore the code I originally used to access values was:

var node = SKNode()
...
if let userData = node.userData
{
    if let value = userData["key"]
    {
        // do something with value
    }
}

Once I discovered optional chaining I was able to shorten this to:

var node = SKNode()
...
if let value = node.userData?["key"]
{
    // do something with value
}

I.e. optional chaining is used to check for the existence of userData.

This is not something I've seen written about before. Optional chaining of subscripts is mentioned in The Swift Programming Language book, but only in reference to providing a custom subscript operator for a class as opposed to using it with stock types, plus a brief mention in connection with Dictionaries when the key is not present, e.g.

var dict = ["foo": "hello"]
dict["bar"]? = "world"

The latter is interesting: without the '?', "world" would be inserted and the "bar" key created, whereas with optional chaining the key must already exist for the assignment to happen.
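
A quick demonstration of the difference (my own example):

var d1 = ["foo": "hello"]
d1["bar"] = "world"  // inserts a new "bar" key
var d2 = ["foo": "hello"]
d2["bar"]? = "world" // does nothing: the "bar" key doesn't exist

println(d1) // contains both "foo" and "bar"
println(d2) // still only contains "foo"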

Both forms can be combined, e.g.

var dict : [String : String]?
dict?["bar"]?= "world"

which will only assign "world" if both dict and the key "bar" exist.

The use of optional chaining with subscript types can lead, like lots of other uses of optional chaining, to more readable & succinct code; this is just another example.

Resolving strong references between Swift and Objective-C classes – Using unowned and weak references from Swift to Objective-C classes

My Swift and SpriteKit exploration continues. At the moment I'm writing the collision handling code.

Rather than derive game objects from SKSpriteNode with each derived class containing the code for handling collisions with the other types of game objects I'm following something akin to a Component-Entity model.

I have per-game-object handler classes, an instance of which is stored in the actual SKSpriteNode's userData dictionary. In turn each handler instance has a reference to the SKSpriteNode that references it. Given ARC is used this is a classical strong-reference cycle which will prevent memory from being freed. The usual solution to this in Objective-C is to have one of the references be weak. In Swift there are two types of weak references: weak, which is the same as in Objective-C, and unowned (which I think is new). The difference is that an unowned reference can never be nil, i.e. it's optional whether a weak pointer references an object but an unowned pointer must always reference something. As such the member variable can be defined using let but must be initialized, i.e. an init method is required.
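
A minimal sketch of the two declaration forms (my own illustration, not the game code):

import SpriteKit

class Handler
{
    weak var weakNode : SKSpriteNode?  // may become nil, so must be an optional var
    unowned let node : SKSpriteNode    // assumed always valid, so can be a non-optional let

    init(node : SKSpriteNode)
    {
        self.node = node
        self.weakNode = node
    }
}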

The following code shows how I was intending to implement this. There is the strong reference from node.userData to PhysicsActions and then the unowned reference back again from PhysicsActions.

class PhysicsActions
{
  unowned let node : SKSpriteNode

  init(associatedNode : SKSpriteNode)
  {
    // Store really weak (unowned) reference
    self.node = associatedNode
  }

  func onContact(other : PhysicsActions)
  {
     // Do stuff with node

  }
}

class func makeNode(imageNamed name: String) -> SKSpriteNode
{
  let node = SKSpriteNode(imageNamed: name)
  node.userData = NSMutableDictionary()
  // Store strong reference
  node.userData["action"] = makeActionFn(node)
  return node

}

However, when I went to use this code it crashed within the onContact method when it attempted to use the node. Changing the reference type from unowned to weak fixed this, e.g.

weak var node : SKSpriteNode?

This verified that the rest of the code was ok, so this looked like another Swift/Objective-C interoperability issue. Firstly, I made a pure Swift example which is a simplified version of the one in The Swift Programming Language book.

class Foo
{
  var bar : Bar?

  func addBar(bar: Bar)
  {
    self.bar = bar
  }
}

class Bar
{
  unowned let foo : Foo
  init(foo : Foo)
  {
    self.foo = foo
  }

  func f()
  {
    println("foo:\(foo)")
  }
}

var foo : Foo? = Foo()
var bar = Bar(foo: foo!)
foo!.addBar(bar)

bar.f()

Which works and results in:

foo:C14XXXUnownedTest3Foo (has 1 child)

Ok, so no fundamental problem there, but let's try having an unowned reference to an Objective-C class, which is just like the real case as that's what SKSpriteNode is.

Foo2.h

@interface Foo2 : NSObject
@end

Foo2.m

@implementation Foo2

-(id)init
{
  return [super init];
}

@end

main.swift

class Bar2
{
  unowned let foo2 : Foo2
  init(foo2 : Foo2)
  {
    self.foo2 = foo2
  }

  func f()
  {
    println("foo2:\(foo2)")
  }
}

var foo2 = Foo2()
var bar2 = Bar2(foo2: foo2)
bar2.f()

Which when bar2.f() is invoked results in:

libswift_stdlib_core.dylib`_swift_abortRetainUnowned:
0x100142420:  pushq  %rbp
0x100142421:  movq   %rsp, %rbp
0x100142424:  leaq   0x17597(%rip), %rax       ; "attempted to retain deallocated object"
0x10014242b:  movq   %rax, 0x348be(%rip)       ; gCRAnnotations + 8
0x100142432:  int3   
0x100142433:  nopw   %cs:(%rax,%rax)

Again, changing unowned let foo2 : Foo2 to weak var foo2 : Foo2? works. 

I can't explain what the actual problem is but it looks like the enhanced weak reference support (unowned) only works with pure Swift classes. If you have cyclic references to Objective-C classes from Swift then don't use unowned. In fact writing the previous sentence led me to try the following:

let orig = Foo2()
unowned let other = orig
println("other:\(other)")
println("orig:\(orig)")

No cyclic references, just an ordinary reference counted instance of Foo2 (the Objective-C class) which is then assigned to an unowned reference. The final call to println will keep the instance around until the end. This crashes as per the previous example when the other variable is accessed. Changing the type assigned to orig from Foo2 (Objective-C) to Foo (Swift) makes it work.

Therefore it seems unowned should not be used to refer to Objective-C classes, just Swift classes.



Beware: Despite the docs SKNode userData is optional

In the Swift documentation for SKNode the userData member (a dictionary) is defined as follows:

userData

A dictionary containing arbitrary data.

Declaration

SWIFT
var userData: NSMutableDictionary!

OBJECTIVE-C

@property(nonatomic, retain) NSMutableDictionary *userData

However, in Objective-C the userData member is initially nil. Given that this is the same class, it is also initially nil in Swift, so using it, e.g.

let node = SKSpriteNode(imageNamed: name)
node.userData["action"] = Action() // custom class

causes a crash:

fatal error: Can't unwrap Optional.None

This is because it is in fact nil despite the '!' following the Swift declaration. This must be a documentation bug. The correct code is:

let node = SKSpriteNode(imageNamed: name)
node.userData = NSMutableDictionary()
node.userData["action"] = Action() // custom class

Xcode 5 versus Xcode 6 default generated Game (Sprite Kit) project and scene sizes

As a way to learn about Swift I've been trying to write a simple game using Sprite Kit. My initial plan was to just allow a ball to be dragged around the screen and ensure it was constrained by the screen edges. This was a little harder than originally envisioned. This was partly because the default Game project generated by Xcode 6 is slightly different to that generated by Xcode 5 and so none of the existing Sprite Kit tutorials referred to this area.

My main issue was that I could move the ball around and, when the drag finished, the ball was endowed with velocity so that it travelled after release. I wrote some simple code to bound the position so that any additional travel couldn't take the ball off the screen:

func boundPosToScene(newPos : CGPoint) -> CGPoint
{
  var retVal = newPos
  let radius = ball.size.width / 2

  if newPos.x - radius < 0
  {
    retVal.x = radius
  }

  if newPos.x + radius > self.size.width
  {
    retVal.x = self.size.width - radius
  }

  if newPos.y - radius < 0
  {
    retVal.y = radius
  }

  if newPos.y + radius > self.size.height
  {
    retVal.y = self.size.height - radius
  }

  return retVal
}

Where ball is a member variable (an instance of SKSpriteNode) of the class containing this method.

Despite this the ball kept disappearing. Reading various online articles, tutorials & StackOverflow posts, there seemed to be an issue with Sprite Kit projects always starting up in landscape mode. When I added some debug output with this statement (& elsewhere), i.e.


println("view.bounds\(view.bounds), self.frame:\(self.frame), self.size:\(self.size)")

I got:

view.bounds(0.0,0.0,320.0,568.0), scene:(0.0,0.0,1024.0,768.0), scene.size:(1024.0,768.0)

OK. The view looks good but the scene is in landscape and, even if orientated to portrait, it's a lot larger than the actual view. But why? There were lots of posts about the landscape issue but nothing about the scene size not matching the view. Well, a quick test generating projects with both 5 & 6 (NOTE: I'm new to Sprite Kit as well as Swift so I hadn't used it with 5) shows that the generated 'Game' Xcode 5 project programmatically creates a scene that is sized to the view:

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Configure the view.
    SKView * skView = (SKView *)self.view;
    skView.showsFPS = YES;
    skView.showsNodeCount = YES;
    
    // Create and configure the scene.
    SKScene * scene = [MyScene sceneWithSize:skView.bounds.size];
    scene.scaleMode = SKSceneScaleModeAspectFill;
    
    // Present the scene.
    [skView presentScene:scene];
}

whereas with Xcode 6 it uses an SKS file (called GameScene):

override func viewDidLoad() {
  super.viewDidLoad()

  if let scene = GameScene.unarchiveFromFile("GameScene") as? GameScene {
    // Configure the view.
    let skView = self.view as SKView
    skView.showsFPS = true
    skView.showsNodeCount = true

    /* Sprite Kit applies additional optimizations to improve rendering performance */
    skView.ignoresSiblingOrder = true

    /* Set the scale mode to scale to fit the window */
    scene.scaleMode = .AspectFill

    skView.presentScene(scene)
  }
}

Looking at the GameScene.sks:



You can see that the default scene size is 1024x768, i.e. an iPad in landscape mode. Changing this to 320x568 (for a 4 inch iPhone) fixes the problem.

NOTE: When making this change make sure you explicitly save the changes. Just modifying the values and building/running the project often seems to result in them being reset. I make sure I navigate away from the Size boxes before pressing Cmd-S.

Of course this is inflexible as it doesn't size to the device. As I'd like my app to run on multiple device types I'm probably better off following the Xcode 5 generated code, or perhaps just adding re-sizing code once the Scene has been loaded from the SKS file (see the sketch below); I'm not sure what else the SKS file gives me.
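
A minimal sketch of that re-sizing idea (my assumption, reusing the template's unarchiveFromFile helper):

if let scene = GameScene.unarchiveFromFile("GameScene") as? GameScene {
  let skView = self.view as SKView
  scene.size = skView.bounds.size // size the scene to the view, as the Xcode 5 template does
  scene.scaleMode = .AspectFill
  skView.presentScene(scene)
}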

If you do this, in GameViewController.swift you'll probably also want to change

scene.scaleMode = .AspectFill

to

scene.scaleMode = .ResizeFill

Anyway, this difference in the generated starter apps is something to be aware of, & the initial landscape startup issue reported against Xcode 5 seems still to be present, though possibly in a different guise. Certainly in the generated app from Xcode 5 it all works fine for me: when I added logging code to display the View and Scene sizes they were both reported as 320x568.

Subclassing Objective-C classes in Swift and the perils of Initializers

--- Begin update

1. The post below applies to Xcode 6 Beta 2. With Beta 3 & 4 the relationship between Swift and Objective-C regarding the calling of super class initializers has been formalized.

1a. A Swift class can have multiple non-convenience initializers but in this case they must all properly initialize any properties. Also, a non-convenience initializer cannot invoke another initializer, i.e. it cannot call self.init(...)

1b. When subclassing an Objective-C class, any initializer can be called, but unlike what happened in Beta 2, if the initializer invoked is not the Objective-C 'designated' initializer (and thus calls it) this DOES NOT now result in a virtual-like call to an initializer with that signature in the subclass, i.e. the main problem that led to the writing of this blog post.

2. If you want to subclass an SKSpriteNode in Beta 4 the following StackOverflow post shows how.

3. It seems that for some OS classes, in my case Sprite Kit, either Apple directly or some Xcode magic now provides a shim Swift wrapper of the Objective-C interface (the one for SKSpriteNode has a comment with a date of 2011 but when attempting to "show in Finder" nothing happens). As such all the initializers for SKSpriteNode are marked as convenience except for the 'designated' initializer. Now consider the following subclass:

class MySpriteNode : SKSpriteNode {}

3a. This is now essentially a Swift-only project with MySpriteNode deriving from SKSpriteNode which is a Swift class. As such the Swift rules must be followed. This means that any convenience initializers in MySpriteNode can only call non-convenience initializers of MySpriteNode and these MUST in turn call a non-convenience initializer in the superclass. In the case of SKSpriteNode there is only a single designated initializer, which is:

   /**
     Designated Initializer
     Initialize a sprite with a color and the specified bounds.
     @param texture the texture to use (can be nil for colored sprite)
     @param color the color to use for tinting the sprite.
     @param size the size of the sprite in points
     */

    init(texture: SKTexture!, color: UIColor!, size: CGSize)

Therefore from a derived class it's impossible to use the superclass's convenience initializers. This is what Swift demands, but it is a shame as it often means re-implementing similar code in the derived class, as per the StackOverflow link.

It's not perfect by any means but it's a lot more consistent than Beta 2. I don't (yet) understand where the magic shim comes from, and this update is based on Beta 4 whereas, at the time of writing, Beta 5 has just been released!

--- End update

As a way to learn Swift I decided to have a play with Sprite Kit. One of the first things I did was to create a subclass of SKSpriteNode. This has a very handy initializer:

init(imageNamed name: String) (in Swift)
-(instancetype)initWithImageNamed:(NSString*)name (in Objective-C)

I then derived from this resulting in:

class Ball : SKSpriteNode
{
    init()
    {
        super.init(imageNamed: "Ball")
    }
}

and added it to my scene as follows:

var ball = Ball()
ball.position = CGPoint(x:CGRectGetMidX(self.frame), y:CGRectGetMidY(self.frame))
self.addChild(ball)

Having successfully written similar code using SKSpriteNode directly:

var ball = SKSpriteNode(imageNamed: "Ball")
ball.position = CGPoint(x:CGRectGetMidX(self.frame), y:CGRectGetMidY(self.frame))
self.addChild(ball)

I was expecting it to work. However I received the following runtime error:

fatal error: use of unimplemented initializer 'init(texture:)' for class 'Ploing.Ball'

The odd thing was that it was complaining about the lack of an initializer in the Ball class. Why would calling a base class initializer result in a call back to the derived class? I wasn't sure, but to work around this I decided to add the missing initializer:

init(texture: SKTexture!)
{
    super.init(texture: texture)
}

Having duly done this I then got the next error:

fatal error: use of unimplemented initializer 'init(texture:color:size:)' for class 'Ploing.Ball'

OK, same process: add that one to the Ball class.

init(texture texture: SKTexture!, color color: UIColor!, size size: CGSize)
{
    super.init(texture: texture, color: color, size: size)
}

That time it worked, but I was puzzled as to why. What seems to be happening is that the derived class is calling a non-designated initializer in the super class which then calls the designated one (or another non-designated one in between). However, when this initializer is called, rather than calling the super class implementation it calls the one in the derived class, whether it exists or not. In the case of Ball they didn't exist, hence the errors.
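
Putting the pieces together, the Ball subclass that finally worked looked like this (assembled from the snippets above; specific to Xcode 6 Beta 2):

class Ball : SKSpriteNode
{
    init()
    {
        super.init(imageNamed: "Ball")
    }

    // Mirrors of the superclass initializers that the runtime calls back
    // into; without these the 'use of unimplemented initializer' fatal
    // errors above occur.
    init(texture: SKTexture!)
    {
        super.init(texture: texture)
    }

    init(texture texture: SKTexture!, color color: UIColor!, size size: CGSize)
    {
        super.init(texture: texture, color: color, size: size)
    }
}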

It turns out that in the middle of the Initialization section of The Swift Programming Language there is the following:

Initializer Inheritance and Overriding
Unlike subclasses in Objective-C, Swift subclasses do not inherit their superclass initializers by default. Swift’s approach prevents a situation in which a simple initializer from a superclass is automatically inherited by a more specialized subclass and is used to create a new instance of the subclass that is not fully or correctly initialized.
If you want your custom subclass to present one or more of the same initializers as its superclass—perhaps to perform some customization during initialization—you can provide an overriding implementation of the same initializer within your custom subclass.
This explains it. Basically, in Swift, initialization methods work like virtual functions, in fact 'super-virtual' functions. If a super class initializer is invoked and that in turn calls another initializer (designated or not) then it forwards the call to the derived class. The upshot of this seems to be that any derived class in Swift needs to re-implement all of the superclass's initializers, or at least any which may be invoked from the other initializers.

Time for some experiments

I therefore set out to confirm this with a simple Swift example:

class Foo
{
    var one: Int;
    var two: Int;

    init(value value:Int)
    {
        self.init(value: value, AndAnother:7)
    }

    init(value value:Int, AndAnother value1:Int)
    {
        one = value;
        two = value1;
    }

    func print()
    {
        println("One:\(one), Two:\(two)")
    }
}

class Bar : Foo
{
    init()
    {
        super.init(value:1);
    }
}

let bar = Bar()
bar.print()

This wouldn't compile. 

Designated initializer for 'Foo' cannot delegate (with 'self.init'); did you mean this to be a convenience initializer?
Must call a designated initializer of the superclass 'Foo'

Whereas in Objective-C the designated initializer is effectively advisory, in Swift it's enforced. If an initializer isn't marked with the convenience keyword then it seems (at least during compilation) it is treated as being a designated initializer. In this case the compilation failed as there was no designated initializer for Bar.init to call, and as Foo.init(value value:Int) wasn't marked as a convenience initializer it was assumed to be the designated one and was thus forbidden from calling another initializer in its own class; somewhat paradoxically.

Two rules prevent the issues that occur with Objective-C's advisory designated initializer. Firstly, all but the designated initializer must be prefixed with the convenience keyword. Secondly, the designated initializer is the only one permitted to call the super class initializer, meaning the convenience initializers can only (cross) call other convenience initializers or the designated initializer of their own class. Following these rules gives a working example:

class Foo
{
    var one: Int;
    var two: Int;

    convenience init(value value:Int)
    {
        self.init(value: value, AndAnother:7)
    }

    init(value value:Int, AndAnother value1:Int)
    {
        one = value;
        two = value1;
    }

    func print()
    {
        println("One:\(one), Two:\(two)")
    }
}

class Bar : Foo
{
    init()
    {
        super.init(value: 7, AndAnother: 8)
    }
}

let bar = Bar()
bar.print()

Which gives the results:

One:7, Two:8

It also completely bypasses the convenience initializer of the superclass, rendering it useless for calling from a derived class.

As for explaining the original problem, this doesn't help either, as the new rules prevent the problem. Herein lies the clue though: it suggests the problem is not with Swift classes subclassing other Swift classes but with a Swift class subclassing an Objective-C class.

Subclassing Objective-C from Swift

Time for another experiment but this time using a base class written in Objective-C:

Base.h

@interface Base : NSObject

-(id)initWithValue:(NSInteger) value;
-(id)initWithValue:(NSInteger) value AndAnother:(NSInteger) value1;
-(void)print;

@end

Base.m

#import "Base.h"

@interface Base ()
@property NSInteger one;
@property NSInteger two;
@end

@implementation Base

-(id)initWithValue:(NSInteger) value
{
    return [self initWithValue:value AndAnother:2];
}

-(id)initWithValue:(NSInteger) value AndAnother:(NSInteger) value1
{
    if ((self = [super init]))
    {
        self.one = value;
        self.two = value1;
    }
    
    return self;
}

-(void)print
{
    NSLog(@"One:%d, Two:%d", (int)_one, (int)_two);
}

@end

main.swift

class Baz : Base
{
    init()
    {
        super.init(value: 7)
    }
}

var baz = Baz()
baz.print()

Looking at Base it's obvious that the designated initializer is initWithValue:(int)value AndAnother:(int)value1 whereas Baz is calling a 'convenience' initializer. This compiles happily but when run gives the following error:

fatal error: use of unimplemented initializer 'init(value:andAnother:)' for class 'Test.Baz'

This is the same type of error as in the original SKSpriteNode sample. Here, Baz's initializer is successfully invoking Base.initWithValue:(int)value but when it invokes Base's designated initializer, the super-virtual functionality of the initializers comes into play and an attempt is made to call this non-existent method on Baz. This is easily fixed by adding that method so that it calls the super class method, as in:

class Baz : Base
{
    init()
    {
        super.init(value: 7)
    }

    init(value value:Int, AndAnother value1:Int)
    {
        super.init(value: value, andAnother: value1);
    }
}

Giving the result:

2014-06-07 17:40:18.632 Test[47394:303] One:7, Two:2

Alternatively, and staying true to the Swift way, the parameterless initializer, which is obviously a convenience initializer, should be marked as such. This then causes a compilation failure as it is now forbidden from calling the super class initializer and instead must make a cross call to the designated initializer, giving the following code:

class Baz : Base
{
    convenience init()
    {
        self.init(value: 7, AndAnother: 8)
    }

    init(value value:Int, AndAnother value1:Int)
    {
        super.init(value: value, andAnother: value1);
    }
}

Which gives the result:

2014-06-07 17:40:46.345 Test[47407:303] One:7, Two:8

This differs from the previous version as the convenience initializer now calls the designated one within Baz, meaning -(id)initWithValue:(NSInteger)value is never called, which is why the second value is no longer 2.

Conclusion

Whilst this explains the behaviour I was seeing when subclassing SKSpriteNode, and providing a mirror of its initializers without marking any as convenience solves the problem of using the convenient init(imageNamed name: String) method, it doesn't do so particularly elegantly nor in a fully Swift style, as it leaves the class with a set of initializers none of which are marked convenience.

A designated initializer could be specified by marking all but the init(texture texture: SKTexture!, color color: UIColor!, size size: CGSize) method as convenience and having them cross call this one. However this would prevent the direct calling of init(imageNamed name: String) which is a lot more convenient than the other methods.
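
As a sketch of what that Swift-style alternative might look like (assuming SKTexture(imageNamed:) and texture.size() as stand-ins for the lost init(imageNamed:) convenience; untested against Beta 2):

class Ball : SKSpriteNode
{
    convenience init()
    {
        // Cross-call the designated initializer; under the Swift rules
        // init(imageNamed:) can no longer be called directly from here.
        let texture = SKTexture(imageNamed: "Ball")
        self.init(texture: texture, color: UIColor.whiteColor(), size: texture.size())
    }

    init(texture texture: SKTexture!, color color: UIColor!, size size: CGSize)
    {
        super.init(texture: texture, color: color, size: size)
    }
}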

Ironically, whilst Swift formalizes the designated initializer concept, and it looks like it will work well for pure Swift code, it can be somewhat inconvenient when subclassing Objective-C classes.

P.S. For those of us who have trouble spelling, the keyword convenience is far from convenient. A keyword to mark the designated initializer may well have been more convenient and probably easier to spell!

Keyboard configuration for Windows’ developers on OS X (& also IntelliJ)

Recently I've been doing some ActionScript programming. Rather than target a Flash Player app. I've been using ActionScript in combination with Adobe AIR in order to create an iOS app. This has meant I've been spending time in OS X and using IntelliJ with the ActionScript/Flex/AIR plugin as my IDE.

Most of my previous work has been done on UNIX (so command lines & vi) and Windows. In particular I depend on the various Windows & Visual Studio editor key combinations plus the Insert, Delete, Home & End keys. For starters this means I use a PC keyboard with the iMac rather than the Apple keyboard, as the latter lacks these keys; I'm also based in the UK so I use a British PC keyboard.

In addition to these keys I wanted the following combos to be available across all of OS X and any apps.
  • Alt-Tab to cycle through apps.
  • Ctrl-F for find.
  • Ctrl-S for save current document (I habitually press this whilst editing).
  • Ctrl-C & Ctrl-V for copy & paste.
  • Ctrl-Z for undo.
  • Obtain the correct behaviour for the '\|' key and the '`¬' key. They were swapped initially.
  • '@' & '"' correctly mapped.
Additionally, I wanted these combos to be available in IntelliJ:
  • Ctrl-Left Arrow & Ctrl-Right Arrow to move to the previous/next word respectively
    • Plus their selected text equivalents.
  • Ctrl-Home/Ctrl-End to move to the top/bottom of the document being edited.
This post is a description of what I installed & configured to achieve this.

Configuring a British PC keyboard

The first step was to tell OS X I was using a PC keyboard, specifically a British one. This is achieved through System Preferences->Keyboard->Input Sources.



Here new input sources can be added by clicking the '+'. I added 'British - PC'. Adding doesn't mean it will be used though. For this also check the 'Show Input menu in menu bar' option. This adds a country flag and the name of the input source. Clicking on this allows the input source to be changed. If you swap between a PC keyboard and the iMac keyboard (which I do from time to time) this is an easy way to switch.



What all this gives you is the '"' and '@' keys in the right place. Otherwise they're transposed. Note: backslash and backquote remain transposed.

Windows' key combos

The second step was obtaining the Windows' key combos. This requires mapping the Windows' combos to the corresponding OS X combos whilst preventing the Windows' combos being interpreted as something else. After some searching it seemed the preferred solution is a 3rd party program called KeyRemap4MacBook. According to various reviews it does the job well, but configuring it, especially creating your own mappings, is complicated; the former is down to the UI and the latter to the XML format. All these things are true, but once you've got used to it, like a lot of things, it's nowhere near as daunting as it first seems, and the documentation is very good too. Part of the motivation for this post is to record the configuration & steps for my benefit should I need to do it again.

KeyRemap4MacBook comes with a number of canned mappings. In addition to mapping across the board they can be limited to include or exclude a specific set of apps. In particular I make use of a set of pre-defined mappings from the 'For PC Users' section which won't be applied in VMs (generally running Windows, especially useful when running Windows 8 in Parallels from the Bootcamp partition) and terminals.

As I still use the Apple keyboard from time to time when I want to do very Apple-y stuff I have the 'Don't remap Apple's keyboards' option enabled.

What I use

The canned mappings I use from 'For PC Users' section are:
  • Use PC Style Copy/Paste
  • Use PC Style Undo
  • Use PC Style Save
  • Use PC Style Find
These can easily be seen in KeyRemap4MacBook using the 'show enabled only' (from the many definitions) option:



With very little work this meets the majority of my needs. In addition to the 'For PC Users' and 'General' sections you may also notice the three re-mappings at the start. These are custom mappings I had to create. I'm not going to explain the XML format as this is available from the documentation. Instead, here are my custom mappings.

<?xml version="1.0"?>
<root>
 <appdef>
  <appname>INTELLIJ</appname>
  <equal>com.jetbrains.intellij</equal>
 </appdef>

 <replacementdef>
  <replacementname>MY_IGNORE_APPS</replacementname>
  <replacementvalue>VIRTUALMACHINE, TERMINAL, REMOTEDESKTOPCONNECTION, VNC, INTELLIJ</replacementvalue>
 </replacementdef>

 <replacementdef>
  <replacementname>MY_IGNORE_APPS_APPENIDX</replacementname>
  <replacementvalue>(Except in Virtual Machine, Terminal, RDC, VNC and IntelliJ)</replacementvalue>
 </replacementdef>


 <item>
  <name>Use PC style alt-TAB for application switching</name>
  <appendix>{{ MY_IGNORE_APPS_APPENIDX }}</appendix>
  <identifier>private.swap_alt-tab_and_cmd-tab</identifier>
  <not>{{ MY_IGNORE_APPS }}</not>
  <autogen>__KeyToKey__ KeyCode::TAB, ModifierFlag::OPTION_L, KeyCode::TAB, ModifierFlag::COMMAND_L</autogen>
 </item>

 <item>
  <name>Swap backslash and backquote for British PC keyboard</name>
  <identifier>private.swap_backslash_and_quote_for_britishpc</identifier>
  <autogen>__KeyToKey__ KeyCode::DANISH_DOLLAR, KeyCode::BACKQUOTE</autogen>
  <autogen>__KeyToKey__ KeyCode::BACKQUOTE, KeyCode::DANISH_DOLLAR</autogen>
 </item>

 <item>
  <name>Use PC Ctrl-Home/End to move to top/bottom of document</name>
  <appendix>{{ MY_IGNORE_APPS_APPENIDX }}</appendix>
  <identifier>private.use_PC_ctrl-home/end</identifier>
  <not>{{ MY_IGNORE_APPS }}</not>
  <autogen>__KeyToKey__ KeyCode::HOME, ModifierFlag::CONTROL_L, KeyCode::CURSOR_UP, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::HOME, ModifierFlag::CONTROL_R, KeyCode::CURSOR_UP, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::END, ModifierFlag::CONTROL_L, KeyCode::CURSOR_DOWN, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::END, ModifierFlag::CONTROL_R, KeyCode::CURSOR_DOWN, ModifierFlag::COMMAND_L</autogen>
 </item>

</root>

I didn't want these mappings, other than the swapping of backslash and backquote, to be applied in various apps, i.e. VMs, VNC & RDC (where Windows is running anyway) and Terminal (where they would interfere with bash). To achieve this I used the <not> element giving a list of excluded apps, along with the <appendix> element to state this in the description.

Rather than copy the list of apps. and description I used KeyRemap4MacBook's replacement macro feature. There is a list of builtin apps. that can be referred to, but I also looked at the XML file from the source that contains the 'For PC Users' mappings.

The _L & _R refer to keys which appear twice: on the left & right side of the keyboard.

The format allows multiple mappings to be grouped. These don't have to be similar but this is the intention, i.e. all the ctrl-home/end mappings are together. Each <autogen> entry is a separate mapping but they are enabled/disabled collectively.

The format isn't too bad. The weird thing from an XML perspective is the <autogen> element. This is the source combo followed by the combo to generate instead, separated by a comma. I think it would be easier to understand if this element were broken down into child elements, with say <from> and <to> elements.
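
Something like the following (purely hypothetical; KeyRemap4MacBook does not support this):

 <!-- Hypothetical alternative format, not valid KeyRemap4MacBook XML -->
 <autogen>
  <from>KeyCode::TAB, ModifierFlag::OPTION_L</from>
  <to>KeyCode::TAB, ModifierFlag::COMMAND_L</to>
 </autogen>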

This private.xml is also available as a GIST.

IntelliJ

IntelliJ complicates things slightly as it provides its own key-mapping functionality similar to that of KeyRemap4MacBook but solely for itself. This means that there can be a conflict with KeyRemap4MacBook.

I'm writing this a while after I originally implemented it. In fact part of the reason I'm writing this post at all is so I have a record of what's required. Since getting this working it looks like I've changed my IntelliJ Keymap (from Preferences). Originally it was set to 'Mac OS X' but is now set to 'Default'.

When it was set to 'Mac OS X' the KeyRemap4MacBook mappings worked well except that Ctrl-Home/End wouldn't work. This is because that combination is mapped to something else. Additionally the 'Mac OS X' mappings don't provide support for Ctrl-Left/Right-Arrow for hopping back and forth over words. My initial solution to this was to modify (by taking a copy) the 'Mac OS X' mapping:
  • Change 'Move Caret to Next Word' from 'alt ->' to 'ctrl->'.
  • Change 'Move Caret to Previous Word' from 'alt <-' to 'ctrl <-'.
  • Change 'Move Caret to Next Word with Selection' from 'alt shift ->' to 'ctrl shift ->'.
  • Change 'Move Caret to Previous Word with Selection' from 'alt shift <-' to 'ctrl shift <-'.
  • Change 'Move Caret to Text End' from 'cmd end' to 'ctrl end'.
  • Change 'Move Caret to Text Start' from 'cmd home' to 'ctrl home'.
However, it seems that the 'Default' key mappings are as per Windows, but when KeyRemap4MacBook is running they all conflict. In fact I may have missed this completely when initially figuring this out.

Therefore the far easier solution is to select the 'Default' IntelliJ mapping and, using KeyRemap4MacBook, make it aware of IntelliJ and exclude it from key re-mapping as per the other applications. This is the purpose of the appdef section in private.xml. KeyRemap4MacBook doesn't need definitions for the other excluded apps. as these are built-in.

The mappings are not perfect. IntelliJ is great, but that is now down to IntelliJ's own mapping, having excluded it from KeyRemap4MacBook's re-mapping. I still miss Ctrl-Left/Right-Arrow and Ctrl-Home/End in other apps, but hopefully this should just be a case of defining more mappings. Also, the effectiveness of the Ctrl-Z (undo) mapping seems to vary.
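
As a sketch of what one such extra mapping might look like (re-using the KeyCode and ModifierFlag names from private.xml above and OS X's Option-arrow word motion; untested):

 <item>
  <name>Use PC Ctrl-Left/Right-Arrow to move by word</name>
  <appendix>{{ MY_IGNORE_APPS_APPENIDX }}</appendix>
  <identifier>private.use_PC_ctrl-arrows_for_word_motion</identifier>
  <not>{{ MY_IGNORE_APPS }}</not>
  <!-- Hypothetical mapping: remap Ctrl-Arrow to the OS X Option-Arrow word hop. -->
  <autogen>__KeyToKey__ KeyCode::CURSOR_LEFT, ModifierFlag::CONTROL_L, KeyCode::CURSOR_LEFT, ModifierFlag::OPTION_L</autogen>
  <autogen>__KeyToKey__ KeyCode::CURSOR_RIGHT, ModifierFlag::CONTROL_L, KeyCode::CURSOR_RIGHT, ModifierFlag::OPTION_L</autogen>
 </item>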



Using IntelliJ, Adobe ActionScript and AIR SDK to create & package iOS 7 apps.

Just a quick post. Lately I've been learning ActionScript. Having seen how easy it is to get an ActionScript project for Flash Player running on Android using Adobe AIR I wanted to do the same for my iPhone. Getting stuff running on the AIR emulator and on the iOS simulator (under OS X) with AIR was pretty easy. In my case this was using IntelliJ as the IDE (rather than Flash Builder) coupled with the Flex 4.6 SDK. The real fun began when I started to package my application for submission to the App Store, in particular creating the App icons.

The version of the AIR SDK that comes with the Flex 4.6 SDK is 3.1. However this isn't aware of the new iOS 7 App icons. It would seem a simple matter of adding additional entries to the Application Descriptor file, i.e. to support the 152x152 icon just add

<image152x152>icon152.png</image152x152>

to the <icon> section. Unfortunately the schema knows this isn't valid (well, doesn't know about it) and you end up with the following error:

error 103: application.icon.image152x152 is an unexpected element/attribute

To fix this, the first step is to download & install the latest version of the AIR SDK, which is 3.9 (4.0 beta aside). This does not mean download & install the latest version of the Flex SDK as this contains an older version of the AIR SDK. Also, as this needs installing on top of the one present in the existing Flex SDK installation, do not download the installer version; instead use the zip (Windows) or tbz2 (OS X). The following link takes you to both: http://www.adobe.com/devnet/air/air-sdk-download.html

Then extract these within the Flex SDK (you might want to take a copy of this first, though if things go wrong you can always re-download it). The easiest way is to just copy/move the archive to the Flex SDK directory and extract the files there, which will overwrite the existing ones.

NOTE: Up to this point the same thing occurred on both Windows & OS X. The following steps only worked on OS X. In particular updating the schema version in the Application Descriptor didn't work and, when reverted back to 3.1 (& support for iOS 7 App Icons removed), packaging the app. was a problem as the AIR SDK seemed to be missing various binaries needed to create the ARM binaries. I haven't pursued this further as I was working on OS X at this point.

In theory everything should work now. However if you proceed to package the app. it will still give the same 103 error. This is because the schema version number in the Application Descriptor needs updating. Most likely the line will be:

<application xmlns="http://ns.adobe.com/air/application/3.1">

where the 3.1 needs changing to 3.9.
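
That is, the updated line should read:

<application xmlns="http://ns.adobe.com/air/application/3.9">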

This may not fix the problem though. If you're using IntelliJ (sorry, I don't know about Flash Builder) and have selected the 'Generated' option for the Application Descriptor, then it appears by default IntelliJ (AIR?) creates this with a version of 3.1. In this case you'll need to stop using this option. Instead choose the 'Custom template' option and either create your own or have IntelliJ (AIR?) generate one for you. If you choose the latter option then IntelliJ offers a drop down to specify the version. However, it only lists 3.1 to 3.8, so this will need manually changing to 3.9.


At this point it should be possible to successfully package an iOS app with iOS 7 App Icon support.

Capturing lvalue references in C++11 lambdas

Recently the question "what is the type of an lvalue reference when captured by reference in a C++11 lambda?" was asked. It turns out that it's a reference to whatever the original reference referred to. This is just like taking a reference to an existing reference, e.g.

int foo = 7;
int& rfoo = foo;
int& rfoo1 = rfoo;
int& rfoo2 = rfoo1;

All references refer to foo rather than rfoo2->rfoo1->rfoo->foo meaning the following code

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 << ", rfoo2:" << rfoo2 
          << '\n';
++foo;

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 << ", rfoo2:" << rfoo2 
          << '\n';

std::cout << "&foo:" << &foo << ", &rfoo:" << &rfoo 
          << ", &rfoo1:" << &rfoo1 << ", &rfoo2:" << &rfoo2 
          << '\n';

Which gives:

foo:7, rfoo:7, rfoo1:7, rfoo2:7
foo:8, rfoo:8, rfoo1:8, rfoo2:8
&foo:00D3FB0C, &rfoo:00D3FB0C, &rfoo1:00D3FB0C, &rfoo2:00D3FB0C

I.e. all the references are aliases for the original foo, hence the same value is displayed (including when the original is modified) and the address of each variable is the same: that of foo.

There is nothing surprising here, it's just basic C++, but it's a long time since I've thought about it, which is why with lambdas, l-value, r-value and universal references I sometimes do a double take on what was once obvious.

The same happens with lambda capture but it's a slightly more interesting story. Take the following example:

int foo = 99;
int& rfoo = foo;
int& rfoo1 = foo;

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 
          << '\n';

std::cout << "&foo:" << &foo << ", &rfoo:" << &rfoo 
          << ", &rfoo1:" << &rfoo1 
          << '\n';

auto l = [foo, rfoo, &rfoo1]()
{
    std::cout << "foo:" << foo << '\n';
    std::cout << "rfoo:" << rfoo << '\n';
    std::cout << "rfoo1:" << rfoo1 << '\n';

    std::cout << "&foo:" << &foo << ", &rfoo:" 
              << &rfoo << ", &rfoo1:" << &rfoo1 
              << '\n';
};

foo = 100;

l();

Which gives:

foo:99, rfoo:99, rfoo1:99
&foo:00D3FB0C, &rfoo:00D3FB0C, &rfoo1:00D3FB0C
foo:99
rfoo:99
rfoo1:100
&foo:00D3FAE0, &rfoo:00D3FAE4, &rfoo1:00D3FB0C

To begin with it behaves as per the first example in that foo, rfoo and rfoo1 all give the same value, as rfoo and rfoo1 are effectively aliases for foo, as shown when displaying their addresses; they're all the same.

However, when these same variables are captured it's a different story: The capture of foo is of no surprise as this is by-value so displays the captured value of 99 despite the original foo being changed to 100 prior to the lambda being invoked. Its address is that of a new variable; a member of the lambda.

It starts to get interesting with the capture of rfoo. When the lambda is invoked this too displays 99, the original captured value. Also, its address is not that of the original foo. It seems that the reference itself has not been captured but rather what it refers to, in this case an int with the value of 99. It appears to have been magically dereferenced as part of the capture.

This is the correct behaviour and when thought about becomes somewhat obvious. It's just like assigning a variable from a reference, e.g.

int foo = 7;
int& rfoo = foo;
int bar = rfoo;

bar doesn't become an int&; rfoo is "dereferenced" here too, except in this scenario there is nothing magical at all, it's exactly as expected. If int were replaced with auto, e.g.

auto bar = rfoo;

then it would be expected that bar is an int as auto strips off CV and reference qualifiers.
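This deduction is easy to confirm at compile time; a quick sketch using <type_traits>:

#include <type_traits>

int foo = 7;
int& rfoo = foo;
auto bar = rfoo;

// auto deduces int, not int&: the reference (and any CV qualifiers) are stripped
static_assert(std::is_same<decltype(bar), int>::value, "bar is a plain int");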

Finally, there is rfoo1. This looks odd as it is attempting to take a reference to a reference, but as seen in the first example this is perfectly fine. The end effect is that there can't be a reference to a reference and so on; all are aliases of the original variable.

This is pretty much what's happening here. It's irrelevant that the target of the capture is a reference. In the end capture by reference is capture by reference of the underlying variable, i.e. what rfoo1 refers to, in this case foo, not rfoo1 itself. This is demonstrated twofold: by rfoo1 within the lambda displaying the updated value of foo, and by the address of rfoo1 within the lambda being that of foo outside it.

This is as per the standard, section 5.1.2 [expr.prim.lambda], paragraph 14:

An entity is captured by copy if it is implicitly captured and the capture-default is = or if it is explicitly
captured with a capture that does not include an &. For each entity captured by copy, an unnamed nonstatic
data member is declared in the closure type. The declaration order of these members is unspecified.
The type of such a data member is the type of the corresponding captured entity if the entity is not a
reference to an object, or the referenced type otherwise. [ Note: If the captured entity is a reference to a
function, the corresponding data member is also a reference to a function. —end note ]

The key sentence states that for a reference captured by value the type of the captured member is the referenced type, i.e. the reference aspect has been removed, the crucial part being "or the referenced type otherwise". (NOTE: I haven't experimented with references to functions.)
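Putting that into code, the closure type the compiler generates for the earlier lambda is conceptually something like the following (an illustrative sketch only, not the actual generated type):

struct Closure // auto l = [foo, rfoo, &rfoo1]() {...};
{
    int  foo;    // by-value capture: a copy of foo
    int  rfoo;   // by-value capture of a reference: a copy of the int it refers to
    int& rfoo1;  // by-reference capture: bound to the underlying foo

    void operator()() const { /* body of the lambda */ }
};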

Finally, a vivid example showing that a reference captured by value involves a dereference.

class Bar
{
private:
    int mValue;

public:
    Bar(const Bar&) : mValue(9999)
    {
    }

public:
    Bar(const int value) : mValue(value) {}
    int GetValue() const { return mValue; }
    void SetValue(const int value) { mValue = value; }
};

Bar bar(1);
Bar& rbar = bar;
Bar& rbar1 = bar;

std::cout << "&bar:" << &bar << ", &rbar:" << &rbar << ", &rbar1:" << &rbar1 << '\n';

auto l2 = [bar, rbar, &rbar1]()
{
    std::cout << "bar:" << bar.GetValue() << '\n';
    std::cout << "rbar:" << rbar.GetValue() << '\n';
    std::cout << "rbar1:" << rbar1.GetValue() << '\n';

    std::cout << "&bar:" << &bar << ", &rbar:" << &rbar << ", &rbar1:" << &rbar1 << '\n';
};

bar.SetValue(2);

l2();

The class Bar provides a crude copy-constructor that sets the stored value to 9999. The following output is similar to that of the previous example in that the addresses of bar and rbar in the lambda differ from the address of the original bar, showing they're copies, whilst rbar1's is the same. Secondly, the value of mValue is shown as 9999 for the first two captured variables, meaning they were copy-constructed.

&bar:00D3FB0C, &rbar:00D3FB0C, &rbar1:00D3FB0C
bar:9999
rbar:9999
rbar1:2
&bar:00D3FAE0, &rbar:00D3FAE4, &rbar1:00D3FB0C

Making the copy-constructor private (by commenting out the seemingly unnecessary second 'public:') prevents compilation.

1>------ Build started: Project: References, Configuration: Debug Win32 ------
1>  main.cpp
1>c:\users\pete\desktop\references\references\main.cpp(85): error C2248: 'Bar::Bar' : cannot access private member declared in class 'Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(59) : see declaration of 'Bar::Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(54) : see declaration of 'Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(59) : see declaration of 'Bar::Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(54) : see declaration of 'Bar'

Writing this post has clarified the situation for me, I hope it helps you as well.

The sample code is available here.

Windows 8 Pro on an early 2009 iMac 21.5 (Core 2 Duo)

A couple of weeks back I thought I'd have a go at writing a Windows Store App. To do this requires Windows 8. At the time I was running Windows 7 Home Premium on an early 2009 iMac 21.5 (Core 2 Duo). This had been installed using Boot Camp, including installing Boot Camp Assistant and the drivers supplied by Apple.

To upgrade to Windows 8 I wanted to avoid a re-installation of all my apps and data etc. so I went with an in-place upgrade. This all seemed to work properly and soon I was running Windows 8 and could access the Windows Store App templates from Visual Studio. However, soon after, Windows 8 kept crashing, well, freezing. It got to the point that after every reboot I'd be lucky to get 5 minutes of up time between each freeze.

Given that Apple haven't provided Windows 8 drivers yet this wasn't exactly a surprise. I decided to try and work around this by rebooting to OS X and using VMWare Fusion to access the Boot Camp partition. Whilst rebooting into OS X I managed to corrupt the Windows installation. I use a non-Apple wireless keyboard (as I need the insert, delete, home & end keys plus the easily accessible cursor keys for VS development) so holding down Alt to select the OS to boot into didn't work. When I realized it was going back into Windows I just turned the machine off. After a couple of times the Windows installation was toast! To get back to the point of trying Fusion I had to do a fresh Windows install; in this case a minimal Windows 7 installation: just enough to allow the download of Windows 8. I then installed Windows 8 using the 'preserve nothing' option.

Having now gone through the steps I wanted to avoid, I decided to give the new installation a go via direct boot, i.e. no Fusion. That was two weeks ago. Since then I've re-installed all the apps and my personal data and (fingers crossed) haven't had a single crash. As the freezes were usually happening during some graphical operation, e.g. a status bar updating, I assumed the fault probably lay with the video drivers. I didn't install Boot Camp Assistant and in particular the Windows 7 drivers from the OS X disc. Well, I did install one. After a while I noticed I wasn't getting any sound even though all the audio drivers and hardware claimed they were happy. Eventually I installed the Cirrus Logic driver which made the speakers work. I haven't gone anywhere near the NVIDIA drivers.

So, the whole point of this post is for those who run Windows via Boot Camp on early iMacs and want to run Windows 8: a fresh install (or maybe uninstalling the Boot Camp supplied drivers prior to upgrade) is probably the way to go.

Specifying the directory to create SQL CE databases when using Entity Framework

In the last few posts I've been describing how to create instances of SQL CE in order to perform automated Integration Testing using NUnit, accessing the dB using Entity Framework.  I covered creating the dB using both Entity Framework and the SQL CE classes.  In particular I wanted control over the directory the dB was created in, but I didn't want to tie it to a specific location, rather let it use the current working directory.

Using the Entity Framework's DbContext constructor that takes the name of a connection string or database name, it's suddenly very easy to end up NOT creating the dB you expected where you expected it to be.  This post shows how to avoid these pitfalls.  Generally speaking the use of the DbContext constructor that takes a connection string should be avoided unless the name of a connection string from the .config file is being specified.

Example 1 - Using the SqlCeEngine class
const string DB_NAME = "test1.sdf";
const string DB_PATH = @".\" + DB_NAME; // Use ".\" for CWD or a specific path
const string CONNECTION_STRING = "data source=" + DB_PATH;

using (var eng = new SqlCeEngine(CONNECTION_STRING))
{
    eng.CreateDatabase();
}

using (var conn = new SqlCeConnection(CONNECTION_STRING))
{
    conn.Open(); // do stuff with db...
}

The important thing to note is that the constructor for SqlCeEngine that takes an argument requires a connection string, i.e. a string containing "data source=...".  Just specifying the dB path is not sufficient.  To target a specific directory include the absolute or relative path.  To specify the current working directory, e.g. bin\debug, just use ".\".

Example 2 - Using DbContext (doesn't work)
using (var ctx = new DbContext("test2.sdf"))
{
    ctx.Database.Create();
}

This code appears to work but doesn't create an instance of a SQL CE dB as desired.  Instead it creates a LocalDB instance in the user's home directory; in my case: C:\Users\Pete\._test.sdf.mdf (& corresponding log file).  This is not really surprising as Entity Framework had no way of knowing that a SQL CE dB should be created.

Example 3 - Using DbContext (does work)
Database.DefaultConnectionFactory =
    new SqlCeConnectionFactory(
        "System.Data.SqlServerCe.4.0",
        @".\", "");

using (var ctx = new DbContext("test2.sdf"))
{
    ctx.Database.Create();
    // do stuff with ctx...
}

The difference between the last example and this one is changing the default type of dB that EF should create.  As shown, this is done by installing a different connection factory.

The 3rd parameter to SqlCeConnectionFactory is the directory that the dB should be created in.  Just like the first example, specifying ".\" means the current working directory, and specifying an absolute path to a directory will lead to the dBs being created there.

NOTE: As per the post Integration Testing with NUnit and Entity Framework, be aware that creating a dB using the Entity Framework results in the additional table '__MigrationHistory' being created, which EF uses to keep the model and dB synchronized.

NOTE1: Whereas SqlCeEngine is a SQL CE class from the System.Data.SqlServerCe assembly, SqlCeConnectionFactory appears to be part of the System.Data.Entity assembly which is part of the Entity Framework.


In the above example the string passed to DbContext can be a name (of a connection string from the .config file) or a database name.  In this case passing the name of the dB, i.e. test2.sdf, is equivalent to passing "data source=test2.sdf", well, more or less.  If the '.sdf' suffix is omitted with "data source" then the resultant dB is called test2, but if just test2 is passed then the resulting dB will be called test2.sdf.

Example 4 - Using DbContext and the .config file
using (var ctx = new DbContext("test5"))
{
    ctx.Database.Create();
}

App or Web .config
<connectionStrings>
  <add name="test5"
       providerName="System.Data.SqlServerCe.4.0"
       connectionString="Data Source=test5.sdf"/>
</connectionStrings>

This time no factory is specified but the argument to DbContext is the name of a connection string in the .config file.  As can be seen this contains similar information to that passed to the factory, enabling EF to create a dB of the correct type.

To use the instances of these databases, rather than calling the create method on the context, just use the context directly, or more likely in the case of EF a derived context, which brings us to one last example.

Example 5 - Using a derived context and .config file
public class TestCtx : DbContext
{
}

using (var ctx = new TestCtx())
{
    ctx.Database.Create();
}

App or Web .config
<connectionStrings>
  <add name="TestCtx"
       providerName="System.Data.SqlServerCe.4.0"
       connectionString="Data Source=test6.sdf"/>
</connectionStrings>

If a derived context is created, which will almost certainly be the case, then when an instance of it is used to create a dB, EF will look for a connection string in the .config file that has the same name as the context and take the information from there.
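If you'd rather not rely on the class-name convention, the lookup can be made explicit by prefixing the string passed to the base constructor with "name=", which tells EF the string must be the name of a connection string in the .config file (a sketch):

public class TestCtx : DbContext
{
    // "name=" forces a .config lookup; EF will throw if no "TestCtx"
    // connection string exists rather than quietly creating a new dB
    public TestCtx() : base("name=TestCtx")
    {
    }
}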

Integration Testing with NUnit and Entity Framework

This post gives a quick introduction to creating SQL CE dBs for performing Integration Tests using NUnit.

In the previous post Using NUnit and Entity Framework DbContext to programmatically create SQL Server CE databases and specify the database directory, a basic way was shown of how to create a new dB (using Entity Framework's DbContext) programmatically.  This was used to generate a new dB for a test hosted by NUnit.

The subsequent post Generating a SQL Server CE database schema from a SQL Server database using Entity Framework showed how to generate a SQL CE dB schema from an existing SQL Server database.

This post ties the previous ones together.  As mentioned in the first post the reason for this is an attempt at what amounts to Integration Testing using NUnit.  I'm currently building a Repository and Unit Of Work abstraction on top of Entity Framework which will allow the isolation of the dB code (in fact it will isolate and abstract away most forms of data storage).  This means any business logic can be tested with a test-double that implements the Repository and Unit Of Work interfaces, which is straightforward Unit Testing.  The Integration Testing is to verify that the Repository and Unit Of Work implementations work correctly.

The rest of the post isn't focused on these two patterns, though it may mention them.  Instead it documents my further experience of using NUnit to write tests that interact with the dB via Entity Framework.  The premise for this is that a dB already exists.

As such the approach to using Entity Framework is a hybrid of Database First and Code First, in that the dB schema exists and needs to be maintained outside of EF, and also that EF should not generate model classes, i.e. allowing the use of Code First POCOs.  This is possible as the POCOs can be defined, a connection made to the dB, and then the two are conflated via an EF DbContext.  It then seems that EF creates the model on the fly (internally compiles it) and as long as the POCO types map to the dB types then it all works as if by magic!

The advantage of doing it this way is that the existing dB is SQL Express based but for the Integration Testing a new dB can be created when needed, potentially one per test.  In order to keep the test dBs isolated from the real dB, SQL Server Compact Edition (SQL Server CE V4) was used.  Therefore the requirement was for the EF code to be able to work with both SQL Express and SQL CE, with the primary definition of the schema taken from SQL Express.  It's not possible to use exactly the same schema as SQL CE only has a subset of the data-types provided by SQL Server.  However, the process described in the post Generating a SQL Server CE database schema from a SQL Server database using Entity Framework showed how to create semantically equivalent SQL.


From this point onwards it's assumed that a SQL file to create the dB has been generated.  Now create a new C# class library project and using NuGet add Entity Framework, NUnit and SQL CE 4.0.  All my work has been with EF 4.3.1.  Following this, drag the Model1.edmx.sqlce file from the project used to generate it into the new project.  You may wish to rename it, e.g. to test.sqlce.


Creating the database

The post Using NUnit and Entity Framework DbContext to programmatically create SQL Server CE databases and specify the database directory showed how to create a new CE dB per-test using the EF DbContext to do the hard work.  A different approach is now taken, as the problem with creating a dB using DbContext is that in addition to creating any specified tables and indices etc. it also creates an additional table called '__MigrationHistory' which contains a description of the EF model used to create the dB.  The description of the problem caused by this is delayed until the "Why DbContext is no longer used to create the database" section.  Suffice to say for the present that using the new mechanism avoids the creation of this table.

The code below is the beginnings of a test class.  It is assumed all the tests need a fresh copy of the dB, hence the creation is performed in the Setup method.  All this code does is create a SQL CE dB and then create the schema.

[TestFixture]
public class SimpleTests
{
    const string DB_NAME = "test.sdf";
    const string DB_PATH = @".\" + DB_NAME;
    const string CONNECTION_STRING = "data source=" + DB_PATH;

    [SetUp]
    public void Setup()
    {
        DeleteDb();

        using (var eng = new SqlCeEngine(CONNECTION_STRING))
            eng.CreateDatabase();

        using (var conn = new SqlCeConnection(CONNECTION_STRING))
        {
            conn.Open();

            string sql = ReadSQLFromFile(@"C:\Users\Pete\work\Jub\EFTests\Test.sqlce");
            string[] sqlCmds = sql.Split(new string[] { "GO" }, int.MaxValue, StringSplitOptions.RemoveEmptyEntries);

            foreach (string sqlCmd in sqlCmds)
            {
                try
                {
                    var cmd = conn.CreateCommand();
                    cmd.CommandText = sqlCmd;
                    cmd.ExecuteNonQuery();
                }
                catch (Exception e)
                {
                    Console.Error.WriteLine("{0}:{1}", e.Message, sqlCmd);
                    throw;
                }
            }
        }
    }

    public void DeleteDb()
    {
        if (File.Exists(DB_PATH))
            File.Delete(DB_PATH);
    }

    private string ReadSQLFromFile(string sqlFilePath)
    {
        using (TextReader r = new StreamReader(sqlFilePath))
        {
            return r.ReadToEnd();
        }
    }
}
The dB file (test.sdf) will be created in the current working directory.  As the test assembly is located in <project>\bin\debug, which is where the NUnit test runner picks up the DLL from, this is where it is created.  If a specific directory is required then the '.\' can be replaced with the required path.

The Setup method is marked with NUnit's SetUp attribute, meaning it will be invoked on a per-test basis, creating a new dB instance for each test.  The DeleteDb method could be marked with the [TearDown] attribute, but at the moment any previous dB is deleted before creating a new one.  It would be fine to do both as a belt and braces approach.  The reason I didn't make it the TearDown method is so that I could inspect the dB following a test if needed.

SQL CE does not support batch execution of SQL scripts, which is where it gets interesting as the SQL generated previously is in batch form.  The code reads the entire file into a string and determines each individual statement by splitting the string on the 'GO' command that separates each SQL command.
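Note that splitting on the bare substring "GO" would also split any statement that happened to contain those two letters, e.g. an upper-case identifier such as CATEGORIES or a string literal.  The schema here is safe, but a slightly more robust sketch (a drop-in replacement for the Split line above) splits only on lines consisting solely of GO:

using System.Text.RegularExpressions;

// Split only where GO appears on a line by itself, as the generator emits it
string[] sqlCmds = Regex.Split(sql, @"^\s*GO\s*$", RegexOptions.Multiline);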

To help understand the SQL, the following is the diagram of the dB I'm working with.  All fields are strings except for the Ids, which are numeric.
Each of these commands is then executed.  The previously generated SQL (the SQL for the dB I'm working with is below) will not work completely out of the box.  The ALTER and DROP statements at the beginning don't apply as the schema is being applied to an empty dB; these should be removed.  Interestingly the schema generation step for my dB seems to miss out a 'GO' between the penultimate and final statements; I had to add one by hand.  Finally, the comments at the end prove to be a problem as there is no terminating 'GO'.  Removing these fixes the problem.  In the code above the exception handler re-throws the exception after writing out the details.  For everything to proceed the SQL needs modifying so that it executes perfectly.  If the re-throw is removed then the code will tolerate individual command failures, which in this context really just amount to warnings.

NOTE: In the original listing the text to be removed (the leading ALTER and DROP statements) was highlighted in red and the added text (the extra 'GO') in blue.

-- --------------------------------------------------
-- Entity Designer DDL Script for SQL Server Compact Edition
-- --------------------------------------------------
-- Date Created: 07/29/2012 12:28:35
-- Generated from EDMX file: C:\Users\Pete\work\Jub\DummyWebApplicationToGenerateSQLServerCE4Script\Model1.edmx
-- --------------------------------------------------


-- --------------------------------------------------
-- Dropping existing FOREIGN KEY constraints
-- NOTE: if the constraint does not exist, an ignorable error will be reported.
-- --------------------------------------------------

    ALTER TABLE [RepComments] DROP CONSTRAINT [FK_RepComments_Reps];
GO

-- --------------------------------------------------
-- Dropping existing tables
-- NOTE: if the table does not exist, an ignorable error will be reported.
-- --------------------------------------------------

    DROP TABLE [RepComments];
GO
    DROP TABLE [Reps];
GO
    DROP TABLE [Roads];
GO

-- --------------------------------------------------
-- Creating all tables
-- --------------------------------------------------

-- Creating table 'RepComments'
CREATE TABLE [RepComments] (
    [CommentId] int IDENTITY(1,1) NOT NULL,
    [RepId] int  NOT NULL,
    [Comment] ntext  NOT NULL
);
GO

-- Creating table 'Reps'
CREATE TABLE [Reps] (
    [RepId] int IDENTITY(1,1) NOT NULL,
    [RepName] nvarchar(50)  NOT NULL,
    [RoadName] nvarchar(256)  NOT NULL,
    [HouseNumberOrName] nvarchar(50)  NOT NULL,
    [ContactTelNumber] nvarchar(20)  NOT NULL,
    [Email] nvarchar(50)  NULL
);
GO

-- Creating table 'Roads'
CREATE TABLE [Roads] (
    [Name] nvarchar(256)  NOT NULL
);
GO

-- --------------------------------------------------
-- Creating all PRIMARY KEY constraints
-- --------------------------------------------------

-- Creating primary key on [CommentId] in table 'RepComments'
ALTER TABLE [RepComments]
ADD CONSTRAINT [PK_RepComments]
    PRIMARY KEY ([CommentId] );
GO

-- Creating primary key on [RepId] in table 'Reps'
ALTER TABLE [Reps]
ADD CONSTRAINT [PK_Reps]
    PRIMARY KEY ([RepId] );
GO

-- Creating primary key on [Name] in table 'Roads'
ALTER TABLE [Roads]
ADD CONSTRAINT [PK_Roads]
    PRIMARY KEY ([Name] );
GO

-- --------------------------------------------------
-- Creating all FOREIGN KEY constraints
-- --------------------------------------------------

-- Creating foreign key on [RepId] in table 'RepComments'
ALTER TABLE [RepComments]
ADD CONSTRAINT [FK_RepComments_Reps]
    FOREIGN KEY ([RepId])
    REFERENCES [Reps]
        ([RepId])
    ON DELETE NO ACTION ON UPDATE NO ACTION;
GO
-- Creating non-clustered index for FOREIGN KEY 'FK_RepComments_Reps'
CREATE INDEX [IX_FK_RepComments_Reps]
ON [RepComments]
    ([RepId]);
GO

-- --------------------------------------------------
-- Script has ended
-- --------------------------------------------------

Getting the SQL into a state where it will run flawlessly is a little bit of a hassle, but given the number of times it will be used subsequently it's not a big job, well, for a small dB anyway.  To verify that your dB has been created as needed, a quick and easy way to test is to comment out the call to DeleteDb() and, after a test has run, open the dB using Server Explorer within VS, i.e.



Using the dB in a test

Now that a fresh dB will be created for each test it's time to look at a simple test:

[Test]
public void TestOne()
{
    using (var conn = new SqlCeConnection(CONNECTION_STRING))
    using (var ctx = new TestCtx(conn))
    {
        ctx.Roads.Add(new Road() { Name = "Test" });
        ctx.SaveChanges();

        Assert.That(ctx.Roads.Count(), Is.EqualTo(1));
    }
}

Road in this case is defined as:

class Road
{
    [Key]
    public string Name { get; set; }
}

The first thing to note is that EF is not used to form the connection to the dB; instead one is made using the SQL CE specific classes.  Attempting to get EF to connect to a specific dB instance when not referring to a named connection string in the .config file is a bit of an art (I may write another entry about this).  However, EF is quite happy to work with an existing connection.  This makes for a good separation of responsibilities in the code, where EF manages the interactions with the dB but the control of the connection is elsewhere.

NOTE: It is likely that each test will require a connection and a context, hence it might make more sense to move the creation of the SqlCeConnection and the context (TestCtx in this case) to a SetUp method and, as these resources need disposing of, add a TearDown method to do that.  TestCtx could also be modified to pass true to the DbContext constructor to give ownership of the connection to the context so that it will dispose of the connection when the context is disposed of.
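TestCtx itself isn't shown in this post; it's presumably something like the sketch below, here written to take ownership of the connection as just described (contextOwnsConnection is the relevant DbContext constructor parameter):

public class TestCtx : DbContext
{
    public TestCtx(DbConnection conn)
        : base(conn, contextOwnsConnection: true) // the context disposes the connection
    {
    }

    public DbSet<Road> Roads { get; set; }
}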

I would have preferred to avoid having to define a specific derived context and instead use DbContext directly, e.g.

[Test]
public void TestTwo()
{
    using (var conn = new SqlCeConnection(CONNECTION_STRING))
    using (var ctx = new DbContext(conn, false))
    {
        ctx.Set<Road>().Add(new Road() { Name = "Test" });
        ctx.SaveChanges();

        Assert.That(ctx.Set<Road>().Count(), Is.EqualTo(1));
    }
}

However when SaveChanges() is called the following exception is thrown:

System.InvalidOperationException : The entity type Road is not part of the model for the current context.

This is because EF knows nothing about the Road type.  When a derived context is created for the first time I think EF performs reflection on any properties that expose DbSet; these are the types that form the model.  Another option is to create the model yourself, optionally compile it and then pass it to an instance of DbContext, but defining a derived context involves a lot less code.
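For completeness, the build-the-model-yourself route looks roughly like this (a sketch using DbModelBuilder and DbCompiledModel from the Entity Framework assembly; in practice the compiled model should be built once and cached):

var builder = new DbModelBuilder();
builder.Entity<Road>(); // register the POCO so it becomes part of the model

using (var conn = new SqlCeConnection(CONNECTION_STRING))
{
    DbCompiledModel model = builder.Build(conn).Compile();

    using (var ctx = new DbContext(conn, model, contextOwnsConnection: false))
    {
        ctx.Set<Road>().Add(new Road() { Name = "Test" });
        ctx.SaveChanges();
    }
}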

That's it.  The final section is just a footnote about the move away from using EF to create the dB.

Why DbContext is no longer used to create the database

As mentioned, creating the dB using:

using (var ctx = new DbContext("bar.sdf"))
{
    ctx.Database.Create();
    // create schema etc.
}

causes the '__MigrationHistory' table to be created.  Assuming this method was used, later on when TestCtx was used to open the dB and perform an operation the following exception would be thrown:

System.InvalidOperationException : The model backing the 'DbContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269).
This is because the context used to create the model was a raw DbContext (as per the previous post) whereas the dB was accessed via the TestCtx.  If the context used to create the dB is also changed to TestCtx then this problem goes away.  However, given the original dB is not intended to be created nor maintained (code migrations) by EF, using the non-context/EF approach to dB creation completely removes EF from the picture.

Generating a SQL Server CE database schema from a SQL Server database using Entity Framework

In a previous entry I described how to programmatically create (& destroy) a SQL CE dB for integration testing using NUnit.  Since getting that working I ran into a couple of other problems which I've more or less solved, so I thought I'd write those up.  To begin with though, this is a prequel post describing how to obtain the SQL script to create the SQL CE dB.

If you happen to be working exclusively with CE then you'll already have your schema file.  In my case I'm using SQL Express and, as this is experimental work, I created my dB by hand.  However, using EF it's pretty easy to obtain the schema and have the EF wizard generate the CE version.  This is important as there are differences between the dialects of SQL used by SQL Express and SQL CE, and it's easier to have a tool handle those, though it doesn't handle all of them.

The basic flow is to generate an EF model (EDMX) file from the existing SQL Express database and then use the 'Generate database from model' functionality.  It is at this point that the target SQL dB can be chosen, i.e. SQL Server, SQL Server CE or others.

To create a model requires adding a 'New Item' of type 'ADO.Net Entity Data Model' to a VS project, so first a new dummy project needs creating.  This is where it gets a little complicated as not any type of project will do.  I'm working with CE 4 and require a schema for that version of the dB (creating one for 3.5 works, but I like to keep things as close to ideal as possible).  Due to this constraint it is necessary to choose a Web type project, as for some reason the VS2010 integration provided by EF only supports the generation of CE 4 dBs for Web projects.  If a simple C# Windows Console project is selected then you're limited to CE 3.5.  Thus the simplest suitable project type is the 'ASP.Net Empty Web Application' as shown below.


Having done this, next add a new item of type ADO.Net Entity Data Model as below. NOTE: The project will have to reference the Entity Framework assemblies.  The easiest way to do this (& the one most people are probably using) is to use the NuGet package.


Then follow the wizard.


Selecting "Generate from database".


Choose your SQL Express (or SQL Server) dB but uncheck the "Save entity connection settings in Web.Config as:" checkbox; as we're converting to SQL CE we want to minimize anything related to other flavours of SQL Server.


Finally, select the SQL elements you require.  In this example only the existing tables were selected.  As this is generating the EF model from an existing database no SQL file is generated, just the model, for which the diagram is shown, i.e.


The next phase is to generate the SQL from the model (which was itself generated from the hand-crafted dB) while making sure the SQL produced is compliant with SQL CE.

To generate the schema, right-click the model and select "Generate Database from Model..."


This brings up the "Generate database" wizard which is very similar to the previously used "Entity Data Model" wizard used to create the model.  From here choose the "New Connection" option which pops up another set of dialogs.  On the first choose the type of data source as "Microsoft SQL Server Compact 4.0".

Clicking "Continue" then leads to the next dialog where you need to create a dB.



Ok-ing this leads back to the "Generate database" wizard.


This time, check the "Save entity connection settings in Web.Config" checkbox.  This information will be useful later (to be covered in a different post).  Clicking "Next", the SQL is generated and presented in the wizard.


This can be copied & pasted directly from here, or pressing "Finish" will save the SQL to the file indicated at the top of the dialog box; this file is added to the project.  The following prompt will appear when "Finish" is pressed.

This doesn't really matter as this is a throwaway project, but having the updated schema may be useful, so go with "Yes".

The SQL can now be used to configure an empty SQL CE 4.0 database.  The easiest way is to open the SQL file, right-click and select the "Execute SQL" menu item.


This brings up the SQL Server connection dialog from which, if "New Database" is selected, a CE 4 one can be specified.


Having specified a location and pressed "Ok", the SQL script is executed.  As can be seen below this is not without errors.  However, these are nothing to worry about: they relate to dropping tables and indices that don't yet exist, it being a newly created dB.  Performing the same steps but skipping the creation of the dB file (as it already exists) sees the SQL script execute flawlessly.



The final picture shows the newly created database in VS2010's Server Explorer demonstrating that the tables were indeed created.


The basis for this post is my experimentation on using NUnit to programmatically test some dB based functionality.  If a single instance of a database suffices for all your tests and you can execute the SQL by hand as above, then you can follow these steps.  In my case I want a fresh database per test, so I need to automate the running of the SQL script combined with the creation and destruction of the underlying database.  The creation and deletion aspects were covered in a previous post, but the next step will have to wait until a later one.
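
To give a flavour of where that automation is heading, a rough sketch (RunScript is a hypothetical helper; the naive split assumes 'GO' never appears inside an identifier or string literal) might be:

// Execute a wizard-generated script against an open SQL CE connection.
// SqlCeCommand cannot process the 'GO' batch separators itself, so the
// script is split into individual batches first.
private static void RunScript(SqlCeConnection conn, string scriptPath)
{
    var batches = File.ReadAllText(scriptPath)
        .Split(new[] { "GO" }, StringSplitOptions.RemoveEmptyEntries);

    foreach (var batch in batches)
    {
        if (string.IsNullOrWhiteSpace(batch))
            continue;

        using (var cmd = new SqlCeCommand(batch, conn))
            cmd.ExecuteNonQuery();
    }
}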
