Set the volume in OpenBox/LXDE (or on the command line) with PulseAudio and Ubuntu

I am switching to LXDE, and enjoying it, but a few things require some manual config before it’s just how I like it.

To control the sound volume with the volume buttons, the default LXDE config in ~/.config/openbox/lxde-rc.xml contains an entry like this:

<!-- Doesn't work for me -->
<keybind key="XF86AudioRaiseVolume">
  <action name="Execute">
    <command>amixer -q sset Master 3%+</command>
  </action>
</keybind>

(Inside the <keyboard> section.)

This doesn’t work for me, but we can achieve the same thing by sending a command to PulseAudio, using the pactl tool. The command to increase the volume is:

pactl -- set-sink-volume 0 +5%

To decrease the volume, put “-5%” instead of “+5%”. Note that if you have more than one enabled audio sink you might need to change the “0” to a “1” or something else. Running pactl stat should help you here.
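For instance, the matching commands might look like this (the sink index 0 here is just an example, and yours may differ):

# Decrease the volume of sink 0 by 5%
pactl -- set-sink-volume 0 -5%

# List the available sinks with their index numbers
pactl list short sinks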

[Beware of bug 686667, meaning you can’t use pacmd and you must have the -- at the beginning.]

So the correct recipe for your OpenBox config in ~/.config/openbox/lxde-rc.xml is:

<keybind key="XF86AudioRaiseVolume">
  <action name="Execute">
    <command>pactl -- set-sink-volume 0 +5%</command>
  </action>
</keybind>
<keybind key="XF86AudioLowerVolume">
  <action name="Execute">
    <command>pactl -- set-sink-volume 0 -5%</command>
  </action>
</keybind>

After editing this file you can run:

openbox --reconfigure

to pick up the changes without restarting OpenBox.

Of course, because I don’t have volume control buttons, I want my volume to change with Ctrl-Alt-PageUp and Ctrl-Alt-PageDown, so I use this recipe:

<keybind key="C-A-Prior">
  <action name="Execute">
    <command>pactl -- set-sink-volume 0 +5%</command>
  </action>
</keybind>
<keybind key="C-A-Next">
  <action name="Execute">
    <command>pactl -- set-sink-volume 0 -5%</command>
  </action>
</keybind>

but that’s just me.

Everybody loves build.xml (test-driven Ant)

In the previous post we looked at how it is possible to write reasonable code in Ant, by writing small re-usable blocks of code.

Of course, if you’re going to have any confidence in your build file, you’re going to need to test it. Now that we’ve learnt some basic Ant techniques, we’re ready to do the necessary magic that allows us to write tests.

Slides: Everybody loves build.xml slides.

First, let me clear up what we’re testing:

What do we want to test?

We’re not testing our Java code. We know how to do that, and running tests we’ve written with JUnit just needs a <junit> task in our build.xml, as sketched below. (Other testing frameworks are available, and some people say they’re better.)
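For reference, a minimal <junit> call might look something like this sketch (the bin directory, the formatter and the fileset pattern are assumptions for illustration, not from the slides):

<junit haltonfailure="true">
    <classpath path="bin"/>
    <formatter type="plain" usefile="false"/>
    <batchtest>
        <fileset dir="bin" includes="**/*Test.class"/>
    </batchtest>
</junit>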

The things we want to test are:

  • build artifacts – the “output” of our builds i.e. JAR files, zips and things created when we run the build,
  • build logic – such as whether dependencies are correct, whether the build succeeds or fails under certain conditions, and
  • units of code – checking whether individual macros or code snippets are correct.

Note, if you’re familiar with the terminology, that testing build artifacts can never be a “unit test”, since it involves creating real files on the disk and running the real build.

Below we’ll see how I found ways to test build artifacts, along with some ideas I had for the other two, but certainly not a comprehensive solution. Your contributions are welcome.

Before we start, let’s see how I’m laying out my code:

Code layout

build.xml      - real build file
asserts.xml    - support code for tests
test-build.xml - actual tests

I have a normal build file called build.xml, a file containing support code for the tests (mostly macros allowing us to make assertions) called asserts.xml, and a file containing the actual tests called test-build.xml.

To run the tests I invoke Ant like this:

ant -f test-build.xml test-name

test-build.xml uses an include to get the assertions:

<include file="asserts.xml"/>

Tests call a target inside build.xml using subant, then use the code in asserts.xml to make assertions about what happened.
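So the overall shape of test-build.xml is something like this (the project name here is just an illustration):

<project name="test-build" basedir=".">

    <include file="asserts.xml"/>

    <target name="test-class-file-created">
        <!-- assertions about build.xml go here, as in the examples below -->
    </target>

</project>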

Simple example: code got compiled

If we want to check that a <javac …> task worked, we can just check that a .class file was created:

Here’s the test, in test-build.xml:

<target name="test-class-file-created">
    <assert-target-creates-file
        target="build"
        file="bin/my/package/ExampleFile.class"
    />
</target>

We run it like this:

ant -f test-build.xml test-class-file-created

The assert-target-creates-file assertion is a macrodef in asserts.xml like this:

<macrodef name="assert-target-creates-file">
    <attribute name="target"/>
    <attribute name="file"/>
    <sequential>
        <delete file="@{file}" quiet="true"/>
        <subant antfile="build.xml" buildpath="." target="@{target}"/>
        <assert-file-exists file="@{file}"/>
    </sequential>
</macrodef>

It just deletes a file, runs the target using subant, then asserts that the file exists, which uses this macrodef:

<macrodef name="assert-file-exists">
    <attribute name="file"/>
    <sequential>
        <echo message="Checking existence of file: @{file}"/>
        <fail message="File '@{file}' does not exist.">
            <condition>
                <not><available file="@{file}"/></not>
            </condition>
        </fail>
    </sequential>
</macrodef>

This uses a trick I’ve used a lot, which is the fail task, with a condition inside it, meaning that we only fail if the condition is satisfied. Here we use not available which means fail if the file doesn’t exist.

Harder example: JAR file

Now let’s check that a JAR file was created, and has the right contents. Here’s the test:

<target name="test-jar-created-with-manifest">

    <assert-target-creates-file
        target="build"
        file="dist/MyProduct.jar"
    />
    <assert-file-in-jar-contains
        jarfile="dist/MyProduct.jar"
        filename="MANIFEST.MF"
        find="Main-Class: my.package.MyMain"
    />

</target>

This just says that after we’ve run the target, the file MyProduct.jar exists, and it contains a file called MANIFEST.MF that has the right Main-Class information in it.

assert-file-in-jar-contains looks like this:

<macrodef name="assert-file-in-jar-contains">
    <attribute name="jarfile"/>
    <attribute name="filename"/>
    <attribute name="find"/>

    <sequential>
        <!-- ... insert checks that jar exists, and contains file -->

        <delete dir="${tmpdir}/unzip"/>
        <unzip src="@{jarfile}" dest="${tmpdir}/unzip"/>

        <fail message="@{jarfile}:@{filename} should contain @{find}">
            <condition>
                <resourcecount when="equal" count="0">
                    <fileset dir="${tmpdir}/unzip">
                        <and>
                            <filename name="**/@{filename}"/>
                            <contains text="@{find}"/>
                        </and>
                    </fileset>
                </resourcecount>
            </condition>
        </fail>

        <delete dir="${tmpdir}/unzip"/>

    </sequential>
</macrodef>

This basically unzips the JAR into a directory, then searches the directory using a fileset for a file with the right name and contents, and fails if it’s not found (i.e. if the resourcecount of the fileset is zero). These are the kinds of backflips you need to do to bend Ant to your will.

Or, you can choose the Nuclear Option.

The Nuclear Option

If Ant tasks just won’t do, since Ant 1.7 and Java 1.6 we can drop into a <script> tag.

You ain’t gonna like it:

<script language="javascript"><![CDATA[
system.launchMissiles(); // Muhahahaha
]]></script>

The script tag allows us to use a scripting language (provided through the JSR 223 Java feature) directly within our Ant file, meaning we can do anything.

In all the JVMs I’ve tried, the only scripting language actually available is JavaScript, provided by the Rhino engine, which is now part of standard Java.
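Here is a slightly more benign sketch of the sort of thing you can do (this example is mine, not from the slides): the predefined project object is the running Ant project, so you can log messages and set properties from JavaScript.

<script language="javascript"><![CDATA[
    // "project" is the Ant Project object we are running inside
    project.log( "basedir is " + project.getBaseDir() );
    project.setProperty( "script.was.here", "yes" );
]]></script>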

When using the script tag, expect bad error messages. Rhino produces unhelpful stack traces, and Ant doesn’t really tell you what went wrong.

So now we know how to test the artifacts our build produces, but what about directly testing the logic in build.xml?

Testing build logic

We want to:

  • Confirm that targets succeed or fail under certain conditions

  • Check indirect dependencies are as expected

  • Test a unit of Ant logic (e.g. a macrodef)

Success and failure

Here’s a little macro I cooked up to assert that something is going to fail:

<macrodef name="expect-failure">
    <attribute name="target"/>
    <sequential>
        <local name="ex.caught"/>
        <script language="javascript"><![CDATA[
        try {
            project.executeTarget( "@{target}" );
        } catch( e ) {
            project.setProperty( "ex.caught", "yes" )
        }
        ]]></script>
        <fail message="@{target} succeeded!!!" unless="ex.caught"/>
    </sequential>
</macrodef>

I resorted to the Nuclear Option of a script tag, and used Ant’s Java API (through JavaScript) to execute the target, and catch any exceptions that are thrown. If no exception is thrown, we fail.

Testing dependencies

To check that the dependencies are as we expect, we really want to run Ant’s dependency resolution without doing anything. Remarkably, Ant has no support for this.

But we can hack it in:

<target name="printCdeps">
    <script language="javascript"><![CDATA[

        var targs = project.getTargets().elements();
        while( targs.hasMoreElements() )
        {
            var targ = targs.nextElement();
            targ.setUnless( "DRY.RUN" );
        }
        project.setProperty( "DRY.RUN", "1" );
        project.executeTarget( "targetC" );

    ]]></script>
</target>

(See Dry run mode for Ant for more.)

Now we need to be able to run a build and capture the output. We can do that like this:

<target name="test-C-depends-on-A">
    <delete file="${tmpdir}/cdeps.txt"/>
    <ant
        target="printCdeps"
        output="${tmpdir}/cdeps.txt"
    />
    <fail message="Target A did not execute when we ran C!">
        <condition>
            <resourcecount when="equal" count="0">
                <fileset file="${tmpdir}/cdeps.txt">
                    <contains text="targetA:"/>
                </fileset>
            </resourcecount>
        </condition>
    </fail>
    <delete file="${tmpdir}/cdeps.txt"/>
</target>

We use the <ant> task to run the build, telling it to write its output to a file cdeps.txt. Then, to assert that C depends on A, we just fail if cdeps.txt doesn’t contain a line indicating we ran A. (To assert that a file contains a certain line we use a load of fail, condition, resourcecount and fileset machinery as before.)

So, we can check that targets depend on each other, directly or indirectly. Can we write proper unit tests for our macrodefs?

Testing ant units

To test a macrodef or target as a piece of logic, without touching the file system or really running it, we will need fake versions of all the tasks, including <jar>, <copy>, <javac> and many more.

If we replace the real versions with fakes and then run our tasks, we can set up our fakes to track what happened, and then make assertions about it.

If we create a file called real-fake-tasks.xml, we can put things like this inside:

<macrodef name="jar">
    <attribute name="destfile"/>
    <sequential>
        <property name="jar.was.run" value="yes"/>
    </sequential>
</macrodef>

and, in build.xml we include something called fake-tasks.xml, with the optional attribute set to true:

<include file="fake-tasks.xml" optional="true"/>

If the target we want to test looks like this (in build.xml):

<target name="targetA">
    <jar destfile="foo.jar"/>
</target>

Then we can write a test like this in test-build.xml:

<target name="test-A-runs-jar" depends="build.targetA">
    <fail message="Didn't jar!" unless="jar.was.run"/>
</target>

and run the tests like this:

cp real-fake-tasks.xml fake-tasks.xml
ant -f test-build.xml test-A-runs-jar
rm fake-tasks.xml

If fake-tasks.xml doesn’t exist, the real tasks will be used, so running your build normally should still work.

This trick relies on the fact that our fake tasks replace the real ones, which appears to be an undocumented behaviour of my version of Ant. Ant complains about us doing this, with an error message that sounds like it didn’t work, but actually it did (on my machine).

If we wanted to avoid relying on this undocumented behaviour, we’d need to write our real targets in terms of special macrodefs called things like do-jar, and provide one version of do-jar that hands off to the real jar task and another version that is a fake. This would be a lot of work, and it would pollute our production code with machinery needed for testing, but it would work with Ant’s documented behaviour, making it unlikely to fail unexpectedly in the future.
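A rough sketch of that indirection, using the hypothetical do-jar name from the paragraph above (none of this is from the original build files): targets in build.xml would only ever call the macro,

<target name="targetA">
    <do-jar destfile="foo.jar"/>
</target>

the real version of do-jar, kept in one file, would just hand off to the real task,

<macrodef name="do-jar">
    <attribute name="destfile"/>
    <sequential>
        <jar destfile="@{destfile}"/>
    </sequential>
</macrodef>

and the fake version, kept in the file you swap in for tests, would set a property such as jar.was.run instead of calling <jar>.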

Summary

You can write Ant code in a test-driven way, and there are even structures that allow you to write things that might be described as unit tests.

At the moment, I am mostly using the “testing artifacts” approach. The tests run slowly, but they give real confidence that your build file is actually working.

Since I introduced this form of testing into our build, I enjoy working with build.xml a lot more, because I know when I’ve messed it up.

But I do spend more time waiting around for the tests to run.

Everybody hates build.xml (code reuse in Ant)

If you’re starting a new Java project, I’d suggest considering the many alternatives to Ant, including Gant, Gradle, SCons and, of course, Make. This post is about how to bend Ant to work like a programming language, so you can write good code in it. It’s seriously worth considering a build tool that actually is a programming language.

If you’ve chosen Ant, or you’re stuck with Ant, read on.

Slides: Everybody hates build.xml slides.

Most projects I’ve been involved with that use Ant have a hateful build.xml surrounded by fear. The most important reason for this is that the functionality of the build file is not properly tested, so you never know whether you’ve broken it, meaning you never make “non-essential” changes, i.e. changes that make it easier to use or read. A later post and video will cover how to test your build files, but first we must address a prerequisite:

Can you write good code in Ant, even if you aren’t paralysed by fear?

One of the most important aspects of good code is that you only need to express each concept once. Or, to put it another way, you can re-use code.

I want to share with you some of the things I have discovered recently about Ant, and how you should (and should not) re-use code.

But first:

What is Ant?

Ant is 2 languages:

  • A declarative language to describe dependencies
  • A procedural language to prescribe actions

In fact, it’s just like a Makefile (ignore this if Makefiles aren’t familiar). A Makefile rule consists of a target (the name before the colon) with its dependencies (the names after the colon), which make up a declarative description of the dependencies, and the commands (the things indented by tabs) which are a normal procedural description of what to do to build that target.

# Ignore this if you don't care about Makefiles!
target: dep1 dep2   # Declarative
    action1         # Procedural
    action2

The declarative language

In Ant, the declarative language is a directed graph of targets and dependencies:

<target name="A"/>
<target name="B" depends="A"/>
<target name="C" depends="B"/>
<target name="D" depends="A"/>

This language describes a directed graph of dependencies: it says what depends on what, or what must be built before you can build something else. Targets and dependencies are completely separate from what lives inside them: the tasks.

The procedural language

The procedural language is a list of tasks:

<target ...>
    <javac ...>
    <copy ...>
    <zip ...>
    <junit ...>
</target>

When the dependency mechanism has decided a target will be executed, its tasks are executed one by one in order, just like in a programming language. Although tasks live inside targets, they are completely separate from them. Essentially each target has a little program inside it consisting of tasks, and these tasks form a conventional programming language, nothing special (except for the lack of basic looping and branching constructs).

I’m sorry if the above is glaringly obvious to you, but it only recently became clear to me, and it helped me a lot to think about how to improve my Ant files.

Avoiding repeated code

Imagine you have two similar Ant targets:

<target name="A">
    <javac
        srcdir="a/src" destdir="a/bin"
        classpath="myutil.jar" debug="false"
    />
</target>

<target name="B">
    <javac
        srcdir="b/code" destdir="b/int"
        classpath="myutil.jar" debug="false"
    />
</target>

The classpath and debug information are the same in both targets, and we would like to write this information in one single place. Imagine with me that the code we want to share is too complex to be stored as property values in some properties file.

How do we share this code?

The Wrong Way: antcall

Here’s the solution we were using in my project until I discovered the right way:

<target name="compile">
    <javac
        srcdir="${srcdir}" destdir="${destdir}"
        classpath="myutil.jar" debug="false"
    />
</target>

<target name="A">
    <antcall target="compile">
        <param name="srcdir" value="a/src"/>
        <param name="destdir" value="a/bin"/>
    </antcall>
</target>

<target name="B">
    <antcall target="compile">
    ...

Here we put the shared code into a target called compile, which makes use of properties to access the varying information (or the parameters, if we think of this as a function). The targets A and B use the <antcall> task to launch the compile target, setting the values of the relevant properties.

This works, so why is it Wrong?

Why not antcall?

antcall launches a whole new Ant project (within the same JVM) and runs the supplied target within that. This is wrong because it subverts the way Ant is supposed to work. The new project will re-calculate all the dependencies (even if our target doesn’t depend on anything), which could be slow. Any dependencies of the compile target will be run even if they’ve already been run, meaning some of your assumptions about the order of running could be incorrect, and the assumption that each target will only run once will be violated. What’s more, it subverts the Ant concept that properties are immutable, and remain set once you’ve set them: in the example above, srcdir and destdir will have different values at different times (because they exist inside different Ant projects).

Basically what we’re doing here is breaking all of Ant’s paradigms to force it to do what we want. Before Ant 1.6 you could have considered it a necessary evil. Now, it’s just Evil.

The Horrific Way: custom tasks

Ant allows you to write your own tasks (not targets) in Java. So our example would look something like this:

Java:

public class MyCompile extends Task {
    public void execute() throws BuildException
    {
        Project p = getProject();

        Javac javac = new Javac();

        javac.setSrcdir(  new Path( p, p.getUserProperty( "srcdir" ) ) );
        javac.setDestdir( new File( p.getUserProperty( "destdir" ) ) );
        javac.setClasspath( new Path( p, "myutil.jar" ) );
        javac.setDebug( false );
        javac.execute();
    }
}

Ant:

<target name="first">
    <javac srcdir="mycompile"/>
    <taskdef name="mycompile" classname="MyCompile"
        classpath="mycompile"/>
</target>

<target name="A" depends="first">
    <mycompile/>
</target>

<target name="B" depends="first">
    <mycompile/>
</target>

Here we write the shared code as a Java task, then call that task from inside targets A and B. The only word to describe this approach is “cumbersome”. Not only do we need to ensure our code gets compiled before we try to use it, and add a taskdef to allow Ant to see our new task (meaning every target gets a new dependency on the “first” target), but much worse, our re-used code has to be written in Java, rather than the Ant syntax we’re using for everything else. At this point you might start asking yourself why you’re using Ant at all – my thoughts start drifting towards writing my own build scripts in Java … anyway, I’m sure that would be a very bad idea.

The Relatively OK Way: macrodef

So, enough teasing. Here’s the Right Way:

<macrodef name="mycompile">
    <attribute name="srcdir"/>
    <attribute name="destdir"/>
    <sequential>
        <javac
            srcdir="@{srcdir}" destdir="@{destdir}"
            classpath="myutil.jar" debug="false"
        />
    </sequential>
</macrodef>

<target name="A">
    <mycompile srcdir="a/src" destdir="a/bin"/>
</target>

<target name="B">
    <mycompile srcdir="b/code" destdir="b/int"/>
</target>

Since Ant 1.6, we have the macrodef task, which allows us to write our own tasks in Ant syntax. In any other language these would be called functions, with arguments, which Ant calls attributes. You use these attributes by wrapping their names in @{} rather than the normal ${} used for properties. The body of the function lives inside a sequential tag.

This allows us to write re-usable tasks within Ant. But what about re-using parts from the other language – the declarative targets and dependencies?

Avoiding repeated dependencies?

Imagine we have a build file containing targets like this:

<target name="everyoneneedsme"...

<target name="A" depends="everyoneneedsme"...
<target name="B" depends="everyoneneedsme"...
<target name="C" depends="everyoneneedsme"...
<target name="D" depends="everyoneneedsme"...

In Ant, I don’t know how to share this. The best I can do is make a single target that is re-used whenever I want the same long list of dependencies, but in a situation like this where everything needs to depend on something, I don’t know what to do. (Except, of course, drop to the Nuclear Option of the <script> tag, which we’ll see next time.)
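For the simpler case, where several targets just need the same long list of dependencies, the single-target workaround I mean looks like this (the dependency names are made up):

<target name="common-deps" depends="clean, generate-version-info, fetch-libs"/>

<target name="A" depends="common-deps"/>
<target name="B" depends="common-deps"/>

But that still doesn’t help when every target in the file needs to depend on one thing.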

I haven’t used it in anger, but this kind of thing seems pretty straightforward with Gradle. I believe the following is roughly equivalent to my example above, but I hope someone will correct me if I get it wrong:

task everyoneneedsme

tasks.whenTaskAdded { task ->
    task.dependsOn everyoneneedsme
}
    
task A
task B
task C
task D

(Disclaimer: I haven’t run this.)

So, if you want nice features in your build tool, like code-reuse and testability, you should consider a build tool that is integrated into a grown-up programming language where all this stuff comes for free. But, if you’re stuck with Ant, you should not despair: basic good practice is possible if you make the effort.

Dry run mode for Ant (ant -n, ant --dry-run)

I am working on the problem of writing Ant build files in a test-driven way. One thing I found myself needing was a “dry run” mode, like many Unix tools have. For example, make has the -n or --dry-run option, which shows what it would have done, but doesn’t really do it.

Today I found a partial solution to this problem, so that you can at least see which dependencies will be run when you run a particular ant target.

It’s an horrific hack, but it’s the best I can do at the moment.

We write some code in a <script> tag to hack all the targets in our project (at runtime). We modify the targets so they all have an “unless” attribute, set to a property name of “DRY.RUN”. Then we set the “DRY.RUN” property, and execute our target.

Ant prints out the names of all the targets in the dependency chain, even if they are not executed because of an unless attribute.

Note: this code makes use of the Ant <script> tag with JSR 223 scripting, which needs Ant 1.7+ and Java 1.6+. Using JavaScript inside this tag seems to be supported in Oracle, OpenJDK and IBM versions of Java, but is not guaranteed.

<?xml version="1.0" encoding="UTF-8"?>
<project default="build">

    <target name="targetA"/>
    <target name="targetB" depends="targetA">
        <echo message="DON'T RUN ME"/>
    </target>
    <target name="targetC" depends="targetB"/>

    <target name="build" depends="targetB"/>

    <target name="dry-run">
        <do-dry-run target="build"/>
    </target>

    <macrodef name="do-dry-run">
        <attribute name="target"/>
        <sequential>
            <script language="javascript"><![CDATA[

                var targs = project.getTargets().elements();
                while( targs.hasMoreElements() ) {
                    var targ = targs.nextElement();
                    targ.setUnless( "DRY.RUN" );
                }
                project.setProperty( "DRY.RUN", "1" );
                project.executeTarget( "@{target}" );

            ]]></script>
        </sequential>
    </macrodef>

</project>

Running this build file normally, the tasks in the targets execute, so we can see that the <echo> happens:

$ ant
Buildfile: build.xml

targetA:

targetB:
     [echo] DON'T RUN ME

build:

BUILD SUCCESSFUL
Total time: 0 seconds

But when we run the dry-run target, only the target names are printed, and the <echo> task (along with any other tasks) does not run:

$ ant dry-run
Buildfile: build.xml

dry-run:

targetA:

targetB:

build:

BUILD SUCCESSFUL
Total time: 0 seconds

A lot of pain, for a partial implementation of very simple functionality that you’d expect to be a built-in feature? I couldn’t possibly comment.

My First Raspberry Pi Game – Part 12 – Scoring, done!

Parts: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12.

Writing your first ever computer program on the Raspberry Pi.

Today, we finish!

Our game is almost done. All we need to do now is let you play several times, and give you a score at the end.

First, because we’re going to use it lots of times, we need to make the ready_screen function set its background colour properly. Open redgreen.py in Leafpad, and add a single line to the function ready_screen, making it look like this:

def ready_screen():
    screen.fill( pygame.Color( "black" ) )
    white = pygame.Color( "white" )
    write_text( screen, "Ready?", white, True )
    pygame.display.flip()

Previously, ready_screen was always the first thing we did, so we got away with not drawing a background colour because it starts off plain black. Now, we need to do it.

Next, let’s do the really interesting part. We want to play the game several times, and whenever we want to do something several times, we need a loop. This time we’ll use a for loop, letting us go through a list of things. Scroll to the very bottom, and change the code to look like this:

# We start from here

start()

for i in range( 10 ):

    ready_screen()

    wait()

    shape()

end()

The only new line above is the for line; the lines inside the loop haven’t changed, except for being indented by putting four spaces at the beginning.

A for loop lets you run through a list of things, running the same code each time. A for loop always looks like for NAME in LIST where NAME is the name of a new variable, and LIST is a list of things. What we’ve done here is make a list of 10 numbers by calling the range function and giving it an argument of 10, and told Python to put the particular item of the list that we’re working on now into a variable called i.
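Just to see the shape of it, here’s a tiny for loop that isn’t part of our game:

for colour in [ "red", "green", "blue" ]:
    print colour

Each time round the loop, the variable colour holds the next item in the list, so this prints the three words one after another.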

So, the ready_screen, wait and shape functions will each get called 10 times. Each time they are called, i will be a different number. We’re not using i yet, so all that matters for the moment is that the code runs 10 times. Try it out by opening LXTerminal and typing ./redgreen.py, and you’ll see that you can play the game 10 times, and then it will finish.

Playing 10 times is all very well, but it’s not a lot of fun if I can’t see how well I’ve done at the end. Let’s keep track of our score.

We’ll award the player 1 point for every time they get it right, and no points if they get it wrong. The places where we know which of these has happened are in red_shape and green_shape. Let’s change them to pass back a score (either 1 or 0) depending on what you did:

def green_shape():
    ...the rest of green_shape is still here...

    pressed = shape_wait()

    if pressed:
        green_success()
        return 1
    else:
        green_failure()
        return 0

def red_shape():
    ...the rest of red_shape is still here...

    pressed = shape_wait()

    if pressed:
        red_failure()
        return 0
    else:
        red_success()
        return 1

I’ve abbreviated it above, but we’re not changing anything in these functions except at the very bottom, where we’re adding two return lines to each function.

Whenever the player succeeds, we return a score of 1 point, and whenever they fail we return 0 points.

We’re not doing anything with this score yet. We call the green_shape and red_shape functions from inside shape, so first let’s make sure shape passes back the answer to where we need it:

def shape():
    GREEN = 0
    RED   = 1
    shape = random.choice( [GREEN, RED] )

    if shape == GREEN:
        return green_shape()
    else:
        return red_shape()

shape doesn’t need to do anything special here – just take the answer coming from green_shape or red_shape and use the return statement to pass it back to us.

Now shape is giving us back an answer, we can use it in the main code right at the bottom:

start()

correct = 0

for i in range( 10 ):

    ready_screen()

    wait()

    correct += shape()

end( correct )

We’ve made a variable called correct that keeps hold of how many correct answers we’ve been given (i.e. the score). It starts off as zero, and every time we call shape we add on the answer that comes back. shape will either return 0 or 1, so correct will increase by either 0 or 1 each time.
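If += looks unfamiliar, it just means “add this on to whatever is already in the variable”. For example (not part of the game):

correct = 0
correct += 1   # correct is now 1
correct += 0   # correct is still 1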

The last thing we’ve done here is pass the answer (the player’s final score) into the end function so we can display it. To use this answer, we need to change end a bit:

def end( correct ):
    print "You got %d correct answers" % correct
    screen.fill( pygame.Color( "black" ) )
    white = pygame.Color( "white" )
    write_text( screen, "Thanks for playing!", white, True )
    msg = "Score: %d   Press a key to exit" % correct
    write_text( screen, msg, white, False )
    pygame.display.flip()
    pygame.event.clear()
    timed_wait( 0, press_events )

We changed the def line to allow us to pass in the score, giving it the same name we used below, correct. Then we added a line that prints out the answer into the terminal, just for good measure, and we modified the write_text line, splitting it into 2 parts – creating a variable called msg containing our message, and then using it on the next line.

Twice above we’ve used a nice feature of Python that makes building our own messages quite simple. If you write a string like "Score: %d Press a key to exit" you can substitute a number into it using the % “operator” as we’ve done (an operator is something like + or / that combines 2 things). Where the %d appears in the string, it gets replaced by the number inside the variable you supply (correct in our case). You can also substitute in other strings (using %s) and lots of other things if you want to. This allows us to put the score into a string and then print it on the screen.
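Here’s the % operator on its own, in a tiny example that isn’t part of the game:

score = 7
print "Score: %d   Press a key to exit" % score

which prints Score: 7   Press a key to exit.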

If you try your game now you will see it counts how many right answers you got and tells you at the end. Wouldn’t it be better, though, if it told you how you were doing all the way through?

Scroll up to the ready_screen function and modify it to take two arguments and use them to keep us informed:

def ready_screen( go_number, correct ):
    screen.fill( pygame.Color( "black" ) )
    white = pygame.Color( "white" )
    write_text( screen, "Ready?", white, True )

    go_number_str = "Turn: %d    Score: %d" % ( ( go_number + 1 ), correct )

    write_text( screen, go_number_str, pygame.Color( "white" ), False )

    pygame.display.flip()

The arguments we take are called go_number and correct. correct will be the current score, as we’ve seen before, and go_number is the counter telling us how far we’ve got.

We use a slightly different form of the % operator here to substitute two values into a string instead of one. To do this, we put a list of values on the right instead of just one: ( ( go_number + 1 ), correct ). We need brackets around the outside so that Python knows it is a list and doesn’t just take the first value on its own. When we use a list like this, the values will be substituted in order, one for each %d (or %s or similar) that is in the string. You must always have the same number of %ds in the string as values in the list.

You may be wondering why we have to add one to go_number. We’ll see in a moment.

To be able to provide the two new arguments to ready_screen we need to change the code right at the bottom to look like this:

start()

correct = 0

for i in range( 10 ):

    ready_screen( i, correct )

    wait()

    correct += shape()

end( correct )

Remember when we made the for loop I mentioned that i would be a different number each time we ran the code inside the loop? We pass that number in to ready_screen where it will be used as the go_number. We also pass in the current score, correct.

The reason why we needed to add 1 to go_number inside ready_screen is that when you have a loop like for i in range( 10 ), the variable i actually gets the values 0, 1, 2, … with the last value being 9, instead of ranging from 1 to 10 as you might expect. The reasoning behind this is kind of lost in the mists of time, and kind of makes perfect sense, depending how you look at it. Anyway, believe me when I tell you that once you’ve got used to it you’re going to find it warm and comforting, but for now you may find it a bit weird.
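If you want to convince yourself, try this little experiment at a Python prompt (it isn’t part of the game):

for i in range( 10 ):
    print i

It prints the numbers 0 to 9, one per line, rather than 1 to 10.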

And, on that typically strange note, we have finished! Try out your program, and you should find it tells you what go you’re on, and what your score is all the way through.

Something else you might like to do now is make your game run in full-screen mode (like many games). You can do that by changing the start function like this:

def start():
    global screen
    pygame.init()
    screen = pygame.display.set_mode( screen_size, pygame.FULLSCREEN )

If you have any problems, compare your version with mine here: redgreen.py

I’ve made a slightly extended version of the game that measures your reaction speed and gives you a score based on how quickly you press. In future I may even add more features. If you’d like to follow the project, you can find it here: redgreen on github.

I’ll be doing more series in the future, some for beginners like this one, and some more advanced topics. If you’d like to find out what I’m doing, subscribe to the blog RSS feed, follow me on Twitter or go to my YouTube page and subscribe.