When I set the “doit” property and run like this, it fails:
$ ant -Ddoit=true
Buildfile: build.xml
build:
BUILD FAILED
build.xml:14: Unknown attribute [ant:if:if:set]
Total time: 0 seconds
It looks to me like this is a bug: the if:set attribute is getting passed into the macro, which is complaining that it doesn’t expect an attribute with that name. (If you try to create an attribute with that name, you’ll find that “if:set” is an illegal name…)
However, there is a workaround. You can wrap the call to your macrodef in a <sequential> tag:
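For example (a sketch: "mymacro" is a hypothetical macro name, and xmlns:if="ant:if" must be declared on the project element):

```xml
<!-- Workaround sketch: put if:set on a wrapping <sequential>,
     not on the macro call itself, so the attribute is not passed
     into the macrodef. "mymacro" is a hypothetical macro name. -->
<target name="build">
    <sequential if:set="doit">
        <mymacro/>
    </sequential>
</target>
```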
I’m not sure anyone except me is still struggling on with using Ant, but just in case, here is a nice thing.
In Ant 1.9.1 they added a useful feature: instead of needing to use the <if> tag and similar from ant-contrib, you can conditionally execute “any” task.
NOTE: The example in the documentation is wrong (at the time of writing, 2013-09-13) – it uses the bare property name, but this does not work – you must surround it with ${} to get its value.
The properties can be specified in your build file as normal, or supplied on the command line to ant with -Dproperty.name=value.
$ ant -version
Apache Ant(TM) version 1.9.2 compiled on July 8 2013
$ ant -Dsetincmd=true
Buildfile: build.xml
build:
[echo] if:set=setinxml
[echo] unless:set=notset
[echo] if:set=setincmd
BUILD SUCCESSFUL
Total time: 0 seconds
The documentation for this, such as it is, is here: If And Unless.
In the previous post we looked at how it is possible to write reasonable code in Ant, by writing small re-usable blocks of code.
Of course, if you’re going to have any confidence in your build file you’re going to need to test it. Now we’ve learnt some basic Ant techniques, we’re ready to do the necessary magic that allows us to write tests.
We’re not testing our Java code here. We know how to do that, and if we’ve written our tests using JUnit, running them just needs a <junit> tag in our build.xml. (Other testing frameworks are available, and some people say they’re better.)
The things we want to test are:
build artifacts – the “output” of our builds i.e. JAR files, zips and things created when we run the build,
build logic – such as whether dependencies are correct, whether the build succeeds or fails under certain conditions, and
units of code – checking whether individual macros or code snippets are correct.
Note, if you’re familiar with the terminology, that testing build artifacts can never be a “unit test”, since it involves creating real files on the disk and running the real build.
Below we’ll see how I found ways to test build artifacts, plus some ideas I had for the other two – certainly not a comprehensive solution. Your contributions are welcome.
Before we start, let’s see how I’m laying out my code:
Code layout
build.xml - real build file
asserts.xml - support code for tests
test-build.xml - actual tests
I have a normal build file called build.xml, a file containing support code for the tests (mostly macros allowing us to make assertions) called asserts.xml, and a file containing the actual tests called test-build.xml.
To run the tests I invoke Ant like this:
ant -f test-build.xml test-name
test-build.xml uses an include to get the assertions:
<include file="asserts.xml"/>
Tests call a target inside build.xml using subant, then use the code in asserts.xml to make assertions about what happened.
Simple example: code got compiled
If we want to check that a <javac …> task worked, we can just check that a .class file was created:
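For example (the file paths and the compile target name are hypothetical):

```xml
<target name="test-compile-creates-class">
    <!-- Make sure the test can't pass because of a stale file -->
    <delete file="build/classes/com/example/Main.class"/>
    <!-- Run the real build's compile target -->
    <subant target="compile">
        <fileset dir="." includes="build.xml"/>
    </subant>
    <!-- Assert that the compiler produced the class file -->
    <assert-file-exists file="build/classes/com/example/Main.class"/>
</target>
```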
It just deletes the expected file, runs the target using subant, then asserts that the file exists. The assertion uses this macrodef:
<macrodef name="assert-file-exists">
    <attribute name="file"/>
    <sequential>
        <echo message="Checking existence of file: @{file}"/>
        <fail message="File '@{file}' does not exist.">
            <condition>
                <not><available file="@{file}"/></not>
            </condition>
        </fail>
    </sequential>
</macrodef>
This uses a trick I’ve used a lot: the fail task with a condition inside it, meaning that we only fail if the condition is satisfied. Here we wrap available in not, which means: fail if the file doesn’t exist.
Harder example: JAR file
Now let’s check that a JAR file was created, and has the right contents. Here’s the test:
This just says that after we’ve run the target, the file MyProduct.jar exists, and it contains a file called MANIFEST.MF that has the right Main-Class information in it.
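In code, that might look something like this (the jar target name and the Main-Class value are hypothetical, and it assumes an assert-jar-contains macrodef along the lines described next):

```xml
<target name="test-jar-contains-manifest">
    <delete file="MyProduct.jar"/>
    <!-- Run the real build's jar target -->
    <subant target="jar">
        <fileset dir="." includes="build.xml"/>
    </subant>
    <assert-file-exists file="MyProduct.jar"/>
    <!-- Check the manifest names the right main class -->
    <assert-jar-contains jar="MyProduct.jar"
        file="META-INF/MANIFEST.MF"
        text="Main-Class: com.example.Main"/>
</target>
```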
This basically unzips the JAR into a directory, then searches the directory using a fileset for a file with the right name and contents, and fails if it’s not found (i.e. if the resourcecount of the fileset is zero). These are the kinds of backflips you need to do to bend Ant to your will.
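A sketch of that macrodef (assuming a tmpdir property, as in the dependency tests later):

```xml
<macrodef name="assert-jar-contains">
    <attribute name="jar"/>
    <attribute name="file"/>
    <attribute name="text"/>
    <sequential>
        <!-- Unzip the JAR into a scratch directory -->
        <delete dir="${tmpdir}/unzipped"/>
        <unzip src="@{jar}" dest="${tmpdir}/unzipped"/>
        <!-- Fail if no file matches both the name and the contents -->
        <fail message="'@{jar}' does not contain '@{file}' with text '@{text}'.">
            <condition>
                <resourcecount when="equal" count="0">
                    <fileset dir="${tmpdir}/unzipped" includes="@{file}">
                        <contains text="@{text}"/>
                    </fileset>
                </resourcecount>
            </condition>
        </fail>
    </sequential>
</macrodef>
```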
Or, you can choose the Nuclear Option.
The Nuclear Option
If Ant tasks just won’t do, since Ant 1.7 (on Java 1.6 or later) we can drop into a <script> tag.
The script tag allows us to use a scripting language as provided through the JSR 223 Java feature directly within our Ant file, meaning we can do anything.
In all the JVMs I’ve tried, the only scripting language actually available out of the box is JavaScript, provided by the Rhino engine, which is bundled with standard Java.
When using the script tag, expect bad error messages. Rhino produces unhelpful stack traces, and Ant doesn’t really tell you what went wrong.
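For example, a minimal script task might look like this (a sketch: the checked.file property is hypothetical; the project bean is provided by the script task):

```xml
<script language="javascript"><![CDATA[
    // Fail the build if a file (named by a hypothetical
    // 'checked.file' property) is empty. Any JavaScript
    // throw causes the script task to fail the build.
    var f = new java.io.File(project.getProperty("checked.file"));
    if (f.length() == 0) {
        throw "File is empty: " + f;
    }
]]></script>
```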
So now we know how to test the artifacts our build produces, but what about directly testing the logic in build.xml?
Testing build logic
We want to:
Confirm that targets succeed or fail under certain conditions
Check indirect dependencies are as expected
Test a unit of Ant logic (e.g. a macrodef)
Success and failure
Here’s a little macro I cooked up to assert that something is going to fail:
I resorted to the Nuclear Option of a script tag, and used Ant’s Java API (through JavaScript) to execute the target, and catch any exceptions that are thrown. If no exception is thrown, we fail.
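Such a macro might look like this (a sketch; it assumes macrodef’s @{…} substitution applies inside the script text, which matched my experience):

```xml
<macrodef name="assert-target-fails">
    <attribute name="target"/>
    <sequential>
        <script language="javascript"><![CDATA[
            // Run the target via Ant's Java API and note whether it threw
            var failed = false;
            try {
                project.executeTarget("@{target}");
            } catch (e) {
                failed = true;
            }
            // If no exception was thrown, we fail
            if (!failed) {
                throw "Target @{target} should have failed, but did not";
            }
        ]]></script>
    </sequential>
</macrodef>
```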
Testing dependencies
To check that the dependencies are as we expect, we really want to run Ant’s dependency resolution without executing anything. Remarkably, Ant has no support for this.
Now we need to be able to run a build and capture the output. We can do that like this:
<target name="test-C-depends-on-A">
    <delete file="${tmpdir}/cdeps.txt"/>
    <ant
        target="printCdeps"
        output="${tmpdir}/cdeps.txt"
    />
    <fail message="Target A did not execute when we ran C!">
        <condition>
            <resourcecount when="equal" count="0">
                <fileset file="${tmpdir}/cdeps.txt">
                    <contains text="targetA:"/>
                </fileset>
            </resourcecount>
        </condition>
    </fail>
    <delete file="${tmpdir}/cdeps.txt"/>
</target>
We use the <ant> task to run the build, telling it to write its output to a file cdeps.txt. Then, to assert that C depends on A, we just fail if cdeps.txt doesn’t contain a line indicating we ran A. (To assert that a file contains a certain line we use a load of fail, condition, resourcecount and fileset machinery as before.)
So, we can check that targets depend on each other, directly or indirectly. Can we write proper unit tests for our macrodefs?
Testing ant units
To test a macrodef or target as a piece of logic, without touching the file system or really running it, we will need fake versions of all the tasks, including <jar>, <copy>, <javac> and many more.
If we replace the real versions with fakes and then run our tasks, we can set up our fakes to track what happened, and then make assertions about it.
If we create a file called real-fake-tasks.xml, we can put things like this inside:
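For example, a fake <jar> might be sketched as a macrodef that just records the call instead of writing a real archive (attribute names beyond destfile are omitted for brevity):

```xml
<!-- real-fake-tasks.xml: a fake <jar> that records what it was
     asked to do instead of writing a real archive. -->
<project name="fake-tasks">
    <macrodef name="jar">
        <attribute name="destfile"/>
        <sequential>
            <echo message="FAKE jar called with destfile=@{destfile}"/>
            <!-- Leave a trail that a test can assert on -->
            <property name="fake.jar.called" value="true"/>
        </sequential>
    </macrodef>
</project>
```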
A test run then copies the fakes into place, runs the tests, and cleans up:
cp real-fake-tasks.xml fake-tasks.xml
ant -f test-build.xml test-A-runs-jar
rm fake-tasks.xml
If fake-tasks.xml doesn’t exist, the real tasks will be used, so running your build normally should still work.
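One way to get that behaviour is an optional include in build.xml (assuming your version of Ant’s <include> supports the optional attribute, as <import> does):

```xml
<!-- In build.xml: pull in fake tasks only when the file exists.
     With optional="true", a missing fake-tasks.xml is not an error,
     so the real tasks are used in a normal build. -->
<include file="fake-tasks.xml" optional="true"/>
```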
This trick relies on the fact that our fake tasks replace the real ones, which appears to be an undocumented behaviour of my version of Ant. Ant complains about us doing this, with an error message that sounds like it didn’t work, but actually it did (on my machine).
If we wanted to avoid relying on this undocumented behaviour, we’d need to write our real targets in terms of special macrodefs with names like do-jar, and provide two versions of do-jar: one that hands off to the real jar task, and one that is a fake. This would be a lot of work, and it pollutes our production code with machinery needed only for testing, but it would use only Ant’s documented behaviour, making it unlikely to fail unexpectedly in the future.
Summary
You can write Ant code in a test-driven way, and there are even structures that allow you to write things that might be described as unit tests.
At the moment, I am using mostly the “testing artifacts” way. The tests run slowly, but they give real confidence that your build file is really working.
Since I introduced this form of testing into our build, I enjoy working with build.xml a lot more, because I know when I’ve messed it up.
But I do spend more time waiting around for the tests to run.