Minimal example of a Maven pom for a mixed Kotlin and Java project

The Kotlin docs describe some things you need in your pom.xml to create a project that is a mix of Kotlin and Java code, but there is no complete example, so here is mine:

pom.xml:

<project>
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example.kj</groupId>
    <artifactId>kotlin-and-java</artifactId>
    <version>1.0.0-SNAPSHOT</version>

    <properties>
        <kotlin.version>1.5.21</kotlin.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    </properties>

    <build>
        <plugins>
            <plugin>
                <groupId>org.jetbrains.kotlin</groupId>
                <artifactId>kotlin-maven-plugin</artifactId>
                <version>${kotlin.version}</version>
                <executions>
                    <execution>
                        <id>compile</id>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                        <configuration>
                            <sourceDirs>
                                <sourceDir>${project.basedir}/src/main/kotlin</sourceDir>
                                <sourceDir>${project.basedir}/src/main/java</sourceDir>
                            </sourceDirs>
                        </configuration>
                    </execution>
                    <execution>
                        <id>test-compile</id>
                        <goals>
                            <goal>test-compile</goal>
                        </goals>
                        <configuration>
                            <sourceDirs>
                                <sourceDir>${project.basedir}/src/test/kotlin</sourceDir>
                                <sourceDir>${project.basedir}/src/test/java</sourceDir>
                            </sourceDirs>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.5.1</version>
                <executions>
                    <!-- Replacing default-compile as it is treated specially by maven -->
                    <execution>
                        <id>default-compile</id>
                        <phase>none</phase>
                    </execution>
                    <!-- Replacing default-testCompile as it is treated specially by maven -->
                    <execution>
                        <id>default-testCompile</id>
                        <phase>none</phase>
                    </execution>
                    <execution>
                        <id>java-compile</id>
                        <phase>compile</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>java-test-compile</id>
                        <phase>test-compile</phase>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-stdlib</artifactId>
            <version>${kotlin.version}</version>
        </dependency>
    </dependencies>
</project>

src/main/java/MyJava.java:

public class MyJava {
    public static void main(String[] args) {
        MyKotlin k = new MyKotlin();  // Use Kotlin from Java
        System.out.println(k.message());
    }
}

src/main/kotlin/MyKotlin.kt:

class MyKotlin : MyJava() {  // Use Java from Kotlin
    fun message(): String {
        return "Hello from Kotlin!"
    }
}

src/test/java/MadeInJavaTest.java:

class MadeInJavaTest {
    public void testCanUseJava() {
        MyJava j = new MyJava();
    }

    public void testCanUseKotlin() {
        MyKotlin k = new MyKotlin();
        assertEquals(k.message(), "Hello from Kotlin!");
    }

    static void assertEquals(String left, String right) {
        if (!left.equals(right)) {
            throw new AssertionError(left + " != " + right);
        }
    }
}

src/test/kotlin/MadeInKotlinTest.kt:

class MadeInKotlinTest {
    fun testCanUseJava() {
        MyJava()
    }

    fun testCanUseKotlin() {
        val k = MyKotlin();
        assertEquals(k.message(), "Hello from Kotlin!");
    }
}

fun assertEquals(left: String, right: String) {
    if (left != right) {
        throw AssertionError("${left} != ${right}");
    }
}

Widely used programming languages: past, present, and future

Programming languages are like pop groups in that they have followers, fans and supporters; new ones are constantly being created and some eventually become widely popular, while those that were once popular slowly fade away or mutate into something else.

Creating a language is a relatively popular activity. Science fiction and fantasy authors have been doing it since before computers existed, e.g., the Elf language Quenya devised by Tolkien, and in the computer age Star Trek’s Klingon. Some very good how-to books have been written on the subject.

As soon as computers became available, people started inventing programming languages.

What have been the major factors influencing the growth to widespread use of a new programming language (I’m ignoring languages that become widespread within application niches)?

Cobol and Fortran became widely used because there was widespread implementation support for them across computer manufacturers, and they did not have to compete with any existing widely used languages. Various niches had one or more languages that were widely used in that niche, e.g., Algol 60 in academia.

To become widely used during the mainframe/minicomputer age, a new language first had to be ported to the major computers of the day, whose manufacturers sometimes supported multiple, incompatible operating systems. No new languages became widely used in the sense of being used across computer vendors. Some new languages were widely used by developers because they were available on IBM computers; for several decades a large percentage of developers used IBM computers. Based on job adverts, RPG was widely used, but PL/1 not so much. The use of RPG declined with the decline of IBM.

The introduction of microcomputers (originally 8-bit, then 16, then 32, and finally 64-bit) opened up an opportunity for new languages to become widely used in that niche (which would eventually grow to be the primary computing platform of its day). This opportunity occurred because compiler vendors for the major languages of the day did not want to cannibalize their existing market (i.e., selling compilers for a lot more than the price of a microcomputer) by selling a much lower priced product on microcomputers.

BASIC became available on practically all microcomputers, or rather some dialect of BASIC that was incompatible with all the other dialects. The availability of BASIC on a vendor’s computer promoted sales of the hardware, and it was not worthwhile for the major vendors to create a version of BASIC that reduced portability costs; the profit was in games.

The dominance of the Microsoft/Intel partnership removed the high cost of porting to lots of platforms (by driving them out of business), but created a major new obstacle to the wide adoption of new languages: Developer choice. There had always been lots of new languages floating around, but people only got to see the subset that were available on the particular hardware they targeted. Once the cpu/OS (essentially) became a monoculture most new languages had to compete for developer attention in one ecosystem.

Pascal was in widespread use for a few years on micros (in the form of Turbo Pascal) and university computers (the source of Wirth’s ETH compiler was freely available for porting), but eventually C won developer mindshare and became the most widely used language. In the early 1990s C++ compiler sales took off, but many developers were writing C with a few C++ constructs scattered about the code (e.g., use of new, rather than malloc/free).

Next, the Internet took off, and opened up an opportunity for new languages to become dominant. This opportunity occurred because Internet related software was being made freely available, and established compiler vendors were not interested in making their products freely available.

There were people willing to invest in creating a good-enough implementation of the language they had invented, and giving it away for free. Luck, plus being in the right place at the right time resulted in PHP and Javascript becoming widely used. Network effects prevent any other language becoming widely used. Compatible dialects of PHP and Javascript may migrate widespread usage to quite different languages over time, e.g., Facebook’s Hack.

Java rode to popularity on the coat-tails of the Internet, and when it looked like security issues would reduce it to niche status, it became the vendor supported language for one of the major smart-phone OSs.

Next, smart-phones took off, but the availability of Open Source compilers closed the opportunity window for new languages to become dominant through lack of interest from existing compiler vendors. Smart-phone vendors wanted to quickly attract developers, which meant throwing their weight behind a language that many developers were already familiar with; Apple went with Objective-C (which evolved to Swift), Google with Java (which evolved to Kotlin, because of the Oracle lawsuit).

Where does Python fit in this grand scheme? I don’t yet have an answer, or is my world-view wrong to treat Python usage as being as widespread as C/C++/Java?

New programming languages continue to be implemented; I don’t see this ever stopping. Most don’t attract more users than their implementer, but a few become fashionable amongst the young, who are always looking to attach themselves to something new and shiny.

Will a new programming language ever again become widely used?

Like human languages, programming languages experience strong network effects. Widely used languages continue to be widely used because many companies depend on code written in them, and many developers who know them can obtain jobs. What company wants to risk using a new language, only to find it cannot hire staff who know it? And few people are willing to invest in becoming fluent in a language with no immediate job prospects.

Today’s widely used programming languages succeeded in a niche that eventually grew larger than all the other computing ecosystems. The Internet and smart-phones are used by everybody on the planet; there are no bigger ecosystems to provide new languages with a possible route to widespread use. To become widely used a language first has to become fashionable, but from now on, new programming languages that don’t evolve from (i.e., remain compatible with) currently widely used languages are very unlikely to migrate from fashionable to widely used.

It has always been possible for a proficient developer to dedicate a year+ of effort to creating a new language implementation. Adding the polish needed to make it production ready used to take much longer, but these days tool chains such as LLVM supply a lot of the heavy lifting. The problem for almost all language creators/implementers is community building; they are terrible at dealing with other developers.

It’s no surprise that nearly all the new languages that become fashionable originate with language creators who work for a company that happens to feel a need for a new language. Examples include:

  • Go was created by Google for internal use, and attracted an outside fan base. Company languages are not new, with IBM’s PL/1 being the poster child (or is there a more modern poster child?). At the moment Go is a trendy language, and this feeds a supply of young developers willing to invest in learning it. Once the trendiness wears off, Google will start to have problems recruiting developers, the reason: being labelled as a Go developer limits job prospects when few other companies use the language. Talk to a manager who has tried to recruit developers to work on applications written in Fortran, Pascal and other once-widely used languages (and even wannabe widely used languages, such as Ada),
  • Rust, a vanity project from Mozilla, which they have now abandoned. Did Rust become fashionable because it arrived at the right time to become the not-Google language? I await a PhD thesis on the topic of the rise and fall of Rust,
  • Microsoft’s C# ceased being trendy some years ago. These days I don’t have much contact with developers working in the Microsoft ecosystem, so I don’t know anything about the state of the C# job market.

Every now and again a language creator has the social skills needed to start an active community. Zig caught my attention when I read that its creator, Andrew Kelley, had quit his job to work full-time on Zig. Two and a half years later Zig has its own track at FOSDEM’21.

Will Zig become the next fashionable language as Rust/Go popularity fades? I’m rooting for Zig because of its name: there are relatively few languages whose names start with Z, while the start of the alphabet is over-represented in language names. It would be foolish to root for a language because of a belief that it has magical properties (e.g., powerful, readable, maintainable), but the young are foolish.

Growth in number of packages for widely used languages

These days a language’s ecosystem of add-ons, such as packages, is often more important than the features provided by the language (which usually only vary in their syntactic sugar, and built-in support for some subset of commonly occurring features).

Use of a particular language grows and shrinks, sometimes over very many decades. Estimating the number of users of a language is difficult, but a possible proxy is ecosystem activity in the form of package growth/decline. However, it will take several decades to accumulate the data needed to test how effective this proxy might be.

Where are we today?

The Module Counts website is the home for a project that counts the number of libraries/packages/modules contained in 26 language specific repositories. Daily data, in some cases going back to 2010, is available as a csv :-) The following are the most interesting items I discovered during a fishing expedition.

The csv file contains totals, and some values are missing (which means specifying an ‘ignore missing values’ argument to some functions). Some repos have been experiencing large average daily growth (e.g., 65 for PyPI, and 112 for Maven Central-Java), while others are more subdued (e.g., 0.7 for PERL and 3.9 for R’s CRAN). Apart from a few days, the daily change is positive.

Is the order-of-magnitude difference in growth rates due to the number of active users, the number of packages that currently exist, a wide/narrow application domain (Python’s is wide, while R’s is narrow), the ease of getting a package accepted, or something else?

The plots below show how PyPI has been experiencing exponential growth of a kind (the regression model fitted to the daily total has the form e^{1.01days+days^2}, where days is the number of days since 2010-01-01; the red line is the daily diff of this equation), while Ruby has been experiencing a linear decline since late 2014 (all code+data):

Daily change in the number of packages in PyPI and Rubygems.

Will the five-year decline in new submissions to Rubygems continue, and does this point to an eventual demise of Ruby (a few decades from now)? Rubygems has years to go before it reaches PERL’s low growth rate (I think PERL is in terminal decline).

Are there any short term patterns, say at the weekly level? Autocorrelation is a technique for estimating the extent to which today’s value is affected by values from the immediate past (usually one or two measurement periods back, i.e., yesterday or the day before that). The two plots below show the autocorrelation for daily changes, with lag in days:

Autocorrelation of daily changes in PyPI and Maven-Java package counts.

The recurring 7-day ‘peaks’ show the impact of weekends (I assume). Is the larger “weekend-effect” for Java, compared to PyPI, due to Java usage including a greater percentage of commercial developers (who tend not to work at the weekend)?

I did not manage to find any seasonal effect, e.g., more submissions during the winter than the summer. But I only checked a few of the languages, and only for a single peak (see code for details).
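
The analysis behind these plots is in the code+data linked above; the lag calculation itself is simple enough to sketch. In Java (purely for illustration, not the original analysis code), the standard sample autocorrelation of a series of daily changes is:

// Sample autocorrelation at a given lag: covariance of the series with a
// lagged copy of itself, divided by the variance of the series.
static double autocorrelation(double[] dailyChange, int lag) {
    double mean = 0;
    for (double v : dailyChange) mean += v;
    mean /= dailyChange.length;

    double num = 0, den = 0;
    for (int t = 0; t < dailyChange.length; t++) {
        double d = dailyChange[t] - mean;
        den += d * d;
        if (t + lag < dailyChange.length) {
            num += d * (dailyChange[t + lag] - mean);
        }
    }
    return num / den;
}

A peak at a lag of 7 corresponds to the weekly pattern visible in the plots above.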

Another way of tracking package evolution is version numbering. For instance, how often do version numbers change, and which component, e.g., major/minor. There have been a couple of studies looking at particular repos over a few years, but nobody is yet recording broad coverage daily, over the long term 😉

Shutdown order consistency: how Rust helps

Some Java code with bugs

Here’s my main method (in Java). Can you guess the bug?

Db db = new Db();
Monitoring monitoring = new Monitoring();
Monitoring mon2 = new Monitoring();
Billing billing = new Billing(db, monitoring);
monitoring.setDb(db);

runMainLoop(billing, mon2);

db.stop();
billing.stop();
monitoring.stop();

If you would like to hunt down the 2 bugs manually, try reading the full code here: ShutdownOrder.java

But maybe you have an idea already? Maybe you’ve seen code like this before? If you have, you probably have an instinct that there’s some kind of bug, even if you can’t say for sure what it is. Code like this almost always has bugs!

This code compiles fine, but it contains two bugs.

First, we forgot to setDb() on mon2. This causes a NullPointerException, because Monitoring expects always to have a working Db.

Second, and in general harder to spot, we shut down our services in the wrong order. It turns out that Monitoring uses its Db during shutdown, so we get an exception. Even worse, if some other code needed to run after monitoring.stop(), it won’t, because the exception prevents us getting any further.

Of course, this is toy code, but this kind of problem is common (and much harder to spot) in real-life code. In fact, my team dealt with a similar bug this week.

It’s fundamentally hard to figure out your shutdown order. It’s complicated further if classes have start() methods too, which I have seen in lots of Java code.

Given that this is just a hard problem, maybe there’s no point looking for tools to make it easier?

Some Rust code without those bugs

Let’s try writing this code in Rust. Here’s the main method:

let db = Db::new();
let monitoring = Monitoring::new(&db);
let mon2 = Monitoring::new(&db);
let billing = Billing::new(&db, &monitoring);

run_main_loop(&billing, &mon2);

// drop() is called automatically on all objects here

Here’s the full code: shutdown_order.rs

This code shuts down all the services automatically at the end, and any mistakes we make in the order are compile errors, not things we find later when our code is running.

The code to shut down each service looks like this:

impl Drop for Monitoring<'_> {
    fn drop(&mut self) {
        // [Disconnect from monitoring API]
        self.db.add_record("MonitorShutDown");
    }
}

This is us implementing the Drop trait for the struct Monitoring (traits are a bit like Java Interfaces). The Drop trait is special: it indicates what to do when an instance of this struct is dropped. In Rust, this is guaranteed to happen when the instance goes out of scope, which is why our comment at the end of the main method sounds so confident.

Furthermore, Rust’s compiler shuts down everything in the reverse order in which it was created, and guarantees that nothing gets used after it has been dropped.

Rust’s lovely world gives us two relevant treats: no unexpected nulls, and lifetimes.

Treat number 1: no unexpected nulls

First, in Rust, like in other modern languages like Kotlin, we have to be explicit about items that could be missing. In our example, we were able to re-arrange the code so that db can never be missing (or null), and the compiler encouraged us to do so. If we really needed it to be missing some of the time, we could have used the Option type, and the compiler would have forced us to handle the case when it was missing, instead of unexpectedly getting a NullPointerException like we did in Java. (In fact, if we’d structured our code to use final in as many places as possible, we could have been encouraged towards basically the same solution in Java too.)
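
For comparison, here is a minimal sketch of that final-field style in Java (class and method names are assumed to mirror the toy example, not copied from ShutdownOrder.java): make the Db field final and require it in the constructor, so a Monitoring can never exist without one.

class Monitoring {
    private final Db db;  // final and constructor-injected: a Monitoring can never exist without a Db

    Monitoring(Db db) {
        this.db = java.util.Objects.requireNonNull(db);  // fail fast instead of a later NullPointerException
    }

    void stop() {
        // [Disconnect from monitoring API]
        db.addRecord("MonitorShutDown");  // assumed method name, mirroring the Rust add_record
    }
}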

Treat number 2: lifetimes

Second, if you look a bit more closely at the full code of shutdown_order.rs you’ll see lots of confusing-looking annotations like <'a> and &'a:

struct Monitoring<'a> {
    db: &'a Db,
}

The approximate meaning of those annotations is: a Monitoring holds a reference to a Db, and that Db must last longer than the Monitoring.

This “lasts longer than” wording is what Rust Lifetimes are for. Lifetimes are a way of saying how long something lasts.

Lifetimes are really confusing when you start with Rust, and have caused me a lot of pain. Code like this is where they are both most painful and most helpful. As I mentioned earlier, the problem of shutdown order is fundamentally hard. Rust gives you that pain at the beginning, and until you understand what’s going on, the pain is very confusing and acute. But, once your code compiles, it is correct, at least as far as problems like this are concerned.

I love the sense of security it gives me to write Rust code and know the compiler has checked my code for this kind of problem, meaning it can’t crop up at 3am on Christmas Day…

Final note/caveat

This Rust code is probably over-simplified, because all the references are immutable (you can’t change the objects they point to). In practice, we may well have mutable references, and if we do we’re going to have to deal with the further difficulty that Rust won’t allow two different objects to hold references to an object if any of those references are mutable. So it would object to Billing and Monitoring using the Db object at the same time. We’d need to make it immutable (as we have here), or find a different way of structuring the code: for example, we could hold the Db instance only within the run_main_loop code, and pass it in temporarily to the Billing and Monitoring objects when we called their methods. A large part of the art, fun and pain of learning Rust is finding new patterns for your code that do what you need to do and also keep the compiler happy. When you manage it, you get amazing benefits!

Profile a Java unit test (very quickly, with no external tools)

I have a unit test that is running slowly, and I want a quick view of what is happening.

I can get a nice overview of where the code spends its time by adding this to the JVM arguments:

-agentlib:hprof=cpu=samples,lineno=y,depth=3,file=hprof.samples.txt

and running the test as normal.

Now I can look at the file that was created, hprof.samples.txt, and looking at the bottom section I can see how much time is spent in each method.

This worked for me within IntelliJ IDEA community edition by clicking “Run” then “Edit Configurations” and adding the above code to “VM options” for my test.

It should also work in Gradle by editing gradle.properties and adding something like this:

org.gradle.jvmargs=-agentlib:hprof=cpu=samples,lineno=y,depth=3,file=hprof.samples.txt

and should also work in Maven. In fact, I found this information in this stackoverflow question: How do you run maven unit tests with hprof?.

Impact of function size on number of reported faults

Are longer functions more likely to contain more coding mistakes than shorter functions?

Well, yes. Longer functions contain more code, and the more code developers write the more mistakes they are likely to make.

But wait, the evidence shows that most reported faults occur in short functions.

This is true, at least in Java. It is also true that most of a Java program’s code appears in short methods (in C 50% of the code is contained in functions containing 114 or fewer lines, while in Java 50% of code is contained in methods containing 4 or fewer lines). It is to be expected that most reported faults appear in short functions. The plot below shows, left: the percentage of code contained in functions/methods containing a given number of lines, and right: the cumulative percentage of lines contained in functions/methods containing less than a given number of lines (code+data):

left: the percentage of code contained in functions/methods containing a given number of lines, and right: the cumulative percentage of lines contained in functions/methods containing less than a given number of lines.

Does percentage of program source really explain all those reported faults in short methods/functions? Or are shorter functions more likely to contain more coding mistakes per line of code, than longer functions?

Reported faults per line of code is often referred to as: defect density.

If defect density was independent of function length, the plot of reported faults against function length (in lines of code) would be horizontal; red line below. If every function contained the same number of reported faults, the plotted line would have the form of the blue line below.

Number of reported faults in C++ classes (not methods) containing a given number of lines.

Two things need to occur for a fault to be experienced. A mistake has to appear in the code, and the code has to be executed with the ‘right’ input values.

Code that is never executed will never result in any fault reports.

In a function containing 100 lines of executable source code, if, say, 30 lines are rarely executed, then those 30 lines will not contribute as much to the final total number of reported faults as the other 70 lines.

How does the average percentage of executed LOC, in a function, vary with its length? I have been rummaging around looking for data to help answer this question, but so far without any luck (the llvm code coverage report is over all tests, rather than per test case). Pointers to such data very welcome.

Statement execution is controlled by if-statements, and around 17% of C source statements are if-statements. For functions containing between 1 and 10 executable statements, the percentage that don’t contain an if-statement is expected to be, respectively: 83, 69, 57, 47, 39, 33, 27, 23, 19, 16. Statements contained in shorter functions are more likely to be executed, providing more opportunities for any mistakes they contain to be triggered, generating a fault experience.
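
Those percentages are just 0.83^n, treating the 17% figure as an independent per-statement probability; a quick Java sketch (for illustration only) reproduces them:

public class NoIfStatement {
    public static void main(String[] args) {
        // Probability that none of n statements is an if-statement,
        // assuming 17% of statements are if-statements and independence.
        for (int n = 1; n <= 10; n++) {
            System.out.printf("%2d statements: %.0f%%%n", n, Math.pow(1 - 0.17, n) * 100);
        }
    }
}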

Longer functions contain more dependencies between the statements within their body than shorter functions (I don’t have any data showing how much more). Dependencies create opportunities for making mistakes (there is data showing that dependencies between files and classes are a source of mistakes).

The previous analysis makes a large assumption, that the mistake generating a fault experience is contained in one function. This is true for 70% of reported faults (in AspectJ).

What is the distribution of reported faults against function/method size? I don’t have this data (pointers to such data very welcome).

The plot below shows the number of reported faults in C++ classes (not methods) containing a given number of lines (from a paper by Koru, Emam and Mathew; code+data):

Number of reported faults in C++ classes (not methods) containing a given number of lines.

It’s tempting to think that those three curved lines are each classes containing the same number of methods.

What is the conclusion? There is one good reason why shorter functions should have more reported faults, and another good’ish reason why longer functions should have more reported faults. Perhaps length is not important. We need more data before an answer is possible.

Example Android project with repeatable tests running inside an emulator

I’ve spent the last couple of days fighting the Android command line to set up a simple project that can run automated tests inside an emulator reliably and repeatably.

To make the tests reliable and independent from anything else on my machine, I wanted to store the Android SDK and AVD files in a local directory.

To do this I had to define a lot of inter-related environment variables, and wrap the tools in scripts that ensure they run with the right flags and settings.

The end result of this work is here: gitlab.com/andybalaam/android-skeleton

You need all the utility scripts included in that repo for it to work, but some highlights include:

The environment variables that I source in every script, scripts/paths:

PROJECT_ROOT=$(dirname $(dirname $(realpath ${BASH_SOURCE[${#BASH_SOURCE[@]} - 1]})))
export ANDROID_SDK_ROOT="${PROJECT_ROOT}/android_sdk"
export ANDROID_SDK_HOME="${ANDROID_SDK_ROOT}"
export ANDROID_EMULATOR_HOME="${ANDROID_SDK_ROOT}/emulator-home"
export ANDROID_AVD_HOME="${ANDROID_EMULATOR_HOME}/avd"

Creation of a local.properties file that tells Gradle and Android Studio where the SDK is, by running something like this:

echo "# File created automatically - changes will be overwritten!" > local.properties
echo "sdk.dir=${ANDROID_SDK_ROOT}" >> local.properties

The wrapper scripts for Android tools e.g. scripts/sdkmanager:

#!/bin/bash

set -e
set -u

source scripts/paths

"${ANDROID_SDK_ROOT}/tools/bin/sdkmanager" \
    "--sdk_root=${ANDROID_SDK_ROOT}" \
    "$@"

The wrapper for avdmanager is particularly interesting since it seems we need to override where it thinks the tools directory is for it to work properly – scripts/avdmanager:

#!/bin/bash

set -e
set -u

source scripts/paths

# Set toolsdir to include "bin/" since avdmanager seems to go 2 dirs up
# from that to find the SDK root?
AVDMANAGER_OPTS="-Dcom.android.sdkmanager.toolsdir=${ANDROID_SDK_ROOT}/tools/bin/" \
    "${ANDROID_SDK_ROOT}/tools/bin/avdmanager" "$@"

An installation script that must be run once before using the project scripts/install-android-tools:

#!/bin/bash

set -e
set -u
set -x

source scripts/paths

mkdir -p "${ANDROID_SDK_ROOT}"
mkdir -p "${ANDROID_AVD_HOME}"
mkdir -p "${ANDROID_EMULATOR_HOME}"

# Download sdkmanager, avdmanager etc.
cd "${ANDROID_SDK_ROOT}"
test -f commandlinetools-*.zip || \
    wget -q 'https://dl.google.com/android/repository/commandlinetools-linux-6200805_latest.zip'
unzip -q -u commandlinetools-*.zip
cd ..

# Ask sdkmanager to update itself
./scripts/sdkmanager --update

# Install the emulator and tools
yes | ./scripts/sdkmanager --install 'emulator' 'platform-tools'

# Platforms
./scripts/sdkmanager --install 'platforms;android-21'
./scripts/sdkmanager --install 'platforms;android-29'

# Install system images for our oldest and newest supported API versions
yes | ./scripts/sdkmanager --install 'system-images;android-21;default;x86_64'
yes | ./scripts/sdkmanager --install 'system-images;android-29;default;x86_64'

# Create AVDs to run the system images
echo no | ./scripts/avdmanager -v \
    create avd \
    -f \
    -n "avd-21" \
    -k "system-images;android-21;default;x86_64" \
    -p ${ANDROID_SDK_ROOT}/avds/avd-21
echo no | ./scripts/avdmanager -v \
    create avd \
    -f \
    -n "avd-29" \
    -k "system-images;android-29;default;x86_64" \
    -p ${ANDROID_SDK_ROOT}/avds/avd-29

Please do contribute to the project if you know easier ways to do this stuff.

How are C functions different from Java methods?

According to the right plot below, most of the code in a C program resides in functions containing between 5 and 25 lines, while most of the code in Java programs resides in methods containing one line (code+data; data kindly supplied by Davy Landman):

Number of C/Java functions of a given length and percentage of code in these functions.

The left plot shows the number of functions/methods containing a given number of lines, the right plot shows the total number of lines (as a percentage of all lines measured) contained in functions/methods of a given length (6.3 million functions and 17.6 million methods).

Perhaps all those 1-line Java methods are really complicated. In C, most lines contain a few tokens, as seen below (code+data):

Number of lines containing a given number of C tokens.

I don’t have any characters/tokens per line data for Java.

Is Java code mostly getters and setters?
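
Getters and setters would certainly account for a lot of one-line methods; a typical hand-written pair looks like this (illustrative only, not taken from the measured corpus):

class Person {
    private String name;

    // Each of these method bodies is a single line
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}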

I wonder what pattern C++ will follow, i.e., C-like, Java-like, or something else? If you have data for other languages, please send me a copy.

Building an all-in-one Jar in Gradle with the Kotlin DSL

To build a “fat” Jar of your Java or Kotlin project that contains all the dependencies within a single file, you can use the shadow Gradle plugin.

I found it hard to find clear documentation on how it works using the Gradle Kotlin DSL (with a build.gradle.kts instead of build.gradle) so here is how I did it:

$ cat build.gradle.kts 
import com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar

plugins {
    kotlin("jvm") version "1.3.41"
    id("com.github.johnrengelman.shadow") version "5.1.0"
}

repositories {
    mavenCentral()
}

dependencies {
    implementation(kotlin("stdlib"))
}

tasks.withType<ShadowJar>() {
    manifest {
        attributes["Main-Class"] = "HelloKt"
    }
}

$ cat src/main/kotlin/Hello.kt 
fun main() {
    println("Hello!")
}

$ gradle wrapper --gradle-version 5.5
BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

$ ./gradlew shadowJar
BUILD SUCCESSFUL in 1s
2 actionable tasks: 2 executed

$ java -jar build/libs/hello-all.jar 
Hello!

Creating a self-signed certificate for Apache and connecting to it from Java

Our mission: to create a self-signed certificate for an Apache web server that allows us to connect to it over HTTPS (SSL/TLS) from a Java program.

The tricky bit for me was generating a certificate that contains Subject Alternative Names for my server, which is needed to connect to it from Java.

We will use the openssl command.

Creating a self-signed certificate for Apache HTTPD

First create a config file cert.conf:

[ req ]
distinguished_name  = subject
x509_extensions     = x509_ext
prompt = no

[ subject ]
commonName = Example Company

[ x509_ext ]
subjectAltName = @alternate_names

[ alternate_names ]
DNS.1 = example.com

In the above, replace “example.com” with the name you will use for the host when you connect from Java. This is important, because Java requires the name in the certificate to match the name it is using to connect to the server. If you’re connecting to it as localhost, just put “localhost”. Note: do not include “https://” or any port or path after the hostname, so “example.com:8080/mypath” is wrong – it should be just “example.com”.

The alternate_names section above gives the “Subject Alternative Names” for this certificate. You can add more as “DNS.2”, “DNS.3”, etc.

Next, generate the server key and self-signed certificate:

openssl genrsa 2048 > server.key
chmod 400 server.key
openssl req -new -x509 -config cert.conf -nodes -sha256 -days 365 -key server.key -out server.crt

Now you have two new files: server.key and server.crt. These are the files that will be used by Apache HTTPD, so put them somewhere useful (e.g. inside /usr/local/apache2/conf/) and refer to them in the Apache config file using keys “SSLCertificateKeyFile” and “SSLCertificateFile” respectively. For more info see the SSL/TLS How-To.

Checking the certificate is being used

Start up your Apache and ensure you can connect to it over HTTPS using curl:

curl -v --insecure https://example.com:8080

Replace “https://example.com:8080” above with the full URL (this time, include “https://” and the port and path).

To examine the certificate that is being returned, run:

openssl s_client -showcerts -connect example.com:8080

Replace “example.com:8080” above with the hostname and port (no “https://” this time!).

Connecting from Java

To be able to connect from Java, we need a Trust Store. We can create one in PKCS#12 format with:

openssl pkcs12 -export -passout pass:000000 -out trust.pkcs12 -inkey server.key -in server.crt

Note: Java 8 onwards is able to use .pkcs12 (PKCS#12) files for its trust store. The old .jks (Java Key Store) format can also be used, but is deprecated.

Now that you have a file we can use as a trust store, follow my other article to connect from Java over HTTPS with a self-signed certificate.
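
As a taster, one short route is to point the JVM at the trust store via system properties. The sketch below reuses the file name, password and example URL from this post (adjust to match your setup); it is not the full approach from that article:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class SelfSignedClient {
    public static void main(String[] args) throws Exception {
        // Tell the JVM to trust the certificates in our PKCS#12 file
        System.setProperty("javax.net.ssl.trustStore", "trust.pkcs12");
        System.setProperty("javax.net.ssl.trustStoreType", "PKCS12");
        System.setProperty("javax.net.ssl.trustStorePassword", "000000");

        HttpsURLConnection conn = (HttpsURLConnection)
            new URL("https://example.com:8080").openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine());
        }
    }
}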

Build with a different Java version (e.g. 11) using Docker

To spin up a temporary environment with a different Java version without touching your real environment, try this Docker command:

docker run -i -t --mount "type=bind,src=$PWD,dst=/code" openjdk:11-jdk bash

(Change “11-jdk” to the version you want as listed on the README.)

Then you can build the code inside the current directory with something like this:

cd /code
./gradlew test

Or similar for other build tools, although you may need to install them first.
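
If you want to check which JDK the container is actually using before building, a throwaway class like this (hypothetical file name Version.java) does the job; on Java 11+ you can run it directly with java Version.java:

public class Version {
    public static void main(String[] args) {
        // Print the version and vendor of the JVM we are running on
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
    }
}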

Scheduling a task in Java within a CompletableFuture

When we want to do something later in our Java code, we often turn to the ScheduledExecutorService. This class has a method called schedule(), and we can pass it some code to be run later like this:

ScheduledExecutorService executor =
    Executors.newScheduledThreadPool(4);
executor.schedule(
    () -> {System.out.println("..later");},
    1,
    TimeUnit.SECONDS
);
System.out.println("do...");
// (Don't forget to shut down the executor later...)

The above code prints “do…” and then one second later it prints “…later”.

We can even write code that does some work and returns a result in a similar way:

// (Make the executor as above.)
ScheduledFuture<Integer> future = executor.schedule(
    () -> 10 + 25, 1, TimeUnit.SECONDS);
System.out.println("answer=" + future.get());

The above code prints “answer=35”. When we call get() it blocks waiting for the scheduler to run the task and mark the ScheduledFuture as complete, and then returns the answer to the sum (10 + 25) when it is ready.

This is all very well, but you may note that the Future returned from schedule() is a ScheduledFuture, and a ScheduledFuture is not a CompletableFuture.

Why do you care? Well, you might care if you want to do something after the scheduled task is completed. Of course, you can call get(), and block, and then do something, but if you want to react asynchronously without blocking, this won’t work.

The normal way to run some code after a Future has completed is to call one of the “then*” or “when*” methods on the Future, but these methods are only available on CompletableFuture, not ScheduledFuture.

Never fear, we have figured this out for you. We present a small wrapper for schedule that transforms your ScheduledFuture into a CompletableFuture. Here’s how to use it:

CompletableFuture<Integer> future =
    ScheduledCompletable.schedule(
        executor,
        () -> 10 + 25,
        1,
        TimeUnit.SECONDS
     );
future.thenAccept(
    answer -> {System.out.println(answer);}
);
System.out.println("Answer coming...")

The above code prints “Answer coming…” and then “35”, so we can see that we don’t block the main thread waiting for the answer to come back.

So far, we have scheduled a synchronous task to run in the background after a delay, and wrapped its result in a CompletableFuture to allow us to chain more tasks after it.

In fact, what we often want to do is schedule a delayed task that is itself asynchronous, and already returns a CompletableFuture. In this case it feels particularly natural to get the result back as a CompletableFuture, but with the current ScheduledExecutorService interface we can’t easily do it.

It’s OK, we’ve figured that out too:

Supplier<CompletableFuture<Integer>> asyncTask = () ->
    CompletableFuture.completedFuture(10 + 25);
CompletableFuture<Integer> future =
    ScheduledCompletable.scheduleAsync(
        executor, asyncTask, 1, TimeUnit.SECONDS);
future.thenAccept(
    answer -> {System.out.println(answer);}
);
System.out.println("Answer coming...")

The above code prints “Answer coming…” and then “35”, so we can see that our existing asynchronous task was scheduled in the background, and we didn’t have to block the main thread waiting for it. Also, under the hood, we are not blocking the ScheduledExecutorService‘s thread (from its pool) while the async task is running – that task just runs in whatever thread it was assigned when it was created. (Note: in our example we don’t really run an async task at all, but just immediately return a completed Future, but this does work for real async tasks.)

I know you’re wondering how we achieved all this. First, here’s how we run a simple blocking task in the background and wrap it in a CompletableFuture:

public static <T> CompletableFuture<T> schedule(
    ScheduledExecutorService executor,
    Supplier<T> command,
    long delay,
    TimeUnit unit
) {
    CompletableFuture<T> completableFuture = new CompletableFuture<>();
    executor.schedule(
        (() -> {
            try {
                return completableFuture.complete(command.get());
            } catch (Throwable t) {
                return completableFuture.completeExceptionally(t);
            }
        }),
        delay,
        unit
    );
    return completableFuture;
}

And here’s how we delay execution of an async task but still return its result in a CompletableFuture:

public static <T> CompletableFuture<T> scheduleAsync(
    ScheduledExecutorService executor,
    Supplier<CompletableFuture<T>> command,
    long delay,
    TimeUnit unit
) {
    CompletableFuture<T> completableFuture = new CompletableFuture<>();
    executor.schedule(
        (() -> {
            command.get().thenAccept(
                t -> {completableFuture.complete(t);}
            )
            .exceptionally(
                t -> {completableFuture.completeExceptionally(t);return null;}
            );
        }),
        delay,
        unit
    );
    return completableFuture;
}

Note that this should all work to run methods like exceptionally(), thenAccept(), whenComplete() etc.
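
For example, here is a sketch of chaining exceptionally() onto a scheduled task that fails (the failing task is made up for illustration, and executor is the same ScheduledExecutorService as above):

CompletableFuture<Integer> risky = ScheduledCompletable.schedule(
    executor,
    () -> { throw new RuntimeException("boom"); },  // a task that always fails
    1,
    TimeUnit.SECONDS
);
risky
    .exceptionally(t -> -1)  // recover with a fallback value
    .thenAccept(answer -> System.out.println("answer=" + answer));
// After about a second this prints "answer=-1".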

Feedback and improvements welcome!

Gradle: what is a task, and how can I make a task depend on another task?

In an insane world, Gradle sometimes seems like the sanest choice for building a Java or Kotlin project.

But what on Earth does all the stuff inside build.gradle actually mean?

And when does my code run?

And how do you make a task?

And how do you persuade a task to depend on another task?

[Related: Clever things people do in Groovy so you have to know about them]

Setting up

To use Gradle, get hold of any version of it for long enough to create a local gradlew file, and then use that.

$ mkdir gradle-experiments
$ cd gradle-experiments
$ sudo apt install gradle  # Briefly install the system version of gradle
...
$ gradle wrapper --gradle-version=5.2.1
$ sudo apt remove gradle   # Optional - uninstalls the system version
$ ./gradlew tasks
... If all is good, this should ...
... print a list of available tasks. ...

It is normal for gradlew and the whole gradle/ directory it creates to be checked into source control. This means everyone who fetches the code from source control will have a predictable Gradle version.

What is build.gradle?

build.gradle is a Groovy program that Gradle runs within a context that it has set up for you. That context means that you are actually calling methods of a Project object, and modifying its properties. The fact that Groovy lets you miss out a lot of punctuation makes that harder to see, but it’s true.

The first thing to get your head around is that Gradle actually runs your code immediately, so if your build.gradle looks like this (and only this):

println("Hello")

when you run Gradle your code runs:

$ ./gradlew -q 
Hello
... more guff ...

So that code runs even if you don’t ask Gradle to run a task containing that code. It runs at “configuration time” – i.e. when Gradle is understanding your build.gradle file. Actually, “understanding” it means executing it.

Remember when I said this code runs in the context of a Project? What that means is that if you have something like this in your build.gradle:

repositories {
    jcenter()
}

what it really means is something like this:

project.repositories(
    {
        it.jcenter()
    }
)

You are calling the repositories method on the project object. The argument to the repositories method is a Groovy closure, which is a blob of code that will get run later. I’ve used the magic it name above to demonstrate that jcenter is just a method being called on the object that is the context for the closure when it is run.

When does it run? Let’s find out:

println("before")
project.repositories( {
    println("within")
    jcenter()
})
println("after")
$ ./gradlew -q
before
within
after
... more guff ...

This surprised me – it means the closure you pass in to repositories is actually run immediately, as part of running repositories, before execution gets to the line after that call.

As we’ll see later, some closures you create do not run immediately like this one.

Once you know that build.gradle is actually modifying a Project object, you have a starting point for understanding the Gradle reference documentation.

How do you make a task?

You probably shouldn’t do it very often, but it was instructive for me to understand how to make my own custom task. Here’s an example:

tasks.register("mytask") {
    doLast {
        println("running mytask")
    }
}

This creates a new task by calling the register method on the tasks property of the Project object. Register takes two arguments: a name for the task (“mytask” here), and a closure with some code in it to run when we decide we need this task. That closure gets run in a context that can’t see the Project object, but instead can see a Task object which it is helping to make. That Task object has a doLast method that we call, passing it a closure that will be run when the task is actually executed (not immediately).

If we remove some of the syntactic sugar the above build.gradle looks like this:

tasks.register(
    "mytask",
    {
        it.doLast(
            {
                println("running mytask")
            }
        )
    }
)

Above we can see that register really does take two arguments as I said above – the first version uses a Groovy feature where if you miss out the last argument and write a closure immediately afterwards the closure is passed as the last argument. Confusing, eh?

Again, notice that doLast is a method on the Task object that is implicitly available when the closure is run.

So we have created a task that we can run:

$ ./gradlew -q mytask
running mytask

How do you make a task depend on another task?

If I want to run my code formatting before my compile (for example), I sometimes need to modify a task to make it depend on another one. This can be done for tasks you create or for pre-existing ones. Here’s an example:

plugins {
    id "java"
}
tasks.register("mytask") {
    doLast {
        println("running mytask")
    }
}
compileJava {
    dependsOn tasks.named("mytask")
}

So, calling the plugins method on the Project at the top (with a closure that ran the id method on something) modified the Project so that it gained a new method called compileJava, which we called at the bottom, passing it a closure to run. That closure ran in the context of a Task object (similar to when we created a task, but now allowing us to modify a pre-existing one). We called the dependsOn method of the Task object, passing in another Task object, which we had got by calling the named method on the tasks object.

[Side note: the register method actually returns a Task object that we could have passed to dependsOn without looking it up again using named, but Groovy doesn’t provide a very convenient way of holding on to that reference, so we didn’t do it. The Kotlin example below shows that this is quite simple in Kotlin.]

How do I do all this in Kotlin?

Because one DSL that hides what’s really going on wasn’t enough for you, Gradle now provides a second DSL that hides what’s going on in subtly different ways, which is a program written in Kotlin instead of Groovy. This is marginally better, because Kotlin doesn’t let you do quite so many stupid tricks as Groovy does.

Below are all our examples in Kotlin. You get started exactly the same way, by following “Setting up” above. Remember to name your build file build.gradle.kts.

Say hello in Gradle Kotlin

println("Hello")

This is identical to the Groovy version.

Use jcenter repo in Gradle Kotlin

repositories {
    jcenter()
}

This is identical to the Groovy version, and with the same meaning: repositories is a method on the implicitly-available Project object.

The “unsugared” version looks like this in Kotlin:

this.repositories(
    {
        this.jcenter()
    }
)

[Note that the word this is used to access the implicit context. The word it has a different meaning in Kotlin than in Groovy: in Groovy it means the implicit context, but in Kotlin it means the first argument. We didn’t pass any arguments to jcenter when we called it, so we can’t use it, but we were being run in a context, which we can refer to using this. Simple, huh?]

Execution order in Gradle Kotlin

With this build.gradle.kts:

println("before")
project.repositories( {
    println("within")
    jcenter()
})
println("after")

We see this behaviour:

$ ./gradlew -q
before
within
after

which is all identical to the Groovy version.

Making a new task in Gradle Kotlin

tasks.register("mytask") {
    doLast {
        println("running mytask")
    }
}

Notice that Kotlin lets you do the same trick as Groovy: passing a closure as an extra final argument to a function by writing it immediately after what looks like the end of the call. It’s good for people who dislike closing brackets hanging around longer than they’re welcome. As someone who likes Lisp, I’m OK with closing brackets, but what do I know?

The above is identical to the Groovy version, but slightly different when unsugared:

tasks.register(
    "mytask",
    {
        this.doLast(
            {
                println("running mytask")
            }
        )
    }
)

One task depending on another in Gradle Kotlin

plugins {
    java
}
val mytask = tasks.register("mytask") {
    doLast {
        println("running mytask")
    }
}
tasks.compileJava {
    dependsOn(mytask)
}

This differs slightly from the Groovy version, even though the meaning is the same: we start off in the context of a Project object that we call methods on.

The code to make one task depend on another gets hold of the Task object called compileJava from inside the tasks property of the Project, and calls it (because it’s a callable object). We pass in a closure that runs in the context of this Task object, calling its dependsOn method, and passing in a reference to the mytask object, which is a Task and was created in the code above.

Corrections and clarifications welcome

The above is what I have worked out by experimentation and trying to read the Gradle documentation. Please add comments that clear up confusions and correct mistakes.

Performance of Java 2D drawing operations (part 3: image opacity)

Series: operations, images, opacity

Not because I was enjoying it, but because I felt compelled to, I continued my quest to understand the performance of various Java 2D drawing operations. I’m hoping to make my game Rabbit Escape faster, especially on the Raspberry Pi, so you may see another post sometime actually trying this stuff out on a Pi.

But for now, here are the results of my investigation into how different patterns of opacity in images affect rendering performance.

You can find the code here: gitlab.com/andybalaam/java-2d-performance.

Results

  • Images with partially-opaque pixels are no slower than those with fully-opaque pixels
  • Large transparent areas in images are drawn quite quickly, but transparent pixels mixed with non-transparent are slow

Advice

  • Still avoid any transparency whenever possible
  • It’s relatively OK to use large transparent areas on images (e.g. a fixed-size animation where a character moves through the image)
  • Don’t bother restricting pixels to be either fully transparent or fully opaque – partially-opaque is fine

Opacity patterns in images

Non-transparent images drew at 76 FPS, and transparent ones dropped to 45 FPS.

I went further into investigating transparency by creating images that were:

  • All pixels 50% opacity (34 FPS)
  • Half pixels 0% opacity, half 100%, mixed up (34 FPS)
  • Double the size of the original image, but the extra area is fully transparent, and the original area is non-transparent (41 FPS)

I concluded that partial-opacity is not important to performance compared with full-opacity, but that large areas of transparency are relatively fast compared with images with complex patterns of transparency and opacity.

Numbers

Transparency and opacity

Test                                                   FPS
large nothing                                           90
large images20 largeimages                              76
large images20 largeimages transparentimages            45
large images20 largeimages transparent50pcimages        34
large images20 largeimages transparent0pc100pcimages    34
large images20 largeimages transparentareaimages        41

Feedback please

Please do get back to me with tips about how to improve the performance of my experimental code.

Feel free to log issues, make merge requests or add comments to the blog post.

Performance of Java 2D drawing operations (part 2: image issues)

Series: operations, images

In my previous post I examined the performance of various drawing operations in Java 2D rendering. Here I look at some specifics around rendering images, with an eye to finding optimisations I can apply to my game Rabbit Escape.

You can find the code here: gitlab.com/andybalaam/java-2d-performance.

Results

  • Drawing images with transparent sections is very slow
  • Drawing one large image is slower than drawing many small images covering the same area(!)
  • Drawing images outside the screen is slower than not drawing them at all (but faster than drawing them onto a visible area)

Advice

  • Avoid transparent images where possible
  • Don’t bother pre-rendering your background tiles onto a single image
  • Don’t draw images that are off-screen

Images with transparency

All the images I used were PNG files with a transparency layer, but in most of my experiments there were no transparent pixels. When I used images with transparent pixels the frame rate was much slower, dropping from 78 to 46 FPS. So using images with transparent pixels causes a significant performance hit.

I’d be grateful if someone who knows more about it can recommend how to improve my program to reduce this impact – I suspect there may be tricks I can do around setComposite or setRenderingHint or enabling/encouraging hardware acceleration.

Composite images

I assumed that drawing a single image would be much faster than covering the same area of the screen by drawing lots of small images. In fact, the result was the opposite: drawing lots of small images was much faster than drawing a single image covering the same area.

The code for a single image is:

g2d.drawImage(
    singleLargeImage,
    10,
    10,
    null
)

and for the small images it is:

for (y in 0 until 40)
{
    for (x in 0 until 60)
    {
        g2d.drawImage(
            compositeImages[(y*20 + x) % compositeImages.size],
            10 + (20 * x),
            10 + (20 * y),
            null
        )
    }
}

The single large image was rendered at 74 FPS, whereas covering the same area using repeated copies of 100 images was rendered at 80 FPS. I ran this test several times because I found the result surprising, and it was consistent every time.

I have to assume some caching (possibly via accelerated graphics) of the small images is the explanation.

Drawing images off the side of the screen

Drawing images off the side of the screen was faster than drawing them in a visible area, but slower than not drawing them at all. I tested this by adding 10,000 to the x and y positions of the images being drawn (I also tested subtracting 10,000, with similar results). Not drawing any images ran at 93 FPS, drawing images on-screen ran at 80 FPS, and drawing them off-screen ran at 83 FPS, meaning that drawing off-screen images still takes significant time.

Advice: check whether images are on-screen, and avoid drawing them if not.
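
The check itself is only a bounds test against the canvas size. Sketched in Java (the experiment code is Kotlin; the variable names here are assumptions for illustration):

// Only issue the drawImage call when the image can intersect the visible area
boolean onScreen =
    x + image.getWidth() > 0 && x < canvasWidth &&
    y + image.getHeight() > 0 && y < canvasHeight;
if (onScreen) {
    g2d.drawImage(image, x, y, null);
}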

Numbers

Transparency

Test                                                   FPS
large nothing                                           95
large images20 largeimages                              78
large images20 largeimages transparentimages            46

Composite images

(Lots of small images covering an area, or a single larger image.)

Test                                                   FPS
large nothing                                           87
large largesingleimage                                  74
large compositeimage                                    80

Offscreen images

Test FPS
large nothing 93
large images20 largeimages 80
large images20 largeimages offscreenimages 83

Feedback please

Please do get back to me with tips about how to improve the performance of my experimental code.

Feel free to log issues, make merge requests or add comments to the blog post.

Performance of Java 2D drawing operations

I want to remodel the desktop UI of my game Rabbit Escape to be more convenient and nicer looking, so I took a new look at game-loop-style graphics rendering onto a canvas in a Java 2D (Swing) UI.

Specifically, how fast can it be, and what pitfalls should I avoid when I’m doing it?

Results

  • Larger windows are (much) slower
  • Resizing images on-the-fly is very slow, even if they are the same size every time
  • Drawing small images is fast, but drawing large images is slow
  • Drawing rectangles is fast
  • Drawing text is fast
  • Drawing Swing widgets in front of a canvas is fast
  • Creating fonts on-the-fly is a tiny bit slow

Code

You can find the full code (written in Kotlin) at gitlab.com/andybalaam/java-2d-performance.

Basically, we make a JFrame and a Canvas and tell them not to listen to repaints (i.e. we control their drawing).

val app = JFrame()
app.ignoreRepaint = true
val canvas = Canvas()
canvas.ignoreRepaint = true

Then we add any buttons to the JFrame, and the canvas last (so it displays behind):

app.add(button)
app.add(canvas)

Now we make the canvas double-buffered and get hold of a buffer image for it:

app.isVisible = true
canvas.createBufferStrategy(2)
val bufferStrategy = canvas.bufferStrategy
val bufferedImage = GraphicsEnvironment
    .getLocalGraphicsEnvironment()
    .defaultScreenDevice
    .defaultConfiguration
    .createCompatibleImage(config.width, config.height)

Then inside a tight loop we draw onto the buffer image:

val g2d = bufferedImage.createGraphics()
try
{
    g2d.color = backgroundColor
    g2d.fillRect(0, 0, config.width, config.height)

    ... the different drawing operations go here ...

and then swap the buffers:

    val graphics = bufferStrategy.drawGraphics
    try {
        graphics.drawImage(bufferedImage, 0, 0, null)
        if (!bufferStrategy.contentsLost()) {
            bufferStrategy.show()
        }
    } finally {
        graphics.dispose()
    }
} finally {
    g2d.dispose()
}

Results

Baseline: some rectangles

I decided to compare everything against drawing 20 rectangles at random points on the screen, since that seems like a minimal requirement for a game.

My test machine is an Intel Core 2 Duo E6550 2.33GHz with 6GB RAM and a GeForce GT 740 graphics card (I have no idea whether it is being used here – I assume not). I am running Ubuntu 18.04.1 Linux, OpenJDK Java 1.8.0_191, and Kotlin 1.3.20-release-116. (I expect the results would be identical if I were using Java rather than Kotlin.)

I ran all the tests in two window sizes: 1600×900 and 640×480. 640×480 was embarrassingly fast for all tests, but 1600×900 struggled with some of the tasks.

Drawing rectangles looks like this:

g2d.color = Color(
    rand.nextInt(256),
    rand.nextInt(256),
    rand.nextInt(256)
)
g2d.fillRect(
    rand.nextInt(config.width / 2),
    rand.nextInt(config.height / 2),
    rand.nextInt(config.width / 2),
    rand.nextInt(config.height / 2)
)

In the small window, the baseline (20 rectangles) ran at 553 FPS. In the large window it ran at 87 FPS.

I didn’t do any statistics on these numbers because I am too lazy. Feel free to do it properly and let me know the results – I will happily update the article.
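For reference, the FPS figures are frames counted per second of wall-clock time. A counter along these lines (a sketch only – the repository’s actual measurement code may differ in detail) is one way to produce a value like the fpsLastSecond used in the text-drawing example later:

// Count frames and publish the total once per second of wall-clock time.
class FpsCounter {
    private var framesThisSecond = 0
    private var lastReportTimeMs = System.currentTimeMillis()

    var fpsLastSecond = 0
        private set

    fun frameDrawn() {
        framesThisSecond++
        val now = System.currentTimeMillis()
        if (now - lastReportTimeMs >= 1000) {
            fpsLastSecond = framesThisSecond
            framesThisSecond = 0
            lastReportTimeMs = now
        }
    }
}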

Fewer rectangles

When I reduced the number of rectangles to do less drawing work, I saw small improvements in performance. In the small window, drawing 2 rectangles instead of 20 increased the frame rate from 553 to 639, but there is a lot of noise in those results, and other runs were much closer. In the large window, the same reduction improved the frame rate from 87 to 92. This is not a huge speed-up, showing that drawing rectangles is pretty fast.

Adding fixed-size images

Drawing pre-scaled images looks like this:

g2d.drawImage(
    image,
    rand.nextInt(config.width),
    rand.nextInt(config.height),
    null
)

When I added 20 small images (40×40 pixels) to be drawn in each frame, the performance was almost unchanged. In the small window, the run drawing 20 images per frame (as well as the rectangles) actually ran faster than the one without (561 FPS versus 553), suggesting the difference is negligible and I should do some statistics. In the large window, the 20-images version ran at exactly the same speed (87 FPS).

So, it looks like drawing small images costs almost nothing.

When I moved to large images (400×400 pixels), the small window slowed down from 553 to 446 FPS, and the large window slowed from 87 to 73 FPS, so larger images clearly have an impact, and we will need to limit the number and size of images to keep the frame rate acceptable.

Scaling images on the fly

You can scale an image on the fly as you draw onto a Canvas. (Spoiler: don’t do this!)

My code looks like:

val s = config.imageSize
val x1 = rand.nextInt(config.width)
val y1 = rand.nextInt(config.height)
val x2 = x1 + s
val y2 = y1 + s
g2d.drawImage(
    unscaledImage,
    x1, y1, x2, y2,
    0, 0, unscaledImageWidth, unscaledImageHeight,
    null
)

Note the 10-argument form of drawImage is being used. You can be sure you have avoided this situation if you use the 4-argument form from the previous section.

Note: the resulting image is the same size every time, and the Java documentation implies that scaled images may be cached by the system, but I saw a huge slow-down when using the 10-argument form of drawImage above.

On-the-fly scaled images slowed the small window from 446 to 67 FPS(!), and the large window from 73 to 31 FPS, meaning the exact same rendering took over twice as long.

Advice: check you are not using one of the drawImage overloads that scales images! Pre-scale them yourself (e.g. with getScaledInstance as I did here).
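A sketch of what that pre-scaling looks like, done once outside the game loop (the 40×40 target size is just an example):

import java.awt.Image
import java.awt.image.BufferedImage

// Scale once, up front, then draw the result in the game loop using the
// cheap 4-argument drawImage overload.
fun preScale(unscaled: BufferedImage, width: Int, height: Int): Image =
    unscaled.getScaledInstance(width, height, Image.SCALE_SMOOTH)

// Outside the loop:
//     val image = preScale(unscaledImage, 40, 40)
// Inside the loop:
//     g2d.drawImage(image, x, y, null)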

Displaying text

Drawing text on the canvas like this:

g2d.font = Font("Courier New", Font.PLAIN, 12)
g2d.color = Color.GREEN
g2d.drawString("FPS: $fpsLastSecond", 20, 20 + i * 14)

had a similar impact to drawing small images – i.e. it only affected the performance very slightly and is generally quite fast. The small window went from 553 to 581 FPS, and the large window from 87 to 88 – both within the noise.

Creating the font every time (as shown above) slowed things down a little more, so it is worth moving the font creation out of the game loop and only doing it once. The slowdown just from creating the font was 581 to 572 FPS in the small window, and 88 to 86 FPS in the large.
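In practice that just means constructing the Font once, outside the game loop, and reusing it every frame – a sketch:

import java.awt.Color
import java.awt.Font
import java.awt.Graphics2D

// Created once, outside the game loop.
val hudFont = Font("Courier New", Font.PLAIN, 12)

// Called every frame; only reuses the existing Font object.
fun drawFps(g2d: Graphics2D, fpsLastSecond: Int) {
    g2d.font = hudFont
    g2d.color = Color.GREEN
    g2d.drawString("FPS: $fpsLastSecond", 20, 20)
}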

Swing widgets

By adding Button widgets to the JFrame before the Canvas, I was able to display them in front. Their rendering and focus worked as expected, and they had no impact at all on performance.

The same was true when I tried adding these widgets in front of images rendered on the canvas (instead of rectangles).

Turning everything up to 11

When I added everything I had tested all at the same time – rectangles, text with a new font every time, large unscaled images, and a large window – the frame rate dropped to 30 FPS. This is already a little slow for a game, and if we had more images to draw it could get even worse. However, when I pre-scaled the images the frame rate went up to 72 FPS, showing that Java is capable of running a game at an acceptable frame rate on my machine, so long as we are careful how we use it.

Numbers

Small window (640×480)

Test FPS
nothing 661
rectangles2 639
rectangles20 553
rectangles20 images2 538
rectangles20 images20 561
rectangles20 images20 largeimages 446
rectangles20 images20 unscaledimages 343
rectangles20 images20 largeimages unscaledimages 67
rectangles20 text2 582
rectangles20 text20 581
rectangles20 text20 newfont 572
rectangles20 buttons2 598
rectangles20 buttons20 612

Large window (1600×900)

Test FPS
large nothing 93
large rectangles2 92
large rectangles20 87
large rectangles20 images2 87
large rectangles20 images20 87
large rectangles20 images20 largeimages 73
large rectangles20 images20 unscaledimages 82
large rectangles20 images20 largeimages unscaledimages 31
large rectangles20 text2 89
large rectangles20 text20 88
large rectangles20 text20 newfont 86
large rectangles20 buttons2 88
large rectangles20 buttons20 87
large images20 buttons20 largeimages 74
large rectangles20 images20 text20 buttons20 largeimages newfont 72
large rectangles20 images20 text20 buttons20 largeimages unscaledimages newfont 30

Feedback please

Please do get back to me with tips about how to improve the performance of my experimental code.

Feel free to log issues, make merge requests or add comments to the blog post.

You must rewind your incoming buffer when you fail to encode a character in a CharsetEncoder or you’ll get an IllegalArgumentException

I am writing a CharsetEncoder in Java, which is my kind of fun.

I was getting a mysterious error when I identified that I could not encode certain characters:

Exception in thread "main" java.lang.IllegalArgumentException
	at java.nio.Buffer.position(Buffer.java:244)
	at java.nio.charset.CharsetEncoder.encode(CharsetEncoder.java:618)
	at java.nio.charset.CharsetEncoder.encode(CharsetEncoder.java:802)
	at java.nio.charset.Charset.encode(Charset.java:843)
	at java.nio.charset.Charset.encode(Charset.java:863)

After some investigation I realised the library code in Charset.encode was expecting me not to have consumed any characters of my incoming CharBuffer if I rejected the input by returning something like CoderResult.unmappableForLength.

Of course, in order to discover the input was unmappable, I did have to read it, but I made this problem go away by stepping back one char when I found an error like this:

@Override
public CoderResult encodeLoop(CharBuffer in, ByteBuffer out) {
    char ch = in.get();
    if (isUnmappable(ch)) {
        // Step back so the caller sees the buffer exactly as it was passed in
        in.position(in.position() - 1);
        return CoderResult.unmappableForLength(2);
    }
    // ... rest of method ...

I hope this helps someone.

Evolutionary pressures on C++, Java and Python

The future evolution of C++, Java and Python is being driven by very different interested parties, and it’s going to be interesting watching events unfold over the next 5-10 years.

I have previously written about how the C++ Standard’s committee is past its sell-by date, has taken off its ball and chain and is now in the hands of bored consultants.

Bjarne Stroustrup was once effectively treated as C++’s Benevolent Dictator For Life (during the production of the first C++ Standard some people were labelled Bjarne groupies); things have moved on since then, but the ‘old-guard’ are trying to make a comeback. Suggesting that people ought to base their thinking on a book published almost 25 years ago (Stroustrup’s “The Design and Evolution of C++”; a very interesting book that is well worth reading) creates a rather backward-looking image. Bored consultants are looking to work on exciting new ideas. The old-guard need to appear modern to attract followers (even if the ideas are old ideas with a fresh coat of paint).

The threat to C++ is from bored consultants, each adding their own pet idea to the language standard; a situation that Stroustrup thinks is starting to happen.

Java, the language, is owned by Oracle, the company (let’s not get too involved in exactly what they own, have copyright on, etc). Oracle are not shy about asking people for licensing fees. Java is now on a 6-month release cycle (at least the Oracle version; there are Open Source implementations) and free support only applies to the current release; paying a license fee buys support for versions older than 6 months. In the short term, the cheapest solution is for companies to pay for support.

Oracle are always happy to send in the lawyers and if too many customers switch to non-Oracle implementations, I’m sure something can be found to introduce enough uncertainty to discourage work/distribution involving Open Source Java implementations.

Will Java survive Oracle’s licensing? It is not in their interest for Java to die; Oracle will adjust their terms to keep the money flowing in, but over the longer term I think willing Java developers are going to be hard to find.

Guido van Rossum recently removed himself from the post of Python’s Benevolent Dictator For Life. One of the jobs of a benevolent dictator is maintaining some degree of language coherence, which involves preventing people’s pet ideas from being added to the language. Does this mean that Python is slowly going to become more and more bloated? Perhaps, but I think a more likely problem is a language fork: multiple implementations of slightly different (at first) languages all claiming to be Python.

These days, the strength of Python is its large collection of very useful, commercial grade, packages, and future language details may turn out to be irrelevant. There is a lot to learn from the Python 2/3 transition, but true believers like to think that things will turn out differently for them.

Installing specific major Java JDK versions on OS X via Homebrew

In an earlier post, I described how to install the latest version of the Oracle Java JDK using homebrew. What hadn’t been completely obvious to me when I wrote the original blog post is that the ‘java’ cask will install the latest major version of the JDK. As a result, when I upgraded my JDK […]

The post Installing specific major Java JDK versions on OS X via Homebrew appeared first on The Lone C++ Coder's Blog.

Installing a Java 8 JDK on OS X using Homebrew

I’ve had a ‘manual’ install of JDK 8 on my Mac for quite a while, mainly to run Clojure. It was the typical “download from the Oracle website, then manually run the installer” deployment. As I move the management of more development tools from manual management over to homebrew, I decided to use homebrew to […]

The post Installing a Java 8 JDK on OS X using Homebrew appeared first on The Lone C++ Coder's Blog.