Letter to Anneliese Dodds on the invasion of Ukraine by Russia

Dear Anneliese Dodds,

I learn from the BBC (https://www.bbc.co.uk/news/58888451) that "The UK is to phase out Russian oil by the end of the year" and "Russian imports account for 8% of total UK oil demand".

8% is a small amount and the end of the year is a long time in the future. We need immediate action to change Russia's course. Please use all your influence to this end.

Some suggestions, as a minimum:

  • Stop all petrochemical purchases from Russia, and require this of multinationals
  • Expel all remaining Russian banks from SWIFT
  • Make it unlawful to insure a Russian enterprise
  • Seize and forfeit all Russian assets within the UK and its dominions
  • Move to remove Russia from the UN Security Council

There are many more things which could and should be done; otherwise, by January 2023 there will be no Ukraine to defend.

Yours sincerely,
Tim Pizey

An Exception wrapper suitable for a RESTful API

User Story

As a third line support engineer

I want to be able to go to the server class that throws an exception reported by a client

So that I do not need to look for the stack trace in the server logs

Example

Client code

if (responseCode != 200) {
  throw new TaskException(
      "Error occurred while processing the scan response: " +
      "response code: " + responseCode +
      " response body: " + responseContent.getResponseBody());
}

Server code


if (null != header && header.startsWith(BEARER)) {
  String token = header.substring(BEARER.length()).trim();
  try {
    final Jws jws = Jwts.parser()
        .setSigningKeyResolver(jwtPublicKeyResolver)
        .setAllowedClockSkewSeconds(3)
        .parseClaimsJws(token);
  } catch (JwtException ex) {
    String errorMessage = "Invalid JWT token. ";
    setError(httpServletResponse, errorMessage + ex.getMessage());
    return;
  }
}

This results in the following being reported by the second line support agent monitoring the client logs:

[Error occurred while processing the scan response : response code: 401 response body: Invalid JWT token. Error accessing publickey Api]:

What we, as Third Line Support, want is to know which server class throws the exception, ideally without grepping the code base or opening the server logs.

A better Exception message would be:

[Problem with scan response: status code: 401, body: com.corp.server.validation.JwtValidator.validate() line 72: JWT token Exception: Error accessing Public Key API]

This is the motivation for the StackAwareException, a wrapper exception which adds the class, method and line number of the first element of the wrapped exception's stack trace.

See https://github.com/timp/StackAwareException
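
A minimal sketch of the idea (the repository has the full implementation):

public class StackAwareException extends RuntimeException {

  public StackAwareException(Throwable cause) {
    super(describe(cause), cause);
  }

  // Prefix the wrapped exception's message with the class, method
  // and line number of the first element of its stack trace.
  private static String describe(Throwable cause) {
    StackTraceElement[] trace = cause.getStackTrace();
    if (trace.length == 0) {
      return cause.getMessage();
    }
    StackTraceElement top = trace[0];
    return top.getClassName() + "." + top.getMethodName()
        + "() line " + top.getLineNumber() + ": " + cause.getMessage();
  }
}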

Twenty Year Exit from the Oracle Ecosystem

The Oracle Ecosystem instance that I inherited was designed from 2000 and went live in 2004. Its centrepiece, Oracle Workflow, went end of life that year but had new versions through to 2007. In its own way it was a pinnacle of a certain view of software, the software vendor as a one-stop shop, in the same way as DEC and IBM had previously sold their systems.

A whole set of components which were guaranteed to work together, with technical support. I probably would have made the same choice, even as late as 2000. This system was for a while the largest system (by disk storage used) in Europe, housed in a purpose-built computer room. Then came AWS S3 cloud storage.

The 'replacement' system migrated eighty percent of the files out of the database, seemingly unaware of the eighty-twenty rule, and duplicated the data and the dataflows. The replacement system was a 'microservice architecture' joined by a single datastore. The users now had two systems to use and were stuck with a 2000-vintage user interface. The next step was to stop using the Oracle Forms and introduce a modern three-tier web application, choosing the new, shiny AngularJS as the front end and Hibernate on Java as the middleware.

This was understandably a big job: the UI was now on AWS but the data was in a data centre (a commercial one by now). The need to quit the data centre motivated the removal of CMSDK (Content Management Software Development Kit) from inside the database to separate, external, email and SFTP handling systems, to reduce database size and enable security in the cloud.

The remaining steps are to finish the migration of files (who knew this would be the difficult bit) and to migrate from Oracle AQ (Advanced Queues (software naming error 101: avoid hubristic adjectives)) to AWS SQS (Simple Queue Service (software naming error 102: avoid indexical adjectives)). Note that we have to upgrade AngularJS to Angular just to stay still, and pop things into Kubernetes, just because.

This is clean, recognisable, manageable and stable. We could rest here for a while; but all that pain motivates completion:

The elephant is in the room. The final step is to replace Oracle Workflow with Camunda.

Letter to Anneliese Dodds MP: Support for the Labour Left

Dear Anneliese Dodds MP, 

I have voted for you in the last two elections, as a proxy for my support for Jeremy Corbyn, though your attendance and performance at the hustings arranged by the Stop the War Coalition was a reason for my support for you personally. 

For me to vote for you again I would need to see your support for the left of the Labour party and the mass movement built by Jeremy Corbyn; should you instead side with the faction which has suspended him, you will lose my support.

best wishes 
Tim Pizey

Victory in Europe (VE) Day in Churchill’s Toyshop

My grandfather, Norman Angier, worked at Churchill’s Toyshop (M.D.1) as the head civilian engineer during WWII.

On VE day “Norman Angier felt it was an occasion for fireworks. He therefore acquired a large batch of quite big rockets and proceeded to poop them off, selecting as his firing site a point at the summit of a concrete road which led down to the ranges and the CMP’s camp. Unlike the Guy Fawkes day rockets, these were not provided with sticks for poking into the ground to keep the bodies upright; but Norman had fixed up some sort of stand for doing this. All went well for a while and the show was most spectacular. Then Norman got careless. A rocket he had just initiated was not properly secured. It fell over, and instead of going up vertically proceeded at speed in the near horizontal plane. A weary CMP was walking along this road on his way back to the camp. The rocket struck him right on target. Luckily, he was not seriously hurt and we soon whipped him off to hospital. The trouble was to make him believe that the attack was not intentional. He had been the victim of a 1000 to 1 chance.”

[Image caption: Norman prepares to detonate DeGaul]

Clean git blame history

Place the following in a command file, run it within your repo:

#!/bin/sh

git filter-branch --env-filter '

an="$GIT_AUTHOR_NAME"
am="$GIT_AUTHOR_EMAIL"
cn="$GIT_COMMITTER_NAME"
cm="$GIT_COMMITTER_EMAIL"

if [ "$GIT_AUTHOR_EMAIL" = "timp@paneris.org" ]
then
an="Tim Pizey"
am="timp21337@paneris.org"
fi

if [ "$GIT_COMMITTER_EMAIL" = "timp@paneris.org" ]
then
cn="Tim Pizey"
cm="timp21337@paneris.org"
fi

export GIT_AUTHOR_NAME="$an"
export GIT_AUTHOR_EMAIL="$am"
export GIT_COMMITTER_NAME="$cn"
export GIT_COMMITTER_EMAIL="$cm"
'
Then git push -f origin master. Note that this process is very slow, so do not repeat it for every change: add all the changes you want to make and perform them in one pass.
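
Before force-pushing it is worth confirming that the rewrite took, for example:

git log --all --format='%an <%ae> / %cn <%ce>' | sort -u

The old identity should no longer appear in the output.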

Disaster Recovery: A Dynamic Redundancy Approach

The problem with disaster planning is that it is not rehearsed. When you need to retrieve a file from backup is when you discover that your backup has been broken for three months.

Modern cloud systems, based upon software defined infrastructure and redundant, auto-scaling fleets of micro-services, come with disaster recovery built in. They are designed to be resilient against DDoS attacks, unexpected peaks in usage and continent wide unavailability.

Some systems have yet to migrate to outsourced infrastructure, and some never will migrate. For these systems we need a Disaster Recovery Strategy which can be implemented at reasonable cost and ideally does not suffer from the 'fails when needed' feature of many backup systems. One answer is to hold regular fire drills. No one would dispute the importance of fire drills in saving lives and ensuring that people know what to do in the case of a real fire; however, we all know there is a big difference between a rehearsal and the real thing.

The key insight in modern cloud architectures is that every instance of a system is identical (at a particular point in time).

We can reduce this to a minimal redundant system: a pair of identical systems with one designated Primary and the other Secondary, with a standard data mirroring link from Primary to Secondary.

To ensure that both elements of the pair really can function as the Primary you could rehearse a cutover one weekend.

But if the two systems really are identical then there is no reason to reverse the cutover at the end of the rehearsal. The old Secondary is the new Primary, the old Primary is the new Secondary. The Primary can be swapped at a periodicity the business is comfortable with, say twice a year.

This Dynamic Redundancy strategy ensures that your Disaster Recovery works when you need it to and can be adjusted according to the business' appetite for risk.

Direct access to SonarQube Postgresql Database

I want to change the name of a SonarQube project. This cannot be done in the UI without performing another analysis. You can just do it in SQL (https://stackoverflow.com/questions/30511849/how-to-rename-a-project-in-sonarqube-5-1) but you have to be able to log in to the database. PostgreSQL is very secure by default. A quick fix is to edit /var/lib/pgsql/pg_hba.conf, changing local connections from ident to trust:

# TYPE DATABASE USER CIDR-ADDRESS METHOD

# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
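
PostgreSQL only re-reads pg_hba.conf on a reload, so (assuming a service-managed install; the unit name varies by distribution) reload it first:

service postgresql reload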

Now you can connect:

psql -U sonarqube -W sonar
and finally:

UPDATE projects
   SET name = 'NEW_PROJECT_NAME',
       long_name = 'NEW_PROJECT_NAME'
 WHERE kee = 'PROJECT_KEY';

Thames path improvements – Letter to Oxfordshire County Council

I have sent the following letter to LTS.Team AT oxfordshire.gov.uk in response to https://consultations.oxfordshire.gov.uk/consult.ti/OxfordRiversideRoutes/.

Hi,

I have submitted the following via the questionnaire:

The Oxpens bridge should move upstream to parallel the railway bridge, with a path extension beside the railway to the station. A route from the Thames/railway station to the canal towpath and up to Kidlington Airport and the new Kidlington housing extension is needed. A second bridge, again parallel to the rail bridge, is needed to access the Science Park.

I have used the Thames towpath for commuting to Osney Mead industrial estate for three months and for work on Cornmarket for three years. I also used the path for three months to commute to Kidlington (Oxford Airport) but it is too arduous and, much to my sadness, I now cycle on the road, so I have trenchant views on the difficulty of cycle commuting in Oxford.

I understand that there is a plan to build many new houses in the Kidlington gap, between Summertown and Kidlington.

These new commuters would very much appreciate a functioning cycle route into the centre of town.

The new Oxford Parkway station should be integrated into this cycle route.

The problem facing cyclists is not how to get from the towpath to the centre; this is served by the pipe bridge, the pedestrian crossing downstream, and Folly Bridge. What is needed is easy access to the railway station, which could easily be achieved by a bridge parallel to the railway bridge and then a path along railway land to the station itself.

Cyclists wishing to get from the Thames to the canal have to continue to Osney, cross the road and then fiddle around the Thames onto the canal path, which is in a very poor state, then up to Kingsbridge and all the way through Kidlington to Oxford Airport.

Finally, the Thames path should connect the Science Park and the planned new housing at Littlemore. This again could be achieved by a cycle bridge parallel to the existing railway bridge downstream from the bypass underpass.

I really welcome the proposals but would urge you to consider extending their scope and vision. This could be such a good route and would show that Oxford is a cycling city.

Best regards,
Tim Pizey

-- Tim Pizey - http://tim.pizey.uk/

Tell, don’t ask

More than twelve years ago Tim Joyce passed on some programming wisdom:

With programs tell don't ask, vice versa for people.

This was a bit abstract for me at the time, but last night it came back to me as a diagnosis of what is wrong with the code I am currently working on. We store our application configuration in a table in the system's target database, and whenever some configuration is needed it is looked up in the database. There was no problem with this approach when the code was written, because JUnit had not been invented and testing was not a central part of our discipline. However, to write a test we would need a database present, which is an obstacle to fast, isolated unit tests and has been a blocker to writing tests.

Noncompliant Code Example

public class Example {
  private String path;

  public void logPath() {
    try {
      path = CachedSystemParameter.getInstance()
          .getParameterValue("PATH");
    } catch (SystemParameterException e) {
      logger.error("[BUSINESS] Error while retrieving system parameter PATH", e);
    }
    logger.info("Path: " + path);
  }
}

Compliant Code Example

By adding path to the class constructor we can test the business logic without the need for a database fixture.
public class Example {

  private String path;

  public Example() {
    this(CachedSystemParameter.getInstance()
        .getParameterValue("PATH"));
  }

  public Example(String path) {
    this.path = path;
  }

  public void logPath() {
    logger.info("Path: " + path);
  }
}
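
A unit test (a sketch, assuming JUnit 4) can now exercise the logic with no database fixture:

public class ExampleTest {
  @Test
  public void logsTheInjectedPath() {
    // No CachedSystemParameter lookup: the value is injected.
    new Example("/tmp/in").logPath();
  }
}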

Testing java slf4j over log4j logging in JUnit using SLF4J Test

Testing logging on failure paths has two problems:

  • It is hard to get the log message text
  • The logger outputs to the test log

The first leads to compromises, e.g. verifying only that a message was logged; the second makes you, the programmer, think an error has occurred when the tests in fact passed.

Code to test


public class Sut {
  private static final Logger log = LoggerFactory.getLogger(Sut.class);

  public String perform() {
    log.debug("In perform");
    return "Hello world";
  }
}

My clunky PowerMock Solution

My approach was problematic as it required the use of PowerMock which is as powerful as nitroglycerin.

Test Code


@RunWith(PowerMockRunner.class)
@PrepareForTest({LoggerFactory.class})
public class SutTest {

  @Test
  public void testPerform() {
    mockStatic(LoggerFactory.class);
    Logger mockLog = mock(Logger.class);
    when(LoggerFactory.getLogger(any(Class.class))).thenReturn(mockLog);

    assertEquals("Hello world", new Sut().perform());
    verify(mockLog, times(1)).debug(startsWith("In perform"));
  }
}

Elegant SLF4j Test Solution

The slf4j-test project by RobElliot266 provides a logger which stores messages and so can be asserted against.

POM Setup

Add the following to your dependencies:


<dependency>
  <groupId>uk.org.lidalia</groupId>
  <artifactId>slf4j-test</artifactId>
  <version>1.1.0</version>
  <scope>test</scope>
</dependency>

To ensure that this logger is used during tests only, and that it takes precedence over the production logger on the test class path, make the test logger the first logger mentioned in the dependencies block and give it test scope.

As an additional measure you can explicitly exclude the production logger from the test class path:


<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
  <configuration>
    <classpathDependencyExcludes>
      <classpathDependencyExcludes>org.slf4j:slf4j-jdk14</classpathDependencyExcludes>
    </classpathDependencyExcludes>
  </configuration>
</plugin>

Test Code


public class SutTest {
  private final TestLogger logger = TestLoggerFactory.getTestLogger(Sut.class);

  @Test
  public void testPerform() {
    assertEquals("Hello world", new Sut().perform());
    assertEquals("In perform", logger.getLoggingEvents().get(0).getMessage());
  }
}
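
slf4j-test retains logging events across tests, so resetting between tests (a sketch, assuming JUnit 4) keeps assertions independent:

@After
public void resetLoggers() {
  TestLoggerFactory.clear();
}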

Many thanks to RobElliot266 for a neat solution to a problem that has been bugging me for a while.

Migrate MelatiSite from CVS to github

Re-visiting http://tim-pizey.blogspot.co.uk/2011/10/cvs-to-github.html (why did I not complete this at the time?)

Following How to export revision history from mercurial or git to cvs?

On hanuman I created an id file git_authors mapping CVS ids to GitHub "Name<email>" format for all contributors:


timp=Tim Pizey<timp@paneris.org>
Then create a repository on github (melati in this example; I have already uploaded my ssh public key for this machine):

cd ~
git cvsimport -d /usr/cvsroot -C MelatiSite -r cvs -k -A git_authors MelatiSite

cd melati
echo A jdbc to java object relational mapping system. 1999-2011 > README.txt
git add README.txt
git commit -m "Initial" README.txt
git remote add origin git@github.com:timp21337/melati.git
git push -u origin master
See https://github.com/timp21337/melati.

CentOS setup on VirtualBox

Once you have Networking working there is still a long way to go.

yum groupinstall "Development Tools"
yum install kernel-devel
yum install kde-workspace
yum group install "X Window System"
yum groupinstall "Fonts"
yum install gdm

Now we can log in without a GUI and run startx when one is needed.

Installing Guest Additions

The guest CentOS is a stock distribution; you have to tell it that it is inside VirtualBox.

Make the additions visible to the guest:

In the "Devices" menu in the virtual machine's menu bar, VirtualBox has a handy menu item named "Insert Guest Additions CD image", which mounts the Guest Additions ISO file inside your virtual machine.

yum install dkms
mkdir -p /media/cdrom
# Note change from /dev/scd0 in CentOS6
mount /dev/sr0 /media/cdrom
sh /media/cdrom/VBoxLinuxAdditions.run

We are now able to move the mouse seamlessly between guest and host, and the two window systems understand each other.

Sharing files between the host and guest

In the host (Windows) create C:\vbshared and, using the VirtualBox interface, share this with the guest. In the guest:


mkdir /vbshared
mount -t vboxsf vbshared /vbshared

It will be visible as /vbshared/ from inside the guest.
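
To make the mount survive reboots you could also add it to /etc/fstab (a sketch, assuming the Guest Additions kernel module loads at boot):

vbshared   /vbshared   vboxsf   defaults   0 0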

Current Software Development Pre-Requisites

When starting a new project, or joining an existing one, there are a number of tools and features which should be in place. I have ordered them both by importance and by the order in which the global community learnt the painful lesson that none of these are optional.

This is based upon Project initiation - a recipe.

Short name

Google it, ensure it is available as a URL, check Twitter.

README

If there is no README create it now!

Source control

The only decision is public or private. It will be a git repo.

If any other SCM system is in place convert to git before doing anything else.

Decide on a git usage strategy: git flow, release branches, or developer forks with feature branches and merge to master.

Development machine

Do we really want to develop in Fortran under VMS? Oh, OK.

Develop on the operating system you are deploying to. If you develop on OS X and deploy to Debian it will bite you. Developing for Red Hat using Windows should be made illegal.

Continuous Integration

Jenkins of course.

Track the code coverage; anything less than 100% is not acceptable.

Static Analysis

For legacy projects Sonar establishes a baseline; for new projects it holds the line throughout the project's life.

Continuous Deployment

The closer to Continuous Deployment the fewer platform types are needed.

Measurements

Metrics enable blue-green deployment and A/B testing.

Issue tracking and work planning

Just you: GitHub; a team: Jira.

Continuous Availability for a Jenkins Continuous Integration Service

When your CI server is becoming too big to fail

This post was written when I was responsible for a heavily used CI server, for a company which is no longer trading, so the tenses may be mixed.

Once an organisation starts to use Jenkins, and starts to buy into the Continuous Integration methodology, very quickly the Continuous Integration server becomes indispensable.

The Problem

The success of Jenkins is based upon its plugin based architecture. This has enabled Kohsuke Kawaguchi to keep tight control over the core whilst allowing others to contribute plugins. This has led to rapid growth of the community and a very low bar to contributing (there are currently over 1000 plugins).

Each plugin has the ability to bring your CI server to a halt. Whilst there is a Long Term Support version of Jenkins, the plugins, which supply almost all of the functionality, do not have any enforced gatekeeping.

Solution Elements

A completely resilient CI service is an expensive thing to achieve. The following elements should be applied bearing in mind the proportion of the risk of failure they mitigate.

Split its jobs onto multiple CI servers

Use of personal Jenkins installations is recommended, but there is still a requirement for a single, central server.

This should be a last resort, splitting tasks out across slaves achieves many of the benefits without losing a single reporting point.

Split jobs out to SSH slaves

We had a misconfiguration of our ssh slaves such that they installed the Jenkins package. The only use of the package is to ensure that the jenkins user is present, though tasks should not, ideally, be run as the jenkins user.

One disadvantage of using ssh slaves is that it requires the ssh keys to be manually copied from the master server to the slaves.

Because jobs are initiated from the master to the slave, the master cannot be restarted during a job's execution (this is currently also true for JNLP slaves, but need not be).

The main disadvantage of ssh slaves is that, by referencing real slaves, they make the task of creating a staging server more complex, as a simple copy of the master would initiate jobs on the real slaves.

Split jobs out to JNLP slaves

This is the recommended setup, which we eventually used for most jobs.

Existing ssh slave jobs should be left unchanged until they can be replaced. This is a blocker on creating a staging CI server.

Minimise Shared Resources

In addition to sharing plugins, and hence sharing faulty plugins, another way in which jobs can adversely interact is through their use of shared resources (disk space, memory, CPUs) and shared services (databases, message queues, mail servers, web application servers, caches and indexes).

Most of these problems can be overcome by spinning up a virtual machine for each job, from scratch, provisioned by Puppet via Vagrant.

Run the LTS version on production CI servers

Move to LTS at the earliest opportunity.

There are two plugin feeds, one for bleeding edge, the other for LTS.

Strategies for Plugin upgrade

Hope and trust

Up until our recent problem I would have said that the Jenkins community is pretty high quality: most plugins do not break your server, and your ability to predict which ones will break your installation is small, so brace yourself and be ready to fix and report any problems. I have run three servers for five years and not previously had a problem.

Upgrade plugins one at a time, restart server between each one.

This seems reasonable, but at a release rate of 4.3 plugins per day, seven days a week, since 2011-02-21, even your subset of plugins is going to get updated quite frequently.

Use a staging CI server, if you can

If your CI server and its slaves are all set up using Puppet then you can clone it all, including repositories and services, so that any publishing acts do not have any impact on the real world; otherwise you will send emails and publish artefacts which interfere with your live system. Whilst we are using ssh slaves the staging server would either initiate jobs on the real slaves or they too would need to be staged.

Use a partial staging CI server

Jobs which publish an artefact every time they are run cannot be re-run, so are not suitable for running on a staging server.

You can prune your jobs down to those which are idempotent, i.e. those which do not publish and do not use ssh slaves, but the non-idempotent jobs cannot be re-run.

Control and monitor the addition of plugins

Users intending to install a plugin should ask on IRC, giving the plugin URL.

From the above it is clear that for a production CI server the addition of plugins is not risk or cost free.

Remove unused plugins, after consulting original installer

We still have a number of redundant plugins installed.

Plugins build up over time.

Monitor the logs

Currently there is no monitoring of the Jenkins log.

A log monitor which detects Java exceptions might be used.
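
A minimal sketch (assuming a Debian-style log location and local mail delivery; a real monitor would deduplicate):

#!/bin/sh
# Mail each line mentioning an exception as it appears in the Jenkins log.
tail -F /var/log/jenkins/jenkins.log \
  | grep --line-buffered 'Exception' \
  | while read line; do
      echo "$line" | mail -s 'Jenkins exception' admin@example.com
    done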

Backup the whole machine

Whilst the machine is backed up, a fire drill is needed to prove that a state can be returned to.

Once a month restore from backup to a clean machine.

Store the configuration in Git

The configuration of Jenkins has been stored in Git, and restored from it.

This process is only one element of recreating a server. Once a month restore from git to a clean machine.

Continuous Deployment of a platform and its variants using githook and cron

Architecture

We have a prototypical webapp, platform, which has four variants: commodities, energy, minerals and wood. We use the Maven war overlay feature. Don't blame me, it was before my time.

This architecture, with a core platform and four variants, means that one commit to platform can result in five staging sites needing to be redeployed.

Continuous Integration

We have a fairly mature Continuous Integration setup, using Jenkins to build five projects on commit. The team is small enough that we also build each developer's fork. Broken builds on trunk are not common.

NB This setup does deploy broken builds. Use a pipeline if broken staging builds are a problem in themselves.

Of Martin Fowler's Continuous Integration Checklist we have a score in every category but one:

  • Maintain a Single Source Repository
    Bitbucket.org
  • Automate the Build
    We build using Maven.
  • Make Your Build Self-Testing
    Maven runs our Spring tests and unit tests (coverage could be higher).
  • Everyone Commits To the Mainline Every Day
I do; some with better memories keep longer-running branches.
  • Every Commit Should Build the Mainline on an Integration Machine
    Jenkins as a service from Cloudbees.
  • Fix Broken Builds Immediately
We have a prominently displayed build wall; the approach here deploys broken builds to staging, so we rely upon having none.
  • Keep the Build Fast
We have reduced the local build to eight minutes, twenty-five minutes or so on Jenkins. This is not acceptable and does cause problems, such as tests not being run locally, but increasing the coverage and reducing the run time will not be easy.
  • Test in a Clone of the Production Environment
    There is no difference in kind between the production and development environments. Developers use the same operating system and deployment mechanism as is used on the servers.
  • Make it Easy for Anyone to Get the Latest Executable
    Jenkins deploys to a Maven snapshot repository.
  • Everyone can see what's happening
    Our Jenkins build wall is on display in the coding room.
  • Automate Deployment
    The missing piece, covered in this post.

Continuous, Unchecked, Deployment

Each project has an executable build file, redo:


mvn clean install -DskipTests=true
sudo ./reload
which calls the deployment-to-Tomcat script, reload:

service tomcat7 stop
rm -rf /var/lib/tomcat7/webapps/ROOT
cp target/ROOT.war /var/lib/tomcat7/webapps/
service tomcat7 start
We can use a githook to call redo when the project is updated:

cd .git/hooks
ln -s ../../redo post-merge

Add a line to crontab using

crontab -e

*/5 * * * * cd checkout/commodities && git pull -q origin master
This polls for changes to the project code every five minutes; if a change is pulled, the post-merge hook fires and runs redo.

However we also need to redeploy if a change is made to the prototype, platform.

We can achieve this with a script continuousDeployment


#!/bin/bash

# Assumed to be invoked from a derivative project which contains the script redo

pwd=`pwd`
cd ../platform
git fetch > change_log.txt 2>&1
if [ -s change_log.txt ]
then
  # Installs by firing the githook post-merge
  git pull origin master
  cd $pwd
  ./redo
fi
rm change_log.txt
which is also invoked by a crontab:

*/5 * * * * cd checkout/commodities && ../platform/continuousDeployment

Using multiple SSH keys with git

The problem I have is that I have multiple accounts with git hosting suppliers (GitHub and Bitbucket) but they both want to keep a one-to-one relationship between users and ssh keys.

For both accounts I am separating work and personal repositories.

In the past I have authorised all my identities on all my repositories; this has resulted in multiple identities being used within one repository, which makes the statistics look a mess.

The Solution

Generate an ssh key for your identity and store it in a named file, for example ~/.ssh/id_rsa_timp.

Add the key to your github or bitbucket account.

Use an ssh config file, ~/.ssh/config:


Host bitbucket.timp
  HostName bitbucket.org
  User git
  IdentityFile ~/.ssh/id_rsa_timp
  IdentitiesOnly yes

You should now be good to go:

git clone git@bitbucket.timp:timp/project.git

Update .git/config:

[remote "bitbucket"]
  url = git@bitbucket.timp:timp/wiki.git
  fetch = +refs/heads/*:refs/remotes/bitbucket/*

Read Maven Surefire Test Result files using Perl

When you want something quick and dirty it doesn't get dirtier, or quicker, than Perl.

We have four thousand tests and they are taking way too long. To discover why, we need to sort the tests by how long they take to run and see if a pattern emerges. The test runtimes are written to the target/surefire-reports directory. Each file is named for the class of the test file and contains information in the following format:


-------------------------------------------------------------------------------
Test set: com.mycorp.MyTest
-------------------------------------------------------------------------------
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.03 sec


#! /usr/bin/perl -wall

my %tests;
open(RESULTS, "grep 'Tests run' target/surefire-reports/*.txt|");
while (<RESULTS>) {
  s/\.txt//;
  s/target\/surefire-reports\///;
  s/Tests run:.+Time elapsed://;
  s/ sec//;
  s/,//;
  /^(.+):(.+)$/;
  $tests{$1} = $2;
}
close(RESULTS);

my $cumulative = 0.0;
print("cumulative\ttime\tcumulative_secs\ttime_secs\ttest");
foreach my $key (sort {$tests{$a} <=> $tests{$b}} keys %tests) {
  $cumulative += $tests{$key};
  printf("%2d:%02d\t%2d:%02d\t%5d\t%5d\t%s\n",
      ($cumulative/60)%60, $cumulative%60,
      ($tests{$key}/60)%60, $tests{$key}%60,
      $cumulative,
      $tests{$key},
      $key);
};


The resulting tab-separated output can be viewed using a Google chart:

Tomcat7 User Config

Wouldn't it be nice if Tomcat came with the following, commented out, in /etc/tomcat7/tomcat-users.xml?

<?xml version='1.0' encoding='utf-8'?>
<tomcat-users>
  <role rolename="manager-gui" />
  <role rolename="manager-status" />
  <role rolename="manager-script" />
  <role rolename="manager-jmx" />

  <role rolename="admin-gui" />
  <role rolename="admin-script" />

  <user
      username="admin"
      password="admin"
      roles="manager-gui, manager-status, manager-script, manager-jmx, admin-gui, admin-script"/>

</tomcat-users>

Debian Release Code Names – Aide Mémoire

The name series for Debian releases is taken from characters in the Pixar/Disney film Toy Story.

Sid

The unstable release is always called Sid, as the character in the film took delight in breaking his toys.

A backronym: Still In Development.

Stretch

The current pending release is always called testing, though it will already have been christened. At the time of writing the testing release is Stretch.

Jessie

Release 8.0
2015/04/26

Jessie is the current stable release.

After a considerable while a release will migrate from testing to stable; it will then become the Current Stable release and the previous version will join the head of the list of Obsolete Stable releases.

Wheezy

Release 7.0
2013/05/04

The current head of the list of Obsolete Stable releases.

Squeeze

Release 6.0
2011/02/06

Obsolete Stable release.

Lenny

Release 5.0
2009/02/14

Obsolete Stable release.

Etch

Release 4.0
2007/04/08

Obsolete Stable release.

Sarge

Release 3.1
2005/06/06

Obsolete Stable release.

Woody

Release 3.0
2002/07/19

Obsolete Stable release.

Potato

Release 2.2
2000/08/15

Obsolete Stable release.

Slink

Release 2.1
1999/03/09

Obsolete Stable release.

Hamm

Release 2.0
1998/07/24

Obsolete Stable release.

Bo

Release 1.3
1997/06/05

Obsolete Stable release.

Rex

Release 1.2
1996/12/12

Obsolete Stable release.

Buzz

Release 1.1
1996/06/17

Obsolete Stable release.

Earlier releases did not have code names.

Groovy: Baby Steps

Posting a form in Groovy: baby steps. Derived from http://coderberry.me/blog/2012/05/07/stupid-simple-post-slash-get-with-groovy-httpbuilder/


@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7')
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.ContentType
import groovyx.net.http.Method

public class Post {

    public static void main(String[] args) {

        def baseUrl = "http://www.paneris.org/pe2/org.paneris.user.controller.LoginUser"

        def ret = null
        def http = new HTTPBuilder(baseUrl)

        // POST the login form fields, expecting a plain text response
        http.request(Method.POST, ContentType.TEXT) {
            uri.query = [db: "paneris",
                         loginid: "timp",
                         password: "password"]
            headers.'User-Agent' = 'Mozilla/5.0 Ubuntu/8.10 Firefox/3.0.4'

            // Called for any 2xx response
            response.success = { resp, reader ->
                println "response status: ${resp.statusLine}"
                println 'Headers: -----------'
                resp.headers.each { h ->
                    println "  ${h.name} : ${h.value}"
                }

                ret = reader.getText()

                println '--------------------'
                println ret
                println '--------------------'
            }
        }
    }
}
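
To try it, save the script as Post.groovy (the file name is arbitrary) and run it directly; the @Grab annotation fetches http-builder from a Maven repository on the first run:

$ groovy Post.groovy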

Delete Jenkins job workspaces left after renaming

When a job is renamed, or moved from one slave to another, Jenkins copies the workspace and does not delete the original. This script can fix that.


#!/usr/bin/env python
import ast, os, sys, urllib
from shutil import rmtree

# Jenkins' python API returns a Python-literal data structure
url = 'http://jenkins/api/python?pretty=true'
data = ast.literal_eval(urllib.urlopen(url).read())

jobnames = []
for job in data['jobs']:
    jobnames.append(job['name'])

def clean(path):
    # Remove any directory which no longer corresponds to a current job
    builds = os.listdir(path)
    for build in builds:
        if build not in jobnames:
            build_path = os.path.join(path, build)
            print "removing dir: %s " % build_path
            rmtree(build_path)

clean(sys.argv[1])
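
Assuming the script is saved as clean_workspaces.py and made executable (both details are up to you), point it at the Jenkins workspace directory, typically /var/lib/jenkins/workspace:

$ ./clean_workspaces.py /var/lib/jenkins/workspace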


Merging two sets of Jacoco Integration Test Coverage Reports for Sonar using Gradle

In a Jenkins build:


./gradlew cukeTest
./gradlew integTest
./gradlew sonarRunner

This leaves three .exec files behind. SonarQube can only use two, so we merge the two integration test results into one.


task integCukeMerge(type: JacocoMerge) {
  description = 'Merge test code coverage results from feign and cucumber'
  // This assumes cuke tests have already been run in a separate gradle session

  doFirst {
    delete destinationFile
    // Wait until integration tests have actually finished
    println start.process != null ? start.process.waitFor() : "In integCukeMerge tomcat is null"
  }

  executionData fileTree("${buildDir}/jacoco/it/")
}
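
Unless destinationFile is set explicitly, a JacocoMerge task writes its output to ${buildDir}/jacoco/<taskName>.exec by convention (at least in the Gradle versions this was written against), which is why the merged results turn up below as $project.buildDir/jacoco/integCukeMerge.exec.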


sonarRunner {
  tasks.sonarRunner.dependsOn integCukeMerge
  sonarProperties {
    property "sonar.projectDescription", "A legacy codebase."
    property "sonar.exclusions", "**/fakes/**/*.java, **/domain/**.java, **/*Response.java"

    properties["sonar.tests"] += sourceSets.integTest.allSource.srcDirs

    property "sonar.jdbc.url", "jdbc:postgresql://localhost/sonar"
    property "sonar.jdbc.driverClassName", "org.postgresql.Driver"
    property "sonar.jdbc.username", "sonar"
    property "sonar.host.url", "http://sonar.we7.local:9000"

    def jenkinsBranchName = System.getenv("GIT_BRANCH")
    if (jenkinsBranchName != null) {
      jenkinsBranchName = jenkinsBranchName.substring(jenkinsBranchName.lastIndexOf('/') + 1)
    }
    def branch = jenkinsBranchName ?: ('git rev-parse --abbrev-ref HEAD'.execute().text ?: 'unknown').trim()

    def buildName = System.getenv("JOB_NAME")
    if (buildName == null) {
      property "sonar.projectKey", "${name}"
      def username = System.getProperty('user.name')
      property "sonar.projectName", "~${name.capitalize()} (${username})"
      property "sonar.branch", "developer"
    } else {
      property "sonar.projectKey", "$buildName"
      property "sonar.projectName", name.capitalize()
      property "sonar.branch", "${branch}"
      property "sonar.links.ci", "http://jenkins/job/${buildName}/"
    }

    property "sonar.projectVersion", "git describe --abbrev=0".execute().text.trim()

    property "sonar.java.coveragePlugin", "jacoco"
    property "sonar.jacoco.reportPath", "$project.buildDir/jacoco/test.exec"
    // feign results
    //property "sonar.jacoco.itReportPath", "$project.buildDir/jacoco/it/integTest.exec"
    // Cucumber results
    //property "sonar.jacoco.itReportPath", "$project.buildDir/jacoco/it/cukeTest.exec"
    // Merged results
    property "sonar.jacoco.itReportPath", "$project.buildDir/jacoco/integCukeMerge.exec"

    property "sonar.links.homepage", "https://github.com/${org}/${name}"
  }
}

Remember to use Overall Coverage in any Sonar Quality Gates!

An SSL Truster

Should you wish, when testing say, to trust all SSL certificates:


import java.net.URL;

import javax.net.ssl.*;

import com.mediagraft.shared.utils.UtilsHandbag;

/**
 * Modify the JVM wide SSL trusting so that locally signed https urls, and others, are
 * no longer rejected.
 *
 * Use with care as the JVM should not be used in production after activation.
 */
public class JvmSslTruster {

  private static boolean activated_;

  private static X509TrustManager allTrustingManager_ = new X509TrustManager() {

    public java.security.cert.X509Certificate[] getAcceptedIssuers() {
      return null;
    }

    public void checkClientTrusted(java.security.cert.X509Certificate[] certs, String authType) {
    }

    public void checkServerTrusted(java.security.cert.X509Certificate[] certs, String authType) {
    }
  };

  public static SSLSocketFactory trustingSSLFactory() {
    SSLContext sc = null;
    try {
      sc = SSLContext.getInstance("SSL");
      sc.init(null, new TrustManager[] { allTrustingManager_ }, new java.security.SecureRandom());
      new URL(UtilsHandbag.getSecureApplicationURL()); // Force loading of installed trust manager
    } catch (Exception e) {
      throw new RuntimeException("Unhandled exception", e);
    }
    return sc.getSocketFactory();
  }

  public static void startTrusting() {
    if (!activated_) {
      HttpsURLConnection.setDefaultSSLSocketFactory(trustingSSLFactory());
      com.sun.net.ssl.HttpsURLConnection.setDefaultSSLSocketFactory(trustingSSLFactory());
      activated_ = true;
    }
  }

  private JvmSslTruster() {
  }
}
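
To switch it on, call JvmSslTruster.startTrusting() once, early in the test run, before any https connection is opened; subsequent calls are no-ops thanks to the activated_ flag.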

Restoring user groups once no longer in sudoers

Ubuntu thinks it is neat not to have a password on root. Hmm.

It is easy to remove yourself from all your groups in Linux; I did it like this:

$ useradd -G docker timp

I thought that might add timp to the group docker, which indeed it does, but it also removes you from adm,cdrom,lpadmin,sudo,dip,plugdev,video,audio which you were previously in.

As you are no longer in sudoers you cannot add yourself back.

Getting a root shell using rEFIt

What we need now is a root shell. Googling did not help.

I have Ubuntu 14.04 LTS installed as a dual boot on my MacBook Pro (2011). I installed it using rEFIt.

My normal boot is power up, select rEFIt, select the most recent Ubuntu, press Enter.

To change the grub boot command instead of Enter press F2 (or +). You now have three options, one of which is single user: this hangs for me. Move to the 'Standard Boot' and again press F2 rather than Enter. This enables you to edit the kernel string. I tried adding Single, single, s, init=/bin/bash, init=/bin/sh, 5, 3 and finally 1.

By adding 1 to the grub kernel line you are telling the machine to boot up only to run level one.

This will boot to a root prompt! Now to add yourself back to your groups:

$ usermod -G docker,adm,cdrom,lpadmin,sudo,dip,plugdev,video,audio timp

Whilst we are here:

$ passwd root

Hope this helps, I hope I don't need it again!

Never use useradd, always use adduser.
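
When adding an existing user to a group, usermod's append flag avoids clobbering the supplementary group list in the first place:

$ usermod -aG docker timp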

Acid Air Pollution on the Cowley Road Oxford UK

The Long Wall which Long Wall Street is named for always has a light line of white sand at its base, and nearly always has ongoing masonry works to repair it.

These pictures are taken at another Oxford road junction.

On the eastern edge of Oxford another set of traffic lights, at the junction of Cowley Road and Between Towns Road, again causes stationary traffic.

The local limestone is not very hard, and is constantly eroded, with the damage to the mortar being worse than that to the stone.

The damage is worst near the ground, from street level up to about one metre, and is caused by acidic exhaust fumes.

Here all the lichen has been killed and the brick looks as though it has been cleaned with brick acid.

We know that breathing in small particles from diesel exhaust is dangerous, as the particle size is small enough to pass deep into the lungs. The pattern of the effect on the walls shows that the concentration is particularly strong under one metre in height. There are two primary schools, one on either side of this junction.

Iffley Village Oxford Rag Stone

The quality of Oxford stone can be pretty shaky, and the stone that was used for field and road boundary walls was presumably of lower quality than that used for houses as good quality stone is scarce on the clay. This wall in Iffley is a charming example, but note what is happening to the bottom metre.

Who should we be claiming financial damages from? Shell? BP? Exxon?

Add new lines to end of files with missing line ends

A Sonar rule, 'Files should contain an empty new line at the end', notes that some tools, such as Git, work better when files end with a new line.

To add a new line to all files missing one, place the following in a file called newlines:


#!/bin/sh

FILES="$@"
for f in $FILES
do
  c=$(tail -c 1 "$f")
  if [ "$c" != "" ]
  then
    echo "$f No new line"
    echo "" >> "$f"
  fi
done

Then invoke:

$ chmod +x newlines
$ find . -name '*.java' | xargs ./newlines

Setting up a mac

Plug in, turn on, update, allow an hour!

Ensure you do not accept the default user details or your admin user will be timpizey not timp.

Install homebrew from http://brew.sh/. Ruby is installed already. This process will install devtools.

Install Chrome; font size is under Web Content.

The System Font cannot be altered! The System Font is used by all native Apple applications such as iPhoto and iStore. This is a little annoying (EN_US tr: infuriating and probably illegal). For more general, well written, unix applications the fonts can be altered one by one.

How to mount the Nexus 4 storage SD card on Linux systems

Taken from How to mount the Nexus 4 storage SD card on Linux systems and comments there.

Reproduced here so that I can find it again!

Enable Developer Mode

Settings >> 'About phone', then tap seven times on 'Build number'.

Now, from the Developer Options menu enable USB Debugging.

sudo apt-get install mtp-tools mtpfs
sudo gedit /etc/udev/rules.d/51-android.rules

Note: use straight quotes, not the smart quotes which appear in the article.


#LG – Nexus 4
SUBSYSTEM=="usb", ATTR{idVendor}=="1004", MODE="0666"

sudo chmod +x /etc/udev/rules.d/51-android.rules
sudo service udev restart
sudo mkdir /media/nexus4
sudo chmod 755 /media/nexus4

Next, connect your Google Nexus 4 to your Ubuntu computer using the USB cable. The MTP option has to be enabled.

sudo mtpfs -o allow_other /media/nexus4

To unmount:

sudo umount /media/nexus4
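
If the device does not mount, check that the vendor ID in the udev rule matches your phone; lsusb lists the attached USB devices with their vendor and product IDs (1004 is LG's):

lsusb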

Rename Selected Jenkins Jobs

Using jenkinsapi it is easy to rename some jobs:

from jenkinsapi.jenkins import Jenkins

J = Jenkins('http://localhost:8080')

for j in J.keys():
    if j.startswith('bad-prefix'):
        n = j.replace('bad-prefix', 'good-prefix')
        J.rename_job(j, n)
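
Note that, as described in Delete Jenkins job workspaces left after renaming above, renaming jobs this way still leaves the old workspace directories behind.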
