Andy Balaam | Andy Balaam's Blog         FreeGuide | GSSMP | Wrestles with God | mop(e)snake | duckmaze | Gnome Attacks         RSS

Planet Andy

kuro5hin : Who Are Your Lifelines?

Friday 27 March 2015 18:20 MST

A Cautionary Tale One night I woke around 2:30am, slightly disoriented. I was sitting on the couch, the TV was on. After a moment I remembered having started watching a movie. Then my phone rang. The recorded message announced that somebody was calling from the Sheriff of Maricopa's Center for Pre-Trial Cruelty. The computer played the inmate's recorded name - it was my imaginary passenger, the one I told you all about before. We had one minute to speak, then the recorded voice would offer to take my credit card number to pay phone-ransom. The bail was only $300 - he asked me to get him out. The minute ran out, and I was not inclined to pay for more time. I figured he was probably better off in the Arpaio Gulag than on the street. A bondswoman called the next day, as a courtesy call on the prisoner's behalf, and provided more information about the bail process. Bailing out that passenger was not high on my list of things I wanted to do. But I did go to visit, a few days later. Poll: How many lifelines can you recite from memory?

Phoronix : Shadow Warrior Is Being Released For Linux Next Week

Friday 27 March 2015 17:07 MST

The Shadow Warrior remake of the 1997 3D Realms game of the same name is seeing its native Linux release next week! Flying Wild Hog's remake has been out since 2013, and next week will mark its debut on Linux and OS X...

Planet Gnome : Daniel G. Siegel: all my blogs are dead

Friday 27 March 2015 17:05 MST

paul neave in why i create for the web:

But the most amazing thing about the web is simple yet devastatingly powerful, and the whole reason the web exists in the first place. It's the humble hyperlink.

paul is right. however, links randomly disappear, move and change. carter maness writes:

Despite the pervasive assumption that everything online lasts forever, the internet is inherently unstable. We assume everything we publish online will be preserved. But websites are businesses. They get sold, forgotten and broken. Eventually, someone flips the switch and pulls it all down. Hosting charges are eliminated, and domain names slip quietly back into the pool. What's left behind once the cache clears? For media companies deleting their sites, legacy doesn't matter; the work carries no intrinsic value if there is no business remaining to capitalize on it. I asked if a backup still existed on a server somewhere. It apparently does; I was invited to purchase it for next to nothing. I could pay for the hosting, flip the switch on, and all my work would return. But I'd never really look at it. Then, eventually, I would stop paying the bills, too.

imagine books disappearing randomly from your bookshelf from time to time. however, this is a funny thought as it pretends books were always available to everyone trivially.

i for myself started archiving outgoing links in the wayback machine with a zsh snippet like this one. i know well that this is no real solution to this problem, but i hope it helps. for now.

function ia-archive() { curl -s -I "https://web.archive.org/save/$*" | grep Content-Location | awk '{print "Archived as: https://web.archive.org"$2}'; }
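the same idea as a rough python sketch (it assumes, like the snippet above, the wayback machine's /save/ endpoint and its Content-Location response header; only ia_archive() actually touches the network):

```python
import urllib.request

WAYBACK = "https://web.archive.org"

def save_request_url(url):
    # the URL we ask the wayback machine to fetch and archive
    return WAYBACK + "/save/" + url

def archived_url(content_location):
    # the Content-Location header comes back as a relative path like
    # /web/20150327000000/http://example.com/
    return WAYBACK + content_location

def ia_archive(url):
    # network call: HEAD the save endpoint and report where the snapshot lives
    req = urllib.request.Request(save_request_url(url), method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return archived_url(resp.headers.get("Content-Location", ""))
```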

LWN : Friday's security updates

Friday 27 March 2015 16:13 MST

CentOS has updated setroubleshoot (C6; C7: privilege escalation).

Debian has updated batik (information leak).

Fedora has updated dokuwiki (F20; F21; F22: access control bypass), drupal7 (F22: multiple vulnerabilities), drupal7-views (F20; F21: multiple vulnerabilities), ettercap (F20; F21: multiple vulnerabilities), mingw-xerces-c (F22: denial of service), nx-libs (F20; F21: multiple vulnerabilities), php (F22: multiple vulnerabilities), and xerces-c (F22: denial of service).

Mandriva has updated cabextract (BS1,2: multiple vulnerabilities), cpio (BS1: multiple vulnerabilities; BS2: directory traversal), e2fsprogs (BS1; BS2: multiple vulnerabilities), and openssl (BS1; BS2: multiple vulnerabilities).

openSUSE has updated libXfont (13.1, 13.2: multiple vulnerabilities), libzip (13.1, 13.2: denial of service), and tcpdump (13.1, 13.2: multiple vulnerabilities).

Oracle has updated ipa and slapi-nis (O7: multiple vulnerabilities), kernel (O7: multiple vulnerabilities), and setroubleshoot (O5; O6; O7: privilege escalation).

Red Hat has updated ipa, slapi-nis (RHEL7: multiple vulnerabilities), kernel (RHEL7: multiple vulnerabilities), kernel-rt (RHEL7: multiple vulnerabilities), and setroubleshoot (RHEL5,6,7: privilege escalation).

Scientific Linux has updated ipa and slapi-nis (SL7: multiple vulnerabilities), kernel (SL7: multiple vulnerabilities), and setroubleshoot (SL5,6,7: privilege escalation).

SUSE has updated Xen (SLE12: multiple vulnerabilities).

OSNews : 'Cyber is just pounding me from every direction'

Friday 27 March 2015 15:53 MST

Texas representative John Carter, chairman of the subcommittee on Homeland Security appropriations, and who sits on various other defense-related subcommittees, is hearing about cyber a lot these days. As he put it, "cyber is just pounding me from every direction." That's just the first few seconds of the very entertaining video, where Carter tries to find the right words to express his concern over new encryption standards from Apple and others. You may laugh about this, but... These are the people running the most powerful military of the world.

OSNews : GNOME 3.16, Builder released

Friday 27 March 2015 15:49 MST

GNOME 3.16 brings a brand new notification system and updated calendar design, which helps you to easily keep track of what’s happened, and includes useful information like world times and event reminders. Other features include overlaid scrollbars, updated visuals, improved content views in Files, and a redesigned image viewer. Major additions have also been made to the GNOME developer experience: GTK+ support for OpenGL now allows GTK+ apps to support 3D natively, a new GLib reference counting feature will help with debugging, and GTK+ Inspector has also had a major update. Also released: GNOME Builder, an IDE for GNOME.

Phoronix : NVIDIA's $1000+ GeForce GTX TITAN X Delivers Maximum Linux Performance

Friday 27 March 2015 15:45 MST

Last week NVIDIA unveiled the GeForce GTX TITAN X during their annual GPU Tech Conference. Of course, all of the major reviews at launch were under Windows and thus largely focused on the Direct3D performance. Now that our review sample arrived this week, I've spent the past few days hitting the TITAN X hard under Linux with various OpenGL and OpenCL workloads compared to other NVIDIA and AMD hardware on the binary Linux drivers.

Phoronix : Intel Pushes A Bunch Of Broadwell Code Into Coreboot

Friday 27 March 2015 15:44 MST

Intel Linux developers have landed a lot of Broadwell enablement code into Coreboot...

Phoronix : Open-Source Driver Fans Will Love NVIDIA's New OpenGL Demo

Friday 27 March 2015 13:49 MST

Those with a bit of humor will love the demo NVIDIA recently used for showing off their Nouveau-based open-source graphics driver stack on the Tegra K1 SoC...

Phoronix : GHC 7.10.1 Brings New Compiler Features

Friday 27 March 2015 13:20 MST

Version 7.10.1 of the Glasgow Haskell Compiler (GHC) is now available as a major release for this open-source project...

Planet Python : Caktus Consulting Group: Welcome to Our New Staff Members

Friday 27 March 2015 12:00 MST

We've hit one of our greatest growth points yet in 2015, adding nine new team members since January to handle our increasing project load. There are many exciting things on the horizon for Caktus and our clients, so it's wonderful to have a few more hands on deck.

One of the best things about working at Caktus is the diversity of our staff's interests and backgrounds. In order of their appearance from left to right in the photos above, here's a quick look at our new Cakti's roles and some fun facts:

Neil Ashton

Neil was also a Caktus contractor who has made the move to full-time Django developer. He is a keen student of more than programming languages; he holds two degrees in Classics and another Master's in Linguistics.

Jeff Bradberry

Though Jeff has been working as a contractor at Caktus, he recently became a full-time developer. In his spare time, he likes to play around with artificial intelligence, sometimes giving his creations a dose of inexplicable, random behavior to better mimic us poor humans.

Ross Pike

Ross is our new Lead Designer and has earned recognition for his work from Print, How Magazine, and the AIGA. He also served in the Peace Corps for a year in Bolivia on a health and water mission.

Lucas Rowe

Lucas joins us for six months as a game designer, courtesy of a federal grant to reduce the spread of HIV. When he's not working on Epic Allies, our HIV medication app, he can be found playing board games or visiting local breweries.

Erin Mullaney

Erin has more than a decade of development experience behind her, making her the perfect addition to our team of Django developers. She loves cooking healthy, vegan meals and watching television shows laden with 90s nostalgia.

Liza Chabot

Liza is an English major who loves to read, write, and organize, all necessary skills as Caktus' Administrative and Marketing Assistant. She is also a weaver and sells and exhibits her handwoven wall hangings and textiles in the local craft community.

NC Nwoko

NC's skills are vast in scope. She graduated from UNC Chapel Hill with a BA in Journalism and Mass Communication with a focus on public relations and business, as well as a second major in International Studies with a focus on global economics. She now puts this experience to good use as Caktus' Digital Health Product Manager, but on the weekends you can find her playing video games and reading comic books.

Edward Rowe

Edward is joining us for six months as a game developer for the Epic Allies project. He loves developing games for social good. Outside of work, Edward continues to express his passion for games as an avid indie game developer, UNC basketball fan, and board and video game player.

Rob Lineberger

Rob is our new Django contractor. Rob is a renaissance man; he's not only a skilled and respected visual artist, he's trained in bioinformatics, psychology, and information systems, and knows his way around the kitchen.

To learn more about our team, visit our About Page. And if you're wishing you could spend your days with these smart, passionate people, keep in mind that we're still hiring.

Andy Balaam : Snake in ZX Spectrum BASIC

Friday 27 March 2015 08:12 MST

Series: Groovy, Ruby, BASIC

I’m writing the game Snake in lots of programming languages, for fun, and to try out new languages. This time, the first language I ever learned:

Slides: Snake in ZX Spectrum BASIC

If you want to, you can Support me on Patreon.

Planet Python : Montreal Python User Group: Montréal-Python 53: Sanctified Terabit + MTLData + DevOpsMTL + DockerMTL

Friday 27 March 2015 04:00 MST


Sketch credit Cynthia Savard

If PyCon is not enough, Montréal-Python has the solution: a meetup! Now that your first incredible day of sprints is over, we are bringing on stage some of PyCon's superstar presenters for encore presentations.

This special Montréal-Python edition will be co-organized by MTLData, DevOpsMTL and DockerMTL.

Trey Causey: Scalable Machine Learning in Python using GraphLab Create

I'll be giving an overview of how to use GraphLab Create to quickly build scalable predictive models and deploy them to production using just an IPython notebook on a laptop.

Nina Zakharenko: Technical Debt - The code monster in everyone's closet

Technical debt is the code monster hiding in everyone's closet. If you ignore it, it will terrorize you at night. To banish it and re-gain your productivity, you'll need to face it head on.

Olivier Grisel: What's new in scikit-learn 0.16 and what's cooking in the master branch.

Scikit-learn is a Machine Learning library from the Python data ecosystem. Olivier will give an overview and some demos of the (soon to be | recently) released 0.16.0 version.

Jérome Petazzoni: Deep dive into Docker storage drivers

We will present how aufs and btrfs drivers compare from a high-level perspective, explaining their pros and cons. This will help the audience to make more informed decisions when picking the most appropriate driver for their workloads.

Pierre-Yves David: Mercurial, with real python bites

In this talk, we'll go over the advantages of Python that helped the project, both in its early life, when so many features needed to be implemented, and nowadays, when major companies like Facebook bet on Mercurial for scaling. We'll also point out the drawbacks of choosing Python and how some workarounds had to be found. Finally, we'll look at how the choice of Python has an impact on users too, with a demonstration of the extensions system.

Thanks also to our special sponsors for this event: Docker Inc. and LightSpeed Retail


Monday, April 13th 2015


Notman House 51 Rue Sherbrooke West, Montréal, QC H2X 1X2


Just grab a ticket here:


We'd like to thank our sponsors for their ongoing support:

XKCD : Opportunity

Friday 27 March 2015 04:00 MST

Planet Python : Vasudev Ram: Which which is which?

Friday 27 March 2015 02:52 MST

By Vasudev Ram

Recently I blogged about a simple Python program that I wrote, here:

A simple UNIX-like 'which' command in Python

I also posted the same program on ActiveState Code:

A UNIX-like "which" command for Python (Python recipe)

A reader there, Hongxu Chen, pointed out that my program actually implemented the variant "which -a", that is, the UNIX which with the -a (or --all) option included. This variant displays not just the first full pathname of an occurrence of the searched-for name in the PATH (environment variable), but all such occurrences, in any directory in the PATH. That was not what I had intended. It was a bug. I had intended it to only show the first occurrence.

So I rewrote the program to fix that bug, and also implemented the -a option properly - i.e. when -a (or its long form, --all) is given, find all occurrences; otherwise, find only the first. Here is the new version:
from __future__ import print_function

# A minimal version of the UNIX which utility, in Python.
# Also implements the -a or --all option.
# Author: Vasudev Ram -
# Copyright 2015 Vasudev Ram -

import sys
import os
import os.path
import stat

def usage():
    sys.stderr.write("Usage: python which.py [ -a | --all ] name ...\n")
    sys.stderr.write("or: which.py [ -a | --all ] name ...\n")

def which(name, all):
    for path in os.getenv("PATH").split(os.path.pathsep):
        full_path = path + os.sep + name
        if os.path.exists(full_path):
            print(full_path)
            if not all:
                break

def main():
    if len(sys.argv) < 2:
        usage()
        sys.exit(1)
    if sys.argv[1] in ('-a', '--all'):
        # Print all matches in PATH.
        for name in sys.argv[2:]:
            which(name, True)
    else:
        # Stop after printing first match in PATH.
        for name in sys.argv[1:]:
            which(name, False)

if "__main__" == __name__:
    main()
I tested it some and it seems to be working okay both with and without the -a option now. After more testing, I'll upload it to my Bitbucket account.
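As a quick sanity check of the search logic, the core loop can be exercised against a made-up PATH. This is pure illustration: find_in_path is a hypothetical helper, not part of the program above, and the filesystem check is injected so no real files are touched.

```python
import os

def find_in_path(name, path, all_matches, exists):
    # Same search logic as which() above, with the existence check injected
    # so it can be tested without touching the real filesystem.
    hits = []
    for directory in path.split(os.pathsep):
        candidate = directory + os.sep + name
        if exists(candidate):
            hits.append(candidate)
            if not all_matches:
                break
    return hits

# A fake PATH with two copies of "sh" on it.
fake_path = os.pathsep.join(["/bin", "/usr/bin", "/opt/bin"])
fake_files = {"/bin" + os.sep + "sh", "/usr/bin" + os.sep + "sh"}

print(find_in_path("sh", fake_path, True, fake_files.__contains__))   # both copies
print(find_in_path("sh", fake_path, False, fake_files.__contains__))  # first match only
```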

- Vasudev Ram - Online Python training and programming

Dancing Bison Enterprises


Planet Python : Mikko Ohtamaa: Testing web hook HTTP API callbacks with ngrok in Python

Friday 27 March 2015 00:49 MST

Today many API services provide webhooks, calling back your website or system over HTTP. This enables simple third-party interprocess communication and notifications for websites. However, unless you are running in production, you often find yourself in a situation where it is not possible to get an Internet-exposed HTTP endpoint on a publicly accessible IP address. These situations include your home desktop, a public Wi-Fi access point, or continuous integration services. Thus, developing or testing against webhook APIs becomes painful for contemporary nomad developers.


ngrok (source) is a pay-what-you-want service to create HTTP tunnels through third party relays. What makes ngrok attractive is that the registration is dead simple with Github credentials and upfront payments are not required. ngrok is also open source, so you can run your own relay for sensitive traffic.

In this blog post, I present a Python solution for programmatically creating ngrok tunnels on-demand. This is especially useful for webhook unit tests, as you have zero-configuration tunnels available anywhere you run your code. ngrok is spawned as a controlled subprocess for a given URL. Then, you can tell your webhook service provider to use this URL to make calls back to your unit tests.

One could use ngrok completely login free. In this case you lose the ability to name your HTTP endpoints. I have found it practical to have control over the endpoint URLs, as this makes debugging much easier.

For real-life usage, you can check the cryptoassets.core project, where I came up with this ngrok method. ngrok successfully tunneled me out from a CI service and my laptop.


Installing ngrok on OSX from Homebrew:

brew install ngrok

Installing ngrok for Ubuntu:

apt-get install -y unzip
cd /tmp
wget -O ""
unzip ngrok
mv ngrok /usr/local/bin

Official ngrok download, self-contained zips.

Sign up for the ngrok service and grab your auth token.

Export the auth token as an environment variable in your shell; don’t store it in your version control system. A placeholder example (NGROK_AUTH_TOKEN is the variable the unit test code below reads):

export NGROK_AUTH_TOKEN="<your token here>"

Ngrok tunnel code

Below is Python 3 code for the NgrokTunnel class. See the full source code here.

import os
import time
import uuid
import logging
import subprocess
from distutils.spawn import find_executable

logger = logging.getLogger(__name__)

class NgrokTunnel:

    def __init__(self, port, auth_token, subdomain_base="zoq-fot-pik"):
        """Initalize Ngrok tunnel.

        :param auth_token: Your auth token string you get after logging into

        :param port: int, localhost port forwarded through tunnel

        :parma subdomain_base: Each new tunnel gets a generated subdomain. This is the prefix used for a random string.
        assert find_executable("ngrok"), "ngrok command must be installed, see"
        self.port = port
        self.auth_token = auth_token
        self.subdomain = "{}-{}".format(subdomain_base, str(uuid.uuid4()))

    def start(self, ngrok_die_check_delay=0.5):
        """Starts the thread on the background and blocks until we get a tunnel URL.

        :return: the tunnel URL which is now publicly open for your localhost port
        """

        logger.debug("Starting ngrok tunnel %s for port %d", self.subdomain, self.port)

        self.ngrok = subprocess.Popen(["ngrok", "-authtoken={}".format(self.auth_token), "-log=stdout", "-subdomain={}".format(self.subdomain), str(self.port)], stdout=subprocess.DEVNULL)

        # See that we don't instantly die
        time.sleep(ngrok_die_check_delay)
        assert self.ngrok.poll() is None, "ngrok terminated abruptly"
        url = "https://{}.ngrok.com".format(self.subdomain)
        return url

    def stop(self):
        """Tell ngrok to tear down the tunnel.

        Stop the background tunneling process.
        """
        self.ngrok.terminate()

Example usage in tests

Here is a short pseudo example from cryptoassets.core webhook handler unit tests. See the full unit test code here.

class BlockWebhookTestCase(CoinTestRoot, unittest.TestCase):

    def setUp(self):

        self.ngrok = None

        self.backend.walletnotify_config["class"] = "cryptoassets.core.backend.blockiowebhook.BlockIoWebhookNotifyHandler"

        # We need ngrok tunnel for webhook notifications
        auth_token = os.environ["NGROK_AUTH_TOKEN"]
        self.ngrok = NgrokTunnel(21211, auth_token)

        # Pass dynamically generated tunnel URL to backend config
        tunnel_url = self.ngrok.start()
        self.backend.walletnotify_config["url"] = tunnel_url
        self.backend.walletnotify_config["port"] = 21211

        # Start the web server
        self.incoming_transactions_runnable = self.backend.setup_incoming_transactions(...)


    def teardown(self):

        # Stop webserver
        incoming_transactions_runnable = getattr(self, "incoming_transactions_runnable", None)
        if incoming_transactions_runnable:
            incoming_transactions_runnable.stop()

        # Stop tunnelling
        if self.ngrok:
            self.ngrok.stop()
            self.ngrok = None


Please see the unit tests for NgrokTunnel class itself.


Git Blame : Git Rev News

Thursday 26 March 2015 23:39 MDT

Christian Couder (who is known for his work enhancing the "git bisect" command several years ago) and Thomas Ferris Nicolaisen (who hosts a popular podcast GitMinutes) started producing a newsletter for Git development community and named it Git Rev News.

Here is what the newsletter is about in their words:

Our goal is to aggregate and communicate some of the activities on the Git mailing list in a format that the wider tech community can follow and understand. In addition, we'll link to some of the interesting Git-related articles, tools and projects we come across.

This edition covers what happened during the month of March 2015.

As one of the people who still remembers "Git Traffic", which was meant to be an ongoing summary of the Git mailing list traffic but disappeared after publishing its first and only issue, I find this a very welcome development. Because our mailing list is a fairly high-volume one, it is almost impossible to keep up with everything that happens there, unless you are actively involved in the development process.

I hope their effort will continue and benefit the wider Git ecosystem. You can help them out in various ways if you are interested.

Git Blame : Git 2.4 will hopefully be a "product quality" release

Thursday 26 March 2015 23:39 MDT

Earlier in the day, an early preview release for the next release of Git, 2.4-rc0, was tagged. Unlike many major releases in the past, this development cycle turned out to be relatively calm, fixing many usability warts and bugs, while introducing only a few new shiny toys.

In fact, the ratio of changes that are fixes and clean-ups in this release is unusually high compared to recent releases. We keep a series of patches around each topic, whether it is a bugfix, a clean-up, or a new shiny toy, on its own topic branch, and each branch is merged to the 'master' branch after reviewing and testing; fixes and trivial clean-ups are then also merged to the 'maint' branch. Because of this project structure, it is relatively easy to sift fixes and enhancements apart. Among new commits in release X since release (X-1), the ones that appear also in the last maintenance track for release (X-1) are fixes and clean-ups, while the remainder is enhancements.

Among the changes that went into v1.9.0 since v1.8.5, 23% of them were fixes that got merged to v1.8.5.6, for example, and this number has been more or less stable throughout the last year. Among the changes in v2.3.0 since v2.2.0, 18% of them were also in v2.2.2. Today's preview v2.4.0-rc0, however, has 333 changes since v2.3.0, among which 110 are in v2.3.4, which means that 33% of the changes are fixes and clean-ups.
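The 33% figure works out as follows; the rev-list invocations in the comment are one plausible way to gather such counts, not necessarily how the numbers above were produced.

```python
# Counts quoted above for v2.4.0-rc0. One could gather them with, e.g.:
#   git rev-list --no-merges --count v2.3.0..v2.4.0-rc0   (all changes)
#   git rev-list --no-merges --count v2.3.0..v2.3.4       (fixes and clean-ups)
total_changes = 333   # changes in v2.4.0-rc0 since v2.3.0
fixes = 110           # of those, also present in v2.3.4
pct = round(100 * fixes / total_changes)
print("{}% fixes and clean-ups".format(pct))  # prints: 33% fixes and clean-ups
```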

These fixes came from 33 contributors in total, but changes from only a few usual suspects dominate and most other contributors have only one or two changes on the maintenance track. It is illuminating to compare the output between

$ git shortlog --no-merges -n -s ^maint v2.3.0..master
$ git shortlog --no-merges -n -s v2.3.0..maint

to see who prefers to work on new shiny toys and who works on product quality by fixing other people's bugs. The first command sorts the contributors by the number of commits since v2.3.0 that are only in the 'master', i.e. new shiny toys, and the second command sorts the contributors by the number of commits since v2.3.0 that are in the 'maint', i.e. fixes and clean-ups.

The output matches my perception (as the project maintainer, I at least look at, if not read carefully, all the changes) of each contributor's strength and weakness fairly well. Some are always looking for new and exciting things while being bad at tying loose ends, while others are more careful perfectionists.

OSNews : After a hit game, indie developers struggle to replicate success

Thursday 26 March 2015 23:04 MST

Bithell has become one of a growing number of prominent indie game developers known by name after releasing a hit game. New platforms like Steam and iOS have made it easier than ever for a single developer to create a successful game, and sometimes those games really blow up - developers like Minecraft creator Markus "Notch" Persson have become fast millionaires solely off of a single title. But after the elation of a hit game comes a sudden realization: you need to make another one. This is pretty common among artists; the second album is always the hardest.

OSNews : Microsoft rebrands Universal apps as "Windows apps"

Thursday 26 March 2015 22:59 MST

In the beginning there was the word, and the word was Metro. And then it was Windows 8-style. And then it was Modern. And then it was Windows Store. And then it was Universal. And today, Microsoft has decreed that henceforth these apps - which are all ultimately based on Windows Runtime - will be known as Windows apps. Historically, of course, "Windows apps" (or "Windows programs") referred to standard, Win32-based executables that ran on the Windows desktop. Under the new naming scheme, these Win32 apps will now be called Windows desktop applications. As you can see in the slide above, despite the new nomenclature, the differences between the two types of app remain the same. Microsoft can paint itself red and call itself a girl scout until the pink cows come home, but everyone will still, and will continue to, call them Metro applications.

OSNews : A three rotor Enigma machine wrist watch

Thursday 26 March 2015 22:56 MST

This is one of the most satisfying projects I have done, I think. Mainly because this is a real device and something so historically important. It is a fully functioning Enigma machine you can wear on your wrist. This is a three rotor Enigma machine as used by the German Wehrmacht in WW2 for encoding messages. Way cooler than any Android Wrist or iPhone Mini watch.

Planet Python : Python Software Foundation: World Domination: One Student at a Time!

Thursday 26 March 2015 22:42 MST

A couple of years ago, I discovered the edX MIT course 6.00x Intro to Computer Science and Programming Using Python. At the time, I was eager to learn Python and CS basics, so I took the plunge. 
The course has been offered through edX each semester since, and at some point it was divided into two courses to allow more time for in-depth study, as the original one-semester course moved very quickly from basics to more advanced topics, such as complexity classes, plotting techniques, stochastic programs, probability, random walks, and graph optimization. I can't say enough good things about the excellence of Professor John Guttag, who developed the course and wrote the accompanying textbook (which is recommended but not required), along with co-teachers, Profs. Eric Grimson and Chris Terman.
I was grateful at the time to have found a free introductory college-level course in computer science that uses Python, rather than C, Java, or another language, as I had already had some acquaintance with Python and wanted to solidify my foundation and gain more skill. Working through the course led me to appreciate the features of Python that make it a wonderful teaching language. Since it is relatively easy to learn, it allows the learner to get up and running quickly, to write code and get results early on, without getting too bogged down and discouraged (something that I, as a humanities rather than a math person, had experienced in the past). In addition, Python teaches good programming habits, including the importance of good documentation, what Prof. Guttag frequently referred to as "good hygiene." I remember wondering at the time why Python wasn't always the language taught to beginners.
Well, today this is the trend.
According to a July 2014 study by Philip Guo, Python is Now the Most Popular Introductory Teaching Language at Top U.S. Universities. Guo analyzed the course curricula of the top 39 CS departments in the US. He used U.S. News' ranking of best computer science schools in 2014, which begins with Carnegie Mellon, MIT, Stanford, and UC Berkeley (he stopped at 39 because apparently there was an 8-way tie for #40), and found that 27 of them teach Python in their intro courses. Of the top 10 departments, the proportion was higher: 8 of them teach Python. The next most-taught languages the study found were (in descending order): Java, MATLAB, C, C++, Scheme, and Scratch. Moreover, in addition to edX, both Udacity and Coursera use Python for their introductory courses.
Anecdotally, Guo found that professors in academic fields outside of CS are increasingly using Python to fill their students' needs for programming skills. See February's PSF blog post Python in Nature for an explanation and example of this trend by Dr. Adina Howe, Professor of Agriculture and Biosystems Engineering at Iowa State University.
The increasing popularity of Python as the language for introductory CS courses in the US will undoubtedly lead to further growth of the Python community and the language. As Guo explains: 
…the choice of what language to teach first reflects the pedagogical philosophy of each department and influences many students' first impressions of computer science. The languages chosen by top U.S. departments could indicate broader trends in computer science education, since those are often trendsetters for the rest of the educational community.
I would love to hear from readers. Please send feedback, comments, or blog ideas to me at

Andy Balaam : fetchmail complaining about GoDaddy SSL certificate

Thursday 26 March 2015 22:22 MST

Update: I don’t think this fixed the problem

I was getting this every time I ran fetchmail.

fetchmail: Server certificate verification error: unable to get local issuer certificate
fetchmail: Broken certification chain at: /C=US/ST=Arizona/L=Scottsdale/, Inc./OU= Daddy Secure Certificate Authority - G2
fetchmail: This could mean that the server did not provide the intermediate CA's certificate(s), which is nothing fetchmail could do anything about.  For details, please see the README.SSL-SERVER document that ships with fetchmail.
fetchmail: This could mean that the root CA's signing certificate is not in the trusted CA certificate location, or that c_rehash needs to be run on the certificate directory. For details, please see the documentation of --sslcertpath and --sslcertfile in the manual page.
fetchmail: Server certificate verification error: certificate not trusted
fetchmail: Server certificate verification error: unable to verify the first certificate
fetchmail: Warning: the connection is insecure, continuing anyway. (Better use --sslcertck)

I appear to have fixed it by running:

sudo c_rehash

I found this by reading the documentation on --sslcertpath in the fetchmail man page. (As the error message told me to…)

LWN : A new stable kernel release

Thursday 26 March 2015 20:40 MST

Greg Kroah-Hartman has announced the release of the 3.19.3 kernel. A variety of important fixes and updates are included.

Guardian Congo : Ben Affleck and Bill Gates lend power to Congo cause on Capitol Hill - video

Thursday 26 March 2015 18:11 MST

Actor and director Ben Affleck, the founder of the Eastern Congo Initiative, and Microsoft founder Bill Gates, the chairman of the Bill and Melinda Gates Foundation, appeal for continued diplomatic and financial assistance to the Democratic Republic of Congo during testimony at a Senate appropriations committee hearing on Thursday. Referring to Starbucks' partnership with the ECI to develop Congo as a key source of coffee, Affleck said it has shown positive results in the impoverished, war-ravaged country Continue reading...

Planet Gnome : Jim Hall: Hands-on usability improvements with GNOME 3.16

Thursday 26 March 2015 17:19 MST

I downloaded the GNOME 3.16 live demo image and experimented with what the latest GNOME has to offer. My focus is usability testing, so I wanted to explore the live demo to see how the usability has improved in the latest release.

From my 2014 study of GNOME's usability, usability testing revealed several "hot" problem areas, including:

Changing the default font in gedit or Notes
Testers typically looked for a "font" or "text" action under the gear menu. Many testers referred to the gear menu as the "options" or "settings" menu because they had previously associated a "gear" icon with settings or preferences in Mac OS X or Windows. Testers assumed changing the font was a setting, so they looked for it in what they took to be a "settings" menu: the gear menu.
Bookmarking a location in Nautilus
Most testers preferred to just move a frequently-used folder to the desktop, so it would be easier to find. But GNOME doesn't have a "desktop" per se by default, and expects users to use the "Bookmark this Location" feature in Nautilus. However, this feature was not very discoverable; many testers moved the target folder into another folder, and believed that they had somehow bookmarked the location.
Finding and replacing text in gedit
When asked to replace all instances of a word with another word across a large text file, testers had trouble discovering the "find and replace text" feature in gedit. Instead, testers experimented with "Find" and then simply typed over the old text with the new text.
How does the new GNOME 3.16 improve on these problem areas? Let's look at a few screenshots:


GNOME 3.14 saw several updates to the gedit editor, which continue in GNOME 3.16:

The new gedit features a clean appearance with prominent "Open" and "Save" buttons: two functions that average users with average knowledge will frequently access.

A new "three lines" icon replaces the gear menu for the drop-down menu. This "three lines" menu icon is more common in other applications, including those on Mac OS X and Windows, so the new menu icon should be easier to find.

The "Open" menu includes a quick-access list, and a button to look for other files via the finder.

The preferences menu doesn't offer significant usability improvements, although the color scheme selector is now updated in GNOME 3.16.


The updated Nautilus features large icons that offer good visibility without becoming too overwhelming. The "three lines" menu is simplified in this release, and offers an easier path to bookmark a location.


I uncovered a few issues with the Epiphany web browser (aka "GNOME Web") but since I don't usually use Epiphany (I use Firefox or Google Chrome) I'm not sure how long these problems have been there.

Epiphany has a clean appearance that reserves most of the screen real estate to display the web page. This is a nice design tradeoff, but I noticed that after I navigated to a web page, I lost the URL bar. I couldn't navigate to a new website until I opened a new tab and entered my URL there. I'm sure there's another way to bring up the URL bar, but it's not obvious to me.

I'll also add that taking screenshots of Epiphany was quite difficult. For other GNOME applications, I simply hit Alt-PrtScr to save a screenshot of my active window. But the Epiphany web browser seems to grab control of that key binding, and Alt-PrtScr does nothing most of the time, especially when the "three lines" menu is open. I took several screenshots of Epiphany, and about half were whole-desktop screenshots (PrtScr) that I later cropped using the GIMP.

EDIT: If you click the little "down" triangle next to the URL, you can enter a new URL. I don't like this feature; it obscures URL entry. Basic functionality like this should not be hidden in a web browser. I encourage the Epiphany team to bring back the URL entry bar in the next release.

Other changes

Notifications got a big update in GNOME 3.16. In previous versions of GNOME 3, notifications appeared at the bottom of the screen. Now, notifications appear at the top of the screen, merged with the calendar. You might consider this a "calendar and events" feature. The notifications are unobtrusive; when I plugged in my USB fob drive, a small white marker appeared next to the date and time to suggest a new notification had arrived. While I haven't reviewed notifications as part of my usability testing, my heuristic evaluation is that the new notifications design will improve the usability around notifications. I believe most users will see the new "calendar and events" feature as making a lot of sense.

However, I do have some reservations about the updated GNOME. For one, I dislike the darker colors seen in these screenshots. Users don't like dark desktop colors. In user interface design, colors also affect the mood of an application. As seen in this comparison, users perceived the darker colors used in Windows and GNOME as moody, while the lighter colors used in Mac OS X suggest an airy, friendly interface. This may be why users at large perceive the GNOME desktop to have poor usability, despite usability testing showing otherwise. The dark, moody colors used in GNOME provoke feelings of tension and insecurity, which influence the user's perception of poor usability.

I'm also not sure about the blue-on-grey effect to highlight running programs or selected items in the GNOME Shell. In addition to being dark, moody colors, the blue-on-grey is just too hard to see clearly. I would like GNOME to update the default theme to use lighter, airier colors. I'll reserve a discussion of colors in GNOME for a future article.

Overall, I'm very pleased with the usability improvements that have gone into the new GNOME release. Good job, everyone!

I look forward to doing more usability testing in this version of GNOME, so we can continue to make GNOME great. With good usability, each version of GNOME gets better and easier to use.

Guardian Congo : Bill Gates cast as Ben Affleck's heroic sidekick in lobbying for Africa aid

Thursday 26 March 2015 16:37 MST

The Batman star reported on his work with the coffee industry in eastern Congo while the Microsoft billionaire stressed the need to support African agriculture

Ben Affleck and Bill Gates testified before the Senate on Thursday, with the actor plugging his new Batman movie, joking about sitting next to "the greatest and most important philanthropist in the history of the world", praising Starbucks, and describing how coffee could remake the economy of the Democratic Republic of Congo.

Related: Aid to Africa: private sector investment becomes new priority

Continue reading...

LWN : Thursday's security updates

Thursday 26 March 2015 14:03 MST

CentOS has updated firefox (C6; C7: multiple vulnerabilities).

openSUSE has updated firefox (13.1,13.2: multiple vulnerabilities).

Oracle has updated firefox (O5: multiple vulnerabilities).

Scientific Linux has updated 389-ds-base (SL7: multiple vulnerabilities), firefox (multiple vulnerabilities), freetype (SL6,7: multiple vulnerabilities), glibc (SL7: multiple vulnerabilities), GNOME Shell (SL7: lock screen bypass), hivex (SL7: privilege escalation), httpd (SL7: multiple vulnerabilities), ipa (SL7: multiple vulnerabilities), kernel (SL7: multiple vulnerabilities), krb5 (SL7: multiple vulnerabilities), libreoffice (SL7: multiple vulnerabilities), libvirt (SL7: multiple vulnerabilities), openssh (SL7: multiple vulnerabilities), openssl (SL6; SL7: multiple vulnerabilities), pcre (SL7: information leak), qemu-kvm (SL7: multiple vulnerabilities), unzip (SL6,7: multiple vulnerabilities), and virt-who (SL7: information leak).

Planet Gnome : Mario Sanchez Prada: Building a SNES emulator with a Raspberry Pi and a PS3 gamepad

Thursday 26 March 2015 01:51 MST

It's been a while since I did this, but I got some people asking me lately about how exactly I did it and I thought it could be nice to write a post answering that question. Actually, it would be a nice thing for me to have anyway, at least as "documentation", so here it is.

But first of all, the idea: my personal and very particular goal was to have a proper SNES emulator plugged to my TV, based on the Raspberry Pi (simply because I had a spare one) that I could control entirely with a gamepad (no external keyboards, no ssh connection from a laptop, nothing).

Yes, I know there are other emulators I could aim for and even Raspberry-specific distros designed for a similar purpose but, honestly, I don't really care about MAME, NeoGeo, PSX emulators or the like. I simply wanted a SNES emulator, period. And on top of that I was quite keen on playing a bit with the Raspberry, so I took this route, for good or bad.

Anyway, after doing some investigation I realized all the main pieces were already out there for me to build such a thing, all that was needed was to put them all together, so I went ahead and did it. And these are the HW & SW ingredients involved in this recipe:

Once I got all these things around, this is how I assembled the whole thing:

1. Got the gamepad paired and recognized as a joystick under /dev/input/js0 using the QtSixA project. I followed the instructions here, which explain fairly well how to use sixpair to pair the gamepad and how to get the sixad daemon running at boot time, which was an important requirement for this whole thing to work as I wanted it to.

2. I downloaded the source code of PiSNES, then patched it slightly so that it would recognize the PS3 DualShock gamepad and allow me to define the four directions of the joystick through the configuration file, among other things.

3. I had no idea how to get the PS3 gamepad paired automatically when booting the Raspberry Pi, so I wrote a stupid small script that would basically wait for the gamepad to be detected under /dev/input/js0, and then launch the snes9x.gui GUI to choose a game from the list of ROMs available. I placed it under /usr/local/bin/snes-run-gui, and it looks like this:



#!/bin/bash

# Where the PiSNES files live (this path is a guess; adjust as needed)
BASEDIR=/opt/pisnes

# Wait for the PS3 Game pad to be available
while [ ! -e /dev/input/js0 ]; do sleep 2; done

# The DISPLAY=:0 bit is important for the GUI to work
DISPLAY=:0 $BASEDIR/snes9x.gui

4. Because I wanted that script to be launched on boot, I simply added a line to /etc/xdg/lxsession/LXDE/autostart, so that it looked like this:

@lxpanel --profile LXDE
@pcmanfm --desktop --profile LXDE
@xscreensaver -no-splash
@/usr/local/bin/snes-run-gui

By doing the steps mentioned above, I got the following "User Experience":

  1. Turn on the RPi by simply plugging it in
  2. Wait for Raspbian to boot and for the desktop to be visible
  3. At this point, both the sixad daemon and the snes-run-gui script should be running, so press the PS button in the gamepad to connect the gamepad
  4. After a few seconds, the lights in the gamepad should stop blinking and the /dev/input/js0 device file should be available, so snes9x.gui is launched
  5. Select the game you want to play and press the "X" button to run it
  6. While in the game, press the PS button to get back to the game selection UI
  7. From the game selection UI, press START+SELECT to shut down the RPi
  8. Profit!

Unfortunately, while the steps above were enough to get the gamepad paired and working with PiSNES, my TV was a bit tricky and I needed to make a few more adjustments to the boot configuration of the Raspberry Pi, which took me a while to figure out too.

So, here is the contents of my /boot/config.txt file in case it helps somebody else out there, or simply as reference (more info about the contents of this file in RPiConfig):

# NOOBS Auto-generated Settings:

# Set sdtv mode to PAL (as used in Europe)
sdtv_mode=2

# Force sound to be sent over the HDMI cable
hdmi_drive=2

# Set monitor mode to DMT
hdmi_group=2

# Overclock the CPU a bit (700 MHz is the default)
arm_freq=800

# Set monitor resolution to 1280x720p @ 60Hz XGA
hdmi_mode=85

As you can imagine, some of those configuration options are specific to the TV I have it connected to (e.g. hdmi_mode), so YMMV. In my case I actually had to try different HDMI modes before settling on one that would simply work, so if you are ever in the same situation, you might want to apt-get install libraspberrypi-bin and use the following commands as well:

 $ tvservice -m DMT # List all DMT supported modes
 $ tvservice -d edid.dat # Dump detailed info about your screen
 $ edidparser edid.dat | grep mode # List all possible modes

In my case, I settled on hdmi_mode=85 simply because that's the one that worked best for me, which stands for the 1280x720p @ 60Hz DMT mode, according to edidparser:

HDMI:EDID DMT mode (85) 1280x720p @ 60 Hz with pixel clock 74 MHz has a score of 80296

And that's all, I think. Of course there's a chance I forgot to mention something, because I did this in the random slots of spare time I had back in July, but that should be pretty much it.

Now, simply because this post has been too much text already, here is a video showing how this actually works (to say nothing of how good or bad I am at playing!):

Video: Raspberry Pi + PS3 Gamepad + PiSNES

I have to say I had great fun doing this and, even if it's a quite hackish solution, I'm pretty happy with it because it's been so much fun to play those games again, and also because it's been working like a charm ever since I set it up, more than half a year ago.

And even better: turns out I got it working just in time for "Father's Day", which made me win the "best dad in the world" award, unanimously granted by my two sons, who also enjoy playing those good old games with me now (and beating me at some of them!).

Actually, that has certainly been the most rewarding part of all this, no doubt about it.

LWN : [$] Weekly Edition for March 26, 2015

Thursday 26 March 2015 00:59 MST

The Weekly Edition for March 26, 2015 is available.

Planet Gnome : Matthew Garrett: Python for remote reconfiguration of server firmware

Wednesday 25 March 2015 23:54 MST

One project I've worked on at Nebula is a Python module for remote configuration of server hardware. You can find it here, but there are a few caveats:
  1. It's not hugely well tested on a wide range of hardware
  2. The interface is not yet guaranteed to be stable
  3. You'll also need this module if you want to deal with IBM (well, Lenovo now) servers
  4. The IBM support is based on reverse engineering rather than documentation, so who really knows how good it is

There's documentation in the README, and I'm sorry for the API being kind of awful (it suffers rather heavily from me writing Python while knowing basically no Python). Still, it ought to work. I'm interested in hearing from anybody with problems, anybody who's interested in getting it on Pypi and anybody who's willing to add support for new HP systems.


LWN : [$] Development activity in LibreOffice and OpenOffice

Wednesday 25 March 2015 16:55 MST

The LibreOffice project was announced with great fanfare in September 2010. Nearly one year later, the OpenOffice.org project (from which LibreOffice was forked) was cut loose from Oracle and found a new home as an Apache project. It is fair to say that the rivalry between the two projects in the time since then has been strong. Predictions that one project or the other would fail have not been borne out, but that does not mean that the two projects are equally successful. A look at the two projects' development communities reveals some interesting differences.

Click below (subscribers only) for the full article.

Planet Gnome : Bastien Nocera: GNOME 3.16 is out!

Wednesday 25 March 2015 15:23 MST

Did you see?

It will obviously be in Fedora 22 Beta very shortly.

What happened since 3.14? Quite a bit, and a number of unfinished projects will hopefully come to fruition in the coming months.

Hardware support

After quite a bit of back and forth, automatic rotation for tablets will not be included directly in systemd/udev, but instead in a separate D-Bus daemon. The daemon has support for other sensor types, Ambient Light Sensors (ColorHug ALS amongst others) being the first ones. I hope we have compass support soon too.

Support for the Onda v975w's touchscreen and accelerometer are now upstream. Work is on-going for the Wi-Fi driver.

I've started some work on supporting the much hated Adaptive keyboard on the X1 Carbon 2nd generation.

Technical debt

In the last cycle, I've worked on triaging gnome-screensaver, gnome-shell and gdk-pixbuf bugs.

The first got merged into the second, the second got plenty of outdated bugs closed, and priorities re-evaluated as a result.

I wrangled old patches and cleaned up gdk-pixbuf. We still have architectural problems in the library for huge images, but at least we're up to a state where we know what the problems are, rather than having them buried in Bugzilla.

Foundation building

A couple of projects got started that haven't reached maturity yet. I'm pretty happy that we're able to use gnome-books (part of gnome-documents) today to read comic books. ePub support is coming!

Grilo saw plenty of activity. The oft-requested "properties" page in Totem is closer than ever, as is series grouping.

In December, Allan and I met with the ABRT team, and we've landed some changes we discussed there, including a simple "Report bugs" toggle in the Privacy settings, with a link to the OS' privacy policy. The gnome-abrt application had a facelift, but we got somewhat stuck on technical problems, which should get solved in the next cycle. The notifications were also streamlined and simplified.

I'm a fan

Of the new overlay scrollbars, and the new gnome-shell notification handling. And I'm cheering on the new app in 3.16, GNOME Calendar.

There's plenty more new and interesting stuff in the release, but I would just be duplicating much of the GNOME 3.16 release notes.

Martin Fowler : Retreaded: CodeAsDocumentation

Wednesday 25 March 2015 13:14 MST

Retread of post originally made on 22 Mar 2005

One of the common elements of agile methods is that they raise programming to a central role in software development - one much greater than the software engineering community usually does. Part of this is classifying the code as a major, if not the primary documentation of a software system.

Almost immediately I feel the need to rebut a common misunderstanding. Such a principle is not saying that code is the only documentation. Although I've often heard this said of Extreme Programming - I've never heard the leaders of the Extreme Programming movement say this. Usually there is a need for further documentation to act as a supplement to the code.

The rationale for the code being the primary source of documentation is that it is the only one that is sufficiently detailed and precise to act in that role - a point made so eloquently by Jack Reeves's famous essay "What is Software Design?"

This principle comes with an important consequence - that it's important that programmers put in the effort to make sure that this code is clear and readable. Saying that code is documentation isn't saying that a particular code base is good documentation. Like any documentation, code can be clear or it can be gibberish. Code is no more inherently clear than any other form of documentation. (And other forms of documentation can be hopelessly unclear too - I've seen plenty of gibberish UML diagrams, to flog a popular horse.)

Certainly it seems that most code bases aren't very good documentation. But just as it's a fallacy to conclude that declaring code to be documentation excludes other forms, it's a fallacy to say that because code is often poor documentation, it is necessarily poor. It is possible to write clear code; indeed I'm convinced that most code bases can be made much clearer.

I think part of the reason that code is often so hard to read is because people aren't taking it seriously as documentation. If there's no will to make code clear, then there's little chance it will spring into clarity all by itself. So the first step to clear code is to accept that code is documentation, and then put the effort in to make it be clear. I think this comes down to what was taught to most programmers when they began to program. My teachers didn't put much emphasis on making code clear, they didn't seem to value it and certainly didn't talk about how to do it. We as a whole industry need to put much more emphasis on valuing the clarity of code.

The next step is to learn how, and here let me offer you the advice of a best selling technical author - there's nothing like review. I would never think of publishing a book without having many people read it and give me feedback. Similarly there's nothing more important to clear code than getting feedback from others about what is or isn't easy to understand. So take every opportunity to find ways to get other people to read your code. Find out what they find easy to understand, and what things confuse them. (Yes, pair programming is a great way to do this.)

For more concrete advice - well I suggest reading good books on programming style. Code Complete is the first place to look. I'll naturally suggest Refactoring - after all much of refactoring is about making code clearer. After Refactoring, Refactoring to Patterns is an obvious suggestion.

You'll always find people will disagree on various points. Remember that a code base is owned primarily by a team (even if you practice individual code ownership over bits of it). A professional programmer is prepared to bend her personal style to reflect the needs of the team. So even if you like ternary operators don't use them if your team doesn't find them easy to understand. You can program in your own style on your personal projects, but anything you do in a team should follow the needs of that team.

reposted on 25 Mar 2015


Aquarion : WRP Week Six – As good at weekly as anything.

Wednesday 25 March 2015 08:53 MST

I used to do this kind of thing monthly in WRP format. It’s as good as any:


Skute’s in the wild, to some extent. Beta invitations have been sent out – the demons of the Play Store beta system are occasionally eating us – and the first few hundred tags are around. Speccing up bits for Phase Three of the backend system, and putting docs together.

Converted PiracyInc’s vagrant provisioning to Ansible as a test to see how easy that was. It was so easy I converted an app for my contracting gig, and today moved one of the Skute applications over (that one already had Ansible playbooks).

Again on a PiracyInc -> Skute path, used PiracyInc as an excuse to learn how celery (background task queuing thing) backs onto Flask, and then ported a couple of long tasks (uploads to S3, mostly) on Skute’s media server to it. Can probably extend this to some other tasks later on, but the speed boost on uploads is nice.

Need to work out how this would interface with the main API, which is on Heroku. Pretty sure using a remote redis as a queue store isn’t a great idea, though I’m sure it would work. I’ve got enough architecture problems with remote systems I can’t control or fix if they go wrong. Maybe I’m not thinking with my head in the Cloud enough. It’s possible that our time with Heroku is coming to the end of its lollypop, given we already have AWS servers to do the media stuff. I do love platform consolidation tasks, they make my dark heart glad.

More features & fixes up on Test, as well. Those will get migrated with the next android build…


Did the Empire Podcast with Mark and folks, which went really well. Massively glad I did that last weekend, and not next, as a series of blamestorms and internal bullshit has drained me of enthusiasm for the game entirely. This is probably linked to the cloud of rage and stress that’s currently hovering over me, but I did a toys-external rant this afternoon, and my attendance is now slightly shaky. That said, the toys-external rant appears to have actually made stuff happen.

One of the massive problems with doing freelance work and working from home is that my division line between working and not-working is flakey, which is adding to my current feelings of stress. I kind of need to decompress for a week or so, but can’t really do that until a couple of near-future milestones pass. So I’m becoming increasingly short tempered and unwilling to give any leeway to people fucking about, even when it doesn’t actually matter.

Other projects? There’s a long post on small-time media creators in the age of Facebook that I need to finish soon, but the short version is that everything else I do is trapped in a cycle of falling interest, because even the people who are engaged with the premise don’t get most of the things I produce for it.


I’ve booked for two larps, in a desperate attempt to play more than zero this year (Last year I managed one, which promptly collapsed).

More technologically, The new episodes of Dreamfall & Borderlands’s episodic things came out last week, and I drove through them with wild abandon. It’s interesting that Telltale’s hundred episode history hasn’t quite solved the narrative issues of a second part to a longer story, but to be fair Red Thread hasn’t magically done so either. Both felt like a small and not very important internal arc primarily to cause more questions and dominos to be set up for the main story.

MMO-wise, I’ve wandered back into Elder Scrolls Online to finish up the last area. Nobody I know seems to be playing it anymore – apart from fyr, who’s using my account – and I’ve no idea where social hubs are to find a low-key guild to bounce around with, so mostly I’m playing it as a more restrictive and more narrative Elder Scrolls single player game, with occasional multi-gigabyte patch downloads that don’t seem to add anything. I’ve fallen out of The Secret World for a bit. The new player experience is a massive improvement, and I’d highly recommend the game for anyone interested in that kind of modern-world conspiracy setting, with tinges of Lovecraft around the edges (ping me for a trial code), but my next stage will be Nightmare dungeons and scenarios, and I need a better build, which means grinding out AP. I’m looking forward to the new issue, and the new dungeon that comes with it.

XKCD : Squirrel Plan

Wednesday 25 March 2015 04:00 MST

[Halfway to the Sun ...] Heyyyy ... what if this BALLOON is full of acorns?!

Guardian Congo : Sex abuse poses 'significant risk' to UN peacekeeping, says leaked report

Tuesday 24 March 2015 16:10 MST

Internal UN research talks of a culture of impunity and underreporting on sexual abuse cases in peacekeeping missions

The United Nations has been accused of ignoring an internal report that describes sexual exploitation and abuse as "the most significant risk" to peacekeeping missions across the globe.

The leaked internal document examines UN peacekeeping missions in Congo, Haiti, Liberia and South Sudan, where 85% of all sexual abuse cases against peacekeepers come from. Of the allegations made in these countries in 2012, 18 (30%) involved minors.

Related: Reflecting on 'collective failure': is the United Nations still relevant?

Related: How to keep aid workers safe: what the security experts say

Related: From soldiers to peacebuilders: can Liberia's taxi drivers help stop Ebola?

Continue reading...

Charlie Brooker : Why tug our forelocks to Richard III, a king who's such a diva that he needs two funerals?

Tuesday 24 March 2015 14:43 MST

For somebody who did less for Britain than, say, Olly Murs, we're making a dreadful fuss of our late monarch

Who's your favourite dead king? For me it's a toss-up between King Henry VIII (likes: Greensleeves, beheadings) and Nat King Cole (likes: chestnuts roasting on an open fire, Jack Frost nipping at your nose). Those are definitely my top two.

Below them, there's King Kong, King George III, Good King Wenceslas, and about 500 other assorted types of king before you get to Richard III. Never warmed to him. Don't know why. I've just never really been into Richard III. Maybe it's his Savile-esque haircut, or the fact that his name is widely used as rhyming slang for fecal matter, or just the way he's routinely depicted as a murderous, scheming cross between Mr Punch and Quasimodo; a panto villain with nephews' blood on his hands.

Continue reading...

Planet Classpath : Jeroen Frijters: New Development Snapshot

Tuesday 24 March 2015 13:43 MST

After debugging a stack overflow caused by a weird class loader, I decided to make the runtime more robust against this and as a side effect I added the ability to disable eager class loading. This in turn made it easier to test the late binding infrastructure (which is used when a class is not yet available while a method is compiled) and that testing revealed a large number of bugs that have now been fixed.


Binaries available here:

Guardian Congo : Letter from DR Congo: the final journey

Tuesday 24 March 2015 11:59 MST

Among the furniture, animals and goods on Congo's busy roads, the dead must also weave their way

Last week was the week for driving. I spent three full days on the road. And on the road you see some bizarre sights: trucks overloaded with sacks of goodness knows what, and on top of that, piled with people. Women carrying ridiculous amounts of firewood on their heads, cans of water in their hands and babies on their backs. Cars stuck in the thickest of mud where they will no doubt stay until the sun shines enough to dry it all up.

Soon the absurd becomes normal. What once would have made me laugh, question or at least pass a comment no longer attracts my attention. It's just another sight along the way.

Continue reading...

Otaku : Android, Rx and Kotlin: a case study

Tuesday 24 March 2015 01:28 MST

There are countless introductions to Rx and quite a few discussing Rx on Android, so instead of writing another one, I decided to come up with a simple activity and go through the entire exercise of implementing it using Rx and Kotlin. I’m sure you’ll be able to follow this article easily even if you’re not familiar with Kotlin, because the syntax is remarkably similar to what you would write if you were using Java 8 (just with fewer semicolons).

The activity

It’s shown at the top of this article. It’s pretty simple while covering some important basics and, as we’ll find out along the way, the devil is in the details. Here is a simple functional specification of this activity:

A few additional details:

Without Rx

Implementing this activity with “regular” Android practices is straightforward:

This implementation is straightforward but very scattered. You will very likely end up with a lot of empty methods added just to satisfy listener requirements, state stored in fields to communicate between the various asynchronous tasks, and extra care needed to make sure your threading model is sound. In other words: a lot of boilerplate, a messy mix of headless and graphical logic, and graphical update code spread a bit everywhere.

There is a better way.

Kotlin and Android

I found Kotlin to be a very good match for Android even in the early days, but recently the Kotlin team has really cranked up their support for Android, adding platform-specific functionality to their tooling, which makes Kotlin even more of a perfect match for Android.

Kotlin M11 was released about a week ago and it added a feature that makes the appeal of Kotlin absolutely irresistible for Android: automatically bound resources. Here is how it works.

Suppose you define the following View in your layout activity_search.xml:

<Button android:id="@+id/addFriendButton" ... />

All you need to do is add a special kind of import to your source:

import kotlinx.android.synthetic.activity_search.*

and the identifier addFriendButton becomes magically available everywhere in your source, with the proper type. This basically obsoletes ButterKnife/KotterKnife (well, not quite, there’s still OnClick which is pretty nice. Besides, Jake tells me he has something in the works). And if you press Ctrl-b on such an identifier, Android Studio takes you directly to the layout file where that View is defined. Very neat.

The server

For the purpose of this article, I’m just mocking the server. Here is its definition:

trait Server {
    fun findUser(name: String) : Observable<JsonObject>
    fun addFriend(user: User) : Observable<JsonObject>
}
If you think this definition looks a lot like a Retrofit service interface, it’s because it is. If you’re not using Retrofit, you should. Right now, I’ll be mocking this server by returning a set of hardcoded answers and also making each call sleep one second to simulate latency (and so we can see the progress bar on the screen). Note that each call on this interface returns an Observable, so they fit right in with our Rx implementation.

In this example, I hardcoded the server to know about two friends (“cedric” and “jon”) but only “cedric” can be added as a friend.
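To make the idea concrete without pulling in Retrofit, RxJava or Gson, here is a minimal stand-alone sketch of such a mocked server in plain Java (which, as noted above, reads much like the Kotlin). `Observable` is replaced by `CompletableFuture`, and the class, method and id values are illustrative assumptions, not the article's actual code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the article's mocked server, JDK-only.
public class MockServer {
    // Hardcoded "database": the server knows about cedric and jon.
    private static final Map<String, String> USERS = new HashMap<>();
    static {
        USERS.put("cedric", "id-1");
        USERS.put("jon", "id-2");
    }

    // Simulates findUser(name): sleeps briefly to fake network latency,
    // then answers with the user's id if the name is known.
    public static CompletableFuture<Optional<String>> findUser(String name) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(100); // fake latency (the article uses one second)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return Optional.ofNullable(USERS.get(name));
        });
    }

    public static void main(String[] args) throws Exception {
        System.out.println(findUser("cedric").get()); // prints Optional[id-1]
        System.out.println(findUser("bob").get());    // prints Optional.empty
    }
}
```

Because the latency lives inside the mock, the calling code exercises the same asynchronous path it would against a real server, which is the whole point of mocking at this layer.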

The Rx mindset

Switching to the Rx mindset requires you to start thinking in terms of event sources (observables) and listeners (subscribers). If this idea doesn’t sound that novel, it’s because it’s not. This model was already being advocated in the book “Design Patterns” in 1994, and even in the early versions of Java twenty years ago (and no doubt you can find traces of it in the literature before that). However, Rx builds new concepts on top of this idea that we’ll explore in this series.

So let’s rethink our activity in Rx terms: what are event sources (I’ll use the name “observable” from now on) in this activity?

I can count four observables:

  1. First, we have the EditText: whenever a new character is typed, it emits an event that contains the entire text typed so far. We can emit a new name once we have at least three characters.
  2. Next is the name observable, which calls the server and emits the JsonObject it receives in response.
  3. Next in the chain, we have a “user” Observable, which maps the JsonObject into a User instance with the name and id of that person.
  4. Finally, the “Add friend” button is another observable: if the user presses that button, we make another call to the server with the User we have and we update our UI based on the results.

There are various ways we can break this problem into observables and the final result depends on how you want to handle various tasks, which observables you want to reuse, the threading model you prefer, etc…


The “EditText” Observable

Let’s start with our EditText. Its implementation is pretty straightforward:

    WidgetObservable.text(name)   // "name" is the EditText (RxAndroid adapter; first line reconstructed)
        .doOnNext { e: OnTextChangeEvent ->
            addFriendButton.setEnabled(false)  // assumed: disable until the server confirms
        }
        .map { e: OnTextChangeEvent -> e.text().toString() }
        .filter { s: String -> s.length() >= 3 }
        .subscribe { s: String -> mNameObservable.onNext(s) }

Let’s go through each line in turn:

The "name" Observable

val mNameObservable: BehaviorSubject<String> = BehaviorSubject.create()

A BehaviorSubject is a special kind of Observable that you can send events to after its creation. I like to think of it as an event bus, except that it's focused on one very specific kind of event (as opposed to an event bus, which is used to post pretty much anything). Using a Subject here allows me to create that Observable early and only post new events to it as they come in, which is what we did with the snippet of code above.
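The one BehaviorSubject property this design relies on is that a subscriber arriving late immediately receives the most recent value, then all subsequent ones. A minimal JDK-only sketch of that contract (illustration only; RxJava's real implementation also handles errors, completion and concurrency):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy model of BehaviorSubject's replay-latest-value behavior.
public class TinyBehaviorSubject<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();
    private T last;
    private boolean hasValue;

    public synchronized void onNext(T value) {
        last = value;
        hasValue = true;
        for (Consumer<T> s : subscribers) {
            s.accept(value);
        }
    }

    public synchronized void subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
        if (hasValue) {
            subscriber.accept(last); // replay the most recent value to the late subscriber
        }
    }

    public static void main(String[] args) {
        TinyBehaviorSubject<String> names = new TinyBehaviorSubject<>();
        names.onNext("ced");
        names.subscribe(v -> System.out.println("seen: " + v)); // prints seen: ced
        names.onNext("cedr");                                   // prints seen: cedr
    }
}
```

This replay behavior is exactly why the name observable can be created early and fed events as they come in: no subscriber misses the latest name, no matter when it subscribes.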

Let's see how we use that Observable now, by simply subscribing to it, which means receiving names that have three or more characters in them. All we do is call the server and pass the result to the "user" observable:

    mNameObservable.subscribe { s: String ->
        mServer.findUser(s).subscribe { jo: JsonObject ->
            mUserObservable.onNext(jo)  // forward the response to the "user" observable
        }
    }

We're not quite done, though: we actually have another subscriber to that Observable:

    mNameObservable.subscribe { s: String ->
        loading.setVisibility(View.VISIBLE)  // "loading" (name assumed) is the progress bar
    }
We need to let the user know we just issued a network call, so we show the progress bar (this is a suboptimal implementation; this logic should be done at the server level, but we'll save that for later).

Note that I'm intentionally hiding a very important part of this logic in order to stay focused on the topic, and this also explains why we have two separate subscriptions. I explain why at the end of this article.

The "User" Observable

Next, we have the User Observable, which gets notified when the server sends us a response to the query "Does the user named 'foo' exist?":

    mUserObservable
        .map { jo: JsonObject ->
            if (mServer.isOk(jo)) {
                User(jo.get("id").getAsString(), jo.get("name").getAsString())
            } else {
                null
            }
        }
        .subscribe { user: User? ->
            addFriendButton.setEnabled(user != null)
            mUser = user
        }

This Observable does two things: it updates our UI and it maps the JsonObject response to our data class User. If the call was a success, we assign this value to the field mUser.

The "Add friend" Observable

Finally, we have the "Add friend" button, which only gets enabled once we have a valid User. If the user presses that button, we issue another call to the server to request that person to be added as a friend and then we update the UI depending on the response:

    ViewObservable.clicks(addFriendButton)
        .subscribe { e: OnClickEvent ->
            mServer.addFriend(mUser!!)
                .subscribe { jo: JsonObject ->
                    val toastText: String
                    if (mServer.isOk(jo)) {
                        toastText = "Friend added id: " + jo.get("id").getAsString()
                    } else {
                        toastText = "ERROR: Friend not added"
                    }
                    Toast.makeText(this, toastText, Toast.LENGTH_LONG).show()
                }
        }

Stepping back

This is a very different implementation from how you would write the code with regular Android calls, but in the end it's not just more compact; it also divides our entire logic into four very distinct components that interact with each other in very clear ways. This is the macro level. At the micro level, these four components are not just self-contained, they are also highly configurable thanks to operators: operations you can insert between your Observable and your Subscriber which transform the data in ways that are easier for you to handle. I only have one such example of this in the code above (transforming an OnTextChangeEvent into a String) but you get the idea.

Another benefit that should be immediately obvious to you, even if you don't buy into the whole Rx paradigm shift yet, is that thanks to Rx, we now have a universal language for observables and observers. I'm sure that right now, your Android code base contains a lot of such interfaces, all with subtly different method names and definitions and all needing some adapters to be inserted before they can talk to each other. If you ever felt the need to write an interface with a method called "onSomethingHappened()", Rx will be an immediate improvement.


I have barely touched upon operators, and this topic is in itself worthy of an entire book, but I'd like to spend a few minutes to give a quick example of their power.

Going back to our activity, remember that once the user has typed at least three characters, we send a query to the server for each additional character typed. This is a bit wasteful: many phone users are very fast typists, and if they type letters in quick succession, we could be sparing our server some extra work. For example, how about we only call the server once the user has stopped typing?

How do we define "stopped typing"? Let's decide that the user has stopped typing when two keypresses are separated by more than 500ms. This way, quickly typing "cedric" will result in just one server call instead of four. How do we go about implementing this?

Again, the traditional approach would mean that each time our text change listener is invoked, we compare the current time with the timestamp of the last character typed, and if the difference exceeds a certain value, we trigger the event.

As it turns out, observables have an operator that does this for us called debounce(), so our code becomes:

    WidgetObservable.text(name)
        .doOnNext { e: OnTextChangeEvent ->
            addFriendButton.setEnabled(false)
        }
        .map { e: OnTextChangeEvent -> e.text().toString() }
        .filter { s: String -> s.length() >= 3 }
        .debounce(500, TimeUnit.MILLISECONDS)
        .subscribe { s: String -> mNameObservable.onNext(s) }

I know what you're thinking: "You just picked a problem that could be solved by an operator that already exists". Well, kind of, but that's not my point. First of all, this is not a made up problem and I'm sure a lot of developers who write user interface code will agree that this is pretty common.

However, my point is more general than this: the debounce() operator has nothing to do with Android. It is defined on Observable, which is the base library we are using. It's an operator that's generally useful to have on any source that emits events. These events might be graphical in nature (as is the case here) or of any other kind, such as a stream of network requests, coordinates of a mouse cursor or capturing data from a geolocation sensor. debounce() represents the general need for getting rid of redundancies in streams.
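To see why the operator is such a win, it helps to look at what a hand-rolled version of its timing logic involves. The sketch below (plain Java, JDK-only; class and method names are illustrative, and RxJava's operator is far more general) shows the core idea: each new event cancels the previously scheduled one, so only the last event of a burst fires once the stream has been quiet for the given delay:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Minimal hand-rolled debounce: only the last action of a burst survives.
public class Debouncer {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final long delayMs;
    private ScheduledFuture<?> pending;

    public Debouncer(long delayMs) {
        this.delayMs = delayMs;
    }

    // Schedule an action, cancelling any previously scheduled action
    // that has not fired yet.
    public synchronized void submit(Runnable action) {
        if (pending != null) {
            pending.cancel(false);
        }
        pending = scheduler.schedule(action, delayMs, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        Debouncer d = new Debouncer(100);
        d.submit(() -> System.out.println("ced"));
        d.submit(() -> System.out.println("cedric")); // only this one fires
        Thread.sleep(300);
        d.shutdown();
    }
}
```

Note how even this toy version drags in a scheduler, a pending-task field and synchronization; with the operator, all of that state stays out of your class.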

Not only do we get to reuse an existing implementation without having to rewrite it ourselves, but we also preserve the locality of our implementation (with the traditional approach, you would probably have polluted your class with a couple of fields) and we maintain the composition abilities of our observable computations. For example, do you need to make sure that the user is not adding themselves before calling the server? Easy:

    // ...
    .filter { s: String -> s.length() >= 3 }
    .filter { s: String -> s != myName }

Wrapping up

At this point, you should probably take a look at the full source so you can step back and see how all these pieces fit together. It's obvious to me that the approach of decomposing your problem in terms of observables that emit values that subscribers observe is extremely powerful, and that it leads to cleaner, more self-contained and composable code. The non-Rx version of this class feels incredibly messy to me (a lot more fields holding state, methods added just to satisfy slightly incompatible listener interfaces, and a total lack of composability).

Having said that, our work here is not over.

Stay tuned!

Update: Discussion on reddit.

RandsInRepose : Hockey Sounds

Monday 23 March 2015 17:26 MST

(Via Coudal who roots for those damn Blackhawks.)


RandsInRepose : Medium as Frozen Pizza

Monday 23 March 2015 14:31 MST

Compelling piece by Matthew Butterick on the business and design of Medium.

On Medium's use of minimalism:

As a fan of minimalism, however, I think that term is misapplied here. Minimalism doesn't foreclose either expressive breadth or conceptual depth. On the contrary, the minimalist program, as it initially emerged in fine art of the 20th century, has been about diverting the viewer's attention from overt signs of authorship to the deeper purity of the ingredients.

He continues:

Still, I wouldn't say that Medium's homogeneous design is bad ex ante. Among web-publishing tools, I see Medium as the equivalent of a frozen pizza: not as wholesome as a meal you could make yourself, but for those without the time or motivation to cook, a potentially better option than just eating peanut butter straight from the jar.

The piece is less about typography and more about Medium's business motivations, but the entire article is worth your time.


XKCD : Wasted Time

Monday 23 March 2015 04:00 MST

Since it sounds like your time spent typing can't possibly be less productive than your time spent not typing, have you tried typing SLOWER?

RandsInRepose : Dear Data

Sunday 22 March 2015 15:51 MST


Dear Data is a year-long project between Giorgia Lupi and Stefanie Posavec, who are creating weekly analog data visualizations and sending them to each other on postcards.

I like everything about this project.


GingerDog : Automated twitter compilation up to 22 March 2015

Sunday 22 March 2015 06:00 MST

Arbitrary tweets made by TheGingerDog up to 22 March 2015


Planet Classpath : Riccardo Mottola: Extirpating systemd from Debian

Friday 20 March 2015 20:35 MST

I found out that all my Debian machines switched to systemd without my consent, with just a standard apt-get upgrade.
I despise that decision.

I did not follow the latest discussion about it; I was left with the impression that it would have been installed only if needed, but evidently I was wrong.

Can you go back? Is it time to toss Debian? I hoped not; I know of fellow developers who switched distributions, but Debian is Debian.

The fix: remove systemd, and put sysvinit-core back (sysvinit itself is now a transitional package). I feared I had bricked my laptops, but they still work. For how long? I don't know.

I'm very very sad about this. If I think of GNU/Linux I think of Debian, it has been with me since 68k times, when potato was cool. Debian made a very bad decision.

Something newer than ol' sysvinit? Something modern, fast, capable of parallelism. Yes.
But something portable, light, secure, which is not a dependency hell, which does one thing. In other words, something in line with the Unix philosophy.

Not the enormous pile of rotting shit which is systemd. When I removed it, I freed almost 13Mbytes from my system. I am relieved, but it shows also how big that pile of crap is.

So, for now, Debian can stay with me, and I hope it will for a long while. Long enough that Debian reverts its decision or systemd goes away.

XKCD : Mysteries

Friday 20 March 2015 04:00 MST

At the bottom left: The mystery of why, when I know I needed to be asleep an hour ago, I decide it's a good time to read through every Wikipedia article in the categories 'Out-of-place artifacts', 'Earth mysteries', 'Anomalous weather', and 'List of people who disappeared mysteriously'.

RandsInRepose : The Psychology of "No"

Thursday 19 March 2015 01:26 MST

The sad truth is, we can be absolutely awful at making decisions that affect our long-term happiness. Recent work by psychologists has charted a set of predictable cognitive errors that lead us to mistakes like eating too much junk food, or saving too little for retirement. These quirks lead us to make similarly predictable errors when deciding where to live, how to live, how to move, and even how to build our cities.

(By Charles Montgomery via National Post)


Allan Kelly : Code and other reviews (a small piece of advice)

Tuesday 17 March 2015 16:09 MDT

Many teams have some sort of very regular reviews. I'm not thinking of personnel reviews or budget reviews; I'm thinking of code reviews specifically, but it could be test reviews, documentation reviews or some other kind: reviews that need to happen every day but which frequently get delayed.

Let's stick with code reviews because they are the type I encounter most often.

Code reviews are good; by some accounts they are the most effective means of removing bugs early, although I haven't seen code reviews compared with TDD. But code reviews lose their efficacy when they are not conducted promptly. The longer the period between the review being requested and the review being conducted (and, by extension, the review comments being acted on), the less effective the review.

The effect reduces because: a) the person who requested the review has moved onto something else, b) issues found in the review may be repeated until the review is conducted and c) the review will either inject a delay into the delivery process or the delivery will happen without the review in which case, what was the point?

So: if you are going to conduct reviews you want them to happen soon.

And it is not just the code and the developer who wrote it that suffer. The designated reviewer feels the pressure to "do reviews" when they have other - important! - work to do.

One team I know came up with a simple solution to this problem. I recently recommended the solution to another team, who promptly adopted it and are delighted. One developer said: "I've never been so relaxed about reviews before."

The solution is...

Make reviews the first item of work after the stand-up meeting in the morning. And let people do them before the stand-up too.

Thus, as soon as the stand-up is finished, everyone undertakes any reviews which are needed before they start their day's work. Review work is prioritised before new work: after all, yesterday the thing that now requires review was the priority; it is probably still the overall priority, and the only thing standing between it and "done" is a review.

Reviews don't typically take very long, so today's work isn't delayed. And the recipient of the review comments can act on them before getting into today's work.

Better still, knowing that reviews will happen right after the stand-up meeting means that it also makes sense to do the reviews BEFORE the stand-up meeting. This also addresses the question of "how do I usefully use the time before the stand-up meeting when I know I'll have to stop doing whatever I start?"

So for example, imagine on Tuesday afternoon Bob asks Alice to review the code he has just finished. If Alice is available she might just do it there and then. But if she is busy she will wait. Now if she arrives at work at 9am and the stand-up is 9.30am, she can get on with the review before the stand-up; if she finishes, Bob will have his feedback and can act on it before 10am. If Alice doesn't get to Bob's code, then she will do it at 9.45am when the meeting finishes.

Either way, Bob isn't left waiting.

In part this works simply because it keeps the review queue short. If reviews are done soon, say within 24 hours, the queue never grows large, and a short queue is easy to keep short.

One of the teams actually put a column on their board where tasks awaiting review could rest. In the stand-up everyone could see what was waiting for review and arrange to do it.

Simple really.

While we are on the subject of code reviews let me comment on something else.

There is often a belief that only senior developers, or only "architects", should conduct reviews. I think this approach is mistaken for two reasons.

Firstly in this model it is normal that there are far fewer reviewers than there are review requesters. This frequently results in queues for reviews because the pool of reviewers - who also have other work to do - is small.

Second, this model assumes that only "architects" can make useful comments on code. I believe most, say 80%, of the efficacy of a code review comes not from having an expert review the code but simply from having another person review it.

Indeed, I even go as far as to say junior people should review senior people's code. Code reviews are a two-way learning process, which is one of the reasons I like to see them done face-to-face. If an experienced developer is writing code that a junior cannot understand (why do I think of C++ meta-templates?), then the experienced person should know that they are writing code other people cannot maintain.

Anyway, there you go, let me know if you try this idea and what the result is.

Michele Simionato : The wonders of cooperative inheritance, or using super in Python 3

This essay is intended for Python programmers wanting to understand the concept of cooperative inheritance and the usage of super. It does not require any previous reading. The target is Python 3.0, since it has a nicer syntax for super, even if most of what I say here can be backported down to Python 2.2.

Michele Simionato : EuroPython 2010

The EuroPython conference will be held in Birmingham UK, 19th to 22nd July 2010.

Michele Simionato : plac, the easiest command line arguments parser in the Python world

Announcing the first public release of plac, a declarative command line arguments parser designed for simplicity and concision.

Michele Simionato : Threads, processes and concurrency in Python: some thoughts

Removing the hype around the multicore (non) revolution and some (hopefully) sensible comments about threads and other forms of concurrency.

Michele Simionato : What's new in plac 0.7

plac is much more than a command-line arguments parser. You can use it to implement interactive interpreters (both on a local machine and on a remote server) as well as batch interpreters. It features a doctest-like mode, the ability to launch commands in parallel, and more. And it is easy to use too!
Make your own planet, DIYBlog style - just FTP web space needed.