Questions and answers about Pepper3

Series: Examples, Questions

My last post Examples of Pepper3 code was a reply to my friend’s email asking what it was all about. They replied with some questions, and I thought the questions and answers might shed some more light:

Questions!

Brilliant ones, thanks.

In general though you’ve said a lot about what Pepper can do without giving design decisions.

Yep, total brain dump.

Remind me again who this language is for :)

It’s a multi-paradigm (generic, functional, OO) language aimed at application programmers who want:

  • “native” performance on their chosen platform (definitely including actual native machine code). This is inspired by C++.
  • easy deployment (preferably a single binary containing everything, with an option to link most dependencies statically), including packaging of installers for major OSes. This is inspired by C++, and the pain of C++.
  • perfect flexibility for creating types – “meta-programming” is just programming. Things you would have done using code generation (e.g. generating a class hierarchy from an XSD) are done by running arbitrary code at compile time. The powerful type system is inspired by Haskell and the book “Modern C++ Design”, and the meta-programming is inspired by Lisp.
  • Simple memory management without GC through ownership. This is inspired by modern C++, and then Rust came along and implemented it before I could, thus proving it works. However, I would remove a lot of the functionality in Rust (lifetimes) to make it much simpler.
  • Strong support for functional programming if you want it. This is inspired by Haskell.
  • The simplest possible core language, with application programmers able to expand it by giving them the same tools as the language designers – e.g. “for” is just a function, so you can make your own. I am hoping I can even make “class” a function. This is inspired by Lisp, and oppositely-inspired by Java.
  • Separation between the idea of Interfaces, which I think I will call “type specifiers” (and which will allow arbitrary code execution to determine whether a type satisfies the requirements), and structs/classes, allowing us to make new Interfaces and have old code satisfy them, meaning we can do generic stuff with e.g. ints even if no-one declared that “class Int : public Quaternion” or whatever.
  • Lots of “nudges” towards things that are good: by default things will be functional and immutable – you will have to explicitly say if you want to use more dangerous constructs like side effects and mutable values.
  • No implicit conversions, or really anything happening without you saying so.

Can you assign floats to ints or vice versa?

Yes, but you shouldn’t.

If you’re setting types in code at the start of a file, is this only available in the main file? Are there multiple files per program? Can you have libraries? If so, do these decide the functionality of their types in the library or does this only happen in the main file?

I haven’t totally decided – whether it’s enforced or just a matter of style, you will generally do this once at the beginning of the program (choosing on the compiler command line to do it e.g. the debug way or the release way), and it will affect all of your code.

Libraries will be packaged as Pepper3 source code, so the choices you make about the type of Int etc. will be reflected through the whole dependency tree. Cool, huh?

This is inspired by Python.

Can you group variables together into structs or similar?

Yes – it will be especially easy to make “value types”, and lots of default methods will be provided that you will be strongly encouraged to use – e.g. copy and move operations. This is inspired by Elm.

Why are variables immutable by default but mutable with a special syntax? It’s the opposite of C++ const, but why that way around?

This is one of the “nudges” – immutable stuff is much easier to think about, and makes parallel stuff easier, and allows optimisations and so on, so turning it on by default means you have to choose to take the bad path, and are inclined to take the virtuous one. This is inspired by Haskell and Rust.

Why only allow assignments, function calls and operators? I’m sure you have good reasons.

To be as simple as possible, so you only have those things to learn and the rest can be understood by just reading the code. This is inspired by Python.

I wrote more of my (earlier) thoughts in this 4-post series, which is better thought through: Goodness in Programming Languages

Examples of Pepper3 code

Series: Examples, Questions

I have restarted my effort to make a new programming language that fits the way I like things. I haven’t pushed any code yet, but I have made a lot of progress in my head on understanding what I want.

Here are some random examples that might get across some of the ways I am thinking:


// You code using general types like "Int" but you can set what
// they really are in the code (usually at the beginning), so
// if you plan to use native ints in the production code, it's
// a good idea to use:
Int = CheckedNativeInt;
// while in dev, since it will crash at runtime if you overflow.

// Then, in production when you're sure you have no errors,
// switch to an unchecked one:
Int = NativeInt;

// But, if you prefer correctness over efficiency, you can use
// a mathematical integer that never overflows:
Int = ArbitrarySizeInt;


// Variables are immutable by default, so:
Int x = 4;
x = 3;      // this is a compile error


// But this is OK
Mutable(Int) y = 6;
y = y + x;

// Notice that you can call functions that return types that you
// then use, like Mutable(Int) here.

// Generally, code can run at either compile time or run time.
// Code to do with types has to run at compile time.
// By default, other code runs at run time, but you can force
// it to run early if you want to.


// A main method looks like this - you get hold of e.g. stdout through
// a World instance - I try to avoid any global functions like print, or
// global variables like sys.stdout.

Auto main =
{:(World world)->Int
    //...
};

// (Although note that Int, String etc. actually are global variables,
// which is a bit annoying)

// I wish the main method were simpler-looking.  The only saving grace
// is that for simple examples you don't need a main method -
// Pepper3 just calculates the expression you provide in your file and
// prints it out.


// Expressions in curly brackets are lambda functions, so:

{3};

// is a function taking no arguments, returning 3, and:

{:(Int x)
    x * 2
};

// is a function that doubles a value.

// Obviously, we can tie functions to names:

Auto dbl =
    {:(Int x)
        x * 2
    };

// Meaning we can call dbl like this:
dbl(4);

// Auto is a magic word to say "use type inference", so
// this is equivalent to the above:

fn([Int]->Int) dbl =
    {:(Int x)
        x * 2
    };


// Because {} makes an anon function, things like "for" can be
// functions instead of keywords.

for(range(3), {:(Int x)
    world.stdout.println(to(String)(x));
});


// As far as possible, Pepper3 will only contain assignment statements:
String s = "xx";

// and expressions containing function calls and operators:
dbl(3) + 6;


// This means we can make our own constructs like a different type of
// for loop, which would need a new keyword in some languages:

Auto parallel_for = import(multiprocess.parallel_for);
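
// (Hypothetical example - parallel_for doesn't exist yet, but because
// it's just a function, calling it looks exactly like the "for" above:)
parallel_for(range(3), {:(Int x)
    world.stdout.println(to(String)(x));
});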

TECH(K)NOW Day workshop on “Writing a programming language”

My OpenMarket colleagues and I ran a workshop at TECH(K)NOW Day on how to write your own programming language.

A big thank you to my colleagues from OpenMarket who volunteered to help: Rowan, Jenny, Zach, James and Elliot.

An extra thank you to Zach and Elliot for their impromptu help on the information desk for attendees.

Hopefully the attendees enjoyed it and learned a bit.

You can find the workshop slides, the full code, info about another simple language called Cell, and lots more links at github.com/andybalaam/videos-write-your-own-language. You can also read my blog at artificialworlds.net/blog and follow me on twitter @andybalaam.

Thanks to OpenMarket for supporting us in running this workshop!

Adding a concurrency limit to Python’s asyncio.as_completed

Series: asyncio basics, large numbers in parallel, parallel HTTP requests, adding to stdlib

In the previous post I demonstrated how the limited_as_completed method allows us to run a very large number of tasks using concurrency, while capping the number of concurrent tasks at a sensible limit to ensure we don’t exhaust resources like memory or operating system file handles.
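
As a reminder, limited_as_completed is a generator that keeps at most limit futures in flight, starting another task each time one finishes, and spotting finished tasks by polling them in a loop. Roughly this shape (see the previous post for the full version):

import asyncio
from itertools import islice

def limited_as_completed(coros, limit):
    # `coros` must be an iterator (e.g. a generator) so that we only
    # ever create futures for the coroutines we have actually started.
    futures = [asyncio.ensure_future(c) for c in islice(coros, 0, limit)]

    async def first_to_finish():
        while True:
            # Busy-loop: yield to the scheduler, then poll every future.
            await asyncio.sleep(0)
            for f in futures:
                if f.done():
                    futures.remove(f)
                    try:
                        # Top up with the next coroutine, if any are left.
                        futures.append(asyncio.ensure_future(next(coros)))
                    except StopIteration:
                        pass
                    return f.result()

    while len(futures) > 0:
        yield first_to_finish()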

I think this could be a useful addition to the Python standard library, so I have been working on a modification to the current asyncio.as_completed method. My work so far is here: limited-as_completed.

I ran similar tests to the ones I ran for the last blog post with this code to validate that the modified standard library version achieves the same goals as before.

I used an identical copy of timed from the previous post, and updated versions of the other files, because I was using a much newer version of aiohttp along with the custom-built Python I was running.
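
(timed just measures a command’s peak memory use and elapsed wall-clock time; if you want to reproduce this, a hypothetical Python stand-in reporting the same two numbers might look like this:)

#!/usr/bin/env python3

# Hypothetical stand-in for the `timed` script from the previous post:
# run a command, then report its peak memory use and wall-clock time.
import resource
import subprocess
import sys
import time

start = time.time()
subprocess.run(sys.argv[1:])
elapsed = time.time() - start
# On Linux, ru_maxrss is the peak resident set size in KB.
peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
print("Memory usage: %dKB\tTime: %.2f seconds" % (peak_kb, elapsed))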

server looked like:

#!/usr/bin/env python3

from aiohttp import web
import asyncio
import random

async def handle(request):
    # Respond after a random delay of 0-3 seconds, to simulate a
    # service that takes a variable amount of time to answer.
    await asyncio.sleep(random.randint(0, 3))
    return web.Response(text="Hello, World!")

app = web.Application()
app.router.add_get('/{name}', handle)

web.run_app(app)

client-async-sem needed me to add a custom TCPConnector to avoid a new limit on the number of concurrent connections that was added to aiohttp in version 2.0. I also needed to move the ClientSession usage inside a coroutine to avoid a warning:

#!/usr/bin/env python3

from aiohttp import ClientSession, TCPConnector
import asyncio
import sys

limit = 1000

async def fetch(url, session):
    async with session.get(url) as response:
        return await response.read()

async def bound_fetch(sem, url, session):
    # Getter function with semaphore.
    async with sem:
        await fetch(url, session)

async def run(r):
    with ClientSession(connector=TCPConnector(limit=limit)) as session:
        url = "http://localhost:8080/{}"
        tasks = []
        # create instance of Semaphore
        sem = asyncio.Semaphore(limit)
        for i in range(r):
            # pass Semaphore and session to every GET request
            task = asyncio.ensure_future(
                bound_fetch(sem, url.format(i), session))
            tasks.append(task)
        responses = asyncio.gather(*tasks)
        await responses

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.ensure_future(run(int(sys.argv[1]))))

My new code that uses my proposed extension to as_completed looked like:

#!/usr/bin/env python3

from aiohttp import ClientSession, TCPConnector
import asyncio
import sys

async def fetch(url, session):
    async with session.get(url) as response:
        return await response.read()

limit = 1000

async def print_when_done():
    with ClientSession(connector=TCPConnector(limit=limit)) as session:
        tasks = (fetch(url.format(i), session) for i in range(r))
        for res in asyncio.as_completed(tasks, limit=limit):
            await res

r = int(sys.argv[1])
url = "http://localhost:8080/{}"
loop = asyncio.get_event_loop()
loop.run_until_complete(print_when_done())
loop.close()

and with these, we get similar behaviour to the previous post:

$ ./timed ./client-async-sem 10000
Memory usage: 73640KB	Time: 19.18 seconds
$ ./timed ./client-async-stdlib 10000
Memory usage: 49332KB	Time: 18.97 seconds

So the implementation I plan to submit to the Python standard library appears to work well. In fact, I think it is better than the one I presented in the previous post, because it uses done callbacks (registered with add_done_callback) to notice when futures have completed, which reduces the busy-looping we were doing to check for and yield finished tasks.
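
To illustrate the difference, here is a hedged sketch (not the actual patch) of the two ways of noticing that a future has finished:

import asyncio

# Polling, as in the previous post's limited_as_completed: repeatedly
# yield to the scheduler and check every future by hand.
async def first_done_by_polling(futures):
    while True:
        await asyncio.sleep(0)
        for f in futures:
            if f.done():
                return f

# Callbacks: ask each future to put itself on a queue when it finishes,
# then just wait on the queue - no wasted wake-ups.
async def first_done_by_callback(futures):
    queue = asyncio.Queue()
    for f in futures:
        f.add_done_callback(queue.put_nowait)
    return await queue.get()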

The Python issue is bpo-30782 and the pull request is #2424.

Note: at first glance, it looks like the aiohttp.ClientSession’s limit on the number of connections (introduced in version 1.0 and then updated in version 2.0) gives us what we want without any of this extra code, but in fact it only limits the number of connections, not the number of futures we are creating, so it has the same problem of unbounded memory use as the semaphore-based implementation.

Basic ideas of Python 3 asyncio concurrency

Series: asyncio basics, large numbers in parallel, parallel HTTP requests, adding to stdlib

Python 3’s asyncio module and the async and await keywords combine to allow us to do cooperative concurrent programming, where a code path voluntarily yields control to a scheduler, trusting that it will get control back when some resource has become available (or just when the scheduler feels like it). This way of programming can be very confusing, and has been popularised by Twisted in the Python world, and nodejs (among others) in other worlds.

I have been trying to get my head around the basic ideas as they surface in Python 3’s model. Below are some definitions and explanations that have been useful to me as I tried to grasp how it all works.

Futures and coroutines are both things that you can wait for.

You can make a coroutine by declaring it with async def:

import asyncio
async def mycoro(number):
    print("Starting %d" % number)
    await asyncio.sleep(1)
    print("Finishing %d" % number)
    return str(number)

Almost always, a coroutine will await something such as some blocking IO. (Above we just sleep for a second.) When we await, we actually yield control to the scheduler so it can do other work and wake us up later, when something interesting has happened.

You can make a future out of a coroutine, but often you don’t need to. Bear in mind that if you do want to make a future, you should use ensure_future, but this actually schedules what you pass to it to run – it doesn’t just create a future:

myfuture1 = asyncio.ensure_future(mycoro(1))
# Schedules mycoro to run on the event loop!

But, to get its result, you must wait for it – it is only scheduled in the background:

# Assume mycoro is defined as above
myfuture1 = asyncio.ensure_future(mycoro(1))
# We end the program without waiting for the future to finish

So the above fails like this:

$ python3 ./python-async.py
Task was destroyed but it is pending!
task: <Task pending coro=<mycoro() running at ./python-async:10>>
sys:1: RuntimeWarning: coroutine 'mycoro' was never awaited

The right way to block waiting for a future outside of a coroutine is to ask the event loop to do it:

# Keep on assuming mycoro is defined as above for all the examples
myfuture1 = asyncio.ensure_future(mycoro(1))
loop = asyncio.get_event_loop()
loop.run_until_complete(myfuture1)
loop.close()

Now this works properly (although we’re not yet getting any benefit from being asynchronous):

$ python3 python-async.py
Starting 1
Finishing 1

To run several things concurrently, we make a future that is the combination of several other futures. asyncio can make a future like that out of coroutines using asyncio.gather:

several_futures = asyncio.gather(
    mycoro(1), mycoro(2), mycoro(3))
loop = asyncio.get_event_loop()
print(loop.run_until_complete(several_futures))
loop.close()

The three coroutines all run at the same time, so this only takes about 1 second to run, even though we are running 3 tasks, each of which takes 1 second:

$ python3 python-async.py
Starting 3
Starting 1
Starting 2
Finishing 3
Finishing 1
Finishing 2
['1', '2', '3']

asyncio.gather won’t necessarily run your coroutines in order, but it will return a list of results in the same order as its input.

Notice also that run_until_complete returns the result of the future created by gather – a list of all the results from the individual coroutines.

To do the next bit we need to know how to call a coroutine from a coroutine. As we’ve already seen, just calling a coroutine in the normal Python way doesn’t run it, but gives you back a “coroutine object”. To actually run the code, we need to wait for it. When we want to block everything until we have a result, we can use something like run_until_complete but in an async context we want to yield control to the scheduler and let it give us back control when the coroutine has finished. We do that by using await:

import asyncio
async def f2():
    print("start f2")
    await asyncio.sleep(1)
    print("stop f2")
async def f1():
    print("start f1")
    await f2()
    print("stop f1")
loop = asyncio.get_event_loop()
loop.run_until_complete(f1())
loop.close()

This prints:

$ python3 python-async.py
start f1
start f2
stop f2
stop f1

Now we know how to call a coroutine from inside a coroutine, we can continue.

We have seen that asyncio.gather takes in some futures/coroutines and returns a future that collects their results (in order).

If, instead, you want to get results as soon as they are available, you need to write a second coroutine that deals with each result by looping through the results of asyncio.as_completed and awaiting each one.

# Keep on assuming mycoro is defined as at the top
async def print_when_done(tasks):
    for res in asyncio.as_completed(tasks):
        print("Result %s" % await res)
coros = [mycoro(1), mycoro(2), mycoro(3)]
loop = asyncio.get_event_loop()
loop.run_until_complete(print_when_done(coros))
loop.close()

This prints:

$ python3 python-async.py
Starting 1
Starting 3
Starting 2
Finishing 3
Result 3
Finishing 2
Result 2
Finishing 1
Result 1

Notice that task 3 finishes first and its result is printed, even though tasks 1 and 2 are still running.

asyncio.as_completed returns an iterable sequence of futures, each of which must be awaited, so it must run inside a coroutine, which must be waited for too.

asyncio.as_completed consumes its whole argument before yielding any results – every coroutine you pass is immediately wrapped in a future – so you can’t use it with a very large lazy sequence of items that won’t fit in memory.
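
For example, even passing a generator does not help (scale hypothetical):

# Keep on assuming mycoro is defined as at the top
coros = (mycoro(i) for i in range(1000000))

async def print_all():
    # as_completed wraps all 1,000,000 coroutines in futures before
    # yielding the first result, so they are all in memory at once.
    for res in asyncio.as_completed(coros):
        print("Result %s" % await res)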

Side note: if we want to work with very large lists, asyncio.wait won’t help us here – it also takes a list of futures and waits for all of them to complete (like gather), or, with other arguments, for one of them to complete or one of them to fail. It then returns two sets of futures: done and not-done. Each of these must be awaited to get their results, so:

asyncio.gather

# is roughly equivalent to:

async def mygather(*args):
    # Note: asyncio.wait returns (done, pending) sets, so unlike the
    # real gather this does not preserve the order of its inputs.
    ret = []
    for r in (await asyncio.wait(args))[0]:
        ret.append(await r)
    return ret

I am interested in running very large numbers of tasks with limited concurrency – see the next article for how I managed it.