In this series looking at features introduced by every version of Python 3, this is the first looking at Python 3.5. In it we examine one of the major new improvements in this release, new syntax for coroutines.
This is the 8th of the 29 articles that currently make up the “Python 3 Releases” series.
As I approach the halfway mark of going through all the currently released Python versions in this series of articles, I find myself reflecting briefly on those that I’ve done so far. I must confess it’s taking longer than I expected, probably because I’m going into way more detail than I originally intended. However, it’s been a useful exercise to drill in and add some code snippets — it’s quite easy to misunderstand a verbal explanation, but when you see that alongside some examples it gives you much more confidence in your understanding.
That said, I may try to fight my completionist tendencies and be a little more selective as we go. I’m also going to break things up into more parts, to make things a bit less of a slog!
Anyway, this time we’re up to Python 3.5, released 13 September 2015, bang on schedule 18 months after the release of 3.4. This will be the last release I’ll look at that’s no longer receiving security fixes at time of writing, so we’re getting increasingly close to versions that might still be being used in the real world at this point.
This release has a lot of big changes, the first of which is the subject of this entire article: coroutines.
Alright, I know I said I was talking about coroutines, and asyncio event loops aren’t exclusively about coroutines. They are intrinsically tied to how coroutines are executed, however, so I wanted to briefly talk about them first.
The event loop is the central scheduling construct in asyncio. It provides several features, not all of which are required for coroutines, but listed here for completeness:

- Scheduling callbacks to be invoked as soon as possible, after a delay, or at a specific time.
- Performing network I/O using transports and protocols.
- Running subprocesses and handling OS signals.
- Delegating blocking functions to a thread or process pool.
Since we’re talking about coroutines here, we won’t go into the I/O features of event loops, but they’re a pretty natural fit to use with coroutines.
The event loop doesn’t have a separate thread of execution controlling it, so it’s “paused” until you call into it to run it. Once you do so, it cycles around its own loop, executing callbacks and the like until it’s stopped. At this point control returns back to wherever you called the run function from.
There are two calls which run the loop:

- run_forever() runs the loop until its stop() method is called. Note that most of the event loop classes are not thread-safe, so if you want to stop the loop from another thread you should probably use the call_soon_threadsafe() method to execute stop() in the context of the event loop thread.
- run_until_complete() is the same as run_forever() except that it continues until the Future passed as a parameter is done, then it exits.
The typical pattern is for the main thread to set up some initial callbacks or transports, add them to the event loop and then let it run. The code executed by these can schedule more callbacks within the loop as needed. For example, a callback can reschedule itself after another delay to create a repeating timer.
To schedule a call to be run ASAP in the loop, there’s a
call_soon() method, and to schedule calls for the future there are call_later() and call_at(), whose semantics you can probably work out from the names.
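To make these scheduling calls concrete, here’s a small self-contained sketch showing the difference between the relative and absolute variants; the timings are chosen arbitrarily:

```python
import asyncio

loop = asyncio.new_event_loop()

# call_later() takes a relative delay in seconds, while call_at()
# takes an absolute time measured on the event loop's own monotonic
# clock, which is exposed as loop.time().
loop.call_later(0.1, print, "via call_later")
loop.call_at(loop.time() + 0.2, print, "via call_at")

# Stop the loop once both callbacks have had a chance to fire.
loop.call_later(0.3, loop.stop)
loop.run_forever()
loop.close()
```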
The other thing that’s worth knowing about event loops is that even though there can be multiple ones, there’s generally a default one for the current thread. Strictly speaking there’s a policy framework which can define context differently than per thread, but that’s getting a bit too far into the details for this overview. For now, suffice to say that you can call
asyncio.get_event_loop() to obtain the current event loop for the calling thread. If you’re writing a library which wants to use its own event loop in isolation from the rest of the code in an application for some reason, you’ll probably want to peruse the documentation further on this topic.
There are some more details to event loops, some of which are platform-specific, but those are the key points required to understand the implementation of coroutines. Here’s a simple bit of code to illustrate some of these calls.
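As a sketch of the repeating-timer pattern described above, a countdown implemented purely with callbacks might look something like this; the countdown function and its parameters are my own invention:

```python
import asyncio

def countdown(loop, n):
    # Print the current value, then reschedule ourselves after a
    # one-second delay until we reach zero, at which point we stop
    # the loop and control returns to run_forever()'s caller.
    print(n)
    if n > 0:
        loop.call_later(1, countdown, loop, n - 1)
    else:
        loop.stop()

loop = asyncio.new_event_loop()   # create a fresh event loop
# Schedule the first call for as soon as the loop starts running.
loop.call_soon(countdown, loop, 3)
loop.run_forever()   # prints 3, 2, 1, 0 one second apart
loop.close()
```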
All of this discussion so far has been in terms of callback functions, which are handy but not nearly as convenient as coroutines for many tasks. In the rest of this article we’ll see how coroutines work and interact with the event loop.
Release 3.5 added new syntax and library routines for declaring coroutines, defined in PEP 492, so I’m going to do a review of where things stand as of this release, which includes some features I glossed over in previous articles in expectation of this one. I did already look at this in a little detail in a previous article so you may like to look at that as well. It’s an important change, however, and it’s been 5 years since I wrote that so going over it again probably won’t be the worst idea — hopefully somewhere between that discussion and this one, most people should find enough to make things clear.
Do also bear in mind the point I raised in my first article on 3.4, however, that the coroutines situation evolved rapidly over the next few Python releases, so anything included in this article doesn’t necessarily still represent best practice; upcoming articles may change some of these details.
Let’s start by defining a few terms. As of this release, a coroutine is a new type of object. Their relationship to regular functions is much the same as a generator’s relationship to functions: they look superficially quite similar, but the way you use them is quite different.
A coroutine function is defined with
async def name(...): syntax. Just as a generator function returns a generator object, a coroutine function returns a coroutine object when called.
Within a coroutine function the
await keyword can be used to suspend execution until a particular result is ready. As you might have guessed, there’s a new awaitable protocol which defines which objects can be awaited, and it mostly boils down to that object providing an
__await__() method1. This method should return an iterator, and so every
await is essentially waiting for some
yield down the call chain. So far, so Pythonic.
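To make the awaitable protocol concrete, here’s a hedged sketch of a hand-rolled awaitable class. You’d rarely write one of these yourself, since coroutines and futures already implement the protocol, and the Doubler name is purely illustrative:

```python
import asyncio

class Doubler:
    # A hand-rolled awaitable: __await__() must return an iterator.
    # Here it returns a generator which never actually yields, so
    # awaiting it completes immediately with a value.
    def __init__(self, value):
        self.value = value

    def __await__(self):
        def gen():
            return self.value * 2
            yield  # never reached, but makes gen() a generator
        return gen()

async def use_it():
    return await Doubler(21)

loop = asyncio.new_event_loop()
print(loop.run_until_complete(use_it()))  # 42
loop.close()
```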
The part which may be a little less intuitive is that coroutines don’t even start until they’re awaited. This makes more sense if you consider them to be green threads — unlike real threads the operating system isn’t going to schedule them for you, so you need to context switch yourself. You do that by relinquishing control to them — i.e. awaiting them. Or if it’s easier you can think of them as generators, the semantics are quite similar.
Here’s a brief snippet to illustrate that it’s the order of
await not the order of definition which matters. Don’t worry too much about the stuff to actually execute it at the end, we’ll discuss that later on.
>>> import asyncio
>>>
>>> async def echo(arg):
...     print(arg)
...
>>> async def test():
...     first = echo("one")
...     second = echo("two")
...     await second
...     await first
...
>>> loop = asyncio.get_event_loop()
>>> loop.run_until_complete(test())
two
one
>>> loop.close()
From this simple example, you can see that coroutines can wait for each other, which transfers execution into the one waited. You can see the similarities with generators here, where
await is very similar to the
yield from construct added in Python 3.3 for generator delegation. This similarity is not a coincidence as coroutines in Python have their origins as a “fork” of generators, and have slowly been evolving more independent syntax. When the awaited coroutine returns a value or raises an exception, control is returned to the awaiting coroutine as with generators yielding a value.
I also wanted to include a coroutines version of the earlier countdown code using callbacks, which you can find below:
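One way such a coroutine-based countdown might be structured is shown below; the names and the cancellation scheme here are my own assumptions, not a definitive implementation:

```python
import asyncio

async def countdown(name, n, delay, others):
    # Count down from n, sleeping between steps; asyncio.sleep()
    # suspends this coroutine and lets the event loop run others.
    try:
        for i in range(n, -1, -1):
            print(name, i)
            await asyncio.sleep(delay)
    except asyncio.CancelledError:
        # Cancellation is delivered as an exception, giving the
        # coroutine a chance to run some closing logic first.
        print(name, "cancelled")
        raise
    # The first countdown to finish cancels any still running.
    for task in others:
        if not task.done():
            task.cancel()

async def main():
    fast_others, slow_others = [], []
    fast = asyncio.ensure_future(countdown("fast", 3, 0.05, fast_others))
    slow = asyncio.ensure_future(countdown("slow", 9, 0.1, slow_others))
    # Give each coroutine a reference to the other task to cancel.
    fast_others.append(slow)
    slow_others.append(fast)
    await asyncio.gather(fast, slow, return_exceptions=True)

loop = asyncio.new_event_loop()
loop.run_until_complete(main())
loop.close()
```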
You can see that it’s not really much shorter than the callbacks version, but I think the logic is more readable since it’s written as sequential loops. The extra complexity is mostly because
countdown() has to take care of cleanly cancelling any other tasks executing, but unlike the callbacks version this has the advantage that the coroutines could catch
asyncio.CancelledError to implement some closing logic.
So far, there doesn’t seem to be a lot of flexibility here — this logic is essentially synchronous and could be achieved easily with standard functions. The flexibility starts to come as we realise that
await can be used not just with other coroutines, but also to wait for futures.
The asyncio module provides a Future class for use with coroutines which is almost, but not quite, compatible with concurrent.futures.Future. The main differences are:

- result() and exception() don’t accept a timeout parameter, and raise an InvalidStateError exception if the future isn’t done yet.
- Callbacks registered with add_done_callback() aren’t invoked immediately, but are scheduled with the event loop’s call_soon() instead.
- It isn’t compatible with the wait() and as_completed() functions provided by the concurrent.futures module; the equivalents in asyncio should be used instead.
As usual, the
Future is just a standard interface for holding an eventual result, which allows the result to be queried (once ready) and allows callbacks to be registered to be called when the future is done. There are also methods
set_result(), to set the result value and mark the future as “done”, and
cancel(), to mark the future as “cancelled”. So, now we have a future that we can
await on within a coroutine.
>>> import asyncio
>>>
>>> fut = asyncio.Future()
>>> fut.done()
False
>>> fut.cancelled()
False
>>> fut.result()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/andy/.pyenv/versions/3.5.10/lib/python3.5/asyncio/futures.py", line 288, in result
    raise InvalidStateError('Result is not ready.')
asyncio.futures.InvalidStateError: Result is not ready.
>>> fut.set_result("Message for you, sir!")
>>> fut.done()
True
>>> fut.cancelled()
False
>>> fut.result()
'Message for you, sir!'
Here we can see an example of a coroutine awaiting on a bare
Future. We create an event loop, which is the
asyncio core scheduling object, and we use its
call_later() method to set the result of the
Future after a delay. If you’re trying to replicate this yourself, note that the time starts ticking the moment you execute
call_later(), as you can tell from the timestamps that I printed.
>>> import asyncio
>>> import time
>>>
>>> async def pass_on_result(awaitable):
...     return await awaitable
...
>>> loop = asyncio.get_event_loop()
>>> fut = loop.create_future()
>>> print(time.time()); loop.call_later(20, fut.set_result, "my result")
1618217675.213969
<TimerHandle when=1187862.730957315 Future.set_result('my result')>
>>> loop.run_until_complete(pass_on_result(fut)); print(time.time())
'my result'
1618217695.219507
A few notes here. Firstly, the use of
create_future() on the loop is the preferred way to create futures, as this allows an event loop to provide an alternative implementation if appropriate. Secondly, the use of the
call_later() method of the event loop here to set a
Future result is very similar to the approach
asyncio.sleep() uses to delay for a specified interval. Thirdly, the reason why the
when parameter of the
TimerHandle is different to the time I’m printing is because I’m using
time.time() to get epoch time, whereas the event loop’s clock is based on
time.monotonic(), which has no relationship with the actual time of day.
Since the only requirement here is that the parameter to
pass_on_result() is awaitable, it doesn’t have to be a
Future. It can be another coroutine, as demonstrated by nesting the calls to the coroutine in the snippet below. The innermost call to
pass_on_result() is waiting on the
Future, but the other two are waiting on the nested coroutines.
>>> fut = loop.create_future()
>>> loop.call_later(20, fut.set_result, "my other result")
<TimerHandle when=1499600.826035453 Future.set_result('my other result')>
>>> loop.run_until_complete(pass_on_result(pass_on_result(pass_on_result(fut))))
'my other result'
We can also have multiple coroutines waiting on the same future, and they’ll all be woken up once it’s ready — although of course because these are coroutines rather than real threads they’ll actually get run sequentially. The example below uses
asyncio.gather() to run two coroutines on the same future. This function waits on multiple awaitables in parallel, and also automatically wraps bare coroutines in tasks — we’ll discuss tasks in a moment. The return value is a list of all the results thus obtained.
>>> fut = loop.create_future()
>>> loop.call_later(20, fut.set_result, "yet another result")
<TimerHandle when=1499632.721701156 Future.set_result('yet another result')>
>>> loop.run_until_complete(asyncio.gather(pass_on_result(fut), pass_on_result(fut)))
['yet another result', 'yet another result']
If you prefer a lower level of access, you can also register one or more callback functions directly with a
Future, which will be invoked when it completes, whether with a result, an error or a cancellation. One important detail, however, is that this callback is not invoked immediately when the result is set — with
asyncio.Future the callback is instead scheduled with the event loop.
The code below illustrates this — take a read through to see if you can figure out what’s going on, and I’ll add a few points of interest after it. This code uses the
run_until_complete() method on the event loop, which continually executes the loop until the specified awaitable is done3.
>>> def callback_factory(name):
...     def callback(fut):
...         try:
...             print("Callback " + name + " result:", fut.result())
...         except Exception as exc:
...             print("Callback " + name + " no result")
...     return callback
...
>>> async def delayed_cancel(fut):
...     await asyncio.sleep(5)
...     fut.cancel()
...     print("Coroutine exiting")
...
>>> fut1 = loop.create_future()
>>> fut2 = loop.create_future()
>>> fut1.add_done_callback(callback_factory("one"))
>>> fut2.add_done_callback(callback_factory("two"))
>>> fut1.set_result("finished fut1")
>>> del fut1
>>> loop.run_until_complete(delayed_cancel(fut2))
Callback one result: finished fut1
Coroutine exiting
Callback two no result
So the first point to note here is that the callback on
fut1 isn’t invoked as soon as the result is set; it’s invoked later. The second interesting point is that even though we
del fut1, the callback still remains queued and the result can still be recovered — this makes sense, because the queued callback must keep some sort of reference to
fut1 which prevents it from being destroyed until the callback is finished. This is worth remembering because if you have some callback scheduled but for some reason you don’t enter the event loop, it’ll remain queued and may pop up unexpectedly later on in a completely unrelated piece of code that enters the loop.
The third note here is that the
fut2 callback is invoked when
fut2 is cancelled, but of course there’s no result to collect so calling
result() yields a
CancelledError exception, which we catch in the callback in this case. The fourth and final interesting point I’ll note here is that the
fut2 callback was invoked at all. Bear in mind the semantics of
run_until_complete() are that as soon as the specified awaitable is done, the event loop returns control to the calling code. Also bear in mind the callbacks are invoked by the event loop, and we can see that because
Coroutine exiting is printed after cancelling
fut2 but before the callback is invoked. So once
delayed_cancel() has completed,
run_until_complete() isn’t returning immediately; it’s continuing to invoke pending callbacks before finally returning control.
A final quick note on exceptions before we move on from
asyncio.Future. In real-world code you’ll most likely want to put some error handling into place in your coroutines. If you do this, bear in mind that cancelling a
Future is implemented using exceptions and you might well catch that by mistake, since in Python 3.5 the
CancelledError exception is still a subclass of Exception.
>>> async def catch_errors(awaitable):
...     try:
...         return await awaitable
...     except Exception as exc:
...         print("We caught " + repr(exc))
...         return None
...
>>> fut = loop.create_future()
>>> loop.call_later(20, fut.cancel)
<TimerHandle when=1189566.356254259 Future.cancel()>
>>> loop.run_until_complete(catch_errors(fut))
We caught CancelledError()
This sort of thing is why Pokémon exception handling4 is often discouraged, but personally I think it’s a useful pattern in certain circumstances where you don’t know upfront what the code you’re executing will be doing. It’s a matter of taste. If you do end up using broad exception specifications like this, however, you need to be aware of this issue to make sure you don’t catch things you don’t intend.
So now we’ve got a good understanding of the simple interface of
asyncio.Future, and also we’ve played with coroutines and seen their similarities with generators. The last piece of the puzzle is how these things are all scheduled by the event loop. This is also where we give a callback5 to my earlier promises6 to explain about wrapping coroutines in tasks.
You’ve probably noticed that a coroutine is really just a generator under the hood. The special behaviour that’s layered on top is the way that it’s scheduled as it becomes blocked and unblocked. This is the glue which transfers control between the coroutines which aren’t blocked on other awaitables, and this glue is provided by the task.
To understand this, let’s see what happens with a bare coroutine and no event loop. I’ll use the simple
pass_on_result() definition from earlier:
>>> async def pass_on_result(awaitable):
...     return await awaitable
...
>>> fut = asyncio.Future()
>>> coro = pass_on_result(fut)
>>> result = coro.send(None)
>>> result._asyncio_future_blocking
True
>>> result is fut
True
We execute the coroutine using the
send() method that was added to generators in Python 2.5. It’s important to note that native coroutines are a distinct concept, however, and don’t implement __next__(), so
send() is the only way to resume execution within them.
When the coroutine blocks what comes back is the awaitable on which it’s blocked, in this case
fut. You’ll also see that the special
_asyncio_future_blocking attribute is set, but don’t worry too much about it — I think it’s mostly used to flag that this class meets the
Future interface, and also to detect some common pitfalls more gracefully. In the Python source code, all of the code paths where it has an unexpected value appear to lead to some exception being thrown.
At this point we have a native coroutine blocked on a future. Let’s give the future a value, and call send() again to resume it.
>>> fut.set_result("my result")
>>> try:
...     coro.send(None)
... except StopIteration as exc:
...     print("Got result", exc.value)
...
Got result my result
At this point the coroutine is unblocked and completes, which raises
StopIteration just as a generator exiting would. One difference is that the coroutine has an actual return value, as opposed to generators which only
yield values. As it happens, the
value attribute of the
StopIteration instance is used to hold the return value.
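You can see the same mechanism with a plain generator that uses a return statement:

```python
# A plain generator shows the same mechanism: a `return` value is
# carried on the StopIteration exception that ends the iteration.
def gen():
    yield 1
    return "done"

g = gen()
print(next(g))        # 1
try:
    next(g)
except StopIteration as exc:
    result = exc.value
print(result)         # done
```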
So you can already see that working this by hand is pretty clunky, and
asyncio.Task exists to wrap this up into a cleaner interface. What it does is intercept the values emitted by the coroutine and schedule appropriate handlers in an
asyncio event loop to handle them. In the case that the coroutine is blocked on a future, the task adds a callback to that future so that it can reschedule the coroutine when the future is done. In the case that the coroutine does a bare
yield, which is effectively yielding to other coroutines whilst remaining runnable, then it uses the
call_soon() method on the event loop to reschedule itself to be invoked again immediately once anything else currently pending has been processed.
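As a rough illustration of this glue, here’s a heavily simplified driver in the spirit of what Task does; naive_drive is my own name, and the real implementation handles cancellation, exceptions and much more:

```python
import asyncio

def naive_drive(coro, loop):
    # Step the coroutine; if it blocks on a future, arrange to be
    # called back when that future completes, otherwise reschedule
    # ourselves immediately with call_soon().
    try:
        blocked_on = coro.send(None)
    except StopIteration as exc:
        print("Coroutine returned", exc.value)
        loop.stop()
        return
    if blocked_on is not None:
        blocked_on.add_done_callback(lambda fut: naive_drive(coro, loop))
    else:
        loop.call_soon(naive_drive, coro, loop)

async def pass_on_result(awaitable):
    return await awaitable

loop = asyncio.new_event_loop()
fut = loop.create_future()
loop.call_later(0.1, fut.set_result, "my result")
loop.call_soon(naive_drive, pass_on_result(fut), loop)
loop.run_forever()
loop.close()
```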
I won’t go into every detail of its handling, as it has a lot of tricky logic to handle lots of edge cases, such as futures being cancelled or coroutines raising exceptions. One other point to note about
Task is that it’s a subclass of
Future, so a coroutine wrapped in a task can be treated like any other Future.
OK, so we know how to use coroutines now, and the way that control reverts back to the event loop when we
await on things. That does leave a rather big question, however — what happens when we perform a blocking operation that doesn’t have support for coroutines? Since we’re cooperatively multitasking, this would prevent all other coroutines and callbacks from being invoked.
The simple answer to this is, of course, “so don’t do that, then”. However, Python’s use of special methods (i.e.
__xxx__()) can make it easy to do this without being aware of it. Fortunately Python 3.5 also includes some additional changes to support various specific cases where this might happen.
One fairly obvious such case is with context managers, which often do things like opening files or acquiring locks which can block. Fortunately this release introduces some new syntax to make context managers coroutine-friendly.
You may well already know this, but the context manager protocol involves calling an
__enter__() method at the start of the
with block and an
__exit__() method when the block is exited, either through normal flow or via an exception. We can’t mess around with these methods because it would be likely to break all sorts of existing code. But what we can do is add some new methods.
The two new methods are __aenter__() and
__aexit__(), which are directly analogous to their non-asynchronous counterparts. There’s also a new syntax
async with ... which calls into these versions instead of the original pair. The fact that there are two new methods means that existing context managers can support both use-cases simultaneously, which avoids having to declare a whole set of parallel async versions of all the context managers that already exist (but they do need changes to add the new methods, of course).
These methods are expected to return an awaitable object to do the actual work. This allows the event loop to keep running until the context manager is ready, and again if the exit method also needs to block.
The new syntax can be considered equivalent to the code below.
# This new syntax:
async with context_manager as ctx:
    ...

# ... is essentially equivalent to this:
ctx = await context_manager.__aenter__()
try:
    ...
except Exception as exc:
    if not await context_manager.__aexit__(type(exc), exc, exc.__traceback__):
        raise exc
else:
    await context_manager.__aexit__(None, None, None)
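For a concrete illustration, here’s a minimal sketch of a class supporting both the synchronous and asynchronous protocols at once; the class name and the sleeps standing in for blocking work are my own assumptions:

```python
import asyncio

class AsyncResource:
    # A toy context manager supporting both "with" and "async with":
    # the asynchronous methods delegate to the synchronous ones
    # after a (pretend) blocking delay.
    def __enter__(self):
        print("acquired")
        return self

    def __exit__(self, exc_type, exc, tb):
        print("released")
        return False   # don't suppress exceptions

    async def __aenter__(self):
        await asyncio.sleep(0.1)   # stand-in for a blocking acquire
        return self.__enter__()

    async def __aexit__(self, exc_type, exc, tb):
        await asyncio.sleep(0.1)   # stand-in for a blocking release
        return self.__exit__(exc_type, exc, tb)

async def demo():
    async with AsyncResource():
        print("inside the block")

loop = asyncio.new_event_loop()
loop.run_until_complete(demo())
loop.close()
```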
Iterators are another obvious case where you can end up calling arbitrary code behind the scenes, so it’s not surprising that there’s also new syntax for them.
There’s a new syntax
async for ... which causes the new methods __aiter__() and
__anext__() to be used instead of the traditional __iter__() and
__next__() respectively. The
__aiter__() method returns an async iterator which supports an
__anext__() method, and this is expected to return an awaitable in the same way as in the context manager case. Instead of
StopIteration there’s a new
StopAsyncIteration exception for termination.
Once again, there are some simple code equivalencies for
async for expressed below.
# This new syntax:
async for item in async_iterable:
    # Body of loop goes here
    await my_function(item)

# ... is essentially equivalent to this:
iterator = async_iterable.__aiter__()
while True:
    try:
        item = await iterator.__anext__()
    except StopAsyncIteration:
        break
    # Body of loop goes here
    await my_function(item)
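Here’s a minimal sketch of an asynchronous iterator implementing these methods; the Countdown class is purely illustrative:

```python
import asyncio

class Countdown:
    # A toy asynchronous iterator counting down to zero, pausing
    # between each value to let other coroutines run.
    def __init__(self, n, delay=0.05):
        self.n = n
        self.delay = delay

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.n < 0:
            raise StopAsyncIteration
        await asyncio.sleep(self.delay)
        value = self.n
        self.n -= 1
        return value

async def demo():
    results = []
    async for item in Countdown(3):
        results.append(item)
    return results

loop = asyncio.new_event_loop()
print(loop.run_until_complete(demo()))   # [3, 2, 1, 0]
loop.close()
```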
One point that’s worth mentioning here is that with regular iterators, the
next() built-in function is a convenient shorthand for calling the
__next__() method of the iterator. It’s hopefully clear from the explanation above that this won’t work with asynchronous iterators. Furthermore, there’s no equivalent
anext() built-in; you just have to call __anext__() yourself. Not a big deal, just an asymmetry to be aware of.
Also I should mention that implementing asynchronous iterators gets a bit easier in Python 3.6 due to the implementation of asynchronous generators — but you’re going to have to wait for a future article to talk about those.
Coroutines can be confusing at first to a lot of programmers. To those who’ve only ever written code that executes sequentially in a single thread, the notion of continually deferring execution back to some central loop can seem odd, and to keep code readable it requires them to modularise code in ways that may not be natural at first. To those who’ve done a lot of multithreaded code, the lack of mutexes and other synchronisation primitives may seem unnecessarily dangerous, as they’ve been bitten by having to painfully debug those concurrency issues that you only ever seem to find under heavy load in production. Indeed, if you try to use multithreaded primitives in asynchronous code you’re more likely to introduce issues like deadlocks due to the cooperative nature of the multitasking.
However, once you become comfortable with asynchronous programming, I feel it can have a lot of advantages in encouraging well-structured modular code, and avoiding many of the risks inherent in true concurrency, as well as the overheads of OS-aware threads. Of course, it can also be layered on top of threads and/or processes for optimal performance in significantly I/O-bound workloads.
There are definitely some pitfalls to keep in mind, which could take some getting used to. When you’re doing multithreaded coding, one of the main challenges is being aware of which libraries and functions are thread-safe (i.e. can be safely called from multiple threads concurrently). In a similar way, those using coroutines need to be aware if libraries they’re using are async-aware. For example, it’s not uncommon to perform I/O operations in the
__init__() method of a class, but that’s going to be a problem if someone uses that class in a coroutine. The best bet is to get into the habit of not doing any potentially blocking operations in
__init__() and instead make better use of context managers — this is going to be a pain to retrofit into some older code, however.
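One common workaround is to keep __init__() trivial and move the blocking setup into an awaitable factory method; this is only a sketch, and the Connection and create() names are my own:

```python
import asyncio

class Connection:
    def __init__(self, host):
        # Keep construction cheap: no I/O here.
        self.host = host
        self.ready = False

    async def _connect(self):
        # Stand-in for some real asynchronous connection setup.
        await asyncio.sleep(0.05)
        self.ready = True

    @classmethod
    async def create(cls, host):
        # Awaitable factory: callers write `await Connection.create(...)`
        # inside a coroutine, so the setup can't block the event loop.
        conn = cls(host)
        await conn._connect()
        return conn

async def demo():
    conn = await Connection.create("example.com")
    return conn.ready

loop = asyncio.new_event_loop()
print(loop.run_until_complete(demo()))  # True
loop.close()
```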
Hopefully this article has given you a good flavour of coroutines and how they work in Python. If you’re looking for a slower-paced and more detailed overview of
asyncio in general, check out the excellent import asyncio series of videos by Łukasz Langa from the EdgeDB team. It may start a little slowly for some, but I’d recommend at least checking out the fourth video which goes into the details of coroutines, including some great discussion of the generator heritage of coroutines in Python.
That’s it for this article, next time I’ll be going through another of the significant changes in Python 3.5, type hinting. Plus, if you like coroutines then do check back for when I’ve got the articles on Python 3.6-3.8 posted, as things have still got some way left to evolve on the coroutines front.
I say mostly because there are a few other cases where objects are awaitable without providing
__await__(). One example is a generator which has been decorated with asyncio.coroutine. ↩
This is suboptimal for similar reasons why you don’t generally want to catch
StopIteration, and thankfully this was resolved in Python 3.8 by making
CancelledError a subclass of BaseException. ↩
As an aside, this is another case where a bare coroutine passed is automatically wrapped in a task, a topic of which I’ll frustratingly once again defer discussion. ↩