All Go: Closures, Deferred Actions and Generics

15 Jun 2023 at 1:10PM in Software

In this, my third article on the Go programming language, I’m going to look at some additional features. Specifically, we’ll look at closures and deferred actions, as well as Go’s support for generics.

This is the 3rd of the 6 articles that currently make up the “All Go” series.


We’re going to kick off this article looking at closures, one of the functional programming features in Go. The language doesn’t have a great deal of functional features — for example, it lacks builtin functions map(), reduce() and filter(), and it doesn’t seem to define a standard interface for iterators1 except for builtin types like maps, slices, strings, arrays and channels. However, it does define closures which are useful in a number of circumstances.

As well as closures, we’re going to look at deferred actions, which are Go’s equivalent of try…finally blocks. Finally, we’ll end big by looking at generics.

Closures

To kick off with, then, we’ll look at closures. If you’re familiar with closures in Rust, or lambdas in Python or C++, these will probably seem fairly familiar. For those not familiar, a closure is like an anonymous function that’s defined in the context of another function. As well as its own local variables, it has access to any variables from the enclosing function which were in scope when it was defined — this continues to be true even if the anonymous function is returned out of the enclosing function and called from elsewhere.

This is probably best explained with an example.

package main

import "fmt"

func ticketMachine(startingTicket int) func() int {
    return func() int {
        startingTicket += 1
        return startingTicket
    }
}

func main() {
    tickets1 := ticketMachine(100)
    tickets2 := ticketMachine(200)
    for i := 0; i < 10; i++ {
        fmt.Printf("Tickets: %d %d\n",
                   tickets1(), tickets2())
    }
}

Here the ticketMachine() function lists its return type as func() int — that is, it returns a function which itself takes no parameters but returns an int. The ticketMachine() function also takes a startingTicket parameter, which is also an int.

It defines a closure which first increments startingTicket within the enclosing scope, and then returns its new value. Having defined this closure, the ticketMachine() function then returns this function to the calling code.

In the example above, two closures are obtained by two calls to the function, stored in tickets1 and tickets2. We then call these 10 times each, printing the values. The output we get is this.

Output
Tickets: 101 201
Tickets: 102 202
Tickets: 103 203
Tickets: 104 204
Tickets: 105 205
Tickets: 106 206
Tickets: 107 207
Tickets: 108 208
Tickets: 109 209
Tickets: 110 210

This clearly demonstrates that the two closures have their own isolated copies of the ticketMachine() scope, because the increments don’t interfere with each other.

Note that the reason they have separate copies is because each was returned from a separate invocation of ticketMachine(), so they refer to two different instances of the enclosing scope. If a single function were to define and return two different closures, they would refer to the same instance of the enclosing scope, and thus they would share any variables. If they modify them the effects of this would impact both closures.

These semantics seem intuitive to me, but might be surprising to some who’d expect every closure to have its own copy of the enclosing scope regardless. In general, I would say it’s quite rare for a closure to need to modify values in its enclosing scope, but someone might choose to do that instead of declaring a new local variable to use in a loop, say, so it’s worth being aware of this behaviour.

Those familiar with closures in Rust or lambdas in C++ will note that there’s no definition of the variables which are captured from the enclosing scope — as in Python, any and all variables in the enclosing scope are candidates for capture and the ones captured are just determined by what the body of the function references.

Of course, if you wanted all the closures to share a variable you could pass in a pointer instead — but this should be self-evident if you understand the semantics of pointers.

Closure Implementation Details

This section is just for interest — I don’t think anyone needs to understand this to use the language! So don’t let this level of detail concern you if you’re not interested in it.

The discussion above should likely be sufficient to understand and use closures within the language. But I like to drill a little deeper sometimes, and I was curious as to how the compiler represents these values — it presumably can’t be quite as simple as a pointer to a function because the enclosing scope has to be represented somehow.

On a brief Google I couldn’t find any authoritative information on how these things work — to be honest I might not have searched very hard, it’s more fun to find out for myself — so I amended the code above to try to gather some info. I assumed that closures would be some sort of pointer, so I printed the values of tickets1 and tickets2.

fmt.Printf("tickets1=%p\n", tickets1)
fmt.Printf("tickets2=%p\n", tickets2)

I also used the runtime library to obtain the program counter (PC) within the closure, and print it.

return func() int {
    pc, _, _, _ := runtime.Caller(0)
    startingTicket += 1
    fmt.Printf("ticket=%d ticket_addr=%p pc=0x%x\n",
               startingTicket, &startingTicket, pc)
    return startingTicket
}
Output
tickets1=0x108f3a0
tickets2=0x108f3a0
ticket=101 ticket_addr=0xc00001e0d8 pc=0x108f3ea
ticket=201 ticket_addr=0xc00001e0e0 pc=0x108f3ea
Tickets: 101 201
...

Now this was unexpected to me — I rather assumed that the closure would be a pointer to some stack frame structure representing the instance and containing a pointer to the actual code. However, tickets1 and tickets2 had identical pointer values, which was really strange. The fact that the PC values didn’t match made more sense — the underlying code for the function is the same, it just needs to be called with a different stack pointer (SP). The fact that the addresses of the startingTicket variable in each case were different was also consistent with what I expected.

I wanted to get a better idea what was going on, so I pasted this into godbolt.org (and a shout out to Matt Godbolt for building such a great tool!) and took a look. Here’s the x86-64 assembler where it calls the two function objects in the loop.

MOVQ    main.tickets1+72(SP), DX
MOVQ    (DX), AX
CALL    AX
MOVQ    AX, main..autotmp_4+56(SP)
MOVQ    main.tickets2+64(SP), DX
MOVQ    (DX), AX
CALL    AX

We can see here that each closure value is loaded into DX and then dereferenced, with the result passed into CALL — so the value is evidently a pointer to something whose first word is itself a function pointer. I’m presuming, therefore, that DX is left holding the closure value so the called code can locate its captured variables — and the underlying function will be stored at the PC values we saw earlier.

When I tried to drill into how this happens, I got bogged down in my massively rusty comprehension of Intel assembler, so I’m still a little hazy on how the two functions are disambiguated. The identical %p output does have a plausible explanation, though: the documentation for reflect.Value.Pointer notes that for func values the result is “an underlying code pointer, but not necessarily enough to identify a single function uniquely” — so the formatting is showing the shared code address rather than distinguishing the two closure values.

Clearly from the output, and just plain common sense, the two invocations end up using different stack frames — or at least different values of startingTicket. But exactly how that occurs is a little bit of a mystery to me right now — I may come back and revisit this once I’m familiar with the rest of the language, but for now let’s move on.

Deferred Actions

Next up we’ll take a look at deferring actions. This serves a similar purpose to RAII in C++, ensure in Ruby and try…finally in Java and Python.

The mechanism is simply the defer keyword followed by either a function or method call — you can’t just use any old expression. The function value and its arguments are evaluated straight away, but the call doesn’t happen until the surrounding function exits. Note that this is the enclosing function, not the enclosing block. At this point all deferred functions are invoked in reverse order of deferral. One thing to note is that these functions only exist for their side-effects — any return values are discarded.

You can see the semantics of this illustrated in the code below.

package main

import "fmt"

func myFunction() int {
    ret := 100
    fmt.Printf("Starting myFunction ret=%d\n", ret)
    one := func(x int) {
        fmt.Printf("one: %d\n", x)
        ret += 1
    }
    two := func(x int) {
        fmt.Printf("two: %d\n", x)
        ret *= 2
    }
    f := one
    for i := 0; i < 3; i++ {
        defer f(i)
    }
    f = two
    defer f(123)
    fmt.Printf("Leaving myFunction ret=%d\n", ret)
    return ret
}

func main() {
    x := myFunction()
    fmt.Printf("Got return %d\n", x)
}

In myFunction() we’re defining two closures, one() and two(), and then setting f to point at one(). We invoke it three times in a loop, passing 0, 1 and 2 to the calls, and then we update f to point to two() and invoke it with 123.

Output
Starting myFunction ret=100
Leaving myFunction ret=100
two: 123
one: 2
one: 1
one: 0
Got return 100

Hopefully this is what you expected. The deferred functions wait until the enclosing function exits, after which they’re invoked in reverse order of their deferral, and after they’ve all completed then control returns to the calling function.

The fact that we get one() invoked demonstrates that the value of f was evaluated at time of deferral, not time of invocation. The fact that 100 was returned from myFunction() demonstrates that the modification of the local ret variable within the closures hasn’t affected the return value.

However, if you modified myFunction() to return &ret, and made corresponding modifications to the return type and calling code, then you would find the updates to the value of ret in the deferred functions would take effect on the value returned to the caller.

func myFunction() *int {              // changed return type
    return &ret                       // now returns a pointer
fmt.Printf("Got return %d\n", *x)     // caller dereferences
Output
Starting myFunction ret=100
Leaving myFunction ret=100
two: 123
one: 2
one: 1
one: 0
Got return 203

There is actually another, probably better, way that deferred functions can modify return values which is by naming the return values. For example, if we had declared myFunction() like this:

func myFunction() (result int) {

… then the deferred function could access and modify the value returned through the name result:

one := func(x int) {
    fmt.Printf("one: %d\n", x)
    result += 1
}

If you do want to modify the return value then I’m guessing this is the more idiomatic way to do it.

One final point that’s worth noting — if the function expression passed to defer evaluates to nil then you’ll get a panic when the function is invoked, not at the defer statement. As a result, make sure your function values are valid!

That’s about all there is to say about defer. It’s a fairly simple mechanism, but I can see it being a flexible way to perform all sorts of useful cleanup. Certainly nicer than using goto2. The restriction on being called only at function exit may be a little annoying in some cases, as it may force you to refactor in ways that don’t match what you’d like to do, but I think this is a fairly minor concern. Since Go lacks exceptions, I’d imagine the need for mechanisms like this is more limited than in exception-heavy languages like Python anyway.

Generics

Last, but certainly not least we come to the topic of generics. I suspect most readers will have an idea what these are, but essentially it’s a way to write template code without specifying types, and then instantiating the template with specific types later. If you’d like to know more, you can read through the Generic Programming page on Wikipedia.

Generic Functions

First off let’s take a look at a very simple example of a generic function.

package main

import "fmt"

func findIndex[T comparable](needle T, haystack []T) int {
    for i, value := range haystack {
        if value == needle {
            return i
        }
    }
    return -1
}

func main() {
    fmt.Printf("%d\n", findIndex(22, []int{0, 11, 22, 33}))
    fmt.Printf("%d\n", findIndex("one", []string{"one", "two"}))
    fmt.Printf("%d\n", findIndex(6.9, []float64{6.1, 6.2, 6.3}))
}

The findIndex() function accepts a value of indeterminate type, and a slice of such values, and returns the offset of the first such value within the slice, or -1 if the value wasn’t found.

The type parameter [T comparable] specifies that the function is templated on a type T, and must also specify a constraint on that type. In this case we use comparable, a predeclared identifier which matches any type for which == and != are defined — not just builtins, but also structs and arrays whose fields or elements are themselves comparable.

Type Constraints

The constraint is mandatory, although you can specify any to indicate that any type should be accepted. Regardless of which types you actually use, however, the compiler will validate that you haven’t performed any operations which aren’t defined on every type matching the constraint. As a result, using any is going to significantly limit the operations you can perform on the values.

You can use an interface to specify type constraints beyond the few defined as builtins. To illustrate this, let’s say we wanted to build a max() function which takes two parameters and returns whichever of them is larger. You’d want to write something like this.

func max[T ordered](a, b T) T {
    if b > a {
        return b
    } else {
        return a
    }
}

We can’t use comparable here because that only ensures the type supports equality and inequality operators — other comparisons, like the greater-than we’re using here, are not included. So we’ve used a constraint called ordered to represent all ordered types.

Sadly there’s no ordered builtin3, so we’ll need to define it ourselves. Here’s a first attempt.

type ordered interface {
    int | uint | string | float32 | float64
}

You can see we’ve used a new form of the interface definition, which allows types to be specified. We’ve provided a list of alternations which means that this interface matches any of those types. You can also provide methods, as with basic interfaces, and these act as a further constraint on the types that match — i.e. only those types which provide all the methods specified.

There’s a couple of things worth noting at this point. First is that these interfaces, with type limitations, can only be used for the purposes of constraints in generics — we couldn’t define a function which takes a value of type ordered as a normal parameter, for example.

The second point is that we’ve included all the types on the same line with the bitwise or operator |, which can be thought of as “union” in this context. This means that the interface matches any of these types. If we had listed types on separate lines they would be treated as “intersection”.

type pointless interface {
    int
    string
}

The interface above will only match types which are both int and string — since no type fulfills these criteria, the interface will never match.

Anyway, going back to our max() example, this works fine for some simple examples.

// Huzzah, our code works!
fmt.Printf("%d\n", max(1, 2))
fmt.Printf("%s\n", max("two", "one"))

However, we forgot that Go doesn’t do any implicit type conversion, so if we use subtly different types then we’re plumb out of luck.

// Drat, foiled again -- this doesn't work.
fmt.Printf("%d\n", max(int64(10), int64(20)))

We could manually and tediously enumerate every single builtin type, but actually someone has already done that — in the golang.org/x/exp/constraints library there are a variety of useful constraints, one of which is Ordered. My earlier statement about there not being an ordered builtin was technically correct since Ordered isn’t a builtin, it’s in a library, and it’s Ordered instead of ordered. I know, I know, but it gave me an excuse to run through using interfaces as type constraints.

Generic Types

As well as generic functions we can declare generic types as well. Since we’ve already seen how this works for functions, hopefully you’ll recognise how the syntax is being used in the example below. This code creates a simple doubly-linked list4, and adds some methods to it.

package main

import (
    "fmt"
    "strings"
)

type List[T any] struct {
    value T
    prev *List[T]
    next *List[T]
}

Here we’re declaring the generic type, with a type parameter on the type declaration the same as we used on the generic function. We indicate that this is generic across all other types, and then we declare the structure to hold a member of that type value, along with two pointers prev and next which are pointers to the type that we’re declaring.

One important thing to remember here is that List[T] is the type we’re defining here — you can’t refer to it just with List, the type parameter must always be included when you refer to the type5.

func newList[T any](item T) *List[T] {
    return &List[T]{item, nil, nil}
}

A function to create a new instance of List[T] and return a pointer to it. Notice how this function must be generic across T, as with the earlier generic functions we wrote. Contrast this with the methods defined below.

func (node *List[T]) insertAfter(item T) *List[T] {
    node.next = &List[T]{item, node, node.next}
    return node.next
}

func (node *List[T]) unlink() {
    if node.prev != nil {
        node.prev.next = node.next
    }
    if node.next != nil {
        node.next.prev = node.prev
    }
    node.next, node.prev = nil, nil
}

We just define a couple of methods in this example, to keep things short — insertAfter() creates a new List[T] from the value provided and inserts it into the list immediately following the receiver, and unlink() removes the receiver from any list in which it’s located. Notice how the receiver type is specified with its type parameter, but the method names don’t have their own type parameters.

func (node *List[T]) String() string {
    var ret strings.Builder
    fmt.Fprintf(&ret, "%v", node.value)
    for node = node.next; node != nil; node = node.next {
        fmt.Fprintf(&ret, " => %v", node.value)
    }
    return ret.String()
}

An implementation of String() on *List[T] so we can print the list more easily. Once again this uses the handy strings.Builder object, which is reminiscent of std::ostringstream in C++ and io.StringIO objects in Python. Note that because this is defined with a pointer receiver, the method is only in the method set of *List[T], not List[T] — that’s OK in this case because we use pointers exclusively to refer to these things.

func main() {
    first := newList(11)
    first.insertAfter(44)
    second := first.insertAfter(22)
    third := second.insertAfter(33)

    fmt.Printf("%s\n", first)

    third.unlink()

    fmt.Printf("%s\n", first)
}

Finally a main() function to run some basic illustrations of the functionality. Hopefully this is all fairly self-explanatory at this point.

I must say I found this all fairly straightforward — once you get the hang of the fact that you don’t need to make methods generic as long as the receiver is generic, it all falls out in a fairly straightforward way.

One detail that may not be obvious is that type arguments are never inferred when instantiating generic types — they must be specified explicitly, even though the compiler can infer them when calling generic functions, as we saw with newList() above. The other interesting point is that methods can never be generic, regardless of whether they are defined on generic types or not — you’ll get an error if you try to apply a type parameter to any method directly. This appears to have been an explicit design decision by the language maintainers.

Conclusions

That’s it for this episode of my whirlwind exploration of Go features. Overall, I didn’t find there were too many surprises here, if I’m honest — most of the features worked more or less as I’d expect. This is generally a reassuring property in any programming language, although of course this may well be determined more by similarities to the languages with which I’m already familiar than any inherent intuitiveness of the language. But I’m taking it as a positive sign.

Closures

Closures are fairly straightforward, particularly since the lack of any need to declare captures from the enclosing scope makes them feel quite close to closures in Python, a language with which I’m very familiar. I suppose some people might argue that not declaring captures makes dependencies less explicit to developers, and I think there’s some validity in that — but Go’s use of garbage collection and support for freely returning locals from within a function probably make this quite safe in practice.

If you’re returning closures from a function then the only behaviour I can imagine being slightly surprising is that two closures defined in the same invocation of the returning function will share the state from the enclosing scope — I suppose developers might expect each would have its own copy, as when two closures come from separate invocations of the returning function. However, it seems to me fairly intuitive that this would be the case, so it’s pure speculation on my part whether anyone would actually be confused in this way.

Deferred Actions

Deferred actions also seem fairly simple. Once again the main concern in many languages would be the scope of variables, but with Go’s “just let the compiler worry about it” approach this is quite straightforward. It feels like a more flexible version of try…finally, and will be particularly helpful for unwinding multiple allocations where an operation might fail halfway through the list, and you only want to unwind the ones that you actually got to.

My only slight gripe is that deferred functions are always invoked on function exit, not when the enclosing scope exits. Although this makes them more convenient in some ways, it arguably makes them less flexible in others. You can use context managers in Python or RAII in C++ to perform more activities than just freeing resources, such as creating log entries when a scope exits or committing a transaction. You can use defer to do the same in Go, but only if you’re willing to restructure your functions to accommodate this. It’s certainly not a glaring issue, and only time will tell if this is an annoyance in practice or just one of those theoretical concerns that only comes up because I’m deliberately trying to weigh pros and cons.

Generics

Finally, generics are also pretty straightforward and easy to use — the syntax is fairly clear from a few examples, and I didn’t find myself having to decipher any weird compiler errors when I was getting it working. I would say that the system has some limitations compared to Rust’s flexible traits system, but I think it’s fair to say that a decent proportion of application programmers won’t be making significant use of them in Go.

Briefly, there are two advantages I can see in Rust traits over Go’s generics. The first is that Rust guarantees static dispatch for generics — static dispatch being where the precise function to call can be determined at compile time, whereas in dynamic dispatch it’s determined at runtime. Go’s compiler instead shares generated code between types with the same memory layout (“GC shape”), passing a runtime dictionary to disambiguate them — closer to dynamic dispatch, which harms performance a little when using these functions, partly by introducing an additional pipeline-busting jump instruction, and partly by reducing locality of reference.

The second advantage of Rust traits is that they allow very strong type safety guarantees, and also specialisations to be developed. I suspect that in Go there will be cases when writing very generic code that you need to use interface{} or similar hacks, and then do runtime switching based on specific types to achieve this. In general it’s always better to catch bugs at compile-time, and having to write your own runtime code to switch on types creates dependencies between otherwise unrelated pieces of code which can be painful.

Once again, these may well not impact a large proportion of application developers just trying to get things done. But I can imagine it being a headache for people trying to write things like the standard library which are intended to work across user-defined types as well as builtins. The payoff is that, for the many cases they do support well, Go generics are simple and straightforward to use.

Closing Thoughts

That wraps it up for this article. As usual, I hope this has been interesting and/or useful. I think next time it’s high time I look into Goroutines and channels, since these are a major feature of the language, and I feel like I have enough familiarity with the other semantics that I can try to do something useful with it. Have a great day!


  1. However, it seems that this is a known deficiency, and GitHub issues like this and this one seem to be trying to hammer out the details of how this might work. That said, I wouldn’t be holding my breath for its addition to the language any time soon. 

  2. Go does offer goto, which is unfortunate as it’s one of the constructions most easily abused to the detriment of readable code. It’s not that I think it’s inherently bad — it’s just that the cases in which it’s the most elegant solution are, in my opinion, vanishingly small compared with its potential for misuse. 

  3. This is debatably a little white lie as we’ll see in a minute, but stick with me for a moment. 

  4. I probably don’t need to mention this, but this list implementation has some pretty poor design flaws, so don’t go using this in production code — but it makes a reasonably concise example. 

  5. Although you can use a different symbol for the type parameter in another context — for example, List[U] or List[mytype] would also work, although I suggest keeping things consistent across your code will make life easier. 

The next article in the “All Go” series is All Go: Concurrency
Fri 23 Jun, 2023