Planet Haskell

January 23, 2021

Sandy Maguire

I Built a Terrible Roomba

I spent the last few months making a roomba. I mean, sorta. I made a robot that wanders around and ostensibly vacuums as it goes. But it gets stuck on things all the time, and the floor isn’t particularly clean by the time it runs out of batteries. Nevertheless, it was an experience1 and I learned a lot, so I thought it would be a good idea to document. Consider the following a devlog, one that might be interesting for the historical record — but which I don’t suggest following to build your own roomba.

This all started when I googled for “fun robot projects,” and came across this terrible video. It goes through building a little roomba life-hack-style, which is to say, in absolutely no detail but vaguely making the whole procedure look simple. They use some cardboard and a cup-noodle container, so I thought “how hard could this be?” What I didn’t notice at the time was how the thing they build is not the thing they demo, nor how there are crucial components that are completely disconnected. As it happens, this video is completely smoke and mirrors.

I’d picked up some motors and a motor controller for a previous, ill-fated project. And I had an old sonar module lying around in order to do the range finding. So I figured all that was left was a cup-noodles container and a fan, and I’d be on my merry way.

Stupidly, I decided I wasn’t going to make this thing out of cardboard. I was going to design and 3D print the chassis and all of the fiddly little bits it would require. My 3D printer’s bed is 22cm square, which meant anything I wanted to make was going to need to be smaller than that. My first prototype was shaped like a flying disc, with a hole in the middle to put the noodles container, but I learned the hard way that there simply wasn’t enough floor-space on the disc to fit all of the necessary electronics. Back to the drawing board.

I farted around with the base plate design for like a month, trying things. In the meantime, I picked up some CPU fans, assuming that all fans are made equally. This is not true — CPU fans are good at moving air, but not good at, uh… pressurizing air? or something? The idea is that a CPU fan won’t force air somewhere it doesn’t want to go, for example, into a vacuum bag. For that you want a blower fan, but I spent $50 buying the wrong sorts of fans on amazon before I figured this out.

Fans are measured in CFM, which is some sort of non-standardized measurement of “cubic feet per minute.” How much more imperial can you get? Non-standardized here means that all fan manufacturers have a different procedure for measuring the CFM, so you can’t just compare numbers. That would be too easy.

It took many weeks of not having my roomba suck enough before I realized that fans move a constant volume of air, not a constant mass. The difference is that, unless you have really good intake to your fan, it’ll just make your vacuum chamber really low pressure, and not actually translate into sucking hard at the nozzle. I sorta worked around this by just mounting the fan directly above the vacuum bag, which had a small cut-out to pull debris through. Pipes seem to be anathema to small fans that you’re trying to use as vacuum pumps.

I tried using some agitators to improve my suction by getting things moving. My first attempt was via a gear train that I didn’t realize was 10RPM — way too damn slow to get anything moving. I didn’t feel like waiting around for another amazon shipment, so I just tried running my 12V 2000RPM DC motors at 3V. It sorta worked, but the handmade brushes I built dissolved themselves by still spinning too fast. Since it didn’t seem to improve the suction by much, I ended up scrapping this idea.

While trying to prototype something together with alligator clamps, I accidentally shorted my battery and caused no-small amount of smoke and did some unintentional welding. Thankfully it didn’t explode! I was doing stupid, unsafe things with the battery, but I learned the wrong lesson from this — that I should properly solder all of my connections, even when prototyping. What I should have learned instead was to make a really safe breakout connector for my battery, and then play fast and loose with crimps and connectors from there. But I didn’t yet know about crimps and connectors, so I just hand-soldered everything. It took forever and my productivity went asymptotically towards zero. Especially because I didn’t yet know what I was making, so there was a lot of soldering and desoldering and resoldering.

To make things worse, I kept 3D printing half-figured out chassis — each one of which took like nine hours to print. Inevitably some part wouldn’t fit, or the suction would be off, or some other problem would arise. Cardboard next time, my dude.

Oh, and did I mention that I don’t know how to connect physical things, so I just ended up hot-glueing everything together? Like, everything.

One day I was hanging out on IRC, describing my project when Julia Longtin said “oh my god STOP. You’re going to burn down your house!” She had correctly noticed that I hadn’t put a battery management system in front of my battery. If you’re a hobbyist like I am, you might not know that LiPo batteries have a bad habit of catching on fire when you charge them after letting their voltage drop too low. A BMS board watches the voltage on the battery and cuts the circuit before it gets dangerously low. When testing this thing (after the BMS was installed), it turned off quite often, so I’m pretty sure Julia saved me a ton in fire insurance claims. Thanks Julia!

The roomba’s only sensor is a sonar module that shoots sound waves and listens to hear when they come back. It’s essentially echo-location, like what bats have. Unfortunately for me, we also have the expression “blind as a bat,” which pretty adequately describes the robot. Sonar is a neat idea in theory, but in practice it only reliably works up to about a foot in front, and cloth-covered things like sofas muffle it. When added to the fact that DC motors offer no feedback if they’re stalled, it meant my roomba couldn’t detect if it were moving down a long hallway or stuck trying to drive into the couch. These are two scenarios you really want different behaviors for.
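
For reference, the ranging math itself is the easy part: sound travels at roughly 343 m/s, and the echo covers the distance twice, out and back. A sketch of the conversion (the function name is invented, not from my firmware):

```haskell
-- Distance to an obstacle from a sonar round-trip echo time, assuming
-- sound travels at roughly 343 m/s (34300 cm/s) in room-temperature air.
-- The pulse travels out and back, hence the division by two.
distanceCm :: Double -> Double
distanceCm echoMicros = (echoMicros / 1e6) * 34300 / 2
```

The hard part, as I learned, is everything this formula assumes: that the echo actually came back, and from the thing you pointed at.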

But even worse, due to my design and the limitations of my 3D printer bed, I couldn’t figure out how to fit the wheels inside the robot and still get all of the electronics and vacuum supplies on-board. As a compromise, the rubber tires jut out about two centimeters. Which is just about ideal for getting caught on chair legs and errant cables and walls and stuff like that. So if he hit the wall at a 45 degree angle, he’d just get wedged there. And at 45 degrees, sonar just bounces off of walls and doesn’t return, so again, you don’t know you’re stuck.

What a piece of work.

The software on this thing is a big state machine with things like “drive forward” and “bounce off of driving straight into the wall” and “try relocate because you’ve been stuck in the couch for too long.” I expected the software to be the easiest part of this project, since I’m an ex-professional software engineer. But not so! Even after you discount the time I accidentally melted my Arduino — by… well, I’m not sure what, but definitely related to plugging it into the computer — the software didn’t go smoothly. Arduino has this annoying forced event loop where it calls loop() just as fast as it can, and you can push back with a delay(long milliseconds) function call. But it’s all in C++ (without the STL), so getting any real work done is annoying. And have you ever tried to write a proper state machine without algebraic data types? I walked away with the impression that I’m going to need to do a lot of work improving the software end of things before I do another serious project with Arduino.
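
For contrast, here’s roughly what such a state machine looks like with algebraic data types. The state and event names are invented for illustration; this isn’t the actual firmware:

```haskell
-- Hypothetical roomba states and sensor events, for illustration only.
data State = DriveForward | BounceOffWall | Relocate
  deriving (Eq, Show)

data Event = Clear | ObstacleAhead | StuckTooLong
  deriving (Eq, Show)

-- The entire machine is one total function over the sum types;
-- GHC's pattern-match checker keeps us honest about missed transitions.
step :: State -> Event -> State
step _             StuckTooLong  = Relocate
step DriveForward  ObstacleAhead = BounceOffWall
step BounceOffWall Clear         = DriveForward
step Relocate      Clear         = DriveForward
step s             _             = s
```

Compare that to encoding the same transitions with enums and switch statements in Arduino C++, where nothing warns you about the case you forgot.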

In short, I spent a few months accidentally setting stuff on fire, waiting too long for the wrong 3D shapes to print, and all I got was this stupid T-shirt. But I guess that’s what learning feels like. I’ve already bought a LIDAR module and some bumpers for mk 2, though, so maybe I’m just a glutton for punishment.

If you’re curious about actually building this thing, here’s all of the source materials. But please, do yourself a favor and don’t put yourself through the mental turmoil of trying to get it to work.

Building One For Yourself

Don’t. But if you do, you’ll need these parts:

Bill of Materials

3D Printed Parts

The whole thing is written in a Haskell DSL for 3D printing called ImplicitCAD, because of course it is. Have you met me? The source code is available here, where every definition prefixed with final_ needs to be printed.

Oh yeah, and even after being very careful to model the negative space necessary for the fan’s exhaust, I forgot to connect that to the body of the roomba, so I needed to cut the exhaust hole out with my soldering iron. The resulting smoke was pretty noxious, so I just tried to not breathe too hard.

Source Code

Here’s the code I wrote for this thing. It’s not beautiful, and shouldn’t be considered evidence of how I write real code that I care about. Sorry not sorry.


Roomba schematic

What’s labeled as the voltage source here should instead be the BMS. And no promises that the pins on the Arduino correspond exactly with what the source code actually does. I think it’s right, but the roomba might drive backwards.

  1. I’m hesitant to call it a good experience.↩︎

January 23, 2021 04:10 PM

January 22, 2021

Douglas M. Auclair (geophf)

January 2021 1HaskellADay Problems and Solutions

  • 2021-01-22: Today's #haskell problem: we inject countries of wineries extracted from @wikidata into the mix to ... simplify (?) things? ... wait? There're countries mismatched, too? How ... surprising. 🙄 Today's #haskell solution shows us that wine is "No Country" for old men. ... wait ... wut? 
  • 2021-01-21: Today's #haskell problem is to compare wineries from @wikidata to those in a @neo4j graph. Also. Did you know that there's a winery in Oregon named "Sweet Cheeks"? Now you do. You're welcome. Today's #haskell solution shows that 125 wineries match, more than 400 don't. Lots of aliasing work ahead of us. 

  • 2021-01-20: Today's #haskell problem asks you to parse wineries and their geo-locations from a JSON file. Simple problem; simple solution. Also: wine a bit. It helps. 

by geophf at January 22, 2021 09:03 PM

Alson Kemp

AWS ALB Lambda

Generally, the tooling around Amazon Web Services’ services is pretty good, but tooling for the Application Load Balancer <-> Lambda linkage has been a bit lacking (the linkage is a bit new). Also, we use TypeScript and the tooling didn’t support TS very well.

The AWS-Serverless-Express node module was migrated to Vendia’s Serverless-Express, and the new maintainer is making rapid progress fixing this up, but we were trying to get into production at the same time, so we went ahead and built a straightforward ALB<->Lambda adapter (heavily borrowing from Serverless-Express). See this GitHub Gist:

Usage is pretty simple: just convert an ALB Request Lambda Event to an Express Request; then convert the Express Response back to an ALB Response Lambda Event.

let app: any;
module.exports.handler = async (event: Lambda.ALBEvent, context: ALBEventRequestContext): Promise<Lambda.ALBResult> => {
  // Start a server (if one doesn't exist), then memoize it.
  if (!app) {
    app = await __setup();
  }

  const req = alb.fromALBEvent(event); // <-- From ALB to Express
  let resp = new alb.ServerlessResponse(req); // Need to set up a Response for use in the app handler

  return new Promise((resolve, reject) => {
    app.handle(req, resp); // Have Express handle the request and return the response.
    return alb.waitForStreamComplete(resp).then(() => {
      const albResp = alb.toALBResult(resp, binaryMimeTypes); // <-- From Express to ALB
      return resolve(albResp);
    }).catch((err: any) => {
      console.error(`HANDLER ERROR: ${err}`);
      return reject(err);
    });
  });
};
by alson at January 22, 2021 02:14 AM

Tweag I/O

Programming with contracts in Nickel

In a previous post, I gave a taste of Nickel, a configuration language we are developing at Tweag. One cool feature of Nickel is the ability to validate data and enforce program invariants using so-called contracts. In this post, I introduce the general concept of programming with contracts and illustrate it in Nickel.

Contracts are everywhere

You go to your favorite bakery and buy a croissant. Is there a contract binding you to the baker?

A long time ago, I was puzzled by this very first question of a law class exam. It looked really simple, yet I had absolutely no clue.


A contract should write down terms and conditions, and be signed by both parties. How could buying a croissant involve such a daunting liability?

Well, I have to confess that this exam didn’t go very well.

It turns out the sheer act of selling something implicitly and automatically establishes a legally binding contract between both parties (at least in France). For once, the programming world is not that different from the physical world: if I see a ConcurrentHashMap class in a Java library, given the context of Java’s naming conventions, I rightfully expect it to be a thread-safe implementation of a hashmap. This is a form of contract. If a programmer uses ConcurrentHashMap to name a class that implements a non-thread-safe linked list, they should probably be sent to court.

Contracts may take multiple forms. A contract can be explicit, such as in a formal specification, or implicit, as in the ConcurrentHashMap example. They can be enforced or not, such as a type signature in a statically typed language versus an invariant written as a comment in a dynamically typed language. Here are a few examples:

Contract               Explicitness                                Enforced
Static types           Implicit if inferred, explicit otherwise    Yes, at compile time
Dynamic types          Implicit                                    Yes, at run-time
Documentation          Explicit                                    No
Naming                 Implicit                                    No
assert() primitive     Explicit                                    Yes, at run-time
pre/post conditions    Explicit                                    Yes, at run-time or compile time

As often, explicit is better than implicit: it leaves no room for misunderstanding. Enforced is better than not, because I would rather be protected by a proper legal system in case of contract violation.

Programming with Contracts

Until now, I’ve been using the word contract in a wide sense. It turns out contracts also refer to a particular programming paradigm which embodies the general notion pretty well. Such contracts are explicit and enforced, following our terminology. They are most notably used in Racket. From now on, I shall use contract in this more specific sense.

To a first approximation, contracts are assertions. They check that a value satisfies some property at run-time. If the test passes, the execution can go on normally. Otherwise, an error is raised.

In Nickel, one can enforce a contract using the | operator:

let x = (1 + 1 | Num) in 2*x

Here, x is bound to a Num contract. When evaluating x, the following steps are performed:

  1. evaluate 1 + 1
  2. check that the result is a number
  3. if it is, return the expression unchanged. Otherwise, raise an error that halts the program.
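
These three steps are easy to mimic in a few lines of Haskell, modelling a flat contract as a function that either hands the value back unchanged or reports a violation. (Value and numContract are invented for illustration; this is not how Nickel is implemented.)

```haskell
-- A toy dynamic value, standing in for a Nickel runtime value.
data Value = VNum Double | VBool Bool
  deriving (Eq, Show)

-- Steps 2 and 3: return numbers unchanged, reject everything else.
numContract :: Value -> Either String Value
numContract v@(VNum _) = Right v
numContract v          = Left ("contract broken by a value: " ++ show v)
```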

Let’s see it in action:

$nickel <<< '1 + 1 | Num'
Done: Num(2.0)

$nickel <<< 'false | Num'
error: Blame error: contract broken by a value.
  ┌─ :1:1
1 │ Num
  │ --- expected type
  ┌─ <stdin>:1:9
1 │ false | Num
  │         ^^^ bound here

Contracts versus types

I’ve described contracts as assertions, but the above snippet suspiciously resembles a type annotation. How do contracts compare to types? First of all, contracts are checked at run-time, so they would correspond to dynamic typing rather than static typing. Secondly, contracts can check more than just the membership to a type:

let GreaterThan2 = fun label x =>
  if builtins.isNum x then
    if x > 2 then
      x
    else
      contracts.blame (contracts.tag "smaller or equals" label)
  else
    contracts.blame (contracts.tag "not a number" label)

(3 | #GreaterThan2) // Ok, evaluate to 3
(1 | #GreaterThan2) // Err, `smaller or equals`
("a" | #GreaterThan2) // Err, `not a number`

Here, we just built a custom contract. A custom contract is a function of two arguments:

  • the label label, carrying information for error reporting.
  • the value x to be tested.

If the value satisfies the condition, it is returned. Otherwise, a call to blame signals rejection with an optional error message attached via tag. When evaluating value | #Contract, the interpreter calls Contract with an appropriate label and value as arguments.

Such custom contracts can check arbitrary properties. Enforcing the property of being greater than two using static types is rather hard, requiring a fancy type system such as refinement types, while the role of dynamic types generally stops at distinguishing basic datatypes and functions.

Back to our first example 1 + 1 | Num, we could have written instead:

let MyNum = fun label x =>
  if builtins.isNum x then x else contracts.blame label in
(1 + 1 | #MyNum)

This is in fact pretty much what 1 + 1 | Num evaluates to. While a contract is not the same entity as a type, one can derive a contract from any type. Writing 1 + 1 | Num asks the interpreter to derive a contract from the type Num and to check 1 + 1 against it. This is just a convenient syntax to specify common contracts. The # character distinguishes contracts as types from contracts as functions (that is, custom contracts).

To sum up, contracts are just glorified assertions. Also, there is this incredibly convenient syntax that spares us a whole three characters by writing Num instead of #MyNum. So… is that all the fuss is about?

Function contracts

Until now, we have only considered what are called flat contracts, which operate on data. But Nickel is a functional programming language: so what about function contracts? They exist too!

let f | Str -> Num = fun x => if x == "a" then 0 else 1 in ...

Here again, we ask Nickel to derive a contract for us, from the type Str -> Num of functions sending strings to numbers. To find out how this contract could work, we must understand the defining property of a function of type Str -> Num that the contract should enforce.

A function of type Str -> Num has a duty: it must produce a number. But what if I call f on a boolean? That’s unfair, because the function also has a right: the argument must be a string. The full contract is thus: if you give me a string, I give you a number. If you give me something else, you broke the contract, so I can’t guarantee anything. Another way of viewing it is that the left side of the arrow represents preconditions on the input while the right side represents postconditions on the output.

More than flat contracts, function contracts show similarities with traditional legal contracts. We have two parties: the caller, f "b", and the callee, f. Both must meet conditions: the caller must provide a string while the callee must return a number.

In practice, inspecting the term f can tell us at most whether it is a function. This is because a function is inert, waiting for an argument before handing back a result. In consequence, the contract is doomed to fire only when f is applied to an argument, at which point it checks that:

  1. The argument satisfies the Str contract
  2. The return value satisfies the Num contract
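
To make the blame assignment concrete, here is a rough Haskell sketch of a function-contract wrapper. (funContract and Blame are invented names, not Nickel internals.)

```haskell
-- Which side of the contract broke it.
data Blame = Caller | Callee
  deriving (Eq, Show)

-- Wrap a function so its argument is checked when it is applied (the
-- precondition, blaming the caller) and its result is checked when
-- produced (the postcondition, blaming the function itself).
funContract :: (a -> Bool) -> (b -> Bool) -> (a -> b) -> a -> Either Blame b
funContract pre post f x
  | not (pre x)      = Left Caller
  | not (post (f x)) = Left Callee
  | otherwise        = Right (f x)
```

Note that nothing fires until the wrapped function is actually applied, exactly as described above.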

The interpreter performs additional bookkeeping to be able to correctly blame the offending code in case of a higher-order contract violation:

$nickel <<< 'let f | Str -> Num = fun x => if x == "a" then 0 else 1 in f "a"'
Done: Num(0.0)

$nickel <<< '... in f 0'
error: Blame error: contract broken by the caller.
  ┌─ :1:1
1 │ Str -> Num
  │ --- expected type of the argument provided by the caller
  ┌─ <stdin>:1:9
1 │ let f | Str -> Num = fun x => if x == "a" then 0 else 1 in f 0
  │         ^^^^^^^^^^ bound here

$nickel <<< 'let f | Str -> Num = fun x => x in f "a"'
error: Blame error: contract broken by a function.
  ┌─ :1:8
1 │ Str -> Num
  │        --- expected return type
  ┌─ <stdin>:1:9
1 │ let f | Str -> Num = fun x => x in f "a"
  │         ^^^^^^^^^^ bound here

These examples illustrate three possible situations:

  1. The contract is honored by both parties.
  2. The contract is broken by the caller, which provides a number instead of a string.
  3. The contract is broken by the function (callee), which rightfully got a string but returned a string instead of a number.

Combined with custom contracts, function contracts make it possible to succinctly express non-trivial invariants:

let f | #GreaterThan2 -> #GreaterThan2 = fun x => x + 1 in ..

A warning about laziness

Nickel is a lazy programming language. This means that expressions, including contracts, are evaluated only if they are needed. If you are experimenting with contracts and some checks buried inside lists or records do not seem to trigger, you can use the deepSeq operator to recursively force the evaluation of all subterms, including contracts:

let exp = ..YOUR CODE WITH CONTRACTS.. in builtins.deepSeq exp exp
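
Haskell programmers will recognize this behaviour, since Haskell is lazy in exactly the same way: an error buried in a list only fires once something forces the elements. A small illustration (throws is a helper written for this demo, not a library function):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- True if forcing the expression to weak head normal form raises an error.
throws :: a -> IO Bool
throws x = either (const True) (const False) <$> tryAny (evaluate x)
  where
    tryAny :: IO b -> IO (Either SomeException b)
    tryAny = try

-- throws (length [1, error "buried check"])  -- forces the spine only: False
-- throws (sum    [1, error "buried check"])  -- forces the elements:   True
```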

Conclusion

In this post, I introduced programming with contracts. Contracts offer a principled and ergonomic way of validating data and enforcing invariants with a good error reporting story. Contracts can express arbitrary properties that are hard to enforce statically, and they can handle higher-order functions.

Contracts also have a special relationship with static typing. While we compared them as competitors above, contracts and static types are actually complementary, reunited in the setting of gradual typing. Nickel has gradual types, which will be the subject of a coming post.

The examples here are illustrative, but we’ll see more specific and compelling usages of contracts in yet another coming post about Nickel’s meta-values, which, together with contracts, serve as a unified way to describe and validate configurations.

January 22, 2021 12:00 AM

January 21, 2021

Douglas M. Auclair (geophf)

January 2021 1HaskellADay 1Liners

  • Today, 2021/01/21, is:
  1. Can be written with only 3 digits. What other dates can be so written? Also:
  2. a day where the month and day are NOT an amalgamation of the year. But which dates are amalgamations?

by geophf at January 21, 2021 06:43 PM

Ken T Takusagawa

[opautqcc] deconstruct

on one hand, it would be nice if Haskell had syntactic sugar like "DECONSTRUCT Constructor" which had the meaning

\x -> case x of { Constructor y -> y }

.  then, we could avoid having to manually create deconstruction (unwrapping) functions, as is commonly done with field labels:

newtype Mytype = Constructor { unConstructor :: Innertype }

.  in the above example, we are having to repeat ourselves in "Constructor" then "unConstructor", violating the programming mantra Don't Repeat Yourself.  (often Mytype == Constructor, another form of repetition, but avoiding that is an issue for another day.)

on the other hand, using the LambdaCase LANGUAGE pragma, the DECONSTRUCT lambda function above can be written fairly succinctly without needing any additional syntactic sugar:

\case{Constructor y->y}

.  this lambda can then be used, for example, in function pipelines joined by ($), (.), (Data.Function.&), or (Control.Category.>>>).  There remains a little bit of Repeating Yourself in the pattern match variable y.
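
for example, specializing Innertype to Int for concreteness, the lambda drops straight into such a pipeline:

```haskell
{-# LANGUAGE LambdaCase #-}

import Data.Function ((&))

newtype Mytype = Constructor { unConstructor :: Int }

-- unwrap with \case instead of the unConstructor field accessor
double :: Mytype -> Int
double m = m & (\case Constructor y -> y) & (* 2)
```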

lenses also achieve wrapping and unwrapping, often using Template Haskell to avoid Repeating Yourself.

by Unknown at January 21, 2021 04:30 AM

Michael Snoyman

Stack Governance

A few months back I wrote about the Haskell Foundation and some of my plans with it. Earlier than that, I also spoke about my thoughts on transparency. Continuing on those topics, I want to talk today about Stack's governance.

While it takes many things to make a project successful, clarity around goals and governance is vital. Let me provide two concrete examples I'm involved in:

  • Yesod's goal is: provide a standard MVC web framework that leverages Haskell's type system to avoid large classes of bugs. Its governance is: there are a bunch of committers who can do things, and ultimately a (hopefully benevolent) dictator for life, me.
  • Stackage's goal is: test large sets of open source Haskell packages for compatibility, making it easy for library users to pull in compatible package sets, and easy for authors to maintain their packages. Its governance is: there's a group of curators who come to consensus on decisions.

With clarity on these two points, it's fairly straightforward for people to decide if they want to use a project, if they're willing to contribute to it, and how to try to effect change. And if these points haven't been clear to others: sorry, transparency is hard.

The Stack project should also have such clarity, and it's lacking right now. There is a channel for discussions, and there are decision making processes. But they are fairly relaxed. Plenty of people have asked to see this solidified, and I agree with that. It's time to do so, and I'm intentionally carving out some time and energy to make that happen.

Below is my initial proposal. I wouldn't call it a strawman, but I also wouldn't call it complete. Feedback is welcome. And as you'll see, Stack maintainers are welcome as well!

Stack goals

Stack's goals are:

  • Provide easy to use tooling for Haskell development
  • Provide complete support for at least the three primary development environments today: Linux, Mac, and Windows
  • Address the needs of industrial users, open source maintainers, and other individuals
  • Focus on the curated package set use case
  • Prioritize reproducible build plans

These goals are modifiable over time, but large-scale veering away should be avoided. Any significant changes should involve significant public discussion and a public vote by the Stack maintainer team.

Committers

Stack encourages a wide range of people to be granted commit bits to the repository. Individuals are encouraged to take initiative to make non-controversial changes, such as documentation improvements, bug fixes, performance improvements, and feature enhancements. Maintainers should be included in discussions of controversial changes and tricky code changes.

Generally: it's easier to ask forgiveness than permission. We can always roll back a bad change.

Maintainers

Maintainers are long term contributors to Stack. Maintainers are listed in the project's file (or perhaps elsewhere and linked, TBD). Maintainers are recognized for contributions including:

  • Direct code contribution
  • Review of pull requests
  • Interactions on the Github issue tracker
  • Documentation management
  • External support, e.g. hosting, training, etc

The maintainer team will make decisions when necessary, specifically:

  • If there is disagreement on how to proceed on a specific topic
  • Adding or removing a maintainer

Generally, maintainers are only removed due to non-participation or actions unhealthy to the project. The former is not a punishment, simply a recognition that maintainership is for active participants only. The latter will hopefully never be necessary, but would include protection for cases of:

  • Disruptive behavior in public channels related to Stack
  • Impairing the codebase through bad commits/merges

Following the same principle of committers, maintainers are broadly encouraged to make autonomous decisions. Each individual maintainer is empowered to make a unilateral decision, again with the principle that a bad change can be rolled back. Maintainers should favor getting consensus first if:

  • They are uncertain what the best course of action is
  • They believe other maintainers or users of Stack will disagree on the decision

As the de facto "maintainer" right now for Stack, I will be the initial maintainer, and add others to this group quickly. Some additions are obvious: people who are currently, actively, maintaining the code base and issue tracker. I intend to add more maintainers to this group, to an unspecified number. I hope that adding maintainers will increase participation in the project too.

Support

This section should be discussed by the maintainer group once it's off the ground. Initial feedback is welcome.

A large part of the issue tracker and general discussion around Stack is support topics. In my opinion, these kinds of discussions clog up the issue tracker and make it more difficult to maintain the project. I believe we should have a dedicated support area, and think the Haskell Discourse instance may be a good choice for this. I would encourage getting the issue tracker into a state of maintainability by closing out old issues. (I am well aware that this is a highly contentious proposal.)

Discussion areas

We should clarify all of the official Stack discussion areas, and what their purposes are. My initial proposal would be something like:

  • Close down the current mailing list, and move to Discourse, in line with other Haskell projects

    • Discourse will be used for general feature discussion and support requests
  • Issue tracker will be for concrete feature proposals, bug reports, and other code base discussions (e.g. refactorings)

  • Pull requests will be for... pull requests :)

  • We should also create three public text chat channels, covering:

    1. General public discussion of any type
    2. Committer and maintainer discussions to work out specific code issues in real time. Everyone can read and write to this channel, but off topic requests will be turned elsewhere
    3. A maintainer only discussion, which is publicly viewable, but only intended for maintainers to contribute to. This will be rarely used, only when a controversial topic comes up

    I don't have strong opinions here, but the initial discussion about this topic in the Haskell Foundation leaned towards Matrix/Element, and I have no objection to that. Overall, I would like to make choices in line with Haskell Foundation choices.

Next steps

To get this off the ground, as mentioned above, I'll be selecting the initial maintainer group, and we'll put together the more official "Stack charter" based on my comments above. This will be informed by general community input.

Providing feedback

One of the problems with these charter discussions is that we don't yet have an official place for the discussion to take place! I'll try to pay attention to:

  • Reddit
  • Twitter
  • Discourse
  • Stack mailing list

Unfortunately some messages will be missed. Apologies in advance.

What you should do

If you want to see this take off, you should:

  • Share this post
  • Provide constructive feedback on how to improve the proposal
  • Speak up if you want to be on the initial maintainer team, or if you think someone should be included in it

January 21, 2021 12:00 AM

January 18, 2021

Monday Morning Haskell

Beginners Series Updated!


Where has Monday Morning Haskell been? Well, to ring in 2021, we've been making some big improvements to the permanent content on the site. So far we've focused on the Beginners section of the site. All the series here are updated with improved code blocks and syntax highlighting. In addition, we've fully revised most of them and added companion Github repositories so you can follow along!

Liftoff Series

Our Liftoff Series is our first stop for Haskell beginners. If you've never written a line of Haskell in your life but want to learn, this is the place to start! You can follow along with all the code in the series by using this Github repository.

Monads Series

Monads are a big "barrier" topic in Haskell. They don't really exist much in most other languages, but they're super important in Haskell. Our Monads Series breaks them down, starting with simpler functional structures so you can understand more easily! The code for this series can be found on Github here.

Testing Basics

You can't do serious production development in any language until you've mastered the basics of unit testing. Our Testing Series will school you on the basics of writing and running your first unit tests in Haskell. It'll also teach you about profiling your code so you can see improvements in its runtime! And you can follow along with the code right here on Github!

Haskell Data Basics

Haskell's data types are one of the first things that made me enjoy Haskell more than other languages. In this series we explore the ins and outs of Haskell's data declaration syntax and related topics like typeclasses. We compare it side-by-side with other languages and see how much easier it is to express certain concepts! Take a look at the code here!

What's Next?

Next up we'll be going through the same process for some of our more advanced series. So in the next couple weeks you can look forward to improvements there! Stay tuned!

by James Bowen at January 18, 2021 03:30 PM

January 17, 2021

Neil Mitchell

Recording video

Summary: Use OBS, Camo and Audacity.

I recently needed to record a presentation which had slides and my face combined, using a Mac. Based on suggestions from friends and searching the web, I came up with a recipe that worked reasonably well. I'm writing this down to both share that recipe, and so I can reuse the recipe next time.

Slide design: I used a slide template which had a vertical rectangular hole at the bottom left where I could overlay my video. It took a while to find a slide design that looked plausible, and make sure callouts/quotes etc didn't overlap into this area.

Camera: The best camera you have is probably the one on your phone. To hook up my iPhone to my Mac I used a £20 Lightning to USB-C cable (next-day shipping from Apple) along with the software Camo. I found Camo delightfully easy to use. I paid £5 per month to disable the logo and to try out the portrait mode to blur my background - but that mode kept randomly blurring and unblurring things in the background, so I didn't use it. Camo is useful, but I record videos infrequently, and £5/month is way too steep, so I'll remember to cancel the subscription. Because subscribing and cancelling is a hassle, I'll probably just suck up the logo next time.

Composition: To put it all together I used OBS Studio. The lack of an undo feature is a bit annoying (click carefully), but otherwise everything was pretty smooth. I put my slide deck (in Keynote) on one monitor, and then had OBS grab the slide contents from it. I didn't use presentation mode in Keynote as that takes over all the screen, so I just used the slide editing view, with OBS cropping to the slide contents. One annoyance of slide editing view is that spelling mistakes (and variable names etc.) have red dotted underlines, so I had to go through every slide and make sure the spellings were ignored. Grabbing the video from Camo into OBS was very easy.

Camera angle: To get the best camera angle I used a lighting plus phone stand (which contains an impressive array of stands, clips, extensions etc) I'd already bought to position the camera right in front of me. Unfortunately, putting the camera right in front of me made it hard to see the screen, which is what I use to present from. It was awkward, and I had to make a real effort to ensure I kept looking into the camera - using my reflection on the back of the shiny iPhone to make sure I kept in the right position. Even then, watching the video after, you can see my eyes dart to the screen to read the next slide. There must be something better out there - or maybe it's only a problem if you're thinking about it and most people won't notice.

Recording: For actual recording there are two approaches - record perfectly in one take (which may take many tries, or mean accepting lower quality) or repeatedly record each section and edit it together after. I decided to go for a single take, which meant that if I stumbled a few slides in, I restarted. Looking at my output directory, I see 15 real takes, with a combined total of about an hour of runtime, for a 20 minute talk. I did two complete run throughs, one before I noticed that spelling mistakes were underlined in dotted red.

Conversion to MP4: OBS records files as .mkv, so I used VLC to preview them. When I was happy with the result, I converted the file to .mp4 using the OBS feature "Remux recordings".

Audio post processing: Listening to the audio, I noticed a clear background hum, which I suspect came from the laptop's fan. I removed it using Audacity. Getting Audacity to open a .mp4 file was a bit of an uphill struggle, following this guide. I then cleaned up the audio using this guide, saved it as .wav, and reintegrated it with the video using ffmpeg and this guide. I was amazed and impressed how well Audacity was able to clean up the audio with no manual adjustment.

Sharing: I shared the resulting video via Dropbox. However, I noticed that the audio quality was significantly degraded in the Dropbox preview on the iOS app. Be sure to download the file to assess whether the audio quality is adequate (mine was fine once downloaded).

by Neil Mitchell at January 17, 2021 05:35 PM

January 16, 2021

Lysxia's blog

Defunctionalizing dependent type families in Haskell

Posted on January 16, 2021
Extensions and imports used in this Literate Haskell post.
{-# LANGUAGE TypeFamilies, DataKinds, PolyKinds, RankNTypes,
             GADTs, TypeOperators, UndecidableInstances #-}
import Data.Kind (Type)
import Data.Proxy

Type families in Haskell offer a flavor of dependent types: a function g or a type family G may have a result whose type F x depends on the argument x:

type family F (x :: Type) :: Type

g :: forall x. Proxy x -> F x  -- Proxy to avoid ambiguity
g = undefined  -- dummy

type family G (x :: Type) :: F x

But it is not quite clear how well features of other “truly” dependently typed languages translate to Haskell. The challenge we’ll face in this post is to do type-level pattern-matching on GADTs indexed by type families.


Sorry if that was a bit of a mouthful. Let me illustrate the problem with a minimal non-working example. You run right into this issue when you try to defunctionalize a dependent function, such as G, which is useful to reimplement “at the type level” libraries that use type families, such as recursion-schemes.

First encode G as an expression, a symbol SG, denoting a value of type F x:

type Exp a = a -> Type
data SG (x :: Type) :: Exp (F x)

Declare an evaluation function, mapping expressions to values:

type family Eval (e :: Exp a) :: a

Define that function on SG:

type instance Eval (SG x) = G x

And GHC complains with the following error message (on GHC 8.10.2):

    • Illegal type synonym family application ‘F x’ in instance:
        Eval @(F x) (SG x)
    • In the type instance declaration for ‘Eval’

The function Eval :: forall a. Exp a -> a has two arguments, the type a, which is implicit, and the expression e of type Exp a. In the clause for Eval (SG x), that type argument a must be F x. Problem: it contains a type family F. To put it simply, the arguments in each type instance must be “patterns”, made of constructors and variables only, and F x is not a pattern.

As a minor remark, it is necessary for the constructor SG to involve a type family in its result. We would not run into this problem with simpler GADTs where result types contain only constructors.

-- Example of a "simpler" GADT
data MiniExp a where
  Or :: Bool -> Bool -> MiniExp Bool
  Add :: Int -> Int -> MiniExp Int

How it’s solved elsewhere

It’s a problem specific to this usage of type families. For comparison, a similar value-level encoding does compile, where eval is a function on a GADT:

data Exp1 (a :: Type) where
  SG1 :: forall x. Proxy x -> Exp1 (F x)
  -- Proxy is necessary to avoid ambiguity.

eval :: Exp1 a -> a
eval (SG1 x) = g x

You can also try to promote that example as a type family, only to run into the same error as earlier. The only difference is that SG1 is a constructor of an actual GADT, whereas SG is a type constructor, using Type as a pseudo-GADT.

type family Eval1 (e :: Exp1 a) :: a
type instance Eval1 (SG1 (_ :: Proxy x)) = G x

    • Illegal type synonym family application ‘F x’ in instance:
        Eval1 @(F x) ('SG1 @x _1)
    • In the type instance declaration for ‘Eval1’

Type families in Haskell may have implicit parameters, but they behave like regular parameters. To evaluate an applied type family, we look for a clause with matching patterns; the “matching” is done left-to-right, and it’s not possible to match against an arbitrary function application F x. In contrast, in functions, type parameters are implicit, and also irrelevant. To evaluate an applied function, we jump straight to look at its non-type arguments, so it’s fine if some clauses instantiate type arguments with type families.

In Agda, an actual dependently-typed language, dot patterns generalize that idea: they indicate parameters (not only type parameters) whose values are determined by pattern-matching on later parameters.

GADTs = ADTs + type equality

A different way to understand this is that the constructors of GADTs hold type equalities that constrain preceding type arguments. For example, the SG1 constructor above really has the following type:

SG1 :: forall x y. (F x ~ y) => Proxy x -> Exp1 y

where the result type is the GADT Exp1 applied to a type variable, and the equality F x ~ y turns into a field of the constructor containing that equality proof.

So those are other systems where our example does work, and type families are just weird for historical reasons. We can hope that Dependent Haskell will make them less weird.


In today’s Almost-Dependent Haskell, the above desugaring of GADTs suggests a workaround: type equality allows us to comply with the restriction that the left-hand side of a type family must consist of patterns.

Although there are no constraints in the promoted world to translate (~), type equality can be encoded as a type:

data a :=: b where
  Refl :: a :=: a

A type equality e :: a :=: b gives us a coercion, a function Rewrite e :: a -> b. There is one case: if e is the constructor Refl :: a :=: a, then the coercion is the identity function:

type family Rewrite (e :: a :=: b) (x :: a) :: b
type instance Rewrite Refl x = x
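As a value-level sanity check (my addition, not part of the original development), the same coercion can be written as an ordinary function on the equality GADT:

```haskell
{-# LANGUAGE GADTs, TypeOperators #-}

-- the same equality type, this time at the value level
data a :=: b where
  Refl :: a :=: a

-- value-level analogue of the Rewrite type family: matching on
-- Refl reveals that a and b coincide, so the identity suffices
rewrite :: (a :=: b) -> a -> b
rewrite Refl x = x
```

Here `rewrite Refl (42 :: Int)` is just `42`; the interesting work all happens in the types.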

Now we can define the defunctionalization symbol for G, using an equality to hide the actual result type behind a variable y:

data SG2_ (x :: Type) (e :: F x :=: y) :: Exp y
-- SG2_ :: forall y. forall x -> F x :=: y -> Exp y

We export a wrapper supplying the Refl proof, to expose the same type as the original SG above:

type SG2 x = SG2_ x Refl
-- SG2 :: forall x -> Exp (F x)

We can now define Eval on SG2_ (and thus SG2) similarly to the function eval on SG1, with the main difference being that the coercion is applied explicitly:

type instance Eval (SG2_ x e) = Rewrite e (G x)

To summarize, type families have limitations which get in the way of pattern-matching on GADTs, and we can overcome them by making type equalities explicit.


Thanks to Denis Stoyanov for discussing this issue with me.

by Lysxia at January 16, 2021 12:00 AM

January 15, 2021

Gil Mizrahi

A bulletin board website using Haskell, scotty and friends

tl;dr: I've built a bulletin-board website app - source code / video demo.

After writing my blog post on building a bulletin board, I decided to spend a couple of weeks and build something a bit more featureful to serve as an example of doing something a bit more complex with scotty.

The result of my work is bulletin-app.

It includes user management (registration, auth, sessions, invites, profiles, etc), post editing, comments, basic mod tools and more.

The main Haskell libraries I've used in this project are:

bulletin-app might not be the most impressive forum software out there, but I had fun building it and I'm happy with the result. I especially like the fact that it's standalone and does not require any other software or dependencies to run. Just download the statically linked executable (if you're on Linux) and run it, like this:

  • Download and unpack with these commands:

tar xzvf bulletin-board-
cd bulletin-board

  • Run it using this command:

REGISTRATION='OpenRegistration' VISIBLE='Public' PORT=8080 SCOTTY_ENV='Development' CONN_STRING='file:/tmp/bullet.db' ./bulletin-app serve

I've also made a video demonstrating the website on youtube.

I hope this example will be useful for others who'd like to try and build similar websites in Haskell :)

by Gil at January 15, 2021 12:00 AM

January 13, 2021

FP Complete

Cloud Vendor Neutrality

Earlier this week, Amazon removed Parler from its platform. As a company hosting a network service on a cloud provider today, should you worry about such actions from cloud vendors? And what steps should you be taking now?

In this post, we'll explore some of the risks associated with being tied to a single vendor, and the costs involved in breaking the dependency. I'll also give some recommendations on low hanging fruit.

Ultimately, how far down the vendor neutrality path you want to go is a company specific risk mitigation strategy. In this post, we'll explore the raw information, but deeper analysis would be based on your company's specific situation. As usual, if you would like more direct help from the team at FP Complete in understanding these topics, please contact us for a consultation.

What is vendor neutrality?

Vendor neutrality is not a binary. There are various levels on a spectrum from an application that leverages many vendor-specific services to an application which runs on any Linux machine in the world. Achieving complete vendor neutrality is almost never the goal. Instead, most companies interested in this topic are looking to reduce their dependencies where reasonable.

To be more concrete, let's say you're on Amazon, and you're looking into what database options to use in your application. Your team comes up with three options:

  1. Build it using DynamoDB, an Amazon-specific proprietary offering
  2. Build it using PostgreSQL hosted on Amazon's RDS service
  3. Build it using PostgreSQL which your team manages themselves

Option (1) provides no vendor neutrality. If you, for any reason, decide to leave Amazon, you'll need to rewrite large parts of your application to move from DynamoDB. This may be a significant undertaking, introducing a major barrier to exit from Amazon.

Option (2), while still leveraging an Amazon service, does not fall into that same trap. Your application will speak to PostgreSQL, an open source database that can be hosted anywhere in the world. If you're dissatisfied with RDS, you can migrate to another offering fairly easily. PostgreSQL hosted offerings are available on other cloud providers. And by using RDS, you'll get some features more easily, such as backups and replication.

Option (3) is the most vendor neutral. You'll have to set up any features you want, such as backups and replication, yourself. Maybe this will entail creating a Docker image with a fully configured PostgreSQL instance. Moving this to Azure or on-prem is even easier than option (2). But we may be at the point of diminishing returns, as we'll discuss below.

To summarize: vendor neutrality is a spectrum measuring how tied you are to a specific vendor, and how difficult it would be to move to a different one.
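One low-cost way to preserve that neutrality in application code is to program against a small storage interface rather than a vendor SDK. The following Haskell sketch is illustrative only; all names are hypothetical and not from any vendor library:

```haskell
import Data.IORef
import qualified Data.Map.Strict as Map

-- hypothetical storage interface: application code depends only
-- on this class, never on a vendor-specific SDK
class KeyValueStore s where
  putValue :: s -> String -> String -> IO ()
  getValue :: s -> String -> IO (Maybe String)

-- an in-memory backend; a PostgreSQL- or DynamoDB-backed instance
-- could be swapped in without touching application code
newtype InMemory = InMemory (IORef (Map.Map String String))

newInMemory :: IO InMemory
newInMemory = InMemory <$> newIORef Map.empty

instance KeyValueStore InMemory where
  putValue (InMemory ref) k v = modifyIORef' ref (Map.insert k v)
  getValue (InMemory ref) k   = Map.lookup k <$> readIORef ref
```

The application only ever sees the class; switching providers becomes a matter of writing one new instance rather than rewriting call sites.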

Advantages of vendor neutrality

The current situation with Parler is an extreme example of the advantages of vendor neutrality. I would imagine most companies doing business with Amazon don't have a reasonable expectation that Amazon would decide to remove them from their platform. Again, this is a risk assessment scenario, and you need to analyze the risk for your own business. A company hosting uncensored political discourse is in a different risk category from someone running a personal blog.

But this is far from the only advantage of vendor neutrality. Let's analyze some of the most common concerns I've seen for companies to remain vendor neutral.

  • Price sensitivity Cloud costs can be a major part of a company's budget, and costs can vary radically between providers. Various providers are also willing to give large incentives for companies to switch platforms. But if you've designed your application deeply around one provider, the cost of switching may exceed the long term cost savings, leaving you at your current provider's mercy.
  • Regulatory obligations Some governments may have requirements that your software run on specific vendor hardware, or specific on-prem environments. Building up your software around one provider may prevent you from offering your services in those cases.
  • Client preference Similarly, if you provide managed software to companies, they may have a built-in cloud provider preference. If you've built your software on Google Cloud, but they have a corporate policy that all new projects live on Azure, you may lose the sale.
  • Geographic distribution For lowest latency, you'll want to put your services as close to the clients as possible. And it may turn out that the provider you've chosen simply doesn't have a presence there. Or a competitor may be closer. Or a service you want to peer with is on a different provider, and the data costs will be much lower if you switch providers.

There are many more examples; this isn't an exhaustive list. What I want to motivate here is that vendor neutrality isn't just a fringe ideal for companies afraid of platform eviction. There are many reasons a normal company in its normal course of business may wish to be vendor neutral. You should analyze these cases, as well as others that may apply to your company, and assess the value of neutrality.

Costs of vendor neutrality

Vendor neutrality does not come for free. A primary value proposition of most cloud providers is quick time to market. By leveraging existing services, your team can offload creation and maintenance of complex systems. Eschewing such services and building from scratch will impact your time to market, and potentially have other impacts (like increased bug rates or reduced reliability).

I often see engineers decrying the evils of vendor lock-in without taking these costs into account. As a business, you'll need to find a way to adequately and accurately measure these costs as you make decisions, instead of turning it into a quasi-religious crusade against all forms of lock-in.

With these trade-offs in mind, I'll finish off this post by explaining some of the most bang-for-the-buck moves you can make, which:

  • Move you much farther along the vendor neutral spectrum
  • Do not cost significant engineering work, if undertaken early on and designed correctly
  • Provide additional benefits whenever possible

Leverage open source tools

The hardest lock-in to overcome is dedication to a proprietary tool. Without naming names, some large 6-letter database companies have made a great reputation of leveraging lock-in with major increases in licensing fees. Once you're tied into that model, it's difficult to disengage.

Open source tools provide a major protection against this. Assuming the licenses are correct—and you should be sure to check that—no one can ever take your open source tools away from you. Sure, a provider may decide to stop maintaining the software. Or perhaps future releases may be closed source instead. Or perhaps they won't address your bug reports without paying for a support contract. But ultimately, you retain lots of freedom to take the software, modify it as necessary, and deploy it everywhere.

There has long been a debate between the features and maturity of proprietary versus open source tooling. As always, we cannot make our decisions in a vacuum, and the flexibility of open source is not the be-all and end-all for a business. However, in the past decade in particular, open source has come to dominate large parts of the deployment space.

To pick on the example above: while DynamoDB is a powerful and flexible database option on AWS, it's far from unique. Cassandra, Redis, PostgreSQL, and dozens of other open source databases are readily available, with companies offering support, commercial hosting, and paid consulting services.

We've seen a major shift occur as well in the software development language space. Many of the biggest tech companies in the world not only use open source languages, but provide their own complete language ecosystems, free of charge. Google's Go, Microsoft's .NET Core, Mozilla's Rust, and Apple's Swift are some prime examples.

Far from being the scrappy underdog, we've seen a shift where open source is the de facto standard, and proprietary options are viewed as niche. You're no longer trading quality for flexibility. You can often have your cake and eat it too.


Kubernetes

I decided to give one open source player its own subsection in this context. Kubernetes is an orchestration management tool, managing various cloud resources for hosting containerized applications in both Linux and Windows. The first notable thing in this context is that Kubernetes has effectively supplanted other proprietary and cloud-specific offerings. Those offerings still exist, but from a market share standpoint, Kubernetes is clearly in a dominant position.

The second thing to note is that Kubernetes is a tool supported by many of the largest cloud providers. Google created Kubernetes, Microsoft provides significant support, and all three top cloud providers (Google, Azure, and AWS) offer native Kubernetes services.

The final thing to note is that Kubernetes really goes beyond a single service. In many ways, it functions as a cloud abstraction layer. When you use Kubernetes, you often write your applications to target Kubernetes instead of targeting the underlying vendor. Instead of using a cloud Load Balancer, you'll use an ingress and service in Kubernetes. This drastically reduces the cost of remaining vendor neutral.

As a plug, in our own Kubernetes offering, we've focused on combining commonly used open source components to provide a batteries-included experience with minimized vendor lock-in. We've already used it internally and for customers to easily migrate services between different cloud providers, and from the cloud to on-prem.

High value cloud services

Some cloud services provide an interesting combination of delivering high value with minimal lock-in costs. The greatest example of that is blob storage services, such as S3. The durability and availability guarantees cloud providers offer around your data are far greater than most teams would be able to provide on their own. The cost of usage is significantly lower than rolling your own solution using block storage in the cloud. And finally: the lock-in risks tend to be small. There are tools available to abstract the different vendor APIs for blob storage (and we include such a tool in Kube360). And even without such tools, the impact on a codebase from blob storage selection is generally minimal.

Another example is services which host open source offerings. The RDS example above fits in nicely here. We generally recommend using hosted database offerings from cloud providers, since the cost is close to what you would pay to set it up yourself, you get lots of features quickly, and migration to a different option is trivial.

And one final example is services like load balancers and auto-scaling groups. These are services that are impossible to implement fully yourself, would be far more expensive to implement to any extent using cloud virtual machines, and introduce virtually no lock-in. If you're moving from AWS to Azure, you'll need to change your infrastructure code to use the Azure equivalents of those services. But generally, these can be seen at the same level of commodity as the virtual machines themselves. You're paying for a fairly standard service, rarely locking yourself into a vendor-specific feature.

Multicloud vs hybrid cloud

In previous discussions, the topic of vendor neutrality typically introduces the two confusing terms "multicloud" and "hybrid cloud." There is some disagreement in the tech space around what the former term means, but I'm going to define these two terms as:

  • Multicloud means that your service is capable of running on multiple different cloud providers and/or on-prem environments, but each environment will be autonomous from others
  • Hybrid cloud means that you can simultaneously run your service on multiple cloud providers, and they will replicate data, load balance, and perform other intelligent operations between the different providers

Multicloud is a much easier thing to attain than hybrid cloud. Hybrid cloud introduces many new kinds of distributed systems failure models, as well as risks around major data transfer costs and latencies. There are certainly some potential advantages for hybrid cloud setups, but in our experience the much lower hanging fruit is in targeting multicloud.


Summing up, there are many reasons a company may decide to keep their applications vendor neutral. Each of these reasons can be seen as a risk mitigation strategy, and a proper risk assessment and cost analysis should be performed. While current events have people's attention on vendor eviction, plenty of other reasons exist.

On the other hand, vendor neutrality is not free, and should not be pursued to the detriment of the business. Finding high value, low cost moves to increase your neutrality is your best bet. Such moves may include:

  • Opting for open source where possible
  • Using a platform like Kubernetes that encourages more neutrality
  • Opting for cloud services that are more easily swappable, such as load balancers

If you would like more information or help with a vendor neutrality risk assessment, we would love to chat.


January 13, 2021 12:00 AM

January 11, 2021

FP Complete

Philosophies of Rust and Haskell

Rust is a systems programming language following fairly standard imperative approaches and a C-style syntax. Haskell is a purely functional programming language, innovating in areas such as type theory and effect management. Viewed that way, these languages are polar opposites.

And yet, these two languages attract many of the same people, including the engineering team at FP Complete. Putting on a different set of lenses, both languages provide powerful abstractions, enforce different kinds of correctness via static analysis in the compiler, and favor powerful features over quick adoption.

In this post, I want to look at some of the philosophical underpinnings that explain some of the similarities and differences in the languages. Some of these are inherent. Rust's status as a systems programming language essentially demands approaches different from those of Haskell's purely functional nature. But some of these are not. It wasn't strictly necessary for both languages to converge on similar systems for Algebraic Data Types (ADTs) and ad hoc polymorphism (via traits/type classes).

Keep in mind that in writing this post, I'm viewing it as a consumer of the languages, not a designer. The designers themselves may have different motivations than those I describe. It would certainly be interesting to see if others have different takes on this topic.

Rust: ownership

This is so obvious that I almost forgot to include it. If there's one thing that defines Rust versus any other language, it's ownership and the borrow checker. This speaks to two core pieces of Rust:

  • The goal of serving as a systems programming language, where garbage collection is not an option
  • The goal of providing a safe subset of the language, where undefined behavior cannot occur

The concept of ownership achieves both of these. Many additions have been made to the language to make it easier to work with ownership overall. This hints at the concept of ergonomics, which is fundamental to Rust philosophy. But ownership and borrow checking are also known as the harder parts of the language. Putting it together, we see a philosophy of striving to meet our goals safely, while making the usage of the features as easy as possible. However, if there's a conflict between the goals and ease of use, the goals win out.

All of this stands in stark contrast to Haskell, which is explicitly not a systems language, and does not attempt in any way to address those cases. Instead, it leverages garbage collection quite happily, with the trade-offs between performance and ease-of-use inherent in that choice.

Haskell: purely functional

The underlying goal of Haskell is ultimately to create a purely functional programming language. Many of the most notable and unusual features of Haskell directly derive from this goal, such as using monads to explicitly track effects.

Other parts of the language follow from this less directly. For example, Haskell strongly embraces Higher Order Functions, currying, and partial function application. This combination turns many common structures in other languages (like loops) into normal functions. But in order to make this feel natural, Haskell uses slightly odd (compared to other languages) syntax for function application.
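For instance, an imperative summation loop becomes an ordinary partial application of a fold. This sketch is my own illustration, not from the post:

```haskell
-- partial application: foldr receives two of its three arguments,
-- yielding a new function that awaits the list
sumAll :: [Int] -> Int
sumAll = foldr (+) 0

-- operator sections are another form of partial application
doubleAll :: [Int] -> [Int]
doubleAll = map (* 2)
```

No loop syntax is needed; the "loop" is just a curried function applied to fewer arguments than it takes.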

And this gets into a more fundamental piece of philosophy. Haskell is willing to be quite dramatically different from other programming languages in its pursuit of its goals. In my opinion, Rust has been less willing to diverge from mainstream approaches, veering away only out of absolute necessity.

This results in a world where Haskell feels quite a bit more foreign to others, but has more freedom to innovate. Rust, on the other hand, has stuck to existing solutions when possible, such as eschewing monadic futures in favor of async/.await syntax.

Expression oriented

I undervalued how important this feature was for a while, but recently I've realized that it's one of the most important features in both languages for me.

Instead of relying on declare-then-assign patterns, both languages allow conditionals and other constructs to evaluate to values. This reduces the frequency of seeing mutable assignment and avoids cases of uninitialized variables. By restricting mutable assignment to cases where it's actual mutation, we get to free up a lot of head space to focus on the trickier parts of programming.
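A minimal Haskell illustration (mine, not from the post); Rust's if blocks behave the same way:

```haskell
-- the conditional is an expression evaluating to a value: there is
-- no declare-then-assign, and no uninitialized binding is possible
describe :: Int -> String
describe n =
  let sign = if n < 0 then "negative" else "non-negative"
  in show n ++ " is " ++ sign
```

In Rust the equivalent would be `let sign = if n < 0 { "negative" } else { "non-negative" };`, with the same guarantee that `sign` is always initialized.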

Type system

Rust and Haskell have very similar type systems. Both make it easy to create new types, provide for features like newtypes, provide type aliases, and offer a combination of product (struct) and sum (enum) types. Both allow labeling fields or accessing values positionally. Both offer pattern matching constructs. Overall, the similarities between the two languages far outweigh the differences.

I place a large part of the shared interest between these languages at the feet of the type system. Since I started using Haskell, I feel strongly hampered using any language without a rich, flexible, and powerful type system. Rust's embrace of Algebraic Data Types (ADTs) feels natural.

There are some differences between the languages in these topics, but they are mostly superficial. For example, Haskell uses the single keyword data for introducing both product and sum types, while Rust uses struct and enum, respectively. Haskell will allow creation of partial field accessors in sum types, while Rust does not. Haskell allows for partial pattern matches (with an optional warning), and Rust does not.

These are meaningful and affect the way you use the languages, but I don't see them as deeply philosophical. Instead, I see both languages embracing the idea that encouraging programmers to define and use strong typing mechanisms leads to better code. And it's a message I wholeheartedly endorse.
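As a small illustration of that shared core, here is a Haskell sum type with pattern matching; the Rust version with enum and match would be nearly line-for-line (the example itself is mine, not from the post):

```haskell
-- a sum type whose constructors carry product-style fields
data Shape
  = Circle Double        -- radius
  | Rect Double Double   -- width, height

-- pattern matching destructures each constructor; omitting a case
-- would trigger GHC's incompleteness warning
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```

The compiler knows every constructor of Shape, so both languages can check the match for exhaustiveness.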

Traits and type classes

In the wide world of inheritance and polymorphism, there are a lot of different approaches. Within that, Rust's traits and Haskell's type classes are far more similar than different. Both of them allow you to separate out functionality (methods) from data (struct/data). Both allow you to create new types or traits/classes yourself and add them on to existing types/traits/classes. Both of them support a concept of associated types, and multiple parameters (either via parameterized traits or multi-param type classes).
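As a sketch of that separation of functionality from data, here's a small invented trait with an associated type (`Container` and `Stack` are my own illustration, not a standard API):

```rust
// Behavior lives in the trait; data lives in the struct. The
// associated type roughly mirrors an associated type family in a
// Haskell type class.
trait Container {
    type Item;
    fn first(&self) -> Option<&Self::Item>;
}

struct Stack {
    items: Vec<i32>,
}

// This impl lives in the same crate as both `Stack` and `Container`,
// so Rust's no-orphans rule is satisfied.
impl Container for Stack {
    type Item = i32;
    fn first(&self) -> Option<&i32> {
        self.items.first()
    }
}

fn main() {
    let s = Stack { items: vec![10, 20] };
    println!("{:?}", s.first()); // prints Some(10)
}
```

The Haskell analogue would be a `class Container c where type Item c; first :: c -> Maybe (Item c)` declaration with a corresponding instance.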

There are some differences between the two. For one, Rust doesn't allow orphans. An implementation must appear in the same crate as either the type definition or the trait definition. (The fact that Rust treats an entire crate as a compilation unit instead of a single module makes this restriction less of an imposition.) Also, Haskell supports functional dependencies, but that's not terribly interesting, since that can be closely approximated with associated types. And there are other, more subtle differences, around issues like overlapping instances. Rust's lack of orphans allows it to make some closed world assumptions that Haskell cannot.

Ultimately, the distinctions above don't lend themselves to a deep philosophical difference, but rather minor variations on a theme. There is, however, one major distinction in this area between the two languages: Higher Kinded Types (HKTs). In Haskell, HKTs provide the basis for such typeclasses as Functor, Applicative, Monad, Foldable, and Traversable. In Rust, implementing some kind of traits around these concepts is a bit more complicated.

And this is one of the deeper philosophical differences between the two languages. Haskellers readily embrace concepts like HKTs. The Rust community has adamantly avoided embracing them, due to their perceived complexity. Instead, in Rust, alternative and arguably simpler approaches have been used to solve the same problems these typeclasses solve in Haskell. Which leads us to probably the biggest philosophical difference between the languages.

General vs specific

Let's say I want to have early termination in the case of an error. Or asynchronous coding capabilities. Or the ability to pass information to the rest of a computation. How would I achieve this?

In Haskell, the answer is obviously Monads. do-notation is a general purpose "programmable semicolon." It generally solves all of these cases. And many, many more. Writing a parser? Monad. Concurrency? Maybe Monad, or maybe Applicative with ApplicativeDo turned on. But the common factor: we can express large classes of problems as do-notation.

How about Rust? Well, if you want early termination for errors, you'll use a Result return type and the ? try operator. Async? async/.await syntax. Pass in information? Maybe use method syntax, maybe use thread-local state, maybe something else.
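A hedged sketch of that early-termination story (the `add_strs` function is an invented example):

```rust
use std::num::ParseIntError;

// The `?` operator gives early termination on error: if a parse
// fails, the function returns that Err immediately. This is the
// specific, single-character syntax Rust uses where Haskell would
// reach for the general Either/Monad machinery.
fn add_strs(a: &str, b: &str) -> Result<i64, ParseIntError> {
    let a: i64 = a.parse()?; // returns early on failure
    let b: i64 = b.parse()?;
    Ok(a + b)
}

fn main() {
    println!("{:?}", add_strs("2", "40")); // prints Ok(42)
    println!("{:?}", add_strs("2", "forty").is_err()); // prints true
}
```

When the error types of the callee and caller differ, `?` also inserts a conversion via the `From` trait, which is the case-unification mentioned below.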

The point is that the Haskell community overall reaches for generalizing a solution as far as possible, usually along the lines of some abstract mathematical underpinning. There are huge advantages to this. We build out solutions to problems we didn't even know we had. We are able to rely on mathematical laws to guide our designs and ensure concepts compose nicely.

The Rust community, instead, favors specific, ergonomic solutions. Error handling is really common, so give it a single character operator. Make sure that it handles common cases, like unifying error types via the From trait. Make sure error messages are as clear as possible. Optimize for the 95%, and don't worry about the 5% yet. (And see the next section for the 5%.)

To me, this is the deepest non-inherent divide between the languages. Sure, ownership versus purity is huge, but it's right there on the label of the languages. This distinction ends up impacting how new language features are added, how people generally think about solutions, and how libraries are designed.

One final point. As much as I've implied that the Rust and Haskell communities are in two camps here, that's not quite fair. There are people in the Haskell community looking to make more specific solutions to some problems. (I'm probably one of them with things like RIO.) And while I can't think of a concrete Rust example to the contrary, I have no doubt that there are cases where people design general solutions when a more specific one would suffice.

Code generation/metaprogramming/macros

Haskell has metaprogramming via Template Haskell (TH). It's almost universally viewed as a necessary evil, but evil nonetheless. It complicates compilation in some cases via stage restrictions, requires a language pragma to enable, and introduces awkward syntax. Features like deriving serialization instances are generally moving toward in-language mechanisms via the Generic typeclass.

Rust's "Hello World" sticks a macro call on the second line via println!. The syntax for calling macros looks almost identical to function calls. Common libraries encourage macro usage all over the place. serde serialization deriving, structopt command line parsing, and snafu/thiserror error type creation all leverage macro attributes and deriving.
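A small sketch of that everyday macro usage (the `Config` struct is an invented example; serde, structopt, and the like layer their own derive macros on the same mechanism):

```rust
// `println!` is a macro (note the `!`), and `#[derive(...)]`
// attributes generate trait implementations at compile time. This
// is the everyday face of metaprogramming in Rust.
#[derive(Debug, Clone, PartialEq)]
struct Config {
    verbose: bool,
    retries: u32,
}

fn main() {
    let c = Config { verbose: true, retries: 3 };
    // The Debug impl we derived, rather than wrote, powers `{:?}`.
    println!("{:?}", c);
    assert_eq!(c, c.clone()); // Clone and PartialEq were derived too
}
```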

This is a fascinating distinction to me. I've been on both sides of the TH divide. Yesod famously uses TH for a lot of code generation, which has earned the ire of many Haskellers. I've since generally avoided using TH when possible in the past few years. And when I picked up Rust, I studiously avoided learning how to create macros until relatively recently, lest I be tempted to slip back into my old, evil ways.

Metaprogramming definitely complicates some things. It makes it harder to debug some problems. Rust does a pretty good job of making sure error messages are comprehensible. But documentation on macro arguments and return types is still not as nice as for functions and methods.

I think I'm still mostly in the Haskell camp of avoiding unnecessary metaprogramming in my API design, but I'm beginning to be more free with it. And I have no reservations in Rust about using macros; they're wonderful. I do wonder if the main issue in Haskell isn't the overall concept of metaprogramming, but the specific implementation with Template Haskell.

Backwards compatibility

Rust overall has a more coherent and consistent story around backwards compatibility. It's almost always painless to upgrade to new versions of the Rust compiler. This puts an extra burden on the compiler team, and constrains changes that can be made to the language. And in one case (the module system update), it required a new edition system to allow for full backwards compatibility.

The Haskell community overall cares less about backwards compatibility. New versions of the compiler regularly break code. New versions of libraries will get released to smooth out rough edges in the APIs. (I used to do this regularly, and now regret that. I've tried hard to keep backwards compatibility in my libraries.)

Overall, I think the Rust community's approach here is better for producing production software. Arguably the Haskell approach allows for much more exploration and attainment of some higher level of beauty. Or as they say, "avoid (success at all costs)."

Optimistic optimizations

GHC has a powerful rewrite rules system, which can rewrite less efficient combinations of functions into more optimized ones. This plays out in a big way in the vector package, where rewrite rules implement stream fusion, allowing many classes of vector pipelines to avoid allocation entirely. This is a massive optimization, at least when it works. As I've personally experienced, and many others have too, rewrite rules can be finicky. The Haskell approach is to be happy that our code sometimes gets much faster, and that we get to keep elegant, easy-to-understand code.

The Rust approach is the polar opposite: code will either definitely be fast or definitely be slow. I learned this a while ago when looking into recursive functions and tail call optimization (TCO). The Rust compiler will not perform TCO, because it's so easy to accidentally change a TCO-able implementation into something that eats up stack space. There are plans to make explicit tail calls possible with the become keyword someday.
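Since the compiler won't promise TCO, the idiomatic guaranteed-constant-stack version of a recursive function is an explicit loop. A hedged sketch, with both functions invented for illustration:

```rust
// Tail-recursive on paper, but Rust makes no guarantee this runs
// in constant stack space; for a large enough `n` it may overflow.
fn sum_recursive(n: u64, acc: u64) -> u64 {
    if n == 0 { acc } else { sum_recursive(n - 1, acc + n) }
}

// The explicit loop computes the same thing and is certain not to
// grow the stack, which is the trade Rust asks you to make.
fn sum_loop(n: u64) -> u64 {
    let mut acc = 0;
    for i in 1..=n {
        acc += i;
    }
    acc
}

fn main() {
    println!("{}", sum_recursive(10, 0)); // prints 55
    println!("{}", sum_loop(10)); // prints 55
}
```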

More generally, Rust embraces the concept of zero cost abstractions: you should be able to abstract and simplify code whenever the compiler can guarantee there is no runtime cost. In the Haskell world, we tend to focus on the elegant abstraction, even if a cost is involved.
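Iterator adapters are the signature example of a zero cost abstraction; the function below is my own illustration:

```rust
// This pipeline reads like a high-level description, but it
// typically compiles down to a single plain loop: no intermediate
// vector is allocated. Contrast with Haskell's stream fusion,
// which achieves the same result only when rewrite rules fire.
fn sum_of_even_squares(xs: &[i64]) -> i64 {
    xs.iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * x)
        .sum()
}

fn main() {
    println!("{}", sum_of_even_squares(&[1, 2, 3, 4])); // 4 + 16 = 20
}
```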

Learning curve

A short one here. Both languages have a higher-than-average learning curve compared with other languages. Both languages embrace their learning curves. As much as possible, we try to make learning and using the languages easy. But neither language shies away from powerful features, even if it will make the language a bit harder to learn.

To quote a Perlism: you'll only learn the language once, you'll use it for the rest of your life.

Explicitly mark things

Both languages embrace the idea of explicitly marking things. For example, both languages encourage (in Haskell's case) or enforce (in Rust's case) marking the type signature of all functions. But that's pretty common. Haskell goes further, and requires that you mark all effectful computations with the IO type (or something similar, like MonadIO). Rust requires that anything which may fail be marked with a Result return value.
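A sketch of what that marking looks like on the Rust side (`parse_port` is an invented example):

```rust
// Fallibility is visible right in the signature: a caller cannot
// get at the u16 without acknowledging the Err case, and Result is
// #[must_use], so silently dropping it draws a compiler warning.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.trim().parse()
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("listening on {}", port),
        Err(e) => println!("bad port: {}", e),
    }
}
```

The Haskell counterpart marks the other axis: a `readPort :: String -> IO Port` signature tells you an effect may be performed, while errors hide inside IO's contract.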

You may argue that these are actually a difference in the language, and to some extent that's true. But I think the difference is about what the language considers important. Haskell, for reasons of purity, values deeply the idea that an effect may be performed. It then lumps errors and exceptions into the contract of IO and the concept of laziness (for better or worse). Rust, on the other hand, doesn't care if you may perform an effect, but deeply cares about whether an error may occur.

Type enforce everything?

When I initially implemented Haskell's monad-logger, I provided an instance for IO which performed no output. I received many complaints that people would rather get a compile time error if they forgot to initialize the logging system, and I removed the IO instance. (Without getting into details: this was definitely the right decision for the API, regardless of the distinction with Rust.)

That's why I was so amused when I first used the log crate in Rust, and realized that if you don't initialize the logging system, it produces no output. There's no runtime error, just silence.

Similarly, many functions in the Tokio crate will fail at runtime if run from outside of the context of a Tokio runtime. But nothing in the type system enforces this idea.

And finally, I've been bitten a few times by actix-web's state management. If you mismatch the type of the state between your handlers and your service declaration, you'll end up with a runtime error instead of a compile time bug.

In the Haskell world, the overall philosophy generally approaches "if it compiles, it works." Haskellers love enforcing almost every invariant at the type level.

I haven't discussed this much with Rustaceans, but it seems to me that the overall Rust philosophy here is slightly different. Instead, we like to express tricky invariants at the type level. But if something is so obviously going to fail or behave incorrectly in the most basic smoke testing, such as a Tokio function crashing, there's no need to develop type-level protections against it.


I hope this laundry list comparison was interesting. I've been meaning to write it down for a while, so I kind of feel like I checked off a New Year's Resolution in doing so. I'd be curious to hear any other points of comparison people have, or disagreements about my assessments.


January 11, 2021 12:00 AM

January 10, 2021

Michael Snoyman

Securing internet communications: a layman's guide (2021)

There has been rising concern over the past several years around security in personal communications. More recently, with censorship on social media platforms occurring on a grand scale, people are wondering about securing social media platforms. I wouldn't call myself an expert on either topic, but I have good knowledge of the underlying technologies and principles, and have done some investigative work into the specific platforms for work. With many friends and family suddenly interested in this topic, I figured I would write up a layman's guide.

One word of caution, since everything is politically charged today. I definitely have my own political views, and I mostly elect to keep them separate from my discussions of technology. I'm going to do so here as well. I'm not endorsing or objecting to any recent actions or desires of people to communicate in certain ways. If I ever decided to publicly share my political views, I would do that separately. For now, I'm simply going through the technological implications.

UPDATE I decided to make a video version of this information as well for those who prefer getting the content in that format. You can check it out on YouTube or on BitChute.

Executive summary

I know there are a lot of details below, but I strongly encourage people to read through it, or at least skim, to understand my recommendations here. But for the busy (or lazy), here are my recommendations:

  • Private communication:
    • Use Signal, Wire, or Matrix
    • Be careful about what you say in a group chat
    • Assume anything you say will last forever
    • If you want semi-secure email, use ProtonMail, but don't rely on it
  • Social media
    • Your best bet for censorship freedom: host the content yourself, but that's hard, and you have to find a place to host it
    • When you use any platform, you're at their mercy regarding censorship
    • Each platform is pretty upfront at this point about what you're getting. Facebook and Twitter will remove some voices, Gab and Parler claim they won't
  • General security
    • Use a password manager!!!
    • Don't install random executables and extensions
    • Don't trust messages from people, confirm who they are, check URLs

OK, with that out of the way: the analysis.

Security doesn't mean anything

This is the first major point to get across. People often use the term "security" for lots of different things. In a vacuum, the term doesn't mean much. That's because security only applies in the presence of a specific kind of attack, and we haven't defined that attack. It could be a Denial of Service attack, where someone tries to prevent the service from working. It could be a physical attack, like stealing your phone.

What I hear people concerned about right now are two kinds of attacks. They think they're related, but in reality they are not. Let me explain those attacks:

  • I'm worried that my private communications are being read by Big Tech or the government
  • I'm worried that my social media posts are going to be censored by Big Tech

Notice that, in many ways, these are opposite concerns. The former is about ensuring you can say something without anyone else knowing it. The latter is about ensuring you can say something loudly without anyone stopping it. The two different threats necessitate two different analyses, which I'll provide below.

Also, let me address a threat I'm not addressing, since it's an inherent contradiction. You can't worry about privacy on social media, not in the "blast to the world" public concept it's typically used in. If you want something to be private, don't put it on social media. This may seem obvious, but many people seem to want to have their cake and eat it too. If you post on social media, you can always be held accountable for what you've said there. If you want privacy, use private communications.

Metadata versus data

Metadata is a term like "algorithm" which has a real meaning, but is (ab)used in the media so much to make it seem like scary, unknowable information. It's not. Metadata is just "data about data." Let's take a simple private communication example. My doctor wants to send me a message about my test results. Most people, and most governments in fact, recognize a right to privacy for this, and enforce this through law (e.g., HIPAA).

In this example, the "data" would be the results themselves: what test I took, who administered the test, when I took the test, and the results. The metadata would be information about sending the message: who the sender is, who the receiver is, the size of the message, the timestamp of when it was sent, etc.

Many messaging protocols try to ensure two things:

  • The data is completely private to only the participants of the conversation
  • The metadata has as little useful information in it as possible

The reason is that, oftentimes, metadata can be read by intervening services. In our test result example, it would be best to assume that a nefarious party will be able to find out that my doctor sent me some message at 4:32pm on Tuesday, and that it was 5MB in size. Most messaging systems try to hide even that, but you don't usually get the same guarantees as with the underlying data.

And that brings us to the first important point.

Email is busted

Don't use email for private communications, full stop. It's an antiquated, poorly designed system. There are a lot of tools out there that try to secure email, but they are all as holey (not holy) as Swiss cheese. The primary issue: most of them have no ability to secure your metadata, which will often include the subject. Imagine your nefarious character can read:

From: Dr. Smith
To: John Doe
Subject: Test results, not looking good

Sure, the rest of the message is secure, but does it matter?

Email is a necessary evil. Many services require it as some kind of identity provider. You'll have to use it. But don't consider anything you put in email safe.

With major providers like Gmail and Outlook, you can safely assume that the companies are spying on everything you're saying. I use both, and assume nothing that goes on there is truly private. If you want to harden your email usage a bit, ProtonMail is a good choice.

Messaging apps

Perhaps surprisingly, your best bet for privately communicating with others is messaging apps. These include options like WhatsApp, Telegram, and Signal, and I'll get into the trade-offs. As with most things in security, there's a major trade-off between security and convenience. Since I'm writing this for a general purpose audience, I'm not going to go into the intricacies of things like secure key transfer, since I don't think most people will have the stomach for what's involved here. Suffice it to say: these options are, in my opinion, the best options that I think most people will be comfortable using. And they're secure enough for me to use, as you'll see.


The primary question for messaging apps is encryption, and specifically public key cryptography. With public key cryptography, I can send you a message that only you can read, and you can verify that I'm in fact the one who sent it. Done correctly, public key cryptography prevents many kinds of attacks.

But there's a twist here. Let's say Alice and Bob are trying to talk to each other using a messaging app called SecureTalk. Alice uses the SecureTalk app on her iPhone, and it sends messages to the SecureTalk server, which sends the messages on to Bob's Samsung device. Alice encrypts her message, so only the recipient can read it. The question is: who is the recipient? This may seem obvious (Bob), but it's not. There are two different possibilities here:

  1. In something called end-to-end encryption, Alice will encrypt the message so only Bob can read it. She'll send the encrypted message and some metadata to the SecureTalk server. The server will be able to read the metadata, but won't know what the message itself says. Then the SecureTalk server sends the metadata and encrypted message to Bob, who is the only person who can read the message itself.
  2. The simpler approach is that Alice will encrypt the message so that the SecureTalk server can read it. This prevents random other people on the internet from reading the message, but doesn't prevent SecureTalk from reading the message. SecureTalk then reencrypts the message for Bob and sends it to him.

You may think that (1) actually sounds simpler than (2), but for various technical reasons it isn't. And therefore, a lot of systems out there still do not provide end-to-end encryption. The primary culprit I want to call out here is Telegram. Telegram is viewed as a secure messaging platform. And it does provide end-to-end encryption, but only if you use its "secret chat" feature. But you lose features with it, and most people don't use it most of the time. In other words:

Telegram is not a good choice for security, despite its reputation to the contrary


How do you identify yourself to the messaging service and your colleagues? Most apps use one of two methods: phone number (verified via SMS) and email address (verified via confirmation email). For the most part, the former is most popular, and is used exclusively by systems like WhatsApp, Telegram, and Signal. This is good from an ease of use standpoint. But there's a major issue to be aware of:

Your secure chat identity is tied to your phone number, and most countries track ownership of phones

Maybe you can buy a burner phone without the number being tied to your identity; I'm not sure. I've never needed that level of privacy. But the other system, email address, makes it easier to create a more anonymous identity. Most people can easily create a ProtonMail account and have an anonymous experience.

This is outside the bounds of security, but another advantage of email-based identity is that family members without their own cell phones (like my kids) can use those systems.

If you want to use email as your identity, and make it easier for people to communicate fully anonymously, systems like Wire and Matrix are your best bet. Wire honestly overall seems like the best system for secure communication to me, but it has the downside of not being as popular.

Exploding messages

Many systems offer a feature called "exploding messages" or similar. It's a nice idea: you can send a message, and the message will be deleted in a certain amount of time. I've used it at work for sending temporary passwords to people when signing into new accounts. It works great when you have full trust in the other side.


There is no way at all to prevent a nefarious message receiver from screenshotting or otherwise capturing the contents of the message. We've probably all heard horror stories of high school girls sending their boyfriends inappropriate Snapchat messages, and the boyfriend screenshotting and sharing those pictures with his friends.

Group chats

The easiest way to keep a secret among three people is to kill two of them. Group chats are the same. Security is always multilayered, and there are lots of ways of breaching it. In the case of secure messaging, group conversations are far easier to intercept, since:

  • If you break into just one person's device, you've won
  • If you can get just one bad actor included in the group, you've won
  • If you can spoof just one person (pretending that you are that person), you've won

Treat any group chat as suspect unless you absolutely know and trust every single person in the group. And maybe not even then. Like email, consider group chats semi-compromised at all times.


WhatsApp ostensibly uses the same secure protocol as Signal. WhatsApp is widely used. In principle, it's an ideal platform for secure communication, assuming you're OK with phone number based identity. But there's a huge elephant in the room: WhatsApp is owned by Facebook.

If you trust their claims, the messages in WhatsApp are fully end-to-end encrypted. WhatsApp cannot read what you're sending to your friends and family. But they can definitely read the metadata. And their privacy policy is troubling at best.

I used to recommend WhatsApp, but I no longer feel comfortable doing so. It's not the worst communication platform, not by far. But at this point, I would recommend that you move over to a different platform, and begin recommending the same to your friends and family.


The top contenders in this category are WhatsApp, Telegram, and Signal. I've eliminated Telegram due to poor encryption, and WhatsApp due to privacy policy and corporate ownership. Signal wins by default. Additionally, if you're looking for a platform that allows more anonymity, Wire lets you identify yourself via email, and is just as easy to use. The downside is that it's less popular. Finally, Matrix is more sophisticated, and offers some really nice features I haven't touched on (specifically, federation). But it's not as easy to use as the others. Look to it if you're trying to create a community, but it's probably not the best choice.

Also, start considering using Signal's voice chat as an alternative to normal phone calls; it's also end-to-end encrypted.

The platform

As I said above, security is multilayered. You can have the most secure message protocol, with the best encryption, and the most well written application to communicate. But if your platform (your phone, tablet, or computer) is compromised, none of that will matter.

I honestly don't know what to say about phones. Apple has a good reputation here. Android has a middling reputation. There are concerns about spying by certain manufacturers; I don't know whether they're valid. Windows and Mac computers presumably don't spy on people, but they might.

Again, I'm writing this for normal people, so I don't want to go too deep on the options here. Experts in the field will start looking at hardened operating systems, and hardware vulnerabilities, and other such things. For the most part, for most people, I'd recommend making sure your devices are secured. That means good passwords, two factor authentication where relevant, and not losing the physical device.

Social media

If you want to share cat pictures with your family, probably any social media platform will allow you to do it. That is assuming, of course, that you don't express the wrong ideas elsewhere. Ultimately the thing to keep in mind is:

If you use someone's service, they control your voice

There's no value judgement in that statement, it's a statement of fact. If you use Twitter, they currently have the full authority, with no legal impediments, to silence you at will, for any reason. Some people believe that they are judiciously exercising a necessary power to protect people from violence. Others believe that they are capriciously censoring people with WrongThink. Again, this post isn't about the politics of it, just the facts.

I'm assuming here that your goal is to get your message out to the world without being stifled. The bad news is: you can't do it. Any social media platform can ultimately exercise control over you, despite what they claim. And even if you trust the people running the company, the company may be purchased. Or the people running the computers that the social media platform is using may deny them usage. Or the government may shut them down.

If you want a message out there that no one can stop, your best bet is hosting it yourself. Even that is difficult. I run this website myself. I bought a domain name. The government could decide I don't have a right to that domain name and take it away. That's happened in the past with copyright claims and pirating. I use Amazon as a cloud provider. Amazon may decide I'm not entitled to use their services. So you have more control with your own hosting, but not total control.

There are more sophisticated censorship-resistant ideas in the world. These focus on decentralization, leveraging technologies like blockchain to mathematically prevent removal of data. Those systems are not yet mainstream, and many practical hurdles may keep them from becoming so.

So if you want your message out there without censorship, here is my advice:

  • Recognize the reality that if you say the wrong thing anywhere on the internet, some platforms may remove you.
  • Look for providers that espouse a speech policy that you agree with. If you like Twitter and Facebook's policies on hate speech, awesome, use them! If you like Gab and Parler's approach to free speech, use them!
  • Don't be afraid to use multiple platforms simultaneously. I anticipate an uptick in tooling to automatically post to multiple platforms, though for now you can manually maintain a presence in multiple places.

You may notice that we're heading towards a world of bifurcated communication. Not passing a value judgement, but it's likely that Twitter and Facebook will become more filled with people in favor of censoring hate speech, and Gab, Parler, and others will become more filled with people who believe there's no such thing as hate speech.

Specific options

Here's a short analysis of different platforms I'm aware of for different purposes. I don't actually use all of these, and can't really speak to how well they work, what kind of content they have, or what their censorship practices are. There are many, many other options out there too.

Text based microblogging

  • Twitter is the obvious king here, with the largest platform by far. It's also clearly on the side of censoring some content.
  • Gab is one of the oldest free speech competitors to Twitter. It's been labeled as "Nazi Twitter," and so far has stood up to all attempts to censor it. It runs its own physical machines to avoid companies like Amazon, and has been banned from many payment processors. It has no mobile apps in the app stores, since Apple and Google have banned them. I don't have much experience with it, but it's probably the best "free speech microblogging" platform out there. It's also developing additional features, like news aggregation, video hosting, and commenting.
  • Parler has recently become popular as a free speech competitor. Unlike Gab, it has (or, at least, had) mobile apps. And it hosts on Amazon. Unlike Gab, it seems to have a strong presence from major right-wing political figures, which has seemingly given it meteoric popularity. However, as I'm writing this, Parler is being banned from the Google Play Store, may be banned by Apple, and Amazon is planning on kicking them off their servers. Ultimately, it looks like Parler may simply be relearning lessons Gab already learned.

You likely will be tarred and feathered as a Nazi for using Gab or Parler. I'll admit that I've mostly avoided Gab because the content I discuss is purely technical and not at censorship risk, and I didn't feel like bringing ire down upon myself. But I am starting to post my content to Gab and Parler as well. Getting ever so slightly political: I hope people start using these platforms for everyday posts so that we don't have two raging political extremes yelling at each other from separate platforms.

Video hosting

  • YouTube is the clear winner here. It's owned by Google, and has been pretty much in line with Twitter and Facebook's approaches to censorship.
  • Vimeo is one of the oldest alternatives to YouTube. I've used it in the past. I honestly have no idea what its take is on censorship.
  • BitChute and Rumble are two alternative video hosting platforms I've seen. I've barely used either one. I will admit that I was a little shocked one of the times I checked out BitChute to see some hardcore Nazi videos. Whether you see that statement as an indictment or endorsement of BitChute should give you an indication of whether you embrace anti-hate speech or free speech more.

Live streaming

  • YouTube and Twitch are the two most popular live streaming platforms, and both have embraced some level of censorship.
  • I've heard of DLive as an alternative, and tested it out briefly when I was learning about live streaming myself, but haven't investigated it further. I've also heard it referred to as "Nazi Twitch."
  • I've heard that Rumble is going to be adding live streaming functionality.

Other platforms
  • I don't know anything about photo sharing à la Instagram
  • I don't know about podcast apps; I've recently given up on podcasts to spend more time on more useful activities
  • I don't think there's a real alternative to Facebook from a "stay in touch with family" standpoint
  • I can't even begin to guess at what would be considered a "good" news aggregator.
  • Reddit is the largest forum/discussion board. Famously, they blocked r/thedonald, which created its own site.

Network effects

With any of these systems, an important aspect is what's known as "network effects": a system becomes more valuable the more people use it. WhatsApp is dominant because it's dominant. The more people using it, the more people want to use it. In economic terms, these might be termed natural monopolies.

The fact that people who use alternative platforms can be attacked for this increases the network effects. "You're using Signal? What are you, some tinfoil hat conspiracy theorist?" "You're using Gab? What are you, a Nazi?" Again, not passing judgement, just an honest assessment. It's mostly considered non-controversial to use Twitter, Facebook, YouTube, and WhatsApp. You must have some weird reason to use the others.

Keep that in mind when making your decisions. If you decide to ask your family to communicate on Signal, give them a reason for why you're doing it. And if you can't justify it, maybe the privacy and anti-censorship arguments don't really resonate with you.

Blockchain
No blog post in the past 5 years is complete without mentioning blockchain. I've worked extensively in blockchain for the past four years or so. My company has built blockchain systems, and we've audited a number of them as well. I can tell you a few things:

  • There are a lot of incorrect ideas out there about blockchain
  • Blockchain concepts could definitely be used to build private communications systems and anti-censorship social media
  • There is a majorly incorrect belief that systems like Bitcoin are "anonymous." They are not. All transactions are publicly recorded. If anyone figures out what your address is, they know what you've done with your money. Caveat emptor.

There are other blockchain systems that introduce true privacy through very sophisticated means. I personally don't use cryptocurrencies in any meaningful way, and I'm not going to recommend them to everyday people. Maybe at some point in the future I'll write more about them, but not today. For now: I don't consider blockchain any kind of solution to the problems listed here, for most normal people, at today's level of maturity of the technology.

Other security recommendations

If you've gotten this far, congratulations. I've covered everything I promised I'd cover. But since you're here, let me lay out a few other security recommendations for laymen.

  • Install a password manager! Stop using the same password on every site. Password managers are secure and easy to use. I personally use BitWarden and have gotten my family and company onto it. Others include LastPass, 1Password, and KeePass.
  • Don't install random software from the internet. You're far more likely to lose privacy by installing some spyware than by anything else.
  • Limit the number of browser extensions you use. Each extension is a potential intrusion into your online communications. I use the BitWarden extension, with some level of trepidation.
  • Look at URLs before clicking a link to avoid phishing. Phishing attacks, and social engineering attacks in general, are a primary way to be compromised online.
  • Secure your devices. Make sure you have a password or biometric lock on your devices. Don't lose your devices. If you do lose your device, reset your passwords.
  • Use advanced credential management like two factor auth whenever possible. Especially if you use the same password everywhere. But seriously: use a password manager. Consider using a system with cloud backup like Authy.

OK, that's it, I promise. Let me know in the comments below, or on Twitter or Gab, if you have additional questions that you'd like me to answer. And if people want, I may turn this blog post into a video as well.

January 10, 2021 12:00 AM

January 09, 2021

Michael Snoyman

A parents' guide to Minecraft

With the COVID-19 lockdowns, our children started getting interested in Minecraft. It started with our eldest (12 years old), who wanted to play on a server with his friends. Later, our 10 and 8 year olds wanted to get involved too. This seems like it should have been a straightforward experience: buy the game, install it, walk away.

However, between the different platforms, accounts, editions, and servers, we ran into sufficient pain points that I decided it would be worth writing up a guide for other parents getting their kids started with things.

As a quick plug: among video games for kids, we put Minecraft towards the top of our recommended list. It requires thought, planning, and strategizing to play. Especially during lockdown, together with a good Discord audio channel, Minecraft can be a great way to keep the children semi-social with their peers. And as parents, even we have gotten into the game more recently.

OK, let's get into it!

Editions
If you look around the internet, there are a lot of different editions of Minecraft. Most of that is historical. These days, there are just two editions: Java and Bedrock. Pocket, Console, and "Nintendo Switch" editions (and I'm sure there are others) are now part of Bedrock.

This is the first major decision you'll need to make for/with the kids: which edition of Minecraft are you going to buy? My short recommendation is: if you don't have a reason to do otherwise, go with Bedrock. But let's dive into the reasons for Java first, and then come back to Bedrock.

Java edition

The Java edition of Minecraft is the original version of the game. It runs on Windows (including older Windows versions), Mac, and Linux. There are a lot of "mods" out there that allow you to change how the game works. There are various servers with interesting alternative versions of the game in place.

Each individual user of the Java edition will need their own license to play. You can definitely let multiple children all play on a single computer, but they won't be able to play together.

And finally: the Java edition can only play multiplayer with other Java players. So if your kids have friends playing on Java (which ours did), you'll need to use Java as well to play.

Bedrock edition

After Microsoft bought Mojang (the company behind Minecraft), they created a new version of the codebase in C++. I can only assume that they did this to make a more efficient and more compatible version of the game. That's the first thing to point out: Bedrock runs a bit faster than Java, and runs on many more platforms.

Bedrock is available for Windows 10 PCs, though no other computers. If you're on older versions of Windows, or on Mac or Linux, Java is your only option. There are mobile versions for iOS and Android. And there are versions for Xbox, PlayStation, and Nintendo Switch. If you have any desire to play from a mobile device or console, you'll have to use Bedrock.

The ability to share the game across multiple users depends entirely on the platform. On the Switch, for instance, everyone in the family can play from their own account on our main Switch device. I believe family sharing for iOS and Android would let the whole family play with a single purchase, though we haven't tried it.

The Windows 10 story is the most confusing. I've seen conflicting information on family sharing for the Microsoft Store. I think the intent is that you can make one purchase for the entire family, but I'm honestly not certain. Microsoft should improve their documentation on this. In any event, here's an article from 2015 that seems to address this case. Given that I really don't know Microsoft's intent, I'm not going to give any concrete recommendations on how many licenses to purchase.

Our decision

We bought the Nintendo Switch edition a while ago (which ultimately changed to the Bedrock edition). We also bought the Java edition for our eldest, then Bedrock for the Windows 10 PCs in the house, and ultimately, after a lot of nagging, bought the other two kids the Java edition as well. On the one hand, the costs add up. On the other hand, this game has been a hit for about 9 months now, so amortized the cost is completely reasonable.

Accounts
Microsoft jumps in yet again to confuse things! This one's a bit easier to explain though, now that we know about editions. Java edition originally had its own user accounts, known as Mojang Accounts. Those are currently being converted into Microsoft accounts, which I'll describe next. If you purchase a new Java license, just go ahead and use a Microsoft account. If you already have a Java license, you'll need to move the Mojang account to a Microsoft account in the near future.

If you're not in the Microsoft world—and that describes me about 2 years ago—you may know nothing about Microsoft accounts. Microsoft accounts tie into many different services, such as Microsoft 365 (no surprise there), Skype, and Xbox Live. Minecraft is really leveraging that last bit. If you're on Bedrock edition, Minecraft will use your Xbox Live profile, your Xbox Live friends, and so on.

As a parent, you can create a family group and add your children to it, which will let you know how much time they spend playing games on Windows 10 and Xbox.

NOTE Microsoft really screwed up naming, and has both a "Microsoft Account" and a "Microsoft Work or School Account". If you're at a company that uses something like Microsoft Teams, you have one of the latter. Just be aware that these two kinds of accounts are mostly separate, except for a few websites (like Azure DevOps) that let you log in as both. I say all of this as someone who pulled off a Google=>Microsoft migration for my company about a year ago.

Multiplayer
Server-based play, which we'll get to next, is one way of doing multiplayer Minecraft. But it's not the only one, and definitely not the easiest method. This point confused me at first.

When you start a Minecraft game, you are creating a world. On the Java edition, you'll do this from the "Single Player" menu. Despite that nomenclature, you are still able to play with other people in a world you created in single player. The same idea applies in Bedrock, though Bedrock fortunately doesn't apply the confusing term "single player" to it.

On both Java and Bedrock, you can find games being played by others inside your local area network (LAN), meaning other people in the house. If you're on Bedrock, you can join worlds of other Xbox Live friends. The latter is a much easier way to connect with friends.

The caveat to this is that you can only play this multiplayer game when the host—the person who created the world—is online. If you want to be able to play regardless of who's online, you'll need a server.

Servers
If you want to leave a world up for any of your family or friends to join, you'll want a server. Both Java and Bedrock provide server software which you can freely run yourself. Personally, I decided that I spend enough time at work running servers for people, and I'd rather just pay someone else to do it for me.

Microsoft offers a feature called Realms, where you can pay them $8 a month (at time of writing) to host your realm. It's basically just a server with membership controlled via Xbox Live accounts. It's a nice, easy way to invite others to join you.

There are plenty of other companies out there offering free and paid hosting options. Keep in mind that, since Bedrock is relatively new, there aren't as many Bedrock hosting options out there. And the free options typically have some significant limitations, such as requiring that you watch an ad on their site every 90 minutes to keep the server running.

If you're like me, and don't want to have to bother with maintaining this stuff, I'd recommend budgeting $5-$10 per month for a server, if that's what your kids are going to enjoy.

DNS madness

And one final note. The thing that finally got me to write this blog post was a really frustrating bug where my daughter's computer couldn't connect to servers. I still haven't discovered exactly why that happened, but I learned more about how DNS resolution works:

If, like me, you're a tech person/network guy who is stumped as to "where the hell is the DNS record," you're looking for an SRV record with _minecraft._tcp prepended. I have no idea where this is officially documented, but since I spent so much time pulling out my three remaining hairs on this, I figured I'd share it here.


Our kids have absolutely fallen in love with Minecraft, and spend basically every waking moment talking about it. They play Minecraft music videos, have convinced the 4-year-old to play make-believe Minecraft, play with Minecraft foam swords and lego sets, etc. You get it, it's an obsession. If you go down this path, prepare for the possibility that they will talk at you about Minecraft every chance they get.

For me, when I finally started playing the game, it was a major bonding experience with the kids. They got to teach me a lot, which was fun for them. I had better planning and organization skills, and for once they were willing to listen to me when I told them to organize their closets (i.e. chests).

If you're looking for milestones and achievements to set, I'd recommend these. Keep in mind that I'm still quite a newb at the game, and there are likely much more exciting things to discover too.

  • Build a base There are so many variations on bases you can build. We're currently working with an underground base, but there are so many other styles you can make.
  • Mine diamonds Diamonds are one of the best materials in the game, and proper mining of them requires quite a bit of patience. Read up on some tutorials about getting down to Y=12 (meaning, 12 layers from the bottom of the game) to maximize your mining potential.
  • Explore the map Consider taking a locator map and a boat for this.
  • Explore the nether You will die.
  • Start enchanting items You'll need diamonds, obsidian, and will want bookshelves to maximize your enchantments. That will require lots of leather (from cows) and paper (from sugar cane). The planning to make all of that work is a great exercise.
  • Mix potions I haven't done this yet, we still haven't found Nether Wart.
  • Defeat the Ender Dragon We did this once, and it was incredibly exciting.
  • Play with red stone I don't know much about it yet, but from what I've heard it's a lot like programming with transistors. Red stone conducts "red stone energy" (basically electricity), and then there are things you can create for gating that energy and other such things.
  • Be creative Survival mode, the default mode, will result in a lot of deaths from enemies, lava, falling, drowning, and more. Creative mode lets you explore your creative side, with no death, and an infinite supply of all of the materials. It can be a fun way to get started.

If I got anything wrong or left anything out, let me know! Happy Minecrafting!

January 09, 2021 12:00 AM

January 08, 2021

Oleg Grenrus

Indexed optics dilemma

Posted on 2021-01-08 by Oleg Grenrus lens, optics

Indexed optics are occasionally very useful. They generalize mapWithKey-like operations found for various containers.

iover (imapped % _2) :: (i -> b -> c) ->  Map i (a, b) -> Map i (a, c)
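Concretely, for Map this is just mapWithKey specialized to the second component of the values; a small sketch using only containers (secondWithKey is an illustrative name, not part of any optics API):

```haskell
import qualified Data.Map.Strict as Map

-- What iover (imapped % _2) computes for Map, written directly with
-- mapWithKey: rewrite the second component of each value using its key.
-- (secondWithKey is a hypothetical helper, not part of the optics API.)
secondWithKey :: (i -> b -> c) -> Map.Map i (a, b) -> Map.Map i (a, c)
secondWithKey f = Map.mapWithKey (\i (x, y) -> (x, f i y))
```

The indexed optic generalizes this pattern to any indexed container, instead of baking in Map and the second tuple component.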

Indexed lenses are constructed with the ilens combinator.

ilens :: (s -> (i, a)) -> (s -> b -> t) -> IxLens i s t a b

It is implicit that the getter and the (indexed) setter parts have to satisfy the usual lens laws.

However, there are problematic combinators, e.g. indices:

indices :: (Is k A_Traversal, is `HasSingleIndex` i)
    => (i -> Bool) -> Optic k is s t a a -> IxTraversal i s t a a

An example usage is

>>> toListOf (itraversed %& indices even) "foobar"
"foa"

If we combine ilens and indices we get a nasty thing:

\p -> indices p (ilens (\a -> (a,a)) (\_ b ->  b))
    :: (i -> Bool) -> IxTraversal i i i i i

That is (almost) the type of unsafeFiltered, which has a warning sign:

Note: This is not a legal Traversal, unless you are very careful not to invalidate the predicate on the target.

However, neither indices nor ilens has a warning attached.

There should be an additional indexed lens law (or laws).

My proposal is to require that indices and values are independent, which for an indexed lens can be checked by the following equation:

Whatever you put in, you cannot change the index.

fst (iview l s) ≡ fst (iview l (iover l f s))

where l :: IxLens i s t a b, s :: s, f :: i -> a -> b.
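To make the law concrete, here is a toy model of indexed lenses (an illustrative getter/setter representation; the real optics types are abstract), with a lawful lens alongside the problematic ilens (\a -> (a,a)) (\_ b -> b) from above:

```haskell
-- Toy model: an indexed lens is a getter returning index and value,
-- plus a setter. (Illustrative representation, not the optics encoding.)
data IxLens i s t a b = IxLens
  { iview :: s -> (i, a)  -- view the index together with the value
  , iset  :: s -> b -> t  -- set a new value
  }

ilens :: (s -> (i, a)) -> (s -> b -> t) -> IxLens i s t a b
ilens = IxLens

-- Update the focus with access to its index.
iover :: IxLens i s t a b -> (i -> a -> b) -> s -> t
iover l f s = let (i, a) = iview l s in iset l s (f i a)

-- Lawful: the index is the first component, the focus the second,
-- so updating the focus can never change the index.
ifst :: IxLens i (i, a) (i, b) a b
ifst = ilens id (\(i, _) b -> (i, b))

-- Unlawful (the example above): the index *is* the value, so updating
-- the value changes the index, violating the proposed law.
iself :: IxLens a a b a b
iself = ilens (\a -> (a, a)) (\_ b -> b)
```

For ifst the index survives any update, while iover iself (\i a -> i + a) 1 changes the index from 1 to 2.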

This law is generalisable to other optic kinds. For traversals replace fst and iview with map fst and itoList. For setters it is harder to specify, but the idea is the same.
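As a sanity check of the traversal form, take Map: mapWithKey plays the role of iover and Map.toList the role of itoList, and the keys are untouched by construction (lawHolds is an illustrative name, not from any library):

```haskell
import qualified Data.Map.Strict as Map

-- The traversal form of the proposed law, stated for Map: updating the
-- values with mapWithKey can never change the keys, so the index lists
-- (map fst of the toList) before and after must agree.
lawHolds :: Ord i => (i -> a -> a) -> Map.Map i a -> Bool
lawHolds f m =
  map fst (Map.toList m) == map fst (Map.toList (Map.mapWithKey f m))
```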

Similarly, we can talk about indexed prisms or even isomorphisms. The independence requirement would mean that the index has to be boring (i.e. isomorphic to ()), thus there isn't any additional power.

However, sometimes violating laws might be justified (e.g. when quotient types would make the program correct, but we don't have them in Haskell).

This new law doesn't prohibit having duplicate indices in a traversal.

This observation also extends to TraversableWithIndex. As far as I can tell, all instances satisfy the above requirement (of indices being independent of values). Should we make that (IMHO natural) assumption explicit?

January 08, 2021 12:00 AM

January 07, 2021

Tweag I/O

Haskell dark arts, part I: importing hidden values

You are a Haskeller debugging a large codebase. After hours of hopping around the source code of different modules, you notice some dirty and interesting code in one of your dependency’s Util or Internal module. You want to try calling a function there in your code, but hold on — the module (or the function) is hidden! Now you need to make your own fork, change the project build plan and do a lot of rebuilding. Some extra coffee break time is not bad, but what if we tell you this encapsulation can be broken, and you can import hidden functions with ease? Of course, this comes with some caveats, but no spoilers — read the rest of the post to find out how (and when).

Importing a hidden value with Template Haskell

Suppose we’d like to use the func top-level value defined in the Hidden module of the pkg package. We can’t simply import Hidden and use it if func is not exported or Hidden is not exposed. But don’t worry, with a single line of code in our own codebase, we can jailbreak the encapsulation:

myFunc = $(importHidden "pkg" "Hidden" "func")

myFunc can now be used just like the original func value. It doesn’t need to be defined as a top-level value; one can drop an importHidden splice anywhere. We only need to ensure the pkg package is a transitive dependency of the current package, enable the TemplateHaskell extension and import the module which implements importHidden.

The curious reader may check the Template Haskell API documentation and try to come up with their own importHidden implementation. It is well known that with Template Haskell, one can reify the information of datatypes and summon their hidden constructors, but summoning arbitrary hidden values is not directly supported. The next section reveals the secret.

Implementing the importHidden splice

Finding a package’s unit id

Let’s forget about importHidden for a minute and consider how to handwrite Haskell code to bring a hidden value into scope. Since we already know the package/module/value name, we can construct a Template Haskell Name that refers to the value, then use it to create the Exp that brings the value back. Time to give it a try in ghci:

Prelude> :set -XTemplateHaskell
Prelude> import Language.Haskell.TH.Syntax
Prelude Language.Haskell.TH.Syntax> myFunc = $(pure $ VarE $ Name (OccName "func") (NameG VarName (PkgName "pkg") (ModName "Hidden")))

<interactive>:3:12: error:
    • Failed to load interface for ‘Hidden’
      no unit id matching ‘pkg’ was found
    • In the expression: (pkg:Hidden.func)
      In an equation for ‘myFunc’: myFunc = (pkg:Hidden.func)

Oops, GHC complains that the pkg package can’t be found. The PkgName type in Template Haskell is a bit misleading here; GHC expects it to be the full unit ID of a package instead of the package name. What do unit IDs look like?

For packages shipped with GHC, they’re either the package name (e.g. base), or the package name followed by the version number (e.g. Cabal-). However, unit IDs of third-party packages have a unique ABI hash suffix (e.g. aeson-), and the hash suffix differs if a package is built with different build plans. Thanks to this mechanism, most packages can be rebuilt multiple times and coexist in the same package database, a cabal build run will never fail due to version conflicts with existing packages, and the so-called “cabal hell” becomes an ancient memory.

For importHidden to be useful, it needs to support third-party packages, therefore we need to find a way to query the exact unit ID given a package name via Template Haskell. Among the existing Template Haskell APIs, the closest thing to achieving this goal is reifyModule, which, given a module name, returns its import list. So if Hidden appears in the current module’s import list, we can use reifyModule to get Hidden’s metadata, which includes pkg’s unit ID. However, this approach has a significant restriction: it doesn’t work for hidden modules.

Abusing GHC API in Template Haskell

Recall that Template Haskell is usually run by a GHC process, so it’s possible to jailbreak the usual Template Haskell API and access the full GHC state when running a Template Haskell splice. The Q monad is defined as:

newtype Q a = Q { unQ :: forall m. Quasi m => m a }

This encodes a program that uses the Quasi class as its “instruction set”. In GHC, the typechecker monad TcM implements its Quasi instance which drives the actual Template Haskell logic. When running a splice, the type variable m is instantiated to TcM. If we can disguise a TcM a value as a Q a value, then we can access the full GHC session state inside TcM, which grants us access to the complete GHC API:

import DynFlags
import FastString
import Language.Haskell.TH.Syntax
import Module
import Packages
import TcRnMonad
import Unsafe.Coerce

unsafeRunTcM :: TcM a -> Q a
unsafeRunTcM m = unsafeCoerce (\_ -> m)

The implementation of unsafeRunTcM requires a bit of understanding about the dictionary-passing mechanism of type classes in GHC. The definition of Q can be interpreted as:

data QuasiDict m = QuasiDict {
  qNewName :: String -> m Name,
  ...  -- one field per method of the Quasi class
  }

newtype Q a = Q { unQ :: forall m . QuasiDict m -> m a }

A QuasiDict m value is a dictionary which carries the implementation of Quasi methods in the m monad. A Q a value is a function which takes a QuasiDict m dictionary and calls the methods in it to construct a computation of type m a. When we instantiate m to a specific type constructor like TcM, GHC picks the corresponding dictionary and passes it to the function.
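This dictionary-passing reading can be demonstrated safely, without any unsafeCoerce, using a hand-rolled one-method class (toy names, standing in for Quasi and Q):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A one-method stand-in for the Quasi class (toy example).
class Monad m => Supply m where
  fresh :: m Int

-- The analogue of Q: a program polymorphic over any Supply monad.
-- Under the hood this is a function that receives the Supply dictionary.
newtype Prog a = Prog { unProg :: forall m. Supply m => m a }

-- A trivial instance, standing in for the Quasi instance of TcM.
instance Supply IO where
  fresh = pure 42

-- Running a Prog at IO makes GHC pick the Supply IO dictionary,
-- just as running a splice instantiates m to TcM.
runIO :: Prog a -> IO a
runIO (Prog p) = p

example :: Prog Int
example = Prog (fmap (+ 1) fresh)
```

The unsafeRunTcM trick above works by handing such a program a TcM computation directly, bypassing the dictionary that would normally be passed in.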

In our case, we know in advance that the Q a type is just a newtype of the Quasi m => m a computation which will be coerced to run in the TcM monad, therefore we can wrap a TcM a value in a lambda which discards its argument (which will be the Quasi instance dictionary for TcM) and coerce it to Q a. Another way to implement the coercion is:

unsafeRunTcM :: TcM a -> Q a
unsafeRunTcM m = Q (unsafeCoerce m)

The unsafeCoerce application must return a polymorphic value with the Quasi class constraint, and if we simply do unsafeRunTcM = unsafeCoerce, the resulting Q a value has the wrong function arity, which leads to a segmentation fault at runtime.

Now that we can hook into GHC internal workings by running TcM a computations, it’s trivial to query the package state and find a package’s unit ID given its name. The rest of importHidden implementation follows:

qGetDynFlags :: Q DynFlags
qGetDynFlags = unsafeRunTcM getDynFlags

qLookupUnitId :: String -> Q UnitId
qLookupUnitId pkg_name = do
  dflags <- qGetDynFlags
  comp_id <- case lookupPackageName dflags $ PackageName $ fsLit pkg_name of
    Just comp_id -> pure comp_id
    _ -> fail $ "Package not found: " ++ pkg_name
  pure $ DefiniteUnitId $ DefUnitId $ componentIdToInstalledUnitId comp_id

qLookupPkgName :: String -> Q PkgName
qLookupPkgName pkg_name = do
  unit_id <- qLookupUnitId pkg_name
  pure $ PkgName $ unitIdString unit_id

importHidden :: String -> String -> String -> Q Exp
importHidden pkg_name mod_name val_name = do
  pkg_name' <- qLookupPkgName pkg_name
  pure $
    VarE $
      Name
        (OccName val_name)
        (NameG VarName pkg_name' (ModName mod_name))

Summarizing, our summoning ritual consists of:

  • Use unsafeCoerce to enable running a typechecker action in the Template Haskell Q monad.
  • Obtain the DynFlags of the current GHC session and query the package state to find a package’s full unit ID.
  • Construct a Name that refers to the hidden value and create the corresponding Exp.

With these hacks combined, now you can transcend the barriers of modules and packages!

Caveats
Through a bit of knowledge about GHC's internal workings, we practiced some Haskell dark arts and were able to summon hidden values. Before plugging this hack into a real-world codebase, let's discuss the drawbacks of this approach.

If a top-level value isn’t exported, then the GHC inliner may choose to inline it at its call sites, therefore the interface file won’t contain its entry, and the summoning will fail at compile-time.

Given that we expect the splices to be run in the GHC process, it surely won’t work with an external interpreter or cross GHCs. On the other hand, for the particular use case of importHidden, we just need to query a package’s unit ID, so it should be fairly easy to patch GHC to support it when cross compiling: just add a method in the Quasi class, and support one more message variant in the external interpreter.

Running TcM actions in the Q monad is an interesting hack that doesn’t seem to have been used in the wild, and Richard Eisenberg has a nice video that introduces it. However, there’s a more principled way: GHC plugins, since they have full access to the GHC session state and can call arbitrary GHC API anyway.

Should you use importHidden? Most likely not, since patching the desired dependencies is always simpler and more robust. Nevertheless, it’s a fun exercise, and we hope this post serves as a peek into how GHC works under the hood :)

January 07, 2021 12:00 AM

Oleg Grenrus

Benchmarks of discrimination package

Posted on 2021-01-07 by Oleg Grenrus

I originally posted these as a Twitter thread. Its point is to illustrate that constant factors matter, not only whether it is O(n), O(n log n), or O(n^2) (though quadratic is quite bad quite soon).

I have been playing with the discrimination package.

It offers linear-time grouping and sorting. Data.List.nub vs Data.Discrimination.nub chart is fun to look at. (Look at the x-axis: size of input).

Don't use Data.List.nub.

List.nub graph

There is ordNub (in many libraries, e.g. in Cabal). And indeed, it is a good alternative. Data.Discrimination.nub is still faster when n is large enough (around a hundred thousand for the Word64 values I use in these benchmarks).
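For reference, ordNub is typically formulated along these lines; a sketch of the common shape found in several libraries (exact versions differ slightly):

```haskell
import qualified Data.Set as Set

-- O(n log n) duplicate removal keeping the first occurrence of each
-- element, versus O(n^2) for Data.List.nub. This is the usual shape of
-- ordNub (e.g. in Cabal); library versions differ in details.
ordNub :: Ord a => [a] -> [a]
ordNub = go Set.empty
  where
    go _ [] = []
    go seen (x : xs)
      | x `Set.member` seen = go seen xs
      | otherwise           = x : go (Set.insert x seen) xs
```

The accumulated Set is what buys the log-factor lookups, at the cost of the Ord constraint that Data.List.nub avoids.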

ordNub graph

hashNub :: (Eq a, Hashable a) => [a] -> [a]

performs well too:

hashNub graph

All four variants on the same graph. Even for small n, nub is bad. ordNub and Discrimination.nub are about the same. hashNub is fastest.

(I have to find something better to play with than Word64)

four nub variants

UUID is a slightly more interesting type.

data UUID = UUID !Word64 !Word64
  deriving (Eq, Ord, Generic, Show, NFData, Hashable, Grouping, Sorting)

Same pattern: hashNub is the fastest; ordNub becomes slower than Discrimination.nub when there are enough elements.

nub of UUID

I was asked about zooming in for small n.

nub of UUID

Because it is hard to see, we can try a log-log plot. Everything looks linear there, but we can see the crossover points better.

nub of UUID

But what about sorting, you may ask.

It turns out that Data.List.sort is quite good, at least if your lists are less than a million elements long. Comparison with Data.Discrimination.sort (I sort UUIDs, for more fun):


Making a vector from a list, sorting it (using vector-algorithms), and converting back to a list seems to be a good option too (we only copy pointers, and copying a million pointers is ... cheap).

Vector sorts

Something weird happens on GHC 9.0, though: discrimination has the same performance, yet the vector-based sort degrades.

Vector sorts

Yet, @ekmett thinks there is still plenty of opportunity to make discrimination faster. In the meantime, I'll add (a variant of) these benchmarks to the repository.

January 07, 2021 12:00 AM

January 06, 2021

Edward Z. Yang

The PyTorch open source process

PyTorch is a fairly large and active open source project, and sometimes we have people come to us and ask if there are any lessons from how we run PyTorch that they could apply to their own projects. This post is an attempt to describe some of the processes as of 2021 that help PyTorch operate effectively as an open source project. I won't claim that everything we do is necessarily the best way to go about doing things, but at the very least, everything I describe here is working in practice.

Background. Not all open source projects are the same, and there are some peculiarities to PyTorch which may reduce the applicability of some of what I describe below in other contexts. Here are some defining features of PyTorch, as a project:

  • The majority of full time PyTorch developers work at Facebook. To be clear, there are many full time PyTorch developers that work at other companies: NVIDIA, Intel, Quansight, Microsoft, AMD, IBM, Preferred Networks, Google and Amazon all employ people whose job it is to work on PyTorch. But the majority of full timers are at Facebook, distinguishing PyTorch from hobbyist open source projects or projects run by a foundation of some sort.
  • PyTorch is a federation. As coined by Nadia Eghbal, PyTorch is a project with high contributor growth and user growth. In my State of PyTorch (2020) talk, I go into more details, but suffice to say, we have over nine companies contributing to PyTorch, and a long tail of other contributors (making up 40% of all of our commits). This makes managing PyTorch sometimes particularly challenging, and many of the processes I will describe below arose from growing pains scaling this level of activity.
  • PyTorch has a lot of surface area. CPU, CUDA, ROCm, ONNX, XLA, serving, distributions, quantization, etc. It's impossible for a single contributor to be well-versed in every area of the project, and so some of the challenge is just making sure the right people see the things they need to see.

Alright, so how does PyTorch deal with its scale? Here are some of the things we do.

Issue triage. PyTorch receives too many bug reports a day for any one person to keep track of all of them. Largely inspired by this apenwarr post, we set up an oncall rotation amongst Facebook contributors to serve as first-line triage for all of these issues. The golden rule of issue triage is that you DO NOT fix bugs in triage; the goal of triage is to (1) route bugs to the correct people via appropriate GitHub labels, and (2) look for high priority bugs and raise awareness of these bugs. Every week, we have a meeting to review high priority bugs (and other bugs marked for triage review) and talk about them. The oncall itself rotates daily, to discourage people from letting a week's worth of issues pile up in the backlog, and we use a relatively intricate search query to make sure only relevant issues show up for the oncall to handle.

The most important consequence of issue triage is that you can unwatch PyTorch repository as a whole. Instead, by watching various labels (using our cc bot), you can trust that you will get CC'ed to issues related to topics, even if the triager doesn't know that you're interested in the issue! The weekly meeting makes sure that all maintainers collectively have an idea about what major issues are currently affecting PyTorch, and helps socialize what we as a project think of as a "high priority" issue. Finally, the high priority label is a good way to find impactful problems to work on in the project, even if you don't know much else about the project.

Pull request triage. Similarly, we receive a decent number of drive-by pull requests from one-time contributors. Those people are not in a good position to find reviewers for their contributions, so we also have a triager look through these pull requests and make sure someone is assigned to review them. If the PR is particularly simple, the triager might just go ahead and merge it themselves. There's actually some good automation for doing this (e.g., homu) but we've been too lazy to set any of it up, and by-hand reviewer assignment doesn't seem to be too much burden on top of the existing oncall.

Tree hugging oncall. PyTorch has a huge CI system covering many different system configurations, which most contributors rely on to test whether their changes are safe. Sometimes people break master. Separate from the triage oncall, we have a tree hugging oncall whose job it is to revert commits that break master. This oncall mostly involves paying attention to the CI HUD and reverting commits if they result in master breakage in one of the configurations.

Importing to Facebook infrastructure. We actually run Facebook infrastructure directly off of the HEAD branch in PyTorch. The tooling that makes this possible is fbshipit, which mirrors commits between Facebook's internal monorepo and our public GitHub repository. This setup has been something of a double-edged sword for us: requiring Facebook and GitHub to be in sync means that only Facebook employees can actually land pull requests (we try to streamline the process as much as possible for external maintainers, but at the end of the day someone at Facebook has to actually push the green button), but it means we don't have to worry about doing periodic "mega-imports" into Facebook infrastructure (which we have done in the past and were quite difficult to do). We are very interested in fixing this situation and have floated some proposals on changing how we do internal releases to make it possible to let external contributors land PRs directly.

RFCs. Most feature discussion happens on GitHub issues, but sometimes, a feature is too big and complicated to adequately discuss in a GitHub issue. In those cases, they can be discussed in the rfcs repository (inspired by the Rust RFCs process). The formal process on this repository isn't too solidified yet, but generally people go there if they feel that it is too difficult to discuss the issue in GitHub issues. We don't yet have a process for shepherding unsolicited RFCs.

Conclusion. PyTorch's open source process isn't rocket science: there's an oncall, the oncall does some things. The devil is in the details: all of PyTorch's oncall responsibilities are carefully scoped so that your oncall responsibilities aren't something that will take an unbounded amount of time; they're something you can knock out in an hour or two and call it a day. You could make the argument that we rely excessively on oncalls when automation is possible, but what we have found is that oncalls require less infrastructure investment, and integrate well with existing processes and flows at Facebook. They might not be right everywhere, but at least for us they seem to be doing a good job.

by Edward Z. Yang at January 06, 2021 04:56 PM

Derek Elkins


tl;dr The notion of two sets overlapping is very common. Often it is expressed via |A \cap B \neq \varnothing|. Constructively, this is not the best definition as it does not imply |\exists x. x \in A \land x \in B|. Even classically, this second-class treatment of overlapping obscures important and useful connections. In particular, writing |U \between A| for “|U| overlaps |A|”, we have a De Morgan-like duality situation with |\between| being dual to |\subseteq|. Recognizing and exploiting this duality, in part by using more appropriate notation for “overlaps”, can lead to new concepts and connections.


The most common way I’ve seen the statement “|A| overlaps |B|” formalized is |A \cap B \neq \varnothing|. To a constructivist, this definition isn’t very satisfying. In particular, this definition of overlaps does not allow us to constructively conclude that there exists an element contained in both |A| and |B|. That is, |A \cap B \neq \varnothing| does not imply |\exists x. x \in A \land x \in B| constructively.

As is usually the case, even if you are not philosophically a constructivist, taking a constructivist perspective can often lead to better definitions and easier to see connections. In this case, constructivism suggests the more positive statement |\exists x. x \in A \land x \in B| be the definition of “overlaps”. However, given that we now have two (constructively) non-equivalent definitions, it is better to introduce notation to abstract from the particular definition. In many cases, it makes sense to have a primitive notion of “overlaps”. Here I will use the notation |A \between B| which is the most common option I’ve seen.


We can more compactly write the quantifier-based definition as |\exists x \in A.x \in B| using a common set-theoretic abbreviation. This presentation suggests a perhaps surprising connection. If we swap the quantifier, we get |\forall x\in A.x \in B| which is commonly abbreviated |A \subseteq B|. This leads to a duality between |\subseteq| and |\between|, particularly in topological contexts. In particular, if we pick a containing set |X|, then |\neg(U \between A) \iff U \subseteq A^c| where the complement is relative to |X|, and |A| is assumed to be a subset of |X|. This is a De Morgan-like duality.
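To make the duality tangible, here is a small executable sketch (my own illustration, not from the post) that models subsets of a universe as finite lists and checks that |\neg(U \between A) \iff U \subseteq A^c|:

```haskell
-- Finite lists stand in for subsets of a universe; all names here are mine.
overlaps :: Eq a => [a] -> [a] -> Bool
overlaps u a = any (`elem` a) u          -- the positive definition: ∃x∈U. x∈A

subsetOf :: Eq a => [a] -> [a] -> Bool
subsetOf u a = all (`elem` a) u          -- the dual quantifier: ∀x∈U. x∈A

complementIn :: Eq a => [a] -> [a] -> [a]
complementIn universe a = filter (`notElem` a) universe

-- The De Morgan-like duality: ¬(U ≬ A) ⟺ U ⊆ Aᶜ, relative to the universe.
dualityHolds :: Eq a => [a] -> [a] -> [a] -> Bool
dualityHolds universe u a =
    not (overlaps u a) == subsetOf u (complementIn universe a)
```

For instance, with universe [1..5], u = [1,2] and a = [2,3], both sides come out False: u overlaps a, and u is not contained in the complement [1,4,5].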

If we want to characterize these operations via an adjunction, or, more precisely, a Galois connection, we have a slight awkwardness arising from |\subseteq| and |\between| being binary predicates on sets. So, as a first step we’ll identify sets with predicates via, for a set |A|, |\underline A(x) \equiv x \in A|. In terms of predicates, the adjunctions we want are just a special case of the adjunctions characterizing the quantifiers.

\[\underline U(x) \land P \to \underline A(x) \iff P \to U \subseteq A\]

\[U \between B \to Q \iff \underline B(x) \to (\underline U(x) \to Q)\]

What we actually want is a formula of the form |U \between B \to Q \iff B \subseteq (\dots)|. To do this, we need an operation that will allow us to produce a set from a predicate. This is exactly what set comprehension does. For reasons that will become increasingly clear, we’ll assume that |A| and |B| are subsets of a set |X|. We will then consider quantification relative to |X|. The result we get is:

\[\{x \in U \mid P\} \subseteq A \iff \{x \in X \mid x \in U \land P\} \subseteq A \iff P \to U \subseteq A\]

\[U \between B \to Q \iff B \subseteq \{x \in X \mid x \in U \to Q\} \iff B \subseteq \{x \in U \mid \neg Q\}^c\]

The first and last equivalences require additionally assuming |U \subseteq X|. The last equivalence requires classical reasoning. You can already see motivation to limit to subsets of |X| here. First, set complementation, the |(-)^c|, only makes sense relative to some containing set. Next, if we choose |Q \equiv \top|, then the latter formulas state that no matter what |B| is it should be a subset of the expression that follows it. Without constraining to subsets of |X|, this would require a universal set which doesn’t exist in typical set theories.

Choosing |P| as |\top|, |Q| as |\bot|, and |B| as |A^c| leads to the familiar |\neg (U \between A^c) \iff U \subseteq A|, i.e. |U| is a subset of |A| if and only if it doesn’t overlap |A|’s complement.

Incidentally, characterizing |\subseteq| and |\between| in terms of Galois connections, i.e. adjunctions, immediately gives us some properties for free via continuity. We have |U \subseteq \bigcap_{i \in I}A_i \iff \forall i\in I.U \subseteq A_i| and |U \between \bigcup_{i \in I}A_i \iff \exists i \in I.U \between A_i|. This is relative to a containing set |X|, so |\bigcap_{i \in \varnothing}A_i = X|, and |U| and each |A_i| are assumed to be subsets of |X|.

Categorical Perspective

Below I’ll perform a categorical analysis of the situation. I’ll mostly be using categorical notation and perspectives to manipulate normal sets. That said, almost all of what I say will be able to be generalized immediately just by reinterpreting the symbols.

To make things a bit cleaner in the future, and to make it easier to apply these ideas beyond sets, I’ll introduce the concept of a Heyting algebra. A Heyting algebra is a partially ordered set |H| satisfying the following:

  1. |H| has two elements called |\top| and |\bot| satisfying for all |x| in |H|, |\bot \leq x \leq \top|.
  2. We have operations |\land| and |\lor| satisfying, for all |x|, |y|, |z| in |H|, |x \leq y \land z| if and only if |x \leq y| and |x \leq z|, and similarly for |\lor|, |x \lor y \leq z| if and only if |x \leq z| and |y \leq z|.
  3. We have an operation |\to| satisfying for all |x|, |y|, and |z| in |H|, |x \land y \leq z| if and only if |x \leq y \to z|.

For those familiar with category theory, you might recognize this as simply the decategorification of the notion of a bicartesian closed category. We can define the pseudo-complement, |\neg x \equiv x \to \bot|.

Any Boolean algebra is an example of a Heyting algebra where we can define |x \to y| via |\neg x \lor y| where here |\neg| is taken as primitive. In particular, subsets of a given set ordered by inclusion form a Boolean algebra, and thus a Heyting algebra. The |\to| operation can also be characterized by |x \leq y \iff (x \to y) = \top|. This lets us immediately see that for subsets of |X|, |(A \to B) = \{x \in X \mid x \in A \to x \in B\}|. All this can be generalized to the subobjects in any Heyting category.
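As a quick concrete check (my example, not the post's), the two-element Boolean algebra of truth values carries exactly this structure, and the residuation law of item 3 can be verified by enumeration:

```haskell
-- The Heyting operations on Bool, ordered False ≤ True (a sketch of mine).
top, bot :: Bool
top = True
bot = False

(/\), (\/), (==>) :: Bool -> Bool -> Bool
(/\) = (&&)
(\/) = (||)
x ==> y = not x || y                 -- in a Boolean algebra, x → y = ¬x ∨ y

leq :: Bool -> Bool -> Bool
leq x y = (x ==> y) == top           -- x ≤ y ⟺ (x → y) = ⊤

-- The defining adjunction of →: x ∧ y ≤ z ⟺ x ≤ (y → z)
residuationHolds :: Bool -> Bool -> Bool -> Bool
residuationHolds x y z = leq (x /\ y) z == leq x (y ==> z)
```

Note that the pseudo-complement |(\cdot) \to \bot| here is just Boolean negation.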

As the notation suggests, intuitionistic logic (and thus classical logic) is another example of a Heyting algebra.

We’ll write |\mathsf{Sub}(X)| for the partially ordered set of subsets of |X| ordered by inclusion. As mentioned above, this is (classically) a Boolean algebra and thus a Heyting algebra. Any function |f : X \to Y| gives a monotonic function |f^* : \mathsf{Sub}(Y) \to \mathsf{Sub}(X)|. Note the swap. |f^*(U) \equiv f^{-1}(U)|. (Alternatively, if we think of subsets in terms of characteristic functions, |f^*(U) \equiv U \circ f|.) Earlier, we needed a way to turn predicates into sets. In this case, we’ll go the other way and identify truth values with subsets of |1| where |1| stands for an arbitrary singleton set. That is, |\mathsf{Sub}(1)| is the poset of truth values. |1| being the terminal object of |\mathbf{Set}| induces the (unique) function |!_U : U \to 1| for any set |U|. This leads to the important monotonic function |!_U^* : \mathsf{Sub}(1) \to \mathsf{Sub}(U)|. This can be described as |!_U^*(P) = \{x \in U \mid P\}|. Note, |P| cannot contain |x| as a free variable. In particular |!_U^*(\bot) = \varnothing| and |!_U^*(\top) = U|. This monotonic function has left and right adjoints:

\[\exists_U \dashv {!_U^*} \dashv \forall_U : \mathsf{Sub}(U) \to \mathsf{Sub}(1)\]

|F \dashv G| for monotonic functions |F : X \to Y| and |G : Y \to X| means |\forall x \in X. \forall y \in Y.F(x) \leq_Y y \iff x \leq_X G(y)|.

|\exists_U(A) \equiv \exists x \in U. x \in A| and |\forall_U(A) \equiv \forall x \in U. x \in A|. It’s easily verified that each of these functions are monotonic.1

It seems like we should be done. These formulas are the formulas I originally gave for |\between| and |\subseteq| in terms of quantifiers. The problem here is that these functions are only defined for subsets of |U|. This is especially bad for interpreting |U \between A| as |\exists_U(A)| as it excludes most of the interesting cases where |U| partially overlaps |A|. What we need is a way to extend |\exists_U| / |\forall_U| beyond subsets of |U|. That is, we need a suitable monotonic function |\mathsf{Sub}(X) \to \mathsf{Sub}(U)|.

Assume |U \subseteq X| and that we have an inclusion |\iota_U : U \hookrightarrow X|. Then |\iota_U^* : \mathsf{Sub}(X) \to \mathsf{Sub}(U)| and |\iota_U^*(A) = U \cap A|. This will indeed allow us to define |\subseteq| and |\between| as |U \subseteq A \equiv \forall_U(\iota_U^*(A))| and |U \between A \equiv \exists_U(\iota_U^*(A))|. We have:

\[\iota_U[-] \dashv \iota_U^* \dashv U \to \iota_U[-] : \mathsf{Sub}(U) \to \mathsf{Sub}(X)\]

Here, |\iota_U[-]| is the direct image of |\iota_U|. This doesn’t really do anything in this case except witness that if |A \subseteq U| then |A \subseteq X| because |U \subseteq X|.2

We can recover the earlier adjunctions by simply using these two pairs of adjunctions. \[\begin{align} U \between B \to Q & \iff \exists_U(\iota_U^*(B)) \to Q \\ & \iff \iota_U^*(B) \subseteq {!}_U^*(Q) \\ & \iff B \subseteq U \to \iota_U[{!}_U^*(Q)] \\ & \iff B \subseteq \{x \in X \mid x \in U \to Q\} \end{align}\]

Here the |\iota_U[-]| is crucial so that we use the |\to| of |\mathsf{Sub}(X)| and not |\mathsf{Sub}(U)|.

\[\begin{align} P \to U \subseteq A & \iff P \to \forall_U(\iota_U^*(A)) \\ & \iff {!}_U^*(P) \subseteq \iota_U^*(A) \\ & \iff \iota_U[{!}_U^*(P)] \subseteq A \\ & \iff \{x \in X \mid x \in U \land P\} \subseteq A \end{align}\]

In this case, the |\iota_U[-]| is truly doing nothing because |\{x \in X \mid x \in U \land P\}| is the same as |\{x \in U \mid P\}|.

While we have |{!}_U^* \circ \exists_U \dashv {!}_U^* \circ \forall_U|, we see that the inclusion of |\iota_U^*| is what breaks the direct connection between |U \between A| and |U \subseteq A|.


As a first example, write |\mathsf{Int}A| for the interior of |A| and |\bar A| for the closure of |A| each with respect to some topology on a containing set |X|. One way to define |\mathsf{Int}A| is |x \in \mathsf{Int}A| if and only if there exists an open set containing |x| that’s a subset of |A|. Writing |\mathcal O(X)| for the set of open sets, we can express this definition in symbols: \[x \in \mathsf{Int}A \iff \exists U \in \mathcal O(X). x \in U \land U \subseteq A\] We have a “dual” notion: \[x \in \bar A \iff \forall U \in \mathcal O(X). x \in U \to U \between A\] That is, |x| is in the closure of |A| if and only if every open set containing |x| overlaps |A|.
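These two characterizations are easy to spot-check on a tiny finite topology. The following sketch (mine, with finite lists standing in for sets) computes interior and closure directly from the definitions above:

```haskell
-- Finite sets as lists of Ints; a topology is given by its list of open sets.
type Set = [Int]

subsetOf :: Set -> Set -> Bool
subsetOf u a = all (`elem` a) u

overlaps :: Set -> Set -> Bool
overlaps u a = any (`elem` a) u

-- x ∈ Int A  ⟺  ∃ open U. x ∈ U ∧ U ⊆ A
interior :: [Set] -> Set -> Set -> Set
interior opens universe a =
    [ x | x <- universe, any (\u -> x `elem` u && u `subsetOf` a) opens ]

-- x ∈ Ā  ⟺  ∀ open U. x ∈ U → U ≬ A
closure :: [Set] -> Set -> Set -> Set
closure opens universe a =
    [ x | x <- universe, all (\u -> x `notElem` u || u `overlaps` a) opens ]
```

On the Sierpinski space with points {1,2} and opens {}, {1}, {1,2}, the closure of {2} is {2} while the closure of {1} is the whole space.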

As another example, here is a fairly unusual way of characterizing a compact subset |Q|. |Q| is compact if and only if |\{U \in \mathcal O(X) \mid Q \subseteq U\}| is open in |\mathcal O(X)| equipped with the Scott topology3. As before, this suggests a “dual” notion characterized by |\{U \in \mathcal O(X) \mid O \between U\}| being an open subset. A set |O| satisfying this is called overt. This concept is never mentioned in traditional presentations of point-set topology because every subset is overt. However, if we don’t require that arbitrary unions of open sets are open (and only require finite unions to be open) as happens in synthetic topology or if we aren’t working in a classical context then overtness becomes a meaningful concept.

One benefit of the intersection-based definition of overlaps is that it is straightforward to generalize to many sets overlapping, namely |\bigcap_{i\in I} A_i \neq \varnothing|. This is also readily expressible using quantifiers as: |\exists x.\forall i \in I. x \in A_i|. As before, having an explicit “universe” set also clarifies this. So, |\exists x \in X.\forall i \in I. x \in A_i| with |\forall i \in I. A_i \subseteq X| would be better. The connection of |\between| to |\subseteq| suggests instead of this fully symmetric presentation, it may still be worthwhile to single out a set producing |\exists x \in U.\forall i \in I. x \in A_i| where |U \subseteq X|. This can be read as “there is a point in |U| that touches/meets/overlaps every |A_i|”. If desired we could notate this as |U \between \bigcap_{i \in I}A_i|. Negating and complementing the |A_i| leads to the dual notion |\forall x \in U.\exists i \in I.x \in A_i| which is equivalent to |U \subseteq \bigcup_{i \in I}A_i|. This dual notion could be read as “the |A_i| (jointly) cover |U|” which is another common and important concept in mathematics.


Ultimately, the concept of two (or more) sets overlapping comes up quite often. The usual circumlocution, |A \cap B \neq \varnothing|, is both notationally and conceptually clumsy. Treating overlapping as a first-class notion via notation and formulating definitions in terms of it can reveal some common and important patterns.

  1. If one wanted to be super pedantic, I should technically write something like |\{\star \mid \exists x \in U. x \in A\}| where |1 = \{\star\}| because elements of |\mathsf{Sub}(1)| are subsets of |1|. Instead, we’ll conflate subsets of |1| and truth values.↩︎

  2. If we think of subobjects as (equivalence classes of) monomorphisms as is typical in category theory, then because |\iota_U| is itself a monomorphism, the direct image, |\iota_U[-]|, is simply post-composition by |\iota_U|, i.e. |\iota_U \circ {-}|.↩︎

  3. The Scott topology is the natural topology on the space of continuous functions |X \to \Sigma| where |\Sigma| is the Sierpinski space.↩︎

January 06, 2021 03:46 AM

January 05, 2021

Gabriel Gonzalez

The visitor pattern is essentially the same thing as Church encoding


This post explains how the visitor pattern is essentially the same thing as Church encoding (or Böhm-Berarducci encoding). This post also explains how you can usefully employ the visitor pattern / Church encoding / Böhm-Berarducci encoding to expand your programming toolbox.


Church encoding is named after Alonzo Church, who discovered that you could model any type of data structure in the untyped lambda calculus using only functions. The context for this was that he was trying to show that lambda calculus could be treated as a universal computational engine, even though the only features it supported were functions.

Note: Later on, Corrado Böhm and Alessandro Berarducci devised the equivalent solution in a typed lambda calculus (specifically, System F):

… so I’ll use “Church encoding” when talking about this trick in the context of an untyped language and use “Böhm-Berarducci” encoding when talking about the same trick in the context of a typed language. If we’re not talking about any specific language then I’ll use “Church encoding”.

In particular, you can model the following types of data structures using language support for functions and nothing else:

  • records / structs (known as “product types” if you want to get fancy)

    The “product” of two types A and B is a type that stores both an A and a B (e.g. a record with two fields, where the first field has type A and the second has type B)

  • enums / tagged unions (known as “sum types”)

    The “sum” of two types A and B is a type that stores either an A or a B. (e.g. a tagged union where the first tag stores a value of type A and the second tag stores a value of type B)

  • recursive data structures

… and if you can precisely model product types, sum types, and recursion, then you can essentially model any data structure. I’m oversimplifying things, but that’s close enough to true for our purposes.


The reason we care about Church-encoding is because not all programming languages natively support sum types or recursion (although most programming languages support product types in the form of records / structs).

However, most programming languages do support functions, so if we have functions then we can use them as a “backdoor” to introduce support for sum types or recursion into our language. This is the essence of the visitor pattern: using functions to Church-encode sum types or recursion into a language that does not natively support sum types or recursion.

To illustrate this, suppose that we begin from the following Haskell code:

data Shape
    = Circle { x :: Double, y :: Double, r :: Double }
    | Rectangle { x :: Double, y :: Double, w :: Double, h :: Double }

exampleCircle :: Shape
exampleCircle = Circle 2.0 1.4 4.5

exampleRectangle :: Shape
exampleRectangle = Rectangle 1.3 3.1 10.3 7.7

area :: Shape -> Double
area shape = case shape of
    Circle x y r -> pi * r ^ 2
    Rectangle x y w h -> w * h

main :: IO ()
main = do
    print (area exampleCircle)
    print (area exampleRectangle)

… but then we hypothetically disable Haskell’s support for algebraic data types. How would we amend our example to still work in such a restricted subset of the language?

We’d use Böhm-Berarducci encoding (the typed version of Church-encoding), and the solution would look like this:

{-# LANGUAGE RankNTypes #-}

-- | This plays the same role as the old `Shape` type
type Shape = forall shape
    .  (Double -> Double -> Double -> shape)
    -> (Double -> Double -> Double -> Double -> shape)
    -> shape

-- | This plays the same role as the old `Circle` constructor
_Circle :: Double -> Double -> Double -> Shape
_Circle x y r = \_Circle _Rectangle -> _Circle x y r

-- | This plays the same role as the old `Rectangle` constructor
_Rectangle :: Double -> Double -> Double -> Double -> Shape
_Rectangle x y w h = \_Circle _Rectangle -> _Rectangle x y w h

exampleCircle :: Shape
exampleCircle = _Circle 2.0 1.4 4.5

exampleRectangle :: Shape
exampleRectangle = _Rectangle 1.3 3.1 10.3 7.7

area :: Shape -> Double
area shape = shape
    (\x y r -> pi * r ^ 2)
    (\x y w h -> w * h)

main :: IO ()
main = do
    print (area exampleCircle)
    print (area exampleRectangle)

The key is the new representation of the Shape type, which is the type of a higher-order function. In fact, if we squint we might recognize that the Shape type synonym:

type Shape = forall shape
    .  (Double -> Double -> Double -> shape)
    -> (Double -> Double -> Double -> Double -> shape)
    -> shape

… looks an awful lot like a GADT-style definition for the Shape type:


data Shape where
    Circle :: Double -> Double -> Double -> Shape
    Rectangle :: Double -> Double -> Double -> Double -> Shape

This is not a coincidence! Essentially, Böhm-Berarducci encoding models a type as a function that expects each “constructor” as a function argument that has the same type as that constructor. I put “constructor” in quotes since we never actually use a real constructor. Those function arguments are place-holders that will remain abstract until we attempt to “pattern match” on a value of type Shape.

In the area function we “pattern match” on Shape by supplying handlers instead of constructors. To make this explicit, let’s use equational reasoning to see what happens when we evaluate area exampleCircle:

area exampleCircle

-- Substitute the `area` function with its definition
= exampleCircle
      (\x y r -> pi * r ^ 2)
      (\x y w h -> w * h)

-- Substitute `exampleCircle` with its definition
= _Circle 2.0 1.4 4.5
      (\x y r -> pi * r ^ 2)
      (\x y w h -> w * h)

-- Substitute the `_Circle` function with its definition
= (\_Circle _Rectangle -> _Circle 2.0 1.4 4.5)
      (\x y r -> pi * r ^ 2)
      (\x y w h -> w * h)

-- Evaluate the outer-most anonymous function
= (\x y r -> pi * r ^ 2) 2.0 1.4 4.5

-- Evaluate the anonymous function
= pi * 4.5 ^ 2

In other words, Church encoding / Böhm-Berarducci encoding both work by maintaining a fiction that eventually somebody will provide us the “real” constructors right up until we actually need them. Then when we “pattern match” on the value we pull a last-minute bait-and-switch and use each “handler” of the pattern match where the constructor would normally go and everything works out so that we don’t need the constructor after all. Church-encoding is sort of like the functional programming equivalent of “fake it until you make it”.

The same trick works for recursive data structures as well. For example, the way that we Böhm-Berarducci-encode this Haskell data structure:

data Tree = Node Int Tree Tree | Leaf

exampleTree :: Tree
exampleTree = Node 1 (Node 2 Leaf Leaf) (Node 3 Leaf Leaf)

preorder :: Tree -> [Int]
preorder tree = case tree of
    Node value left right -> value : preorder left ++ preorder right
    Leaf -> []

main :: IO ()
main = print (preorder exampleTree)

… is like this:

{-# LANGUAGE RankNTypes #-}

type Tree = forall tree
    .  (Int -> tree -> tree -> tree) -- Node :: Int -> Tree -> Tree -> Tree
    -> tree                          -- Leaf :: Tree
    -> tree

_Node :: Int -> Tree -> Tree -> Tree
_Node value left right =
    \_Node _Leaf -> _Node value (left _Node _Leaf) (right _Node _Leaf)

_Leaf :: Tree
_Leaf = \_Node _Leaf -> _Leaf

exampleTree :: Tree
exampleTree = _Node 1 (_Node 2 _Leaf _Leaf) (_Node 3 _Leaf _Leaf)

preorder :: Tree -> [Int]
preorder tree = tree
    (\value left right -> value : left ++ right) -- Node handler
    []                                           -- Leaf handler

main :: IO ()
main = print (preorder exampleTree)

This time the translation is not quite as mechanical as before, due to the introduction of recursion. In particular, two differences stand out.

First, the way we encode the _Node constructor is not as straightforward as we thought:

_Node :: Int -> Tree -> Tree -> Tree
_Node value left right =
    \_Node _Leaf -> _Node value (left _Node _Leaf) (right _Node _Leaf)

This is because we need to thread the _Node / _Leaf function arguments through to the node’s children.

Second, the way we consume the Tree is also different. Compare the original code:

preorder :: Tree -> [Int]
preorder tree = case tree of
    Node value left right -> value : preorder left ++ preorder right
    Leaf -> []

… to the Böhm-Berarducci-encoded version:

preorder :: Tree -> [Int]
preorder tree = tree
    (\value left right -> value : left ++ right)
    []

The latter version doesn’t require the preorder function to recursively call itself. The preorder function is performing a task that is morally recursive but the preorder function is, strictly speaking, not recursive at all.

In fact, if we look at the Böhm-Berarducci-encoded solution closely we see that we never use recursion anywhere within the code! There are no recursive datatypes and there are also no recursive functions, yet somehow we still managed to encode a recursive data type and recursive functions on that type. This is what I mean when I say that Church encoding / Böhm-Berarducci encoding let you encode recursion in a language that does not natively support recursion. Our code would work just fine in a recursion-free subset of Haskell!
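The same recursion-free trick applies to other recursive types. As a further illustration (my example, not from the post), here are Peano naturals, where a number is encoded as "how to fold itself", and both conversion and addition avoid recursion entirely:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Böhm-Berarducci encoding of Peano naturals.
type Nat = forall n. (n -> n) -> n -> n   -- Succ :: Nat -> Nat, Zero :: Nat

_Zero :: Nat
_Zero = \_succ zero -> zero

_Succ :: Nat -> Nat
_Succ m = \succ zero -> succ (m succ zero)

-- No recursion anywhere, yet we can convert to Int and add:
toInt :: Nat -> Int
toInt m = m (+ 1) 0

add :: Nat -> Nat -> Nat
add m n = \succ zero -> m succ (n succ zero)
```

As with Tree, the "pattern matches" here (toInt, add) never call themselves; the encoded value itself drives the iteration.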

For example, Dhall is a real example of a language that does not natively support recursion and Dhall uses this same trick to model recursive data types and recursive functions:

That post goes into more detail about the algorithm for Böhm-Berarducci-encoding Haskell types, so you might find that post useful if the above examples were not sufficiently intuitive or clear.

Visitor pattern

The visitor pattern is a special case of Church encoding / Böhm Berarducci encoding. I’m not going to provide a standalone explanation of the visitor pattern since the linked Wikipedia page already does that. This section will focus on explaining the correspondence between Church encoding / Böhm-Berarducci encoding and the visitor pattern.

The exact correspondence goes like this. Given:

  • a Church-encoded / Böhm-Berarducci-encoded type T

    e.g. Shape in the first example

  • … with constructors C₀, C₁, C₂, …

    e.g. Circle, Rectangle

  • … and values of type T named v₀, v₁, v₂, …

    e.g. exampleCircle, exampleRectangle

… then the correspondence (using terminology from the Wikipedia article) is:

  • The “element” class corresponds to the type T

    e.g. Shape

  • A “concrete element” (i.e. an object of the “element” class) corresponds to a constructor for the type T

    e.g. Circle, Rectangle

    The accept method of the element selects which handler from the visitor to use, in the same way that our Church-encoded constructors would select one handler (named after the matching constructor) out of all the handler functions supplied to them.

    _Circle :: Double -> Double -> Double -> Shape
    _Circle x y r = \_Circle _Rectangle -> _Circle x y r

    _Rectangle :: Double -> Double -> Double -> Double -> Shape
    _Rectangle x y w h = \_Circle _Rectangle -> _Rectangle x y w h
  • A “visitor” class corresponds to the type of a function that pattern matches on a value of type T

    Specifically, a “visitor” class is equivalent to the following Haskell type:

    T -> IO ()

    This is more restrictive than Böhm-Berarducci encoding, which permits pattern matches that return any type of value, like our area function, which returns a Double. In other words, Böhm-Berarducci encoding is not limited to just performing side effects when “visiting” constructors.

    (Edit: Travis Brown notes that the visitor pattern is not restricted to performing side effects. This might be an idiosyncrasy of how Wikipedia presents the design pattern.)

  • A “concrete visitor” (i.e. an object of the “visitor” class) corresponds to a function that “pattern matches” on a value of type T

    e.g. area

    … where each overloaded visit method of the visitor corresponds to a branch of our Church-encoded “pattern match”:

    area :: Shape -> Double
    area shape = shape
        (\x y r -> pi * r ^ 2)
        (\x y w h -> w * h)
  • The “client” corresponds to a value of type T

    e.g. exampleCircle, exampleRectangle:

    exampleCircle :: Shape
    exampleCircle = _Circle 2.0 1.4 4.5

    exampleRectangle :: Shape
    exampleRectangle = _Rectangle 1.3 3.1 10.3 7.7

    The Wikipedia explanation of the visitor pattern adds the wrinkle that the client can represent more than one such value. In my opinion, what the visitor pattern should say is that the client can be a recursive value which may have self-similar children (like our example Tree). This small change would improve the correspondence between the visitor pattern and Church-encoding.
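One way to make the correspondence concrete in Haskell terms (my restatement, not from the Wikipedia article) is to bundle the handlers into a record that plays the role of the visitor class; accept then just hands the record's fields to the Church-encoded value:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The Böhm-Berarducci Shape from earlier in the post:
type Shape = forall shape
    .  (Double -> Double -> Double -> shape)
    -> (Double -> Double -> Double -> Double -> shape)
    -> shape

_Circle :: Double -> Double -> Double -> Shape
_Circle x y r = \circle _rectangle -> circle x y r

_Rectangle :: Double -> Double -> Double -> Double -> Shape
_Rectangle x y w h = \_circle rectangle -> rectangle x y w h

-- A "visitor" is just a record of handlers, one per constructor:
data ShapeVisitor r = ShapeVisitor
    { visitCircle    :: Double -> Double -> Double -> r
    , visitRectangle :: Double -> Double -> Double -> Double -> r
    }

-- "accept" feeds the handlers to the encoded value:
accept :: Shape -> ShapeVisitor r -> r
accept shape visitor = shape (visitCircle visitor) (visitRectangle visitor)

areaVisitor :: ShapeVisitor Double
areaVisitor = ShapeVisitor
    { visitCircle    = \_x _y r -> pi * r ^ 2
    , visitRectangle = \_x _y w h -> w * h
    }
```

With this reading, the earlier area function is accept applied to areaVisitor, and adding a new "visitor" never requires touching the element types.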

Limitations of Böhm-Berarducci encoding

Church encoding works in any untyped language, but Böhm-Berarducci encoding does not work in all typed languages!

Specifically, Böhm-Berarducci only works in general for languages that support polymorphic types (a.k.a. generic programming). This is because the type of a Böhm-Berarducci-encoded value is a polymorphic type:

type Shape = forall shape
    .  (Double -> Double -> Double -> shape)
    -> (Double -> Double -> Double -> Double -> shape)
    -> shape

… but such a type cannot be represented in a language that lacks polymorphism. So what the visitor pattern commonly does to work around this limitation is to pick a specific result type, and since there isn’t a one-size-fits-all type, they’ll usually make the result a side effect, as if we had specialized the universally quantified type to IO ():

type Shape =
       (Double -> Double -> Double -> IO ())
    -> (Double -> Double -> Double -> Double -> IO ())
    -> IO ()

This is why Go has great difficulty modeling sum types accurately, because Go does not support polymorphism (“generics”) and therefore Böhm-Berarducci encoding does not work in general for introducing sum types in Go. This is also why people with programming language theory backgrounds make a bigger deal out of Go’s lack of generics than Go’s lack of sum types, because if Go had generics then people could work around the lack of sum types using a Böhm-Berarducci encoding.


Hopefully this gives you a better idea of what Church encoding and Böhm-Berarducci encoding are and how they relate to the visitor pattern.

In my opinion, Böhm-Berarducci encoding is a bigger deal in statically-typed languages because it provides a way to introduce sum types and recursion into a language in a type-safe way that makes invalid states unrepresentable. Conversely, Church encoding is not as big of a deal in dynamically-typed languages because a Church-encoded type is still vulnerable to runtime exceptions.

by Gabriel Gonzalez ( at January 05, 2021 04:05 PM

January 04, 2021


A First Look at Info Table Profiling

In this post, we are going to use a brand-new (at the time of writing) and still somewhat experimental profiling method in GHC to show how to identify a memory leak and the code causing it. This new profiling method, implemented by Matthew, allows us to map heap closures to source locations. A key feature of this new profiling mode is that it does not require a profiled build (i.e. building with -prof). That’s desirable because non-profiled builds use less memory and tend to be faster, primarily by avoiding bookkeeping that prevents optimization opportunities.

A big thank you to Hasura for partnering with us and making the work presented here possible.

Let’s jump right in and try to analyze a memory leak. While you don’t need to read it to follow this blog post, we’ll be using this code: LargeThunk.hs. The rest of the blog post will show how to identify the problems in that code without having to read it in advance.

Status Quo

Let’s build/run LargeThunk.hs using a standard, -hT, memory profile that’s already available in GHC. We’ll render it with eventlog2html. Note that this is NOT a profiled (-prof) build of the LargeThunk executable:

$ ghc -eventlog -rtsopts -O2 LargeThunk
$ ./LargeThunk 100000 100000 30000000 +RTS -l -hT -i0.5 -RTS
$ eventlog2html LargeThunk.eventlog

The -eventlog -rtsopts options allow us to use the -l run time system option, which generates the LargeThunk.eventlog file. -O2 turns on optimizations. The -i0.5 RTS option forces a heap profile every 0.5 seconds and -hT indicates that the heap profile should be broken down by closure type. The 100000 100000 30000000 are arguments to the LargeThunk program that result in a few gigabytes of memory usage. Finally, eventlog2html renders the eventlog into a convenient html file: LargeThunk.eventlog.html.

You can see that there is a large build-up of THUNKs (unevaluated closures) starting from 11 seconds till about 54 seconds. This is despite using various !s and NFData/force in an attempt to avoid such THUNKs. We’ve identified a possible memory leak, but we don’t know what code is causing it. In larger applications, identifying the offending code is difficult and a profile like this would leave us scratching our heads.

Our new profiling method

Our new profiling method is a combination of 3 new flags: the -hi runtime system flag and the -finfo-table-map and -fdistinct-constructor-tables compile time flags.

These options are all related to info tables. The heap is made up of “closures” which include evaluated Haskell values as well as unevaluated thunks. Each closure on the heap contains a pointer to an info table which is some data about the closure’s type and memory layout. Many closures may refer to the same info table, for example thunks created at the same source location. Evaluated closures of the same type and the same data constructor will also point to the same info table, though we’ll see this will change with -fdistinct-constructor-tables.

When the runtime system performs a memory profile (enabled by one of various options), it occasionally scans all live closures on the heap. The size of each closure is measured and closures are broken down (i.e. placed into groups) according to the profiling option used. For example, in the previous section, the -hT option breaks down the profile by each closure’s “closure type”. The new -hi option simply breaks down the heap profile by each closure’s info table pointer. It will soon be apparent why that is useful.

When compiled with the -finfo-table-map option, GHC will build a mapping from info table pointers to source locations and some extra type information. We call each entry of this mapping an “info table provenance entry” or IPE for short. This map will be baked into resulting binaries. This, of course, increases binary size. Given a closure’s info table pointer (e.g. taken from the -hi heap profile), we can use the mapping to get the source location at which the closure was created. The extra type information means we don’t just know that thunks are thunks, but we also know the type once that thunk is evaluated e.g. Double or String. This source location and type information is invaluable for debugging and the key advantage of using this new profiling method.

With -hi and -finfo-table-map we’ll get useful source locations for thunks but not for evaluated closures. Hence the -fdistinct-constructor-tables compile-time option, which creates a distinct info table per usage of a data constructor. This further increases binary size, but results in useful source locations for evaluated closures, which is crucial for debugging.

Trying it out

Now let’s try these new build options with the LargeThunk.hs example:

$ ghc -eventlog -rtsopts -O2 -finfo-table-map -fdistinct-constructor-tables LargeThunk
$ ./LargeThunk 100000 100000 30000000 +RTS -l -hi -i0.5 -RTS
$ eventlog2html LargeThunk.eventlog

This generates: LargeThunk.eventlog.html.

Unsurprisingly, the graph looks similar to our -hT profile, but our modified version of eventlog2html generated a “Detailed” tab in LargeThunk.eventlog.html. This tab lets us see the profile as an interactive table:

We hope to find interesting things near the top of this table, as it is sorted by integrated size, i.e. the integral of residency with respect to time. The first row corresponds to closures with info table address 0x70b718. From the Description and CTy columns we can tell that these closures are the : list constructor. Specifically, those used in the GHC.List module at the source location libraries/base/GHC/List.hs:836:25, as listed in the Module and Loc columns. We can also see from the sparkline in the leftmost column that the residency of these closures remains mostly constant throughout the profile. This is not particularly suspect, and with a little domain knowledge we know that the program builds a large list on startup, which could explain this row of the profile.

The next 3 rows are far more interesting. The sparklines show that the residency of these 3 closures is closely correlated. The closure types indicate that they are THUNKs and the Type column indicates that they will be Doubles once evaluated. Combined, these make up about 3 GiB of memory. Let’s have a look at the corresponding code in LargeThunk.hs. The Loc column points to weightedScore' and weight' on line 149 as well as the full definition of userTrust on line 154:

{- 142 -}            go :: Map MovieID (Double, Double) -> Rating -> Map MovieID (Double, Double)
{- 143 -}            go weights (Rating userID movieID score) = M.alter
{- 144 -}                (\case
{- 145 -}                    Nothing -> Just (userTrust * score, userTrust)
{- 146 -}                    Just (weightedScore, weight) -> let
{- 147 -}                        weightedScore' = weightedScore + (userTrust * score)
{- 148 -}                        weight'        = weight        + userTrust
{- 149 -}                        in Just (weightedScore', weight')
{- 150 -}                )
{- 151 -}                movieID
{- 152 -}                weights
{- 153 -}                where
{- 154 -}                userTrust = trust ((users movieDB) M.! userID)

We can now see that the THUNK closures are coming from 3 lazy bindings: weightedScore', weight', and userTrust. By looking at the usage of go we can see that we’re folding over a large list of ratings:

foldl' go M.empty (ratings movieDB)

Each call to go creates new THUNKs that keep alive any previous such THUNKs. This leads to a long chain of THUNKs that causes the memory leak. Note that even though foldl' is strict in the accumulator and the accumulator is a strict map, the elements of the map are only evaluated to weak head normal form. In this case, the elements are tuples and their contents are lazy. By adding !s, we make the 3 bindings strict, avoid the build-up of THUNKs, and hence avoid the memory leak:

{- 142 -}            go :: Map MovieID (Double, Double) -> Rating -> Map MovieID (Double, Double)
{- 143 -}            go weights (Rating userID movieID score) = M.alter
{- 144 -}                (\case
{- 145 -}                    Nothing -> Just (userTrust * score, userTrust)
{- 146 -}                    Just (weightedScore, weight) -> let
{- 147 -}                        !weightedScore' = weightedScore + (userTrust * score)
{- 148 -}                        !weight'        = weight        + userTrust
{- 149 -}                        in Just (weightedScore', weight')
{- 150 -}                )
{- 151 -}                movieID
{- 152 -}                weights
{- 153 -}                where
{- 154 -}                !userTrust = trust ((users movieDB) M.! userID)

If we run the same profile as before we see that the memory leak has disappeared! We’ve saved about 3 GiB of heap residency and shaved a good 30 seconds off of the run time: LargeThunk.eventlog.html.
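The weak-head-normal-form pitfall behind this leak can be reproduced in miniature, independent of LargeThunk.hs. This is my own sketch, not code from the post:

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- foldl' forces the accumulator only to weak head normal form:
-- the outermost (,) constructor is evaluated, but its fields stay
-- as thunks, so this builds two long chains of (+) thunks.
leakySum :: [Int] -> (Int, Int)
leakySum = foldl' (\(a, b) x -> (a + x, b + 1)) (0, 0)

-- Bang patterns force the fields at every step, so no chain builds up.
strictSum :: [Int] -> (Int, Int)
strictSum = foldl' (\(!a, !b) x -> (a + x, b + 1)) (0, 0)
```

Both functions compute the same result; only `strictSum` does so in constant space, which is exactly the difference the !s made in `go` above.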


We used our new profiling method, -hi, to directly identify source locations related to a memory leak due to laziness. We managed to do this without a profiled build. This is a step in the direction toward debugging production builds which are rarely built with profiling enabled.

A few of us have already been using the new profiling mode to help analyse memory problems:

  • We (Matthew and David) have used this method to identify a small memory leak in GHC. We both quickly and independently identified the same memory leak. This was based off of a profile used when compiling Cabal. See #18925 and !4429. While the magnitude of the leak was relatively small, this anecdote goes to show that the new profiling method allows developers to quickly pinpoint memory issues even in a large code base like GHC’s.

  • Matthew tried using the profiling mode to analyse a memory issue in pandoc and quickly found a problem where the XML parsing library retained all the input tokens due to excessive laziness.

  • Zubin Duggal debugged a memory leak in his XMonad config. The profiling mode allowed him to very quickly identify where in his source program the leak was happening. This one wasn’t caused by laziness but a list continually growing during the execution of the program.

At the time of writing, this work is not yet merged into GHC, but is under review in MR !3469. For those eager to try it out, we’d recommend building this backport to GHC 8.10.2. See these instructions as you’ll likely want to build core libraries with -finfo-table-map -fdistinct-constructor-tables and use an eventlog2html build with IPE support. Please try it out, and let us know if you have any success using the profiling mode for your own applications.

Binary size

Increased binary size was mentioned, but how big an effect is it? Taking cabal-install as an example, a clean build without IPE yields a 44.4 MiB binary. Building with -finfo-table-map -fdistinct-constructor-tables for cabal-install and all its dependencies including base yields a 318.8 MiB binary. That’s a significant increase and a cost to consider when using this new profiling method. This could be mitigated by building IPE only for select code such as the Cabal library and cabal-install binary. This yields a 135 MiB binary, but of course will only have source location information for a smaller subset of closures: only those with a source location in Cabal/cabal-install.

Comparison to DWARF

GHC already supports DWARF debugging output. The primary difference between IPE-based profiling and DWARF-based profiling is that IPE relates data (i.e. closures) to source locations while DWARF relates machine code to source locations. In practice, DWARF allows time profiling with tools such as perf (see Ben’s post on DWARF support in GHC). That can tell us what source code is responsible for the runtime of our program. IPE, on the other hand, allows memory profiling, which can tell us what source code is responsible for the allocation of data. An advantage that both IPE and DWARF share is that they can be used with minimal time and memory performance overheads. In both cases, this is at the cost of larger binary size.

by matthew, davide at January 04, 2021 12:00 AM

Oleg Grenrus

Coindexed optics

Posted on 2021-01-04 by Oleg Grenrus lens, optics

The term coindexed optics is sometimes brought up. But what are they? One interpretation is optics with error reporting, i.e. ones which can tell why e.g. a Prism didn’t match1. Over time I have come to dislike that interpretation. It doesn’t feel right.

Recently I ran into the documentation of witherable. There is Wither, which is like a lens, but not quite. I think that is closer to what coindexed optics could be. (However, there are plenty of arrows to flip, and you may flip others.)

This blog post is a literate Haskell file, so we start with language extensions

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveTraversable #-}
{-# LANGUAGE EmptyCase #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE TypeOperators #-}

The import list is short; Data.SOP is the only module (from sop-core) that is not in the base library.

import Control.Applicative (liftA2)
import Data.Kind           (Type)
import Data.Bifunctor      (Bifunctor (first))
import Data.Char           (toUpper)
import Data.Coerce         (coerce)
import Data.SOP            (NP (..), NS (..), I (..))

import qualified Data.Maybe as L (mapMaybe)

I will use a variant of profunctor encoding of optics. The plan is to show

  • ordinary, unindexed optics;

  • then indexed optics;

  • and finally coindexed optics.

Ordinary, unindexed optics

The profunctor encoding of optics is relatively simple. However, instead of ordinary profunctors, we will use a variant with an additional type-level list argument. This is similar to indexed-profunctors, though there the type list is curried. Currying works well for indexed optics, but complicates the coindexed story.

type Optic p (is :: [Type]) (js :: [Type]) s t a b = p is a b -> p js s t

To not make this post unnecessarily long, we will use only a subset of profunctor hierarchy: Profunctor and Mapping for now. Profunctor is used to encode isomorphisms (isos), and Mapping is used to encode setters.

class Profunctor (p :: [Type] -> Type -> Type -> Type) where
    dimap :: (a -> b) -> (c -> d) -> p is b c -> p is a d

class Profunctor p => Mapping p where
    roam :: ((a -> b) -> s -> t) -> p is a b -> p is s t

A go-to example of a setter is mapped. With mapped we can set, or map over, elements in a Functor.

mapped :: (Mapping p, Functor f)
       => Optic p is is (f a) (f b) a b
mapped = roam fmap

To implement the over operation we need a concrete profunctor. If we used ordinary profunctors, the function arrow would do. In this setup we need a newtype to adjust the kind:

newtype FunArrow (is :: [Type]) a b =
    FunArrow { runFunArrow :: a  -> b }

Instance implementations are straightforward:

instance Profunctor FunArrow where
    dimap f g (FunArrow k) = FunArrow (g . k . f)

instance Mapping FunArrow where
    roam f (FunArrow k) = FunArrow (f k)

Using FunArrow we can implement over. over uses a setter to map over focused elements in a bigger structure. Note that we allow arbitrary index type-lists; we (or rather FunArrow) simply ignore them.

over :: Optic FunArrow is js s t a b
     -> (a -> b)
     -> (s -> t)
over o f = runFunArrow (o (FunArrow f))

Some examples, to show that what we have written so far works. over mapped is a complicated way to say fmap:

example01 :: String
example01 = over mapped toUpper "foobar"

Optics compose. over (mapped . mapped) maps over two composed functors:

example02 :: [String]
example02 = over (mapped . mapped) toUpper ["foobar", "xyzzy"]

This was a brief refresher of profunctor optics. It is "standard", except that we added an additional type-level list argument to the profunctors.

We will next use that type-level list to implement indexed optics.

Indexed optics

Indexed optics let us set, map, traverse etc. using an additional index. The operation we want to generalize with optics is provided by few classes, like FunctorWithIndex:

class Functor f => FunctorWithIndex i f | f -> i where
    imap :: (i -> a -> b) -> f a -> f b

The imap operation is sometimes called mapWithKey (for example in the containers library).

Ordinary lists are indexed with integers. Map k v is indexed with k. We will use lists in the examples, so let us define an instance:

instance FunctorWithIndex Int [] where
    imap f = zipWith f [0..]

Next, we need to make that available in the optics framework. New functionality means a new profunctor type class. Note how the indexed combinator conses the index onto the list.

class Mapping p => IMapping p where
    iroam :: ((i -> a -> b) -> s -> t) -> p (i ': is) a b -> p is s t

Using iroam and imap we can define imapped, which is an example of an indexed setter.

imapped :: (FunctorWithIndex i f, IMapping p)
        => p (i ': is) a b -> p is (f a) (f b)
imapped = iroam imap

Here, we should note that FunArrow can be given an instance of IMapping. We simply ignore the index argument.

instance IMapping FunArrow where
    iroam f (FunArrow k) = FunArrow (f (\_ -> k))

That allows us to use imapped instead of mapped. (Both the optics and lens libraries have tricks to make this efficient.)

example03 :: [String]
example03 = over (mapped . imapped) toUpper ["foobar", "xyzzy"]

To actually use indices, we need a new concrete profunctor. IxFunArrow takes a heterogeneous list, NP I (an n-ary product), of indices in addition to the element as the argument of the arrow.

newtype IxFunArrow is a b =
    IxFunArrow { runIxFunArrow :: (NP I is, a) -> b }

The IxFunArrow instances are similar to the FunArrow ones; they just involve a bit of additional plumbing.

instance Profunctor IxFunArrow where
    dimap f g (IxFunArrow k) = IxFunArrow (g . k . fmap f)

instance Mapping IxFunArrow where
    roam f (IxFunArrow k) = IxFunArrow (\(is, s) -> f (\a -> k (is, a)) s)

The IMapping instance is the most interesting. As the argument provides an additional index i, it is consed onto the list of existing indices.

instance IMapping IxFunArrow where
    iroam f (IxFunArrow k) = IxFunArrow $
        \(is, s) -> f (\i a -> k (I i :* is, a)) s

As I already mentioned, indexed-profunctors uses a curried variant, so the index list is implicitly encoded in the curried form i1 -> i2 -> .... That is clever, but hides the point.

Next, the indexed over. The general variant takes an optic with any indices list.

gen_iover :: Optic IxFunArrow is '[] s t a b
          -> ((NP I is, a) -> b)
          -> s -> t
gen_iover o f s = runIxFunArrow (o (IxFunArrow f)) (Nil, s)

Usually we use the single-index variant, iover:

iover :: Optic IxFunArrow '[i] '[] s t a b
      -> (i -> a -> b)
      -> s -> t
iover o f = gen_iover o (\(I i :* Nil, a) -> f i a)

We can also define the double-index variant, iover2 (and so on).

iover2 :: Optic IxFunArrow '[i,j] '[] s t a b
       -> (i -> j -> a -> b)
       -> (s -> t)
iover2 o f = gen_iover o (\(I i :* I j :* Nil, a) -> f i j a)

Let’s see what we can do with indexed setters. For example, we can uppercase every odd-indexed character in a string:

-- "fOoBaR"
example04 :: String
example04 = iover imapped (\i a -> if odd i then toUpper a else a) "foobar"

In the nested case, we have access to all the indices:

-- ["fOoBaR","XyZzY","uNoRdErEd-cOnTaInErS"]
example05 :: [String]
example05 = iover2
    (imapped . imapped)
    (\i j a -> if odd (i + j) then toUpper a else a)
    ["foobar", "xyzzy", "unordered-containers"]

We don’t need to index at each step; e.g. we can index only at the top level:

-- ["foobar","XYZZY","unordered-containers"]
example06 :: [String]
example06 = iover
    (imapped . mapped)
    (\i a -> if odd i then toUpper a else a)
    ["foobar", "xyzzy", "unordered-containers"]

Indexed optics are occasionally very useful. We can provide extra information in the indices which would otherwise not fit into the optical framework.


Coindexed optics

The indexed optics from the previous sections can be flipped to become coindexed ones. As I mentioned in the introduction, I got the idea from looking at the witherable package.

witherable provides (among many things) a useful type-class, in a simplified form:

class Functor f => Filterable f where
    mapMaybe :: (a -> Maybe b) -> f a -> f b

It is, however, too simple. (Hah!) The connection to indexed optics is easier to see using an Either variant:

class Functor f => FunctorWithCoindex j f | f -> j where
    jmap :: (a -> Either j b) -> f a -> f b

We’ll also need a Traversable variant (Witherable in witherable):

class (Traversable f, FunctorWithCoindex j f)
    => TraversableWithCoindex j f | f -> j where
    jtraverse :: Applicative m => (a -> m (Either j b)) -> f a -> m (f b)

Instances for lists are not complicated. The coindex of a list is the unit ().

instance FunctorWithCoindex () [] where
    jmap f = L.mapMaybe (either (const Nothing) Just . f)

instance TraversableWithCoindex () [] where
    jtraverse _ [] = pure []
    jtraverse f (x:xs) = liftA2 g (f x) (jtraverse f xs) where
        g (Left ()) ys = ys
        g (Right y) ys = y : ys

With a "boring" coindex, like the unit, we can recover mapMaybe:

mapMaybe' :: FunctorWithCoindex () f => (a -> Maybe b) -> f a -> f b
mapMaybe' f = jmap (maybe (Left ()) Right . f)
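As a standalone sanity check that the unit coindex really does recover mapMaybe, here is a self-contained sketch; it inlines the class and instance from above so that it compiles on its own:

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FunctionalDependencies #-}
import qualified Data.Maybe as L (mapMaybe)

class Functor f => FunctorWithCoindex j f | f -> j where
    jmap :: (a -> Either j b) -> f a -> f b

instance FunctorWithCoindex () [] where
    jmap f = L.mapMaybe (either (const Nothing) Just . f)

-- The unit coindex recovers mapMaybe:
mapMaybe' :: FunctorWithCoindex () f => (a -> Maybe b) -> f a -> f b
mapMaybe' f = jmap (maybe (Left ()) Right . f)

-- keepEvens [1..6]  ==  [2,4,6]
keepEvens :: [Int] -> [Int]
keepEvens = mapMaybe' (\x -> if even x then Just x else Nothing)
```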

With TraversableWithCoindex class, doing the same tricks as previously with indexed optics, we get coindexed optics. Easy.

I didn’t manage to get JMapping (a coindexed mapping) to work, so I’ll use JTraversing. We abuse the index list for coindices.

class Profunctor p => Traversing p where
    wander :: (forall f. Applicative f => (a -> f b) -> s -> f t)
           -> p js a b -> p js s t

class Traversing p => JTraversing p where
    jwander
        :: (forall f. Applicative f => (a -> f (Either j b)) -> s -> f t)
        -> p (j : js) a b -> p js s t

Using JTraversing we can define our first coindexed optic.

traversed :: (Traversable f, Traversing p) => p js a b -> p js (f a) (f b)
traversed = wander traverse

jtraversed :: (TraversableWithCoindex j f, JTraversing p)
           => p (j : js) a b -> p js (f a) (f b)
jtraversed = jwander jtraverse

To make use of it we once again need a concrete profunctor.

newtype CoixFunArrow js a b = CoixFunArrow
    { runCoixFunArrow :: a -> Either (NS I js) b }

instance Profunctor CoixFunArrow where
    dimap f g (CoixFunArrow p) = CoixFunArrow (fmap g . p . f)

instance Traversing CoixFunArrow where
    wander f (CoixFunArrow p) = CoixFunArrow $ f p

instance JTraversing CoixFunArrow where
    jwander f (CoixFunArrow p) = CoixFunArrow $ f (plumb . p) where
        plumb :: Either (NS I (j : js)) b -> Either (NS I js) (Either j b)
        plumb (Right x)        = Right (Right x)
        plumb (Left (Z (I y))) = Right (Left y)
        plumb (Left (S z))     = Left z

Interestingly, the Traversing CoixFunArrow instance looks like the Mapping FunArrow one, and it seems to be impossible to write a Mapping IxFunArrow instance.

Anyway, next we define a coindexed over, which I unimaginatively call jover. Like in the previous section, I start with a generic version first.

gen_jover
    :: Optic CoixFunArrow is '[] s t a b
    -> (a -> Either (NS I is) b) -> s -> t
gen_jover o f s = either nsAbsurd id
                $ runCoixFunArrow (o (CoixFunArrow f)) s

jover
    :: Optic CoixFunArrow '[i] '[] s t a b
    -> (a -> Either i b) -> s -> t
jover o f = gen_jover o (first (Z . I) . f)

jover2
    :: Optic CoixFunArrow '[i,j] '[] s t a b
    -> (a -> Either (Either i j) b)
    -> s -> t
jover2 o f = gen_jover o (first plumb . f) where
    plumb (Left i)  = Z (I i)
    plumb (Right j) = S (Z (I j))

And now the most fun: the coindexed optics examples. First we can recover the mapMaybe behavior:

-- ["foobar"]
example07 :: [String]
example07 = jover jtraversed
    (\s -> if length s > 5 then Right s else Left ())
    ["foobar", "xyzzy"]

And because we have separate coindices in the type-level list, we can filter at different levels of the structure! If we find the character 'y', we skip the whole word; otherwise we skip all vowels.

-- ["fbr","nrdrd-cntnrs"]
example08 :: [String]
example08 = jover2 (jtraversed . jtraversed)
    ["foobar", "xyzzy", "unordered-containers"]
    predicate 'y'           = Left (Right ())  -- skip word
    predicate c | isVowel c = Left (Left ())   -- skip character
    predicate c             = Right c

isVowel :: Char -> Bool
isVowel c = elem c ['a','o','u','i','e']

Note, the coindex doesn’t need to mean filtering. For example, consider the following type:

newtype JList j a = JList { unJList :: [Either j a] }
  deriving (Functor, Foldable, Traversable, Show)

It’s not Filterable, but we can write a FunctorWithCoindex instance for it:

instance FunctorWithCoindex j (JList j) where
    jmap f (JList xs) = JList (map (>>= f) xs)

instance TraversableWithCoindex j (JList j) where
    jtraverse f (JList xs) = fmap JList (go xs) where
        go []             = pure []
        go (Left j : ys)  = fmap (Left j :) (go ys)
        go (Right x : ys) = liftA2 (:) (f x) (go ys)

Using JList we can do different things. In this example, we return why elements didn’t match, but that information is embedded inside the structure itself. We "filter" long strings:

jlist :: [a] -> JList j a
jlist = JList . map Right

ex_jlist_a :: JList Int String
ex_jlist_a = jlist ["foobar", "xyzzy", "unordered-containers"]

-- JList {unJList = [Left 6,Right "xyzzy",Left 20]}
example09 :: JList Int String
example09 = jover jtraversed
    (\s -> let l = length s in if l > 5 then Left l else Right s)
    ex_jlist_a
Similarly, we can filter, or rather "change the structure", at different levels, and these levels can have different coindices:

ex_jlist_b :: JList Int (JList Bool Char)
ex_jlist_b = fmap jlist ex_jlist_a

example88b :: JList Int (JList Bool Char)
example88b = jover2 (jtraversed . jtraversed) predicate ex_jlist_b
  where
    predicate 'x'           = Left (Right 0)
    predicate 'y'           = Left (Right 1)
    predicate 'z'           = Left (Right 2)
    predicate c | isVowel c = Left (Left (c == 'o'))
    predicate c             = Right c

-- [ Right [Right 'f',Left True,Left True,Right 'b',Left False,Right 'r']
-- , Left 0
-- , Right [Left False,Right 'n',Left True,Right 'r',Right 'd',Left False, ...
example88b' :: [Either Int [Either Bool Char]]
example88b' = coerce example88b

The "xyzzy" is filtered away immediately; we see Left 0 as the reason. We can also see how the vowels are filtered, with the 'o's marked specifically by Left True.

Having coindices reside inside the structure makes composition just work. That is what makes this different from "error reporting optics". And using the coindices approach we can compose filters; the Wither from witherable doesn’t seem to compose with itself.


An obvious follow-up question is whether we can have both indices and coindices. Why not? The concrete profunctor would look like:

newtype DuplexFunArrow is js a b = DuplexFunArrow
    { runDuplexFunArrow :: (NP I is, a) -> Either (NS I js) b }

Intuitively, the structure traversals would provide additional information in the indices, and we would be able to alter the structure by optionally returning coindices.

Would that be useful? I have no idea.



nsAbsurd :: NS I '[] -> a
nsAbsurd x = case x of {}

  1. looks like an example of that. In Scala.↩︎

January 04, 2021 12:00 AM

Donnacha Oisín Kidney

Master's Thesis

Posted on January 4, 2021
Tags: Agda

The final version of my master’s thesis got approved recently so I thought I’d post it here for people who might be interested.

Here’s the pdf.

And all of the theorems in the thesis have been formalised in Agda. The code is organised to follow the structure of the pdf here.

The title of the thesis is “Finiteness in Cubical Type Theory”: basically it’s all about formalising the notion of “this type is finite” in CuTT. I also wanted to write something that could serve as a kind of introduction to some components of modern dependent type theory which didn’t go the standard length-indexed vector route.

by Donnacha Oisín Kidney at January 04, 2021 12:00 AM

Abhinav Sarkar

Solving Advent of Code “Handy Haversacks” in Type-level Haskell

I have been trying to use type-level programming in Haskell to solve interesting problems since I read Thinking with Types by Sandy Maguire. Then I found myself solving the problems in Advent of Code 2020, and some of them seemed suitable for type-level programming. So I decided to give it a shot.

This post was originally published on

Type-level Programming

Type-level programming (TLP) is, simply put, using the type system of a language to solve a problem, or a part of a problem. In a way, we already do TLP when we create the right types to represent our problems and solutions in code. The right types do a lot of work for us by making sure that wrong models and states do not compile, hence reducing the solution-space for us. But in some languages like Haskell and Idris, we can do much more than just crafting the right types. We can leverage the type-system itself for computation! Recent versions of Haskell have introduced superb support for various extensions and primitives to make TLP in Haskell easier than ever before1. Let’s use TLP to solve an interesting problem in this post.

Handy Haversacks

Handy Haversacks is the problem for the day seven of Advent of Code 20202. In this problem, our input is a set of rules about some bags. The bags have different colors and may contain zero or more bags of other colors. Here are the rules for the example problem:

light red bags contain 1 bright white bag, 2 muted yellow bags.
dark orange bags contain 3 bright white bags, 4 muted yellow bags.
bright white bags contain 1 shiny gold bag.
muted yellow bags contain 2 shiny gold bags, 9 faded blue bags.
shiny gold bags contain 1 dark olive bag, 2 vibrant plum bags.
dark olive bags contain 3 faded blue bags, 4 dotted black bags.
vibrant plum bags contain 5 faded blue bags, 6 dotted black bags.
faded blue bags contain no other bags.
dotted black bags contain no other bags.

We are going to solve part two of the problem: given the color of a bag, find out how many other bags in total that bag contains. Since the bags can contain more bags, this is a recursive problem. For the rules above, a shiny gold bag contains …

1 dark olive bag (and the 7 bags within it) plus 2 vibrant plum bags (and the 11 bags within each of those): 1 + 1*7 + 2 + 2*11 = 32 bags!

At this point, many readers will have already solved this problem in their heads: just parse the input into a lookup table and use it to recursively calculate the number of bags. Easy, isn’t it? But what if we want to solve it with type-level programming?
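For reference, the term-level solution just described fits in a few lines. This is my own sketch, not code from the post; parsing is elided and the relevant example rules are hard-coded:

```haskell
import qualified Data.Map.Strict as M

type Rules = M.Map String [(Int, String)]

-- Total number of bags contained inside the given bag, recursively:
-- each inner bag counts itself plus everything inside it.
countInside :: Rules -> String -> Int
countInside rules color =
    sum [ n + n * countInside rules c
        | (n, c) <- M.findWithDefault [] color rules ]

-- The rules reachable from "shiny gold" in the example input:
exampleRules :: Rules
exampleRules = M.fromList
    [ ("shiny gold",   [(1, "dark olive"), (2, "vibrant plum")])
    , ("dark olive",   [(3, "faded blue"), (4, "dotted black")])
    , ("vibrant plum", [(5, "faded blue"), (6, "dotted black")])
    , ("faded blue",   [])
    , ("dotted black", [])
    ]

-- countInside exampleRules "shiny gold"  ==  32
```

This matches the hand calculation from the problem statement: 1 + 1*7 + 2 + 2*11 = 32.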

Terms, Types, and Kinds

We are used to working in the world of Terms. Terms are “things” that programs manipulate at runtime, for example, integers, strings, and instances of user-defined data types. Terms have Types, which are used by the compiler to prevent certain behaviors at compile-time, even before the programs are run. For example, the type system prevents you from adding a string to an integer.

The compiler works (or computes) with types instead of terms. This chain goes further: just as terms have types, types have Kinds. Kinds can be thought of as “the types of the Types”. The compiler uses kinds to prevent certain behaviors of the types at compile-time. Let’s use GHCi to explore terms, types, and kinds:

> True -- a term
> :type True -- and its type
True :: Bool
> :kind Bool -- and the kind of the Bool type
Bool :: *

The types of all terms have the same kind: *. That is, all types like Int, String, and whatever data types we define ourselves, have the kind *.

It’s trivial to create new types in Haskell with data and newtype definitions. How do we go about creating new kinds? The DataKinds extension lets us do that:

> :set -XDataKinds
> data Allow = Yes | No
> :type Yes -- Yes is a data constructor with type Allow
Yes :: Allow
> :kind Allow -- Allow is a type with kind *
Allow :: *
> :kind Yes -- Yes is a type too. Its kind is Allow.
Yes :: Allow

The DataKinds extension promotes types to kinds, and data constructors to types of the corresponding kinds. In the example above, Yes and No are the promoted types of the promoted kind Allow. Even though the constructors, types, and kinds may share names, the compiler can tell them apart from context.

Now we know how to create our own kinds. What if we check for the promoted kinds of the built-in types?

> :type True
True :: Bool
> :type 1 :: Int
1 :: Int :: Int
> :type "hello"
"hello" :: [Char]
> :kind True
True :: Bool
> :kind 1
1 :: Nat
> :kind "hello"
"hello" :: Symbol

As expected, the Bool type is promoted to the Bool kind. But numbers and strings have kinds Nat and Symbol respectively. What are these new kinds?

Type-level Primitives

To be able to do useful computation at the type level, we need type-level numbers and strings. We can use Peano numbers to encode natural numbers as types and use the DataKinds extension to promote them to the type level. With numbers as types, we can build interesting things like sized vectors with compile-time bounds checking. But Peano numbers are awkward to work with because of their verbosity. Fortunately, GHC has built-in support for type-level natural numbers in the GHC.TypeLits module.
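As a sketch of the Peano approach (just for illustration; we use the built-in Nat below):

```haskell
{-# LANGUAGE DataKinds #-}

-- Peano naturals: Z is zero, S is successor.
data Peano = Z | S Peano

-- With DataKinds, 'Z, 'S 'Z, 'S ('S 'Z), … are also *types*, of kind Peano,
-- so a vector type could carry its length in its type: e.g. Vec ('S ('S 'Z)) a.
```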

> :kind 1 -- 1 is a type-level number here
1 :: Nat
> :kind 10000 -- kind of all type-level numbers is GHC.TypeLits.Nat
10000 :: Nat

GHC supports type-level strings as well, through the same module. Unlike term-level strings, which are lists of Chars, type-level strings are a built-in primitive, and their kind is Symbol3.

> :kind "hello at type-level"
"hello at type-level" :: Symbol

GHC also supports type-level lists and tuples. Type-level lists can contain zero or more types of the same kind, while type-level tuples can contain zero or more types of possibly different kinds.

> :kind [1, 2, 3]
[1, 2, 3] :: [Nat]
> :kind ["hello", "world"]
["hello", "world"] :: [Symbol]
> -- prefix the tuple with ' to disambiguate it as a type-level tuple
> :kind '(1, "one")
'(1, "one") :: (Nat, Symbol)

Now we are familiar with the primitives for type-level computations. How exactly do we do these computations though?

Type Families

Type families can be thought of as functions that work at type-level. Just like we use functions to do computations at term-level, we use type families to do computations at type-level. Type families are enabled by the TypeFamilies extension4.

Let’s write a simple type family to compute the logical conjunction of two type-level booleans:

> :set -XTypeFamilies
> :set +m
> type family And (x :: Bool) (y :: Bool) :: Bool where
>   And True True = True
>   And _     _   = False
> :kind And
And :: Bool -> Bool -> Bool
> :kind! And True False
And True False :: Bool
= 'False
> :kind! And True True
And True True :: Bool
= 'True
> :kind! And False True
And False True :: Bool
= 'False

The kind of And shows that it is a function at the type level. Applying it with the :kind! command in GHCi shows that it indeed works as expected.

GHC comes with some useful type families for computations on Nats and Symbols in the GHC.TypeLits module. Let’s see them in action:

> import GHC.TypeLits
> :set -XTypeOperators
> :kind! 1 + 2 -- addition at type-level
1 + 2 :: Nat
= 3
> :kind! CmpNat 1 2 -- comparison at type-level, returns the lifted Ordering
CmpNat 1 2 :: Ordering
= 'LT
> :kind! AppendSymbol "hello" "world" -- appending two symbols at type-level
AppendSymbol "hello" "world" :: Symbol
= "helloworld"

The TypeOperators extension enables us to define and use type families with symbolic names.
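For example, we could give the And family from above a symbolic name (a sketch; GHC’s Data.Type.Bool module in fact ships such an operator):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}

-- And from above, renamed to a symbolic operator.
type family (x :: Bool) && (y :: Bool) :: Bool where
  'True && 'True = 'True
  _     && _     = 'False

-- > :kind! 'True && 'False
-- = 'False
```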

We have learned the basics of TLP in Haskell. Next, we can proceed to solve the actual problem.


This post is written in a literate programming style, meaning if you take all the code snippets from the post (excluding the GHCi examples) in the order they appear and put them in a file, you’ll have a real working program. First come the extensions and imports:

{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, UndecidableInstances #-}
module AoC7 where

import Data.Proxy
import Data.Symbol.Ascii 
import GHC.TypeLits
import Prelude hiding (words, reverse)

We have already encountered some of these extensions in the sections above. We’ll learn about the rest of them as we go along.

Consuming Strings at Type-level

The first capability required for parsing is to consume the input string character-by-character. It’s easy to do that with term-level strings as they are simply lists of characters. But Symbols are built-in primitives and cannot be consumed character-by-character using the built-in functionality. Therefore, the first thing we should do is to figure out how to break a symbol into its constituent characters. Fortunately for us, the symbols library implements just that with the ToList type family5. It also provides a few more utilities for working with symbols, which we use later to solve our problem. Let’s see what ToList gives us:

> import Data.Symbol.Ascii (ToList)
> :kind! "hello there"
"hello there" :: Symbol
= "hello there"
> :kind! ToList "hello there"
ToList "hello there" :: [Symbol]
= '["h", "e", "l", "l", "o", " ", "t", "h", "e", "r", "e"]

It does what we want. However, for the purpose of parsing the rules for this problem, it’s better to have the symbols broken into words. We already have the capability to break a symbol into a list of its character symbols. Now we can combine the character symbols to create a list of word symbols.

We start by solving this problem with a term-level function. It is like the words function from the Prelude, but with type [String] -> [String] instead of String -> [String].

words :: [String] -> [String]
words s = reverse $ words2 [] s

words2 :: [String] -> [String] -> [String]
words2 acc []        = acc
words2 [] (" ":xs)   = words2 [] xs
words2 [] (x:xs)     = words2 [x] xs
words2 acc (" ":xs)  = words2 ("":acc) xs
words2 (a:as) (x:xs) = words2 ((a ++ x):as) xs

reverse :: [a] -> [a]
reverse l =  rev l []

rev :: [a] -> [a] -> [a]
rev []     a = a
rev (x:xs) a = rev xs (x:a)

This code may look like unidiomatic Haskell, but it’s written this way because we have to translate it to the type-family version, and type families do not support let or where bindings, or case or if constructs. They support only recursion and pattern matching.

words works as expected:

> words ["h", "e", "l", "l", "o", " ", "t", "h", "e", "r", "e"]
["hello","there"]

Translating words to type-level is almost mechanical. Starting with the last function above:

type family Rev (acc :: [Symbol]) (chrs :: [Symbol]) :: [Symbol] where
  Rev '[] a = a
  Rev (x : xs) a = Rev xs (x : a)

type family Reverse (chrs :: [Symbol]) :: [Symbol] where
  Reverse l = Rev l '[]

type family Words2 (acc :: [Symbol]) (chrs :: [Symbol]) :: [Symbol] where
  Words2 acc '[]           = acc
  Words2 '[] (" " : xs)    = Words2 '[] xs
  Words2 '[] (x : xs)      = Words2 '[x] xs
  Words2 acc (" " : xs)    = Words2 ("" : acc) xs
  Words2 (a : as) (x : xs) = Words2 (AppendSymbol a x : as) xs

type family Words (chrs :: [Symbol]) :: [Symbol] where
  Words s = Reverse (Words2 '[] s)

We need the UndecidableInstances extension to write these type families. This extension relaxes some of the rules GHC imposes to ensure that type checking terminates. In other words, it lets us write recursive code at the type level that may never terminate. Since GHC cannot prove that the recursion terminates, it’s up to us programmers to make sure that it does.
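For instance, GHC happily accepts the following family once UndecidableInstances is enabled, even though reducing it can never finish (an illustrative example; evaluating it with :kind! fails with a reduction-stack overflow rather than looping forever):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, UndecidableInstances #-}

import GHC.TypeLits

-- Loop = 1 + Loop = 1 + (1 + Loop) = … never reaches a normal form.
type family Loop :: Nat where
  Loop = 1 + Loop
```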

Let’s see if it works:

> :kind! ToList "hello there"
ToList "hello there" :: [Symbol]
= '["h", "e", "l", "l", "o", " ", "t", "h", "e", "r", "e"]
> :kind! Words (ToList "hello there")
Words (ToList "hello there") :: [Symbol]
= '["hello", "there"]

Great! Now we can move on to the actual parsing of the rules.

Parsing at Type-level

Here are the rules of the problem as a list of symbols:

type Rules = [
    "light red bags contain 1 bright white bag, 2 muted yellow bags."
  , "dark orange bags contain 3 bright white bags, 4 muted yellow bags."
  , "bright white bags contain 1 shiny gold bag."
  , "muted yellow bags contain 2 shiny gold bags, 9 faded blue bags."
  , "shiny gold bags contain 1 dark olive bag, 2 vibrant plum bags."
  , "dark olive bags contain 3 faded blue bags, 4 dotted black bags."
  , "vibrant plum bags contain 5 faded blue bags, 6 dotted black bags."
  , "faded blue bags contain no other bags."
  , "dotted black bags contain no other bags."
  ]

We can see that the rules always start with the color of the container bag. Then they go on to either say that such-and-such bags “contain no other bags.” or list out the counts of one or more contained colored bags. We capture this model in a new type (and kind!):

data Bag = EmptyBag Symbol | FilledBag Symbol [(Nat, Symbol)]

A Bag is either an EmptyBag with a color, or a FilledBag with a color and a list of (count, color) tuples for the bags it contains.

Next, we write the parsing logic at type-level which works on word symbols, directly as type families this time:

type family Parse (wrds :: [Symbol]) :: Bag where
  Parse (color1 : color2 : "bags" : "contain" : rest) =
    Parse2 (AppendSymbol color1 (AppendSymbol " " color2)) rest

type family Parse2 (color :: Symbol) (wrds :: [Symbol]) :: Bag where
  Parse2 color ("no" : _) = EmptyBag color
  Parse2 color rest = FilledBag color (Parse3 rest)

type family Parse3 (wrds :: [Symbol]) :: [(Nat, Symbol)] where
  Parse3 '[] = '[]
  Parse3 (count : color1 : color2 : _ : rest) =
    ('(ReadNat count, AppendSymbol color1 (AppendSymbol " " color2)) : Parse3 rest)

The Parse type family parses a list of word symbols into the Bag type. The logic is straightforward, if a little verbose compared to the equivalent term-level code. We use the AppendSymbol type family to put word symbols together and the ReadNat type family to convert a Symbol into a Nat. The rest is pattern matching and recursion. A quick test in GHCi reveals that it works:

> :kind! Parse (Words (ToList "light red bags contain 1 bright white bag, 2 muted yellow bags."))
Parse (Words (ToList "light red bags contain 1 bright white bag, 2 muted yellow bags.")) :: Bag
= 'FilledBag
    "light red" '[ '(1, "bright white"), '(2, "muted yellow")]
> :kind! Parse (Words (ToList "bright white bags contain 1 shiny gold bag."))
Parse (Words (ToList "bright white bags contain 1 shiny gold bag.")) :: Bag
= 'FilledBag "bright white" '[ '(1, "shiny gold")]
> :kind! Parse (Words (ToList "faded blue bags contain no other bags."))
Parse (Words (ToList "faded blue bags contain no other bags.")) :: Bag
= 'EmptyBag "faded blue"

Finally, we parse all rules into a list of Bags:

type family ParseRules (rules :: [Symbol]) :: [Bag] where
  ParseRules '[] = '[]
  ParseRules (rule : rest) = (Parse (Words (ToList rule)) : ParseRules rest)

type Bags = ParseRules Rules

And validate that it works:

> :kind! Bags
Bags :: [Bag]
= '[ 'FilledBag
       "light red" '[ '(1, "bright white"), '(2, "muted yellow")],
     'FilledBag
       "dark orange" '[ '(3, "bright white"), '(4, "muted yellow")],
     'FilledBag "bright white" '[ '(1, "shiny gold")],
     'FilledBag
       "muted yellow" '[ '(2, "shiny gold"), '(9, "faded blue")],
     'FilledBag
       "shiny gold" '[ '(1, "dark olive"), '(2, "vibrant plum")],
     'FilledBag
       "dark olive" '[ '(3, "faded blue"), '(4, "dotted black")],
     'FilledBag
       "vibrant plum" '[ '(5, "faded blue"), '(6, "dotted black")],
     'EmptyBag "faded blue", 'EmptyBag "dotted black"]

On to the final step of solving the problem: calculating the number of contained bags.

How Many Bags?

We have the list of bags with us now. To calculate the total number of bags contained in a bag of a given color, we need to be able to look up bags from this list by their colors. So that’s the first thing we implement:

type family LookupBag (color :: Symbol) (bags :: [Bag]) :: Bag where
  LookupBag color '[] = TypeError (Text "Unknown color: " :<>: ShowType color)
  LookupBag color (EmptyBag color' : rest) =
    LookupBag2 color (CmpSymbol color color') (EmptyBag color') rest
  LookupBag color (FilledBag color' contained : rest) =
    LookupBag2 color (CmpSymbol color color') (FilledBag color' contained) rest

type family LookupBag2 (color :: Symbol)
                       (order :: Ordering)
                       (bag :: Bag)
                       (rest :: [Bag]) :: Bag where
  LookupBag2 _ EQ bag _ = bag
  LookupBag2 color _ _ rest = LookupBag color rest

The LookupBag type family recursively walks through the list of Bags, comparing each bag’s color to the given color using the CmpSymbol type family. Upon finding a match, it returns the matched bag. If no match is found, it returns a TypeError. TypeError is a type family much like the error function, except that it throws a compile-time error instead of a runtime one.

Finally, we use LookupBag to implement the BagCount type family which does the actual calculation:

type family BagCount (color :: Symbol) :: Nat where
  BagCount color = BagCount2 (LookupBag color Bags)

type family BagCount2 (bag :: Bag) :: Nat where
  BagCount2 (EmptyBag _) = 0
  BagCount2 (FilledBag _ bagCounts) = BagCount3 bagCounts

type family BagCount3 (a :: [(Nat, Symbol)]) :: Nat where
  BagCount3 '[] = 0
  BagCount3 ( '(n, bag) : as) =
    n + n GHC.TypeLits.* BagCount2 (LookupBag bag Bags) + BagCount3 as

We use the type-level operators + and * from the GHC.TypeLits module to do the math on the Nat numbers. The rest, again, is just recursion and pattern matching.

Our work is finished. It’s time to put it all to test in GHCi:

> :kind! BagCount "shiny gold"
BagCount "shiny gold" :: Nat
= 32
> :kind! BagCount "light red"
BagCount "light red" :: Nat
= 186
> :kind! BagCount "faded blue"
BagCount "faded blue" :: Nat
= 0

It works! We can also convert the type-level counts to term-level values using the natVal function and the Proxy type with the TypeApplications extension. If we pass an invalid color, we get a compile-time error instead of a runtime one.

> :set -XTypeApplications
> natVal $ Proxy @(BagCount "shiny gold")
32
> natVal $ Proxy @(BagCount "shiny red")

<interactive>:17:1: error:
    • Unknown color: "shiny red"
    • In the expression: natVal $ Proxy @(BagCount "shiny red")
      In an equation for ‘it’:
          it = natVal $ Proxy @(BagCount "shiny red")

This concludes our little fun experiment with type-level programming in Haskell6. Though our problem was an easy one, it demonstrated the power of TLP. I hope to see more useful applications of TLP in the Haskell ecosystem going forward.

You can find the complete code for this post here. You can discuss this post on lobsters, r/haskell, twitter or in the comments below.

  1. Many modern libraries are increasingly employing TLP for better type-safe APIs. Some examples:

  2. For the unfamiliar:

    Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as a speed contest, interview prep, company training, university coursework, practice problems, or to challenge each other.

  3. Type-level strings have interesting usages like type-safe keys in open records.↩︎

  4. The type families we use in this post are technically Top-level closed type families. There are other ways of defining type families as described in the Haskell wiki.↩︎

  5. The author of the symbols library Csongor Kiss has written an excellent post about how ToList is implemented.↩︎

  6. We solve only the example problem in this post but not the actual problem which has a much larger set of rules. This is because it’s just too slow to compile. I suspect it’s because we don’t have a built-in function to break a symbol into its constituent characters and have to resort to complicated type-level hacks for the same.↩︎

If you liked this post, please leave a comment.

by Abhinav Sarkar at January 04, 2021 12:00 AM

January 03, 2021

Mark Jason Dominus

Snow White in German

Tonight I was thinking of

Mirror, mirror, on the wall
Who is the fairest of them all?

I remembered that the original was in German and wondered whether it had always rhymed. It turns out that it had:

Spieglein, Spieglein an der Wand,
Wer ist die Schönste im ganzen Land?

The English is a pretty literal translation.

When the wunderbare Spiegel gives the Queen the bad news, it says:

Frau Königin, Ihr seid die Schönste hier,
Aber Schneewittchen ist tausendmal schöner als Ihr.

(“Queen, you are the fairest one here, but Little Snow White is a thousand times as fair as you.”)

When the dwarfs see Snow White in one of their beds, they cry

Ei, du mein Gott!

which is German for “zOMG”.

Later the Queen returns to the mirror, expecting a better answer, but she gets this:

Frau Königin, Ihr seid die Schönste hier,
Aber Schneewittchen über den Bergen
Bei den sieben Zwergen
Ist noch tausendmal schöner als Ihr.

(“Queen, you are the fairest here, but Little Snow White up on the mountain with the seven dwarfs is still a thousand times as fair as you.”)

I like the way this poem here interpolates the earlier version, turning the A-A rhyme into A-B-B-A. The English version I have has “in the glen / little men” in place of “über den Bergen / sieben Zwergen”. The original is much better, but I am not sure English has any good rhymes for “dwarfs”. Except “wharfs”, but putting the dwarfs by the wharfs is much worse than putting them in the glen.

[ Thanks to Gaal Yahas for correcting my translation of noch and to Mario Lang for correcting my German grammar. ]

by Mark Dominus at January 03, 2021 09:46 AM

Lysxia's blog

Theory of iteration and recursion

Posted on January 3, 2021

Recursion and iteration are two sides of the same coin. A common way to elaborate that idea is to express one in terms of the other. Iteration, recursively: to iterate an action, is to do the action, and then iterate the action again. Conversely, a recursive definition can be approximated by unfolding it iteratively. To implement recursion on a sequential machine, we can use a stack to keep track of those unfoldings.

So there is a sense in which these are equivalent, but that already presumes that they are not exactly the same. We think about recursion differently than we do iteration. Hence it may be a little surprising when recursion and iteration both appear directly as two implementations of the same interface.

To summarize the main point without all the upcoming category theory jargon, there is one signature which describes an operator for iteration, recursion, or maybe a bit of both simultaneously, depending on how you read the symbols ==> and +:

iter :: (a ==> a + b) -> (a ==> b)

Iteration operator

The idea of “iteration” is encapsulated by the following function iter:

iter :: (a -> Either a b) -> (a -> b)
iter f a =
  case f a of
    Left a' -> iter f a'
    Right b -> b

iter can be thought of as a “while” loop. The body of the loop f takes some state a, and either says “continue” with a new state a' to keep the loop going, or “break” with a result b.
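For instance (an illustrative use, not from the text): summing the numbers 1 to n as a while loop, with the pair (accumulator, counter) as the loop state:

```haskell
iter :: (a -> Either a b) -> (a -> b)
iter f a =
  case f a of
    Left a' -> iter f a'
    Right b -> b

-- Sum of 1..n, phrased as a loop.
sumTo :: Int -> Int
sumTo n = iter step (0, 1)
  where
    step (acc, i)
      | i > n     = Right acc              -- "break" with the result
      | otherwise = Left (acc + i, i + 1)  -- "continue" with new state

-- sumTo 10 == 55
```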

Iterative categories

We can generalize iter. It transforms “loop bodies” into “loops”, and rather than functions, those could be entities in any category. An iteration operator on some category denoted (==>) is a function with the following signature:

iter :: (a ==> a + b) -> (a ==> b)

satisfying a bunch of laws, with the most obvious one being a fixed point equation:1

iter f = (f >>> either (iter f) id)

where (>>>) and id are the two defining components of a category, and either is the eliminator for sums (+). The technical term for “a category with sums” is a cocartesian category.

class Category k => Cocartesian k where
  type a + b    -- Not fully well-formed Haskell.
  either :: k a c -> k b c -> k (a + b) c
  left :: k a (a + b)
  right :: k b (a + b)

-- Replacing k with an infix (==>)
-- either :: (a ==> c) -> (b ==> c) -> (a + b ==> c)

Putting this all together, an iterative category is a cocartesian category plus an iter operation.

class Cocartesian k => Iterative k where
  iter :: k a (a + b) -> k a b

The fixed point equation provides a pretty general way to define iter. For the three examples in this post, it produces working functions in Haskell. In theory, properly sorting out issues of non-termination can get hairy.

iter :: (a -> Either a b) -> (a -> b)
iter f = f >>> either (iter f) id
-- NB: (>>>) = flip (.)

Recursion operator

Recursion also provides an implementation for iter, but in the opposite category, (<==). If you flip arrows back the right way, this defines a twin interface of “coiterative categories”. Doing so, sums (+) become products (*).

class Cartesian k => Coiterative k where
  coiter :: k (a * b) a -> k b a

-- with infix notation (==>) instead of k,
-- coiter :: (a * b ==> a) -> (b ==> a)

We can wrap any instance of Iterative as an instance of Coiterative and vice versa, so iter and coiter can be thought of as the same interface in principle. For particular implementations, one or the other direction may seem more intuitive.

If we curry and flip the argument, the type of coiter becomes (b -> a -> a) -> b -> a, which is like the type of fix :: (a -> a) -> a but with the functor (b -> _) applied to both the domain (a -> a) and codomain a: coiter is fmap fix.

coiter' :: (b -> a -> a) -> b -> a
coiter' = fmap fix

The fixed point equation provides an equivalent definition. We need to flip (>>>) into (<<<) (which is (.)), and the dual of either does not have a name in the standard library, but it is liftA2 (,).

coiter :: ((a, b) -> a) -> b -> a
coiter f = f . liftA2 (,) (coiter f) id

-- where --

liftA2 (,) :: (c -> a) -> (c -> b) -> (c -> (a, b))

That latter definition is mostly similar to the naive definition of fix, where fix f will be reevaluated with every unfolding.

fix :: (a -> a) -> a
fix f = f (fix f)
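As a quick sanity check that the coiter above really does perform recursion (an illustrative example):

```haskell
import Control.Applicative (liftA2)

coiter :: ((a, b) -> a) -> b -> a
coiter f = f . liftA2 (,) (coiter f) id

-- An infinitely repeating list, defined by recursion:
-- repeatVal x = x : repeatVal x
repeatVal :: b -> [b]
repeatVal = coiter (\(xs, x) -> x : xs)

-- take 3 (repeatVal 'a') == "aaa"
```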

We have two implementations of iter, one by iteration, one by recursion. Iterative categories thus provide a framework generalizing both iteration and recursion under the same algebraic rules.

From those two examples, one might hypothesize that iter models iteration, while coiter models recursion. But here is another example which suggests the situation is not as simple as that.

Functor category, free monads

We start with the category of functors Type -> Type, which is equipped with a sum:

data (f :+: g) a = L (f a) | R (g a)

But the real category of interest is the Kleisli category of the “monad of free monads”, i.e., the mapping Free from functors f to the free monads they generate Free f. That mapping is itself a monad.

data Free f a = Pure a | Lift (f (Free f a))

An arrow f ==> g is now a natural transformation f ~> Free g, i.e., forall a. f a -> Free g a:

-- Natural transformation from f to g
type f ~> g = forall a. f a -> g a

One intuition for that category is that functors f are interfaces, and the free monad Free f is inhabited by expressions, or programs, using operations from the interface f. Then a natural transformation f ~> Free g is an implementation of the interface f using interface g. Those operations compose naturally: given an implementation of f in terms of g (f ~> Free g), and an implementation of g in terms of h (g ~> Free h), we can obtain an implementation of f in terms of h (f ~> Free h). Thus arrows _ ~> Free _ form a category—and that also mostly implies that Free is a monad.
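That composition can be sketched directly, using only the Free and ~> definitions above (an illustrative implementation; it is just the standard free-monad plumbing):

```haskell
{-# LANGUAGE RankNTypes, TypeOperators #-}

data Free f a = Pure a | Lift (f (Free f a))

type f ~> g = forall a. f a -> g a

-- Monadic bind for Free.
bind :: Functor g => Free g a -> (a -> Free g b) -> Free g b
bind (Pure a) k = k a
bind (Lift m) k = Lift (fmap (`bind` k) m)

-- Interpret every g-operation in a Free g program using h-programs.
foldFree :: Functor h => (g ~> Free h) -> (Free g a -> Free h a)
foldFree _ (Pure a) = Pure a
foldFree k (Lift m) = bind (k m) (foldFree k)

-- Kleisli-style composition: implement f via g, then g via h.
compose :: Functor h => (f ~> Free g) -> (g ~> Free h) -> (f ~> Free h)
compose f g fa = foldFree g (f fa)
```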

We can define iter in that category. Like previous examples, we can define it without thinking by using the fixed point equation of iter. We will call rec this variant of iter, because it actually behaves a lot like fix whose name is already taken:

rec :: (f ~> Free (f :+: g)) -> (f ~> Free g)
rec f = f >>> either (rec f) id

-- where --

(>>>) :: (f ~> Free g) -> (g ~> Free h) -> (f ~> Free h)
id :: f ~> Free f
either :: (f ~> h) -> (g ~> h) -> (f :+: g ~> h)

We eventually do have to think about what rec means.

The argument f ~> Free (f :+: g) is a recursive implementation of an interface f: it uses an interface f :+: g which includes f itself. rec f composes f with either (rec f) id, which is basically some plumbing around rec f. Consequently, rec takes a recursive program prog :: f ~> Free (f :+: g), and produces a non-recursive program f ~> Free g, using that same result to implement the f calls in prog, so only the other “external” calls in g remain.

That third version of iter (rec) has similarities to both of the previous versions (iter and fix).

Obviously, the whole explanation above is given from the perspective of recursion, or self-referentiality. While fix simply describes recursion as fixed points, rec provides a more elaborate model based on an explicit notion of syntax using Free monads.

There is also a connection to the eponymous interpretation of iter as iteration. Both iter and rec use a sum type (Either or (:+:)), representing a choice: to “continue” or “break” the loop, to “recurse” or “call” an external function.

Control-flow graphs

That similarity may be more apparent when phrased in terms of low-level “assembly-like” languages, control-flow graphs. Here, programs consist of blocks of instructions, with “jump” instructions pointing to other blocks of instructions. Those programs form a category. The objects, i.e., interfaces, are sets of “program labels” that one can jump to. A program p : I ==> J exposes a set of “entry points” I and a set of “exit points” J: execution enters the program p by jumping to a label in I, and exits it by jumping to a label in J. There may be other “internal jumps” within such a program, which are not visible in the interface I ==> J.

The operation iter : (I ==> I + J) -> (I ==> J) takes a program p : I ==> I + J, whose exit points are in the disjoint union of I and J; iter p : I ==> J is the result of linking the exit points in I to the corresponding entry points, turning them into internal jumps. With some extra conditional constructs, we can easily implement “while” loops (“iter on _ -> _”) with such an operation.

Simple jumps (“jump to this label”) are pretty limited in expressiveness. We can make them more interesting by adding return locations to jumps, which thus become “calls” (“push a frame on the stack and jump to this label”)—to be complemented with “return” instructions. That generalization allows us to (roughly) implement rec, suggesting that those various interpretations of iter are maybe not as different as they seem.

iter :: (a ==> a + b) -> (a ==> b)

-- specializes to --

iter   :: (a -> Either a b)     -> (a -> b)
coiter :: ((a, b) -> a)         -> (b -> a)
rec    :: (f ~> Free (f :+: g)) -> (f ~> Free g)

  1. The notion of “iterative category” is not quite standard; here is my version in Coq which condenses the little I could digest from the related literature (I mostly skip a lot and look for equations or commutative diagrams). Those and other relevant equations can be found in the book Iteration Theories: The Equational Logic of Iterative Processes by Bloom and Ésik (in Section 5.2, Definition 5.2.1 (fixed point equation), and Theorems 5.3.1, 5.3.3, 5.3.9). It’s a pretty difficult book to just jump into though. The nice thing about category theory is that such dense formulas can be replaced with pretty pictures, like in this paper (page 7). For an additional source of diagrams and literature, a related notion is that of traced monoidal categories—every iterative category is traced monoidal.↩︎

by Lysxia at January 03, 2021 12:00 AM

January 01, 2021

Mark Jason Dominus

My big mistake about dense sets

I made a big mistake in a Math Stack Exchange answer this week. It turned out that I believed something that was completely wrong.

Here's the question: are terminating decimals dense in the reals? It asks if the terminating decimals (that is, the rational numbers of the form $\frac{m}{10^n}$) are dense in the reals. “Dense in the reals” means that if an adversary names a real number $x$ and a small distance $\epsilon$, and challenges us to find a terminating decimal that is closer than $\epsilon$ to the point $x$, we can always do it. For example, is there a terminating decimal that is within $10^{-5}$ of $\pi$? There is: $3.14159$ is closer than that; the difference is less than $2.7\cdot 10^{-6}$.

The answer to the question is ‘yes’ and the example shows why: every real number has a decimal expansion, and if you truncate that expansion far enough out, you get a terminating decimal that is as close as you like to the original number. This is the obvious and straightforward way to prove it, and it's just what the top-scoring answer did.

I thought I'd go another way, though. I said that it's enough to show that for any two terminating decimals $a$ and $b$, there is another one that lies between them. I remember my grandfather telling me long ago that this was a sufficient condition for a set to be dense in the reals, and I believed him. But it isn't sufficient, as Noah Schweber kindly pointed out.

(It is, of course, necessary, since if $S$ is a subset of $\Bbb R$, and $a, b\in S$ with no element of $S$ between them, then there is no element of $S$ that is less than distance $\frac{b-a}2$ from the point $\frac{a+b}2$. Both $a$ and $b$ are at exactly that distance, and no other point of $S$ is closer.)

The counterexample that M. Schweber pointed out can be explained quickly if you know what the Cantor middle-thirds set is: construct the Cantor set, and consider the set of midpoints of the deleted intervals; this set of midpoints has the property that between any two there is another, but it is not dense in the reals. I was going to do a whole thing with diagrams for people who don't know the Cantor set, but I think what follows will be simpler.

Consider the set of real numbers between 0 and 1. These can of course be represented as decimals, some terminating and some not. Our counterexample will consist of all the terminating decimals that end with $5$, and before that final $5$ have nothing but zeroes and nines. So, for example, $0.5$. To the left and right of $0.5$, respectively, are $0.05$ and $0.95$.

In between (and around) these three are: $$\begin{array}{l} \color{darkblue}{ 0.005 }\\ 0.05 \\ \color{darkblue}{ 0.095 }\\ 0.5 \\ \color{darkblue}{ 0.905 }\\ 0.95 \\ \color{darkblue}{ 0.995 }\\ \end{array}$$

(Dark blue are the new ones we added.)

And in between and around these are:

$$\begin{array}{l} \color{darkblue}{ 0.0005 }\\ 0.005 \\ \color{darkblue}{ 0.0095 }\\ 0.05 \\ \color{darkblue}{ 0.0905 }\\ 0.095 \\ \color{darkblue}{ 0.0995 }\\ 0.5 \\ \color{darkblue}{ 0.9005 }\\ 0.905 \\ \color{darkblue}{ 0.9095 }\\ 0.95 \\ \color{darkblue}{ 0.9905 }\\ 0.995 \\ \color{darkblue}{ 0.9995 }\\ \end{array}$$

Clearly, between any two of these there is another one, because around each element $0.d5$ (where $d$ is some string of zeroes and nines) we've added $0.d05$ before and $0.d95$ after, and these lie between $0.d5$ and any decimal with fewer digits before it terminates. So this set does have the between-each-two-is-another property that I was depending on.

But it should also be clear that this set is not dense in the reals, because, for example, there is obviously no number of this type that is anywhere near $0.7$.

(This isn't the midpoints of the middle-thirds set, it's the midpoints of the middle-four-fifths set, but the idea is exactly the same.)

Happy New Year, everyone!

by Mark Dominus at January 01, 2021 04:56 PM

December 31, 2020

in Code

Advent of Code 2020: Haskell Solution Reflections for all 25 Days

Merry Christmas and Happy New Year, to all!

Once again, every year I like to participate in Eric Wastl’s Advent of Code! It’s a series of 25 Christmas-themed puzzles that release every day at midnight — there’s a cute story motivating each one, usually revolving around saving Christmas. Every night my friends and I (including the good people of freenode’s ##advent-of-code channel) talk about the puzzle and creative ways to solve it (and also see how my bingo card is doing). The subreddit community is also pretty great as well! And an even nicer thing is that the puzzles are open-ended enough that there are often many ways of approaching them…including some approaches that can leverage math concepts in surprising ways, like group theory, galilean transformations and linear algebra, and more group theory. Many of the puzzles are often simple data transformations that Haskell is especially good at!

Speaking of Haskell, I usually do a write-up for every day I can get around to about unique insights that solving in Haskell can provide to each different puzzle. I did them in 2017, 2018, and 2019, but I never finished every day. But 2020 being what it is, I was able to finish! :D

You can find all of them here, but here are links to each individual one. Hopefully you can find them helpful. And if you haven’t yet, why not try Advent of Code yourself? :) And drop by the freenode ##advent-of-code channel, we’d love to say hi and chat, or help out! Thanks all for reading, and also thanks to Eric for a great event this year, as always!

by Justin Le at December 31, 2020 04:35 AM

December 30, 2020

Mark Jason Dominus

Benjamin Franklin and the Exercises of Ignatius

Recently I learned of the Spiritual Exercises of St. Ignatius. Wikipedia says (or quotes, it's not clear):

Morning, afternoon, and evening will be times of the examinations. The morning is to guard against a particular sin or fault, the afternoon is a fuller examination of the same sin or defect. There will be a visual record with a tally of the frequency of sins or defects during each day. In it, the letter 'g' will indicate days, with 'G' for Sunday. Three kinds of thoughts: "my own" and two from outside, one from the "good spirit" and the other from the "bad spirit".

This reminded me very strongly of Chapter 9 of Benjamin Franklin's Autobiography, in which he presents “A Plan for Attaining Moral Perfection”:

My intention being to acquire the habitude of all these virtues, I judg'd it would be well not to distract my attention by attempting the whole at once, but to fix it on one of them at a time… Conceiving then, that, agreeably to the advice of Pythagoras in his Golden Verses, daily examination would be necessary, I contrived the following method for conducting that examination.

I made a little book, in which I allotted a page for each of the virtues. I rul'd each page with red ink, so as to have seven columns, one for each day of the week, marking each column with a letter for the day. I cross'd these columns with thirteen red lines, marking the beginning of each line with the first letter of one of the virtues, on which line, and in its proper column, I might mark, by a little black spot, every fault I found upon examination to have been committed respecting that virtue upon that day.

I determined to give a week's strict attention to each of the virtues successively. Thus, in the first week, my great guard was to avoid every the least offense against Temperance, leaving the other virtues to their ordinary chance, only marking every evening the faults of the day.

So I wondered: was Franklin influenced by the Exercises? I don't know, but it's possible. Wondering about this I consulted the Mighty Internet, and found two items in the Woodstock Letters, a 19th-century Jesuit periodical, wondering the same thing:

The following extract from Franklin’s Autobiography will prove of interest to students of the Exercises: … Did Franklin learn of our method of Particular Examen from some of the old members of the Suppressed Society?

(“Woodstock Letters” Volume XXXIV #2 (Sep 1905) p.311–313)

I can't guess at the main question, but I can correct one small detail: although this part of the Autobiography was written around 1784, the time of which Franklin was writing, when he actually made his little book, was around 1730, well before the suppression of the Society.

The following issue takes up the matter again:

Another proof that Franklin was acquainted with the Exercises is shown from a letter he wrote to Joseph Priestley from London in 1772, where he gives the method of election of the Exercises. …

(“Woodstock Letters” Volume XXXIV #3 (Dec 1905) p.459–461)

Franklin describes making a decision by listing, on a divided sheet of paper, the reasons for and against the proposed action. And then a variation I hadn't seen: balance arguments for and arguments against, and cross out equally-balanced sets of arguments. Franklin even suggests evaluations as fine as matching two arguments for with three slightly weaker arguments against and crossing out all five together.

I don't know what this resembles in the Exercises but it certainly was striking.

by Mark Dominus at December 30, 2020 08:08 PM

December 29, 2020

Roman Cheplyaka

StateT vs. IORef: a benchmark

Sometimes I’m writing an IO loop in Haskell, and I need some sort of a counter or accumulator. The two main options are to use a mutable reference (IORef) or to put a StateT transformer on top of the IO monad.

I was curious, though, if there was a difference in efficiency between these two approaches. Intuitively, IORefs are dedicated heap objects, while a StateT transformer’s state becomes “just” a local variable, so StateT might optimize better. But how much of a difference does it make?

So I benchmarked the four functions, all of which calculate the sum of numbers between 1 and n = 1000.

base_sum simply calls sum from the base package; state_sum and stateT_sum maintain the accumulator using the State Int and StateT Int IO monads, respectively, and ioref_sum uses an IORef within the IO monad. And here are the results, as reported by criterion.

Mean execution times reported by criterion. The error bars are the lower and upper bounds of the mean as reported by criterion, which I think are 95% bootstrap confidence intervals.

I’m not sure how stateT_sum manages to be faster than state_sum and base_sum (this doesn’t appear to be a statistical fluke), but what’s clear is that ioref_sum is significantly slower than the rest.

So if 3ns per state access matters to you, go for StateT even when you are in IO.

(Update: also check out the comments on reddit, especially the ones by u/VincentPepper.)

Here’s the full benchmark code. It was compiled with -O2 by GHC 8.8.4 and run on AMD Ryzen 7 3700X.

import Criterion
import Criterion.Main

import Control.Monad.State
import Data.IORef

base_sum :: Int -> Int
base_sum n = sum [1 .. n]

state_sum :: Int -> Int
state_sum n = flip execState 0 $
  forM_ [1..n] $ \i ->
    modify' (+i)

stateT_sum :: Int -> IO Int
stateT_sum n = flip execStateT 0 $
  forM_ [1..n] $ \i ->
    modify' (+i)

ioref_sum :: Int -> IO Int
ioref_sum n = do
  ref <- newIORef 0
  forM_ [1..n] $ \i ->
    modifyIORef' ref (+i)
  readIORef ref

main = do
  let n = 1000
  defaultMain
    [ bench "base_sum" $ whnf base_sum n
    , bench "state_sum" $ whnf state_sum n
    , bench "stateT_sum" $ whnfAppIO stateT_sum n
    , bench "ioref_sum" $ whnfAppIO ioref_sum n
    ]

December 29, 2020 08:00 PM

Gil Mizrahi

Things I did in 2020

In this post I'd like to look back and mention a few (programming related) things I did in 2020.

Blog posts and learning resources

  • 5 new blog posts (including this one :))
  • learn-haskell-blog-generator - I've been thinking about making a project-based tutorial for Haskell for a long time and thought a static blog generator was a good choice for a first project. I tried something new and wrote each tutorial section in its own commit, building stuff and learning new things as we go.
  • learn-scotty-bulletin-app - A tutorial for basic usage of scotty. I tried to include enough information so that even people without a lot of domain knowledge on the subject will be able to follow and learn.


Open-Source Software

I've selected a few notable fun projects:


In February, I started an experiment to see if I could reimplement the uniplate API using GHC Generics instead of Data, and improve on the current performance of uniplate (with Data), and this was the result.

My benchmark showed that some functionality can indeed be faster in some cases, but I also got worse performance in others. Specifically, biplate functions were faster when the on and from types differ; when on and from are the same it was only a little faster; and the monadic variants were significantly worse (2x).

I'd like to give more info regarding why that is, but I didn't continue exploring further.

Bytecode Interpreter Project (name pending)

In April, I started to live stream (mentioned above) building a stack-based virtual machine/bytecode interpreter in C. I managed to implement a few things including heap allocation and a simple garbage collector.

I really enjoy streaming and working on this project, and I'd like to continue it in 2021.


In July, I wanted an easy way to share links between various computers in my local network, so I spent a weekend and built a small website to do that.

An image of sharelinks

So far it works well and I'm satisfied with the result.


In October, I started playing D&D with a few friends and we joked about the idea of having a bot to remind us when the next session is; one friend made a telegram bot and I made a discord bot.

An image of sephibot


Also in October, after realizing that I am building multiple apps that need to save some data but do not actually need a full-fledged database, I created a simple library that can handle saving data to a file without running into data races (I think!).

This probably isn't something you should use unless you're just having fun and in that case knock yourself out.


I've been fascinated with the power of Datalog for a long time, and have been thinking about implementing something similar to it myself.

I started to work on logi in October and stopped, but then I picked it back up a couple of weeks ago to make it a little more presentable for this post.

Since Logi is implemented in Haskell, I was easily able to plug it into my discord bot, which also saves its data using simple-file-db. Thus combining three different projects into one :)

An image of sephibot

It's been a very fun project and I'm quite happy with the result, but there's still much to do which I might (or might not) do in 2021!

Final notes

There's no denying that 2020 was an extremely tough year. And from the looks of it 2021 will also be extremely challenging. I wish everyone all the best. Stay healthy and safe as much as you can and I hope 2021 will be better!

by Gil at December 29, 2020 12:00 AM

December 28, 2020

Monday Morning Haskell

Countdown to 2021!


At last. 2020 is nearly over. It's been a tumultuous year for the entire world, and I think most of us are glad to be turning over a new page, even if the future is still uncertain. As I always do, I'll sign off the year with a review of the different concepts we've looked at this year, and give a preview of what to expect in 2021.

2020 In Review

There were three major themes we covered this year. For much of the start of this year, we focused on AI. The main product of this was our work on a Haskell version of Open AI Gym. We explored ways to generalize the idea of an AI agent, including cool integrations of Haskell ideas like type families. We even wrote this in such a way that we could incorporate TensorFlow! You can read about that work in our Open AI Series.

Over the summer, we switched gears a bit and focused on Rust. In our Rust Web Series we solved some more interesting problems to parallel our Real World Haskell Series. This included building a simple web server and connecting to a database.

Then our final work area was on our Haskellings program. Modeled after Rustlings, this is intended to be an automated beginner tutorial for the Haskell language. For the first time, I changed up the content a bit and did a video series, rather than a written blog series. So you can find the videos for that series on our YouTube Channel!

We're looking for people to contribute exercises (and possibly other code) to the Haskellings project, so definitely visit the repository if you'd like to help!

Looking Forward

There will be some big changes to the blog in 2021. Here are some of the highlights I'm looking forward to:

  • Spending more time on how Haskell fits into the broader programming ecosystem and what role it can play for those new to the industry. What can beginning programmers learn from the Haskell language and toolchain? What lessons of Haskell are applicable across many different languages?
  • More exploration of different content types and media. As mentioned above, I spent the last part of 2020 experimenting with video blogs. I expect to do more of this type of experimenting this year.
  • Upgrading the site's appearance and organization. Things have been pretty stagnant for a while, and there are a lot of improvements I'd like to make. For one example, I'd like to make coding sections more clear and interactive in blog posts.
  • New, lighter-weight course material. Right now, our course page has 2 rather large courses. This year I'm going to look at breaking the material in these out into smaller, more manageable chunks, as well as adding a couple totally new course offerings at this smaller size.

I've set a lot of these goals before and fallen short. Unfortunately, I've found that these priorities often get pushed aside due to my desire to publish new content weekly, as I've been doing for over 4 years now (how time flies!). But starting in 2021, I'm going to focus on quality over quantity. I do not plan on publishing every week, and a lot of the blogs I do publish will highlight improvements to old content, rather than being new, detailed technical tutorials. I hope these changes will take the most important content on the blog and make it much more useful to the intended audiences.

I also have a tendency of creating projects to demonstrate concepts, but leave the projects behind once I am done writing about those concepts. This year, I hope to take a couple of my projects, specifically Open AI Gym and the Haskellings Beginner Tutorial and turn them into polished products that other developers will want to use. This will take a lot of focused time and effort, but I think it will be worth it.

So even though you might not see a new post every Monday, never fear! Monday Morning Haskell is here to stay! I hope all of you have a happy and safe new year!

by James Bowen at December 28, 2020 03:30 PM

Ken T Takusagawa

[jlylyjlr] not modifying state

import qualified Control.Monad.State as State;
import Control.Monad.State(State,runState);
type Mystate = ...

we can assert that a monadic function in the State monad does not modify state through comments:

-- f only reads state, and does not modify it.
f :: State Mystate Int;
f = ...

demo1 :: State Mystate ();
demo1 = do {
x <- f; -- f does not modify state

better would be to make the function pure, asserting it cannot modify state in a way the typechecker can verify:

f2 :: Mystate -> Int;
f2 = ...

demo2 :: State Mystate ();
demo2 = do {
x <- State.get >>= (return . f2);

the convenience function "gets" concisely enables the same thing:

demo3 :: State Mystate ();
demo3 = do {
x <- State.gets f2;

the name of "gets" suggests it is meant to be used for getting a projection of the state, using the specified projection function.  however, it can do any read-only computation on state, not limited to things which seem like projections.  (though arguably any read-only computation on state seems like a projection.)

originally, i thought we might have to do something much more complicated to assert at the type level that a function will not modify state, perhaps wrapping something with something using monad transformers Control.Monad.Reader.ReaderT or Control.Monad.State.StateT.  fortunately, nothing so complicated is needed.


the following is potentially confusing, so is not recommended for the faint of heart.  although we've made f2 above a pure function, we can still use monadic "do" notation to define it.  this is because a unary function is the Reader monad in disguise.  the "ask" or "get" function is "id".

{-# LANGUAGE ScopedTypeVariables #-}
astring :: Mystate -> String;
astring = do {
 foo :: Mystate <- id; -- read state
 return $ show foo;

f2 :: Mystate -> Int;
f2 = do {
 s1 :: String <- astring >>= doubleit >>= doubleit;
 return $ length s1;

-- monadic function that takes an argument.  the state variable should be the last input argument.
doubleit :: String -> Mystate -> String;
-- equivalently, read the above type signature as follows, where (->) is the type constructor for a read-only state monad, Mystate is encapsulated state, and String is the monadic return value:
-- doubleit :: String -> (((->) Mystate) String);
doubleit x = do {
 _ :: Mystate <- id; -- useless statement just to demonstrate reading state inside a do block
 return $ x++x;

by Unknown at December 28, 2020 04:30 AM

FP Complete

Cloning a reference and method call syntax in Rust

This semi-surprising corner case came up in some recent Rust training I was giving. I figured a short write-up may help some others in the future.

Rust's language design focuses on ergonomics. The goal is to make common patterns easy to write on a regular basis. This overall works out very well. But occasionally, you end up with a surprising outcome. And I think this situation is a good example.

Let's start off by pretending that method syntax doesn't exist at all. Let's say I've got a String, and I want to clone it. I know that there's a Clone::clone method, which takes a &String and returns a String. We can leverage that like so:

fn uses_string(x: String) {
    println!("I consumed the String! {}", x);
}

fn main() {
    let name = "Alice".to_owned();
    let name_clone = Clone::clone(&name);
}

Notice that I needed to pass &name to clone, not simply name. If I did the latter, I would end up with a type error:

error[E0308]: mismatched types
 --> src\
7 |     let name_clone = Clone::clone(name);
  |                                   ^^^^
  |                                   |
  |                                   expected reference, found struct `String`
  |                                   help: consider borrowing here: `&name`

And that's because Rust won't automatically borrow a reference from function arguments. You need to explicitly say that you want to borrow the value. Cool.

But now I've remembered that method syntax is, in fact, a thing. So let's go ahead and use it!

let name_clone = (&name).clone();

Remembering that clone takes a &String and not a String, I've gone ahead and helpfully borrowed from name before calling the clone method. And I needed to wrap up that whole expression in parentheses, otherwise it will be parsed incorrectly by the compiler.

That all works, but it's clearly not the way we want to write code in general. Instead, we'd like to forgo the parentheses and the & symbol. And fortunately, we can! Most Rustaceans early on learn that you can simply do this:

let name_clone = name.clone();

In other words, when we use method syntax, we can call .clone() on either a String or a &String. That's because with a method call expression, "the receiver may be automatically dereferenced or borrowed in order to call a method." Essentially, the compiler follows these steps:

  • What's the type of name? OK, it's a String
  • Is there a method available that takes a String as the receiver? Nope.
  • OK, try borrowing it. Is there a method available that takes a &String as the receiver? Yes. Use that!

And, for the most part, this works exactly as you'd expect. Until it doesn't. Let's start off with a confusing error message. Let's say I've got a helper function to loudly clone a String:

fn clone_loudly(x: &String) -> String {
    println!("Cloning {}", x);
    x.clone()
}

fn uses_string(x: String) {
    println!("I consumed the String! {}", x);
}

fn main() {
    let name = "Alice".to_owned();
    let name_clone = clone_loudly(&name);
}

Looking at clone_loudly, I realize that I can easily generalize this to more than just a String. The only two requirements are that the type must implement Display (for the println! call) and Clone. Let's go ahead and implement that, accidentally forgetting about the Clone:

use std::fmt::Display;

fn clone_loudly<T: Display>(x: &T) -> T {
    println!("Cloning {}", x);
    x.clone()
}

As you'd expect, this doesn't compile. However, the error message given may be surprising. If you're like me, you were probably expecting an error message about missing a Clone bound on T. In fact, we get something else entirely:

error[E0308]: mismatched types
 --> src\
2 | fn clone_loudly<T: Display>(x: &T) -> T {
  |                 - this type parameter - expected `T` because of return type
3 |     println!("Cloning {}", x);
4 |     x.clone()
  |     ^^^^^^^^^ expected type parameter `T`, found `&T`
  = note: expected type parameter `T`
                  found reference `&T`

Strangely enough, the .clone() seems to have succeeded, but returned a &T instead of a T. That's because the method call expression is following the same steps as above with String, namely:

  • What's the type of x? OK, it's a &T
  • Is there a clone method available that takes a &T as the receiver? Nope, since we don't know that T implements the Clone trait.
  • OK, try borrowing it. Is there a method available that takes a &&T as the receiver? Interestingly yes.

Let's dig in on that Clone implementation a bit. Removing a bit of noise so we can focus on the important bits:

impl<T> Clone for &T {
    fn clone(self: &&T) -> &T {
        *self
    }
}

Since references are Copy, dereferencing a reference to a reference results in copying the inner reference value. What I find fascinating, and slightly concerning, is that we have two orthogonal features in the language:

  • Method call syntax automatically causing borrows
  • The ability to implement traits for both a type and a reference to that type

When combined, there's some level of ambiguity about which trait implementation will end up being used.

In this example, we're fortunate that the code didn't compile. We ended up with nothing more than a confusing error message. I haven't yet run into a real life issue where this behavior can result in code which compiles but does the wrong thing. It's certainly theoretically possible, but seems unlikely to occur unintentionally. That said, if anyone has been bitten by this, I'd be very interested to hear the details.

So the takeaway: autoborrowing and derefing as part of method call syntax is a great feature of the language. It would be a major pain to use Rust without it. I'm glad it's present. Having traits implemented for references is a great feature, and I wouldn't want to use the language without it.

But every once in a while, these two things bite us. Caveat emptor.

December 28, 2020 12:00 AM

December 27, 2020

Philip Wadler

My First Type Theory

Who knew? Add eyeballs and rhymes, and type theory becomes cute! An introductory video by Arved Friedemann.

by Philip Wadler at December 27, 2020 04:27 PM

Michael Snoyman

Live Coding: Rust reverse proxy

I've been doing quite a bit of experimenting recently with my video recording and streaming setup. As a combination of "let's test it out" and "let's see if anyone likes this kind of thing," I'm going to be doing a live coding session this Tuesday, December 29, at 10am Eastern. If everything goes according to plan (which it may not), you can view the live stream and the recording on YouTube:

I'm also planning on streaming simultaneously on Twitter via Periscope, so if you follow me on Twitter you may see it pop up there. The current plan is:

  • Live code a reverse proxy using Rust and Hyper. I'm hoping to use the latest Hyper 0.14 and Tokio 1.0, which I haven't tested out yet at all.
  • Make modifications in response to any chats I see (assuming I get chats to work).
  • Answer any Q&A that pops up.

This first go through will likely be rocky, so if you want to come and help me out, and/or laugh at me fumbling through coding and fighting A/V issues simultaneously, come check it out!

If anyone has requests for future live streaming sessions, or Q&A they'd like to queue up in advance, please let me know in the comments below or on Twitter. I'd definitely like to do some Haskell live coding too, so I'm curious what people would like to see on that front in particular.

December 27, 2020 12:00 AM

Donnacha Oisín Kidney

Trees indexed by a Cayley Monoid

Posted on December 27, 2020
Tags: Haskell

The Cayley monoid is well-known in Haskell (difference lists, for instance, are a specific instance of the Cayley monoid), because it gives us O(1) (<>). What’s less well known is that it’s also important in dependently typed programming, because it gives us definitional associativity. In other words, the type x . (y . z) is definitionally equal to (x . y) . z in the Cayley monoid.
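As a concrete reminder, here is a minimal standalone sketch of the Cayley representation of the list monoid, i.e. difference lists (the names DList, fromList, and toList are illustrative, mirroring the dlist package, not code from this post):

```haskell
-- A list is represented by the function that prepends it, so (<>) is
-- just function composition: O(1), and associative by definition.
newtype DList a = DList ([a] -> [a])

instance Semigroup (DList a) where
  DList f <> DList g = DList (f . g)

instance Monoid (DList a) where
  mempty = DList id

fromList :: [a] -> DList a
fromList xs = DList (xs ++)

toList :: DList a -> [a]
toList (DList f) = f []
```

For example, toList (fromList [1,2] <> fromList [3]) rebuilds the ordinary list [1,2,3], but the (<>) itself did no list traversal.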

Some helpers and extra code

data Nat = Z | S Nat

type family (+) (n :: Nat) (m :: Nat) :: Nat where
  Z   + m = m
  S n + m = S (n + m)

I used a form of the type-level Cayley monoid in a previous post to type vector reverse without proofs. I figured out the other day another way to use it to type tree flattening.

Say we have a size-indexed tree and vector:

data Tree (a :: Type) (n :: Nat) :: Type where
  Leaf  :: a -> Tree a (S Z)
  (:*:) :: Tree a n -> Tree a m -> Tree a (n + m)

data Vec (a :: Type) (n :: Nat) :: Type where
  Nil  :: Vec a Z
  (:-) :: a -> Vec a n -> Vec a (S n)

And we want to flatten it to a list in O(n) time:

treeToList :: Tree a n -> Vec a n
treeToList xs = go xs Nil
  where
    go :: Tree a n -> Vec a m -> Vec a (n + m)
    go (Leaf    x) ks = x :- ks
    go (xs :*: ys) ks = go xs (go ys ks)

Haskell would complain specifically that you hadn’t proven the monoid laws:

• Couldn't match type ‘n’ with ‘n + 'Z’
• Could not deduce: (n2 + (m1 + m)) ~ ((n2 + m1) + m)

But it seems difficult at first to figure out how we can apply the same trick as we used for vector reverse: there’s no real way for the Tree type to hold a function from Nat to Nat.

To solve this problem we can borrow a trick that Haskellers had to use in the good old days before type families to represent type-level functions: types (or more usually classes) with multiple parameters.

data Tree' (a :: Type) (n :: Nat) (m :: Nat) :: Type where
  Leaf  :: a -> Tree' a n (S n)
  (:*:) :: Tree' a n2 n3
        -> Tree' a n1 n2
        -> Tree' a n1 n3

The Tree' type here has three parameters: we’re interested in the last two. The first of these is actually an argument to a function in disguise; the second is its result. To make it back into a normal size-indexed tree, we apply that function to zero:

type Tree a = Tree' a Z

three :: Tree Int (S (S (S Z)))
three = (Leaf 1 :*: Leaf 2) :*: Leaf 3

This makes the treeToList function typecheck without complaint:

treeToList :: Tree a n -> Vec a n
treeToList xs = go xs Nil
  where
    go :: Tree' a x y -> Vec a x -> Vec a y
    go (Leaf    x) ks = x :- ks
    go (xs :*: ys) ks = go xs (go ys ks)

by Donnacha Oisín Kidney at December 27, 2020 12:00 AM

December 26, 2020

Mark Jason Dominus


Screenshot of a tweet. It says “Keys for me: kibbe, cheese pie, spinach pie, stuff grape leaves (no meat), olives, cheeses, soujuk (spicy lamb sausage), basterma (err, spicy beed prosciutto), hummous, baba g., taramasalata, immam bayadi”

This tweet from Raffi Melkonian describes the appetizer plate at his house on Christmas. One item jumped out at me:

basterma (err, spicy beef prosciutto)

I wondered what that was like, and then I realized I do have some idea, because I recognized the word. Basterma is not an originally Armenian word, it's a Turkish loanword, I think canonically spelled pastırma. And from Turkish it made a long journey through Romanian and Yiddish to arrive in English as… pastrami

For which “spicy beef prosciutto” isn't a bad description at all.

by Mark Dominus at December 26, 2020 07:00 PM

December 25, 2020

Tweag I/O

The Shrinks Applicative

One of the key ingredients of randomised property testing is the shrinker. The shrinker turns the output of a failed property test from “your function has a bug” to “here is a small actionable example where your function fails to meet the specification”. Specifically, after a randomised test has found a counterexample, the shrinker will kick in and recursively try smaller potential counterexamples until it can’t find a way to reduce the counterexample anymore.

Roll your own shrinker

When it comes to writing a shrinker for a particular generator, my advice is:

Hedgehog will automatically generate shrinkers for you, even for the most complex types. They are far from perfect, but in most cases, writing a shrinker manually is too hard to be worth it.

Nevertheless, there are some exceptions to everything. And you may find yourself in a situation where you have to write something which is much like a QuickCheck shrinker, but not quite. I have. If it happens to you, this blog post provides a tool to add to your tool belt.

Applicative functors

I really like applicative functors. If only because of how easy they make it to write traversals.

data T a
  = MkT1 a
  | MkT2 a (T a)
  | MkT3 a (T a) a

instance Traversable T where
  traverse f (MkT1 a) = MkT1 <$> f a
  traverse f (MkT2 a as) = MkT2 <$> f a <*> traverse f as
  traverse f (MkT3 a1 as a2) = MkT3 <$> f a1 <*> traverse f as <*> f a2

There is a zen to it, really: we’re just repeating the definition. Just slightly accented.


So when defining a shrinker, I want to reach for an applicative functor.

Let’s look at the type of shrink: from a counterexample, shrink proposes a list of smaller candidate counterexamples to check:

shrink :: a -> [a]

Ah, great! [] is already an applicative functor. So we can go and define

shrink :: (a, b) -> [(a, b)]
shrink (a, b) = (,) <$> shrink a <*> shrink b
-- Which expands to:
shrink (a, b) = [(a', b') | a' <- shrink a, b' <- shrink b]

But if I compare this definition with the actual shrinker for (a, b) in Quickcheck:

shrink :: (a, b) -> [(a, b)]
shrink (x, y) =
     [ (x', y) | x' <- shrink x ]
  ++ [ (x, y') | y' <- shrink y ]

I can see that it’s a bit different. My list-applicative based implementation shrinks too fast: it shrinks both components of the pair at the same time, while QuickCheck’s hand-written shrinker is more prudent and shrinks one component at a time.
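The difference is easy to see on a concrete value. In this standalone sketch, shrinkInt is a toy shrinker that halves toward zero (an assumption for illustration, not QuickCheck's actual Int shrinker):

```haskell
-- Toy shrinker: propose the value halved (illustration only).
shrinkInt :: Int -> [Int]
shrinkInt 0 = []
shrinkInt n = [n `div` 2]

-- List-applicative style: shrinks both components at once.
shrinkBoth :: (Int, Int) -> [(Int, Int)]
shrinkBoth (x, y) = [(x', y') | x' <- shrinkInt x, y' <- shrinkInt y]

-- QuickCheck style: shrinks one component at a time.
shrinkOne :: (Int, Int) -> [(Int, Int)]
shrinkOne (x, y) = [(x', y) | x' <- shrinkInt x] ++ [(x, y') | y' <- shrinkInt y]
```

On (4, 4), shrinkBoth proposes only (2, 2), while shrinkOne proposes (2, 4) and (4, 2): the candidates that change a single component are lost in the first style.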

The Shrinks applicative

At this point I could say that it’s good enough: I will miss some shrinks, but it’s a price I’m willing to pay. Yet, I can have my cake and eat it too.

The problem of using the list applicative is that I can’t construct all the valid shrinks of (x, y) based solely on the shrink x and shrink y: I also need x and y. The solution is simply to carry the original x and y around.

Let’s define our Shrinks applicative:

data Shrinks a = Shrinks { original :: a, shrinks :: [a] }
  deriving (Functor)

-- | Class laws:
-- * `original . shrinkA = id`
-- * `shrinks . shrinkA = shrink`
class Shrinkable a where
  shrinkA :: a -> Shrinks a
  shrinkA x = Shrinks { original=x, shrinks=shrink x}

  shrink :: a -> [a]
  shrink x = shrinks (shrinkA x)
  {-# MINIMAL shrinkA | shrink #-}

All we need to do is to give to Shrinks an Applicative instance. Which we can base on the Quickcheck implementation of shrink on pairs:

instance Applicative Shrinks where
  pure x = Shrinks { original=x, shrinks=[] }

  fs <*> xs = Shrinks
    { original = (original fs) (original xs)
    , shrinks = [f (original xs) | f <- shrinks fs] ++ [(original fs) x | x <- shrinks xs]
    }

It is a simple exercise to verify the applicative laws. In the process you will prove that

shrinkA :: (a, b, c) -> Shrinks (a, b, c)
shrinkA (x, y, z) = (,,) <$> shrinkA x <*> shrinkA y <*> shrinkA z

does indeed shrink one component at a time.
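To check this concretely, here is a standalone sketch of the instance applied to a pair, again using a toy Int shrinker that halves toward zero (an assumption for illustration):

```haskell
data Shrinks a = Shrinks { original :: a, shrinks :: [a] }

instance Functor Shrinks where
  fmap f (Shrinks o s) = Shrinks (f o) (map f s)

instance Applicative Shrinks where
  pure x = Shrinks x []
  fs <*> xs = Shrinks
    (original fs (original xs))
    ([f (original xs) | f <- shrinks fs] ++ [original fs x | x <- shrinks xs])

-- Toy shrinker: halve toward zero (illustration only).
shrinkIntA :: Int -> Shrinks Int
shrinkIntA 0 = Shrinks 0 []
shrinkIntA n = Shrinks n [n `div` 2]

pairShrinks :: (Int, Int) -> [(Int, Int)]
pairShrinks (x, y) = shrinks ((,) <$> shrinkIntA x <*> shrinkIntA y)
```

pairShrinks (4, 4) produces (2, 4) and (4, 2): each candidate shrinks one component while keeping the other at its original value, matching QuickCheck's hand-written pair shrinker.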

A word of caution

Using a traversal-style definition is precisely what we want for fixed-shaped data types. But, in general, shrinkers require a bit more thought to maximise their usefulness. For instance, in a list, you will typically want to reduce the size of the list. Here is a possible shrinker for lists:

instance Shrinkable a => Shrinkable [a] where
  shrink xs =
    -- Remove one element
    [ take k xs ++ drop (k+1) xs | k <- [0 .. length xs - 1]]
    -- or, shrink one element
    ++ shrinks (traverse shrinkA xs)
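
Putting the pieces together in one runnable file (my addition; again with a toy Int shrinker, assumed only for illustration), the list shrinker first drops elements, then shrinks one element at a time:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Self-contained sketch combining the post's pieces.
data Shrinks a = Shrinks { original :: a, shrinks :: [a] }
  deriving (Functor)

instance Applicative Shrinks where
  pure x = Shrinks x []
  fs <*> xs = Shrinks
    { original = original fs (original xs)
    , shrinks  = [f (original xs) | f <- shrinks fs]
              ++ [original fs x  | x <- shrinks xs]
    }

-- Toy shrinker, assumed for illustration: any non-zero Int shrinks to 0.
shrinkA :: Int -> Shrinks Int
shrinkA 0 = Shrinks 0 []
shrinkA n = Shrinks n [0]

shrinkList :: [Int] -> [[Int]]
shrinkList xs =
  -- Remove one element
  [ take k xs ++ drop (k + 1) xs | k <- [0 .. length xs - 1] ]
  -- or, shrink one element
  ++ shrinks (traverse shrinkA xs)

main :: IO ()
main = print (shrinkList [1, 2])
```

This prints [[2],[1],[0,2],[1,0]]: the two one-element sublists, followed by the shrinks of each element in turn.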

December 25, 2020 12:00 AM

December 24, 2020

Douglas M. Auclair (geophf)

December 2020 1HaskellADay Problems and Solutions

by geophf at December 24, 2020 01:11 AM

December 23, 2020

Oliver Charles

Monad Transformers and Effects with Backpack

A good few years ago Edward Yang gifted us an implementation of Backpack - a way for us to essentially abstract modules over other modules, allowing us to write code independently of implementation. A big benefit of doing this is that it opens up new avenues for program optimization. When we provide concrete instantiations of signatures, GHC compiles it as if that were the original code we wrote, and we can benefit from a lot of specialization. So aside from organizational concerns, Backpack gives us the ability to write some really fast code. This benefit isn’t just theoretical - Edward Kmett gave us unpacked-containers, removing a level of indirection from all keys, and Oleg Grenrus showed us how we can use Backpack to “unroll” fixed sized vectors. In this post, I want to show how we can use Backpack to give us the performance benefits of explicit transformers, but without having library code commit to any specific stack. In short, we get the ability to have multiple interpretations of our program, but without paying the performance cost of abstraction.

The Problem

Before we start looking at any code, let’s look at some requirements, and understand the problems that come with some potential solutions. The main requirement is that we are able to write code that requires some effects (in essence, writing our code to an effect interface), and then run this code with different interpretations. For example, in production I might want to run as fast as possible, in local development I might want further diagnostics, and in testing I might want a pure or in memory solution. This change in representation shouldn’t require me to change the underlying library code.

Seasoned Haskellers might be familiar with the use of effect systems to solve these kinds of problems. Perhaps the most familiar is the mtl approach - perhaps unfortunately named, as the technique itself doesn’t have much to do with the library. In the mtl approach, we write our interfaces as type classes abstracting over some Monad m, and then provide instances of these type classes - either by stacking transformers (“plucking constraints”, in the words of Matt Parsons), or by a “mega monad” that implements many of these instances at once (e.g., Tweag’s capability approach).

Despite a few annoyances (e.g., the “n+k” problem, the lack of implementations being first-class, and a few other things), this approach can work well. It also has the potential to generate great code, but in practice it’s rarely possible to achieve maximal performance. In her excellent talk “Effects for Less”, Alexis King hits the nail on the head - despite being able to provide good code for the implementations of particular parts of an effect, the majority of effectful code is really just threading around inside the Monad constraint. When we’re being polymorphic over any Monad m, GHC is at a loss to do any further optimization - and how could it? We know nothing more than “there will be some >>= function when you get here, promise!” Let’s look at this in a bit more detail.

Say we have the following:

foo :: Monad m => m Int
foo = go 0 1_000_000_000
  where
    go acc 0 = return acc
    go acc i = return acc >> go (acc + 1) (i - 1)

This is obviously “I needed an example for my blog” levels of contrived, but at least small. How does it execute? What are the runtime consequences of this code? To answer, we’ll go all the way down to the STG level with -ddump-stg:

$wfoo =
    \r [ww_s2FA ww1_s2FB]
        let {
          Rec {
          $sgo_s2FC =
              \r [sc_s2FD sc1_s2FE]
                  case eqInteger# sc_s2FD lvl1_r2Fp of {
                    __DEFAULT ->
                        let {
                          sat_s2FK =
                              \u []
                                  case +# [sc1_s2FE 1#] of sat_s2FJ {
                                    __DEFAULT ->
                                        case minusInteger sc_s2FD lvl_r2Fo of sat_s2FI {
                                          __DEFAULT -> $sgo_s2FC sat_s2FI sat_s2FJ;
                                  }; } in
                        let {
                          sat_s2FH =
                              \u []
                                  let { sat_s2FG = CCCS I#! [sc1_s2FE]; } in  ww1_s2FB sat_s2FG;
                        } in  ww_s2FA sat_s2FH sat_s2FK;
                    1# ->
                        let { sat_s2FL = CCCS I#! [sc1_s2FE]; } in  ww1_s2FB sat_s2FL;
          end Rec }
        } in  $sgo_s2FC lvl2_r2Fq 0#;

foo =
    \r [w_s2FM]
        case w_s2FM of {
          C:Monad _ _ ww3_s2FQ ww4_s2FR -> $wfoo ww3_s2FQ ww4_s2FR;

In STG, whenever we have a let we have to do a heap allocation - and this code has quite a few! Of particular interest is what’s going on inside the actual loop $sgo_s2FC. This loop first compares i to see if it’s 0. In the case that it’s not, we allocate two objects and call ww_s2FA. If you squint, you’ll notice that ww_s2FA is the first argument to $wfoo, and it ultimately comes from unpacking a C:Monad dictionary. I’ll save you the labor of working out what this is - ww_s2FA is the >>. We can see that every iteration of our loop incurs two allocations for each argument to >>. A heap allocation doesn’t come for free - not only do we have to do the allocation, the entry into the heap incurs a pointer indirection (as heap objects have an info table that points to their entry), and merely by being on the heap we increase our GC time, as we have a bigger heap to traverse. While my STG knowledge isn’t great, my understanding of this code is that every time we want to call >>, we need to supply it with its arguments. This means we have to allocate two closures for this function call - which is basically whenever we pressed “return” on our keyboard when we wrote the code. This seems crazy - can you imagine if you were told in C that merely using ; would cost time and memory?

If we compile this code in a separate module, mark it as {-# NOINLINE #-}, and then call it from main - how’s the performance? Let’s check!

module Main (main) where

import Foo

main :: IO ()
main = print =<< foo
$ ./Main +RTS -s
 176,000,051,368 bytes allocated in the heap
       8,159,080 bytes copied during GC
          44,408 bytes maximum residency (1 sample(s))
          33,416 bytes maximum slop
               0 MB total memory in use (0 MB lost due to fragmentation)

                                     Tot time (elapsed)  Avg pause  Max pause
  Gen  0     169836 colls,     0 par    0.358s   0.338s     0.0000s    0.0001s
  Gen  1         1 colls,     0 par    0.000s   0.000s     0.0001s    0.0001s

  INIT    time    0.000s  (  0.000s elapsed)
  MUT     time   54.589s  ( 54.627s elapsed)
  GC      time    0.358s  (  0.338s elapsed)
  EXIT    time    0.000s  (  0.000s elapsed)
  Total   time   54.947s  ( 54.965s elapsed)

  %GC     time       0.0%  (0.0% elapsed)

  Alloc rate    3,224,078,302 bytes per MUT second

  Productivity  99.3% of total user, 99.4% of total elapsed

OUCH. My i7 laptop took almost a minute to iterate a loop 1 billion times.

A little disclaimer: I’m intentionally painting a severe picture here - in practice this cost is irrelevant to all but the most performance sensitive programs. Also, notice where the let bindings are in the STG above - they are nested within the loop. This means that we’re essentially allocating “as we go” - these allocations are incredibly cheap, and the growth in GC pressure is equally trivial, resulting in more or less constant GC pressure, rather than impending doom. For code that is likely to do any IO, this cost is likely negligible compared to the rest of the work. Nonetheless, it is there, and when it’s there, it’s nice to know if there are alternatives.

So, is the TL;DR that Haskell is completely incapable of writing effectful code? No, of course not. There is another way to compile this program, but we need a bit more information. If we happen to know what m is and we have access to the Monad dictionary for m, then we might be able to inline >>=. When we do this, GHC can be a lot smarter. The end result is code that now doesn’t allocate for every single >>=, and instead just gets on with doing work. One trivial way to witness this is to define everything in a single module (Alexis rightly points out this is a trap for benchmarking that many fall into, but for our uses it’s the behavior we actually want).

This time, let’s write everything in one module:

module Main ( main ) where

And the STG:

lvl_r4AM = CCS_DONT_CARE S#! [0#];

lvl1_r4AN = CCS_DONT_CARE S#! [1#];

Rec {
main_$sgo =
    \r [void_0E sc1_s4AY sc2_s4AZ]
        case eqInteger# sc1_s4AY lvl_r4AM of {
          __DEFAULT ->
              case +# [sc2_s4AZ 1#] of sat_s4B2 {
                __DEFAULT ->
                    case minusInteger sc1_s4AY lvl1_r4AN of sat_s4B1 {
                      __DEFAULT -> main_$sgo void# sat_s4B1 sat_s4B2;
          1# -> let { sat_s4B3 = CCCS I#! [sc2_s4AZ]; } in  Unit# [sat_s4B3];
end Rec }

main2 = CCS_DONT_CARE S#! [1000000000#];

main1 =
    \r [void_0E]
        case main_$sgo void# main2 0# of {
          Unit# ipv1_s4B7 ->
              let { sat_s4B8 = \s [] $fShowInt_$cshow ipv1_s4B7;
              } in  hPutStr' stdout sat_s4B8 True void#;

main = \r [void_0E] main1 void#;

main3 = \r [void_0E] runMainIO1 main1 void#;

main = \r [void_0E] main3 void#;

The same program compiles down to a much tighter loop that is almost entirely free of allocations. In fact, the only allocation that happens is when the loop terminates, and it’s just boxing the unboxed integer that’s been accumulating in the loop.

As we might hope, the performance of this is much better:

$ ./Main +RTS -s
  16,000,051,312 bytes allocated in the heap
         128,976 bytes copied during GC
          44,408 bytes maximum residency (1 sample(s))
          33,416 bytes maximum slop
               0 MB total memory in use (0 MB lost due to fragmentation)

                                     Tot time (elapsed)  Avg pause  Max pause
  Gen  0     15258 colls,     0 par    0.031s   0.029s     0.0000s    0.0000s
  Gen  1         1 colls,     0 par    0.000s   0.000s     0.0001s    0.0001s

  INIT    time    0.000s  (  0.000s elapsed)
  MUT     time    9.402s  (  9.405s elapsed)
  GC      time    0.031s  (  0.029s elapsed)
  EXIT    time    0.000s  (  0.000s elapsed)
  Total   time    9.434s  (  9.434s elapsed)

  %GC     time       0.0%  (0.0% elapsed)

  Alloc rate    1,701,712,595 bytes per MUT second

  Productivity  99.7% of total user, 99.7% of total elapsed

Our time in the garbage collector dropped by a factor of 10, from 0.3s to 0.03s. Our total allocation dropped from 176GB (yes, you read that right) to 16GB (I’m still not entirely sure what this means, maybe someone can enlighten me). Most importantly, our total runtime dropped from 54s to just under 10s. All this from just knowing what m is at compile time.

So GHC is capable of producing excellent code for monads - what are the circumstances under which this happens? We need, at least:

  1. The source code of the thing we’re compiling must be available. This means it’s either defined in the same module, or is available with an INLINABLE pragma (or GHC has chosen to add this itself).

  2. The definitions of >>= and friends must also be available in the same way.

These constraints start to feel a lot like needing whole program compilation, and in practice are unreasonable constraints to reach. To understand why, consider that most real world programs have a small Main module that opens some connections or opens some file handles, and then calls some library code defined in another module. If this code in the other module was already compiled, it will (probably) have been compiled as a function that takes a Monad dictionary, and just calls the >>= function repeatedly in the same manner as our original STG code. To get the allocation-free version, this library code needs to be available to the Main module itself - as that’s the module that chooses what type to instantiate m with - which means the library code has to have marked that code as being inlinable. While we could add INLINE everywhere, this leads to an explosion in the amount of code produced, and can skyrocket compilation times.
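
For reference, exposing an unfolding to downstream modules looks like this (a sketch of my own, with a smaller loop; the pragma keeps foo’s definition in the interface file so a module that fixes m can specialize the whole loop):

```haskell
-- In a library module: INLINABLE keeps foo's unfolding available to
-- downstream modules, so when a caller fixes `m` to a concrete monad,
-- GHC can specialize the loop and eliminate the dictionary passing.
foo :: Monad m => m Int
foo = go 0 (1000 :: Int)
  where
    go acc 0 = return acc
    go acc i = return acc >> go (acc + 1) (i - 1)
{-# INLINABLE foo #-}

main :: IO ()
main = print =<< (foo :: IO Int)
```

The cost is exactly the code-size explosion described above: every instantiating module gets its own specialized copy.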

Alexis’ eff library works around this by not being polymorphic in m. Instead, it chooses a concrete monad with all sorts of fancy continuation features. Likewise, if we commit to a particular monad (a transformer stack, or maybe using RIO), we again avoid this cost. Essentially, if the monad is known a priori at time of module compilation, GHC can go to town. However, the latter also commits to semantics - by choosing a transformer stack, we’re choosing a semantics for our monadic effects.

With the scene set, I now want to present you with another approach to solving this problem using Backpack.

A Backpack Primer

Vanilla GHC has a very simple module system - modules are essentially a method for name-spacing and separate compilation; they don’t do much more. The Backpack project extends this module system with a new concept - signatures. A signature is like the “type” of a module - a signature might mention the presence of some types, functions and type class instances, but it says nothing about what the definitions of these entities are. We’re going to (ab)use this system to build up transformer stacks at configuration time, and allow our library to be abstracted over different monads. By instantiating our library code with different monads, we get different interpretations of the same program.

I won’t sugar coat it - what follows is going to be pretty miserable. Extremely fun, but miserable to write in practice. I’ll let you decide if you want to inflict this misery on your coworkers - I’m just here to show you it can be done!

A Signature for Monads

The first thing we’ll need is a signature for data types that are monads. This is essentially the “hole” we’ll rely on with our library code - it will give us the ability to say “there exists a monad”, without committing to any particular choice.

In our Cabal file, we have:

library monad-sig
  hs-source-dirs:   src-monad-sig
  signatures:       Control.Monad.Signature
  default-language: Haskell2010
  build-depends:    base

The important line here is signatures: Control.Monad.Signature which shows that this library is incomplete and exports a signature. The definition of Control/Monad/Signature.hsig is:

signature Control.Monad.Signature where

data M a
instance Functor M
instance Applicative M
instance Monad M

This simply states that any module with this signature has some type M with instances of Functor, Applicative and Monad.

Next, we’ll put that signature to use in our library code.

Library Code

For our library code, we’ll start with a new library in our Cabal file:

library business-logic
  hs-source-dirs:   lib
  signatures:       BusinessLogic.Monad
  exposed-modules:  BusinessLogic
  build-depends:
      base
    , fused-effects
    , monad-sig

  default-language: Haskell2010
  mixins:
    monad-sig requires (Control.Monad.Signature as BusinessLogic.Monad)

Our business-logic library itself exports a signature, which is really just a re-export of Control.Monad.Signature, but we rename it to something more meaningful. It’s this module that will provide the monad that has all of the effects we need. Along with this signature, we also export the BusinessLogic module:

{-# language FlexibleContexts #-}
module BusinessLogic where

import BusinessLogic.Monad ( M )
import Control.Algebra ( Has )
import Control.Effect.Empty ( Empty, guard )

businessCode :: Has Empty sig M => Bool -> M Int
businessCode b = do
  guard b
  return 42

In this module I’m using fused-effects as a framework to say which effects my monad should have (though this is not particularly important, I just like it!). Usually Has would be applied to a type variable m, but here we’re applying it to the type M. This type comes from BusinessLogic.Monad, which is a signature (you can confirm this by checking against the Cabal file). Other than that, this is all pretty standard!

Backpack-ing Monad Transformers

Now we get into the really fun stuff - providing implementations of effects. I mentioned earlier that one possible way to do this is with a stack of monad transformers. Generally speaking, one would write a single newtype T m a for each effect type class, and have that transformer dispatch any effects in that class, and to lift any effects from other classes - deferring their implementation to m.

We’re going to take the same approach here, but we’ll absorb the idea of a transformer directly into the module itself. Let’s look at an implementation of the Empty effect. The Empty effect gives us a special empty :: m a function, which serves the purpose of stopping execution immediately. As a monad transformer, one implementation is MaybeT:

newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

But we can also write this using Backpack. First, our Cabal library:

library fused-effects-empty-maybe
  hs-source-dirs:   src-fused-effects-backpack
  default-language: Haskell2010
  build-depends:
      base
    , fused-effects
    , monad-sig

  exposed-modules: Control.Carrier.Backpack.Empty.Maybe
  mixins:
    monad-sig requires (Control.Monad.Signature as Control.Carrier.Backpack.Empty.Maybe.Base)

Our library exports the module Control.Carrier.Backpack.Empty.Maybe, but also has a hole - the type of base monad this transformer stacks on top of. As a monad transformer, this would be the m parameter, but when we use Backpack, we move that out into a separate module.

The implementation of Control.Carrier.Backpack.Empty.Maybe is short, and almost identical to the body of Control.Monad.Trans.Maybe - we just change any occurrences of m to instead refer to M from our .Base module:

{-# language BlockArguments, FlexibleContexts, FlexibleInstances, LambdaCase,
      MultiParamTypeClasses, TypeOperators, UndecidableInstances #-}

module Control.Carrier.Backpack.Empty.Maybe where

import Control.Algebra
import Control.Effect.Empty
import qualified Control.Carrier.Backpack.Empty.Maybe.Base as Base

type M = EmptyT

-- We could also write: newtype EmptyT a = EmptyT { runEmpty :: MaybeT Base.M a }
newtype EmptyT a = EmptyT { runEmpty :: Base.M (Maybe a) }

instance Functor EmptyT where
  fmap f (EmptyT m) = EmptyT $ fmap (fmap f) m

instance Applicative EmptyT where
  pure = EmptyT . pure . Just
  EmptyT f <*> EmptyT x = EmptyT do
    f >>= \case
      Nothing -> return Nothing
      Just f' -> x >>= \case
        Nothing -> return Nothing
        Just x' -> return (Just (f' x'))

instance Monad EmptyT where
  return = pure
  EmptyT x >>= f = EmptyT do
    x >>= \case
      Just x' -> runEmpty (f x')
      Nothing -> return Nothing

Finally, we make sure that EmptyT can handle the Empty effect:

instance Algebra sig Base.M => Algebra (Empty :+: sig) EmptyT where
  alg handle sig context = case sig of
    L Empty -> EmptyT $ return Nothing
    R other -> EmptyT $ thread (maybe (pure Nothing) runEmpty ~<~ handle) other (Just context)

Base Monads

Now that we have a way to run the Empty effect, we need a base case to our transformer stack. As our transformer is now built out of modules that conform to the Control.Monad.Signature signature, we need some modules for each monad that we could use as a base. For this POC, I’ve just added the IO monad:

library fused-effects-lift-io
  hs-source-dirs:   src-fused-effects-backpack
  default-language: Haskell2010
  build-depends:    base
  exposed-modules:  Control.Carrier.Backpack.Lift.IO

module Control.Carrier.Backpack.Lift.IO where
type M = IO

That’s it!
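
Other base monads are just more modules of the same shape. As an illustration (my own sketch, not part of the post; the module name is hypothetical), a pure base monad for testing could reuse Identity, with a Cabal stanza mirroring fused-effects-lift-io:

```haskell
-- Hypothetical: a pure base monad for tests. The enclosing Cabal
-- stanza would mirror fused-effects-lift-io, exposing this module
-- under the name below.
module Control.Carrier.Backpack.Lift.Identity where

import Data.Functor.Identity ( Identity )

type M = Identity
```

Mixing this in as BusinessLogic.Monad.Base instead of the IO module would give a pure interpretation of the same business logic.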

Putting It All Together

Finally we can put all of this together into an actual executable. We’ll take our library code, instantiate the monad to be a combination of EmptyT and IO, and write a little main function that unwraps this all into an IO type. First, here’s the Main module:

module Main where

import BusinessLogic
import qualified BusinessLogic.Monad

main :: IO ()
main = print =<< BusinessLogic.Monad.runEmptyT (businessCode True)

The BusinessLogic module we’ve seen before, but previously BusinessLogic.Monad was a signature (remember, we renamed Control.Monad.Signature to BusinessLogic.Monad). In executables, you can’t have signatures - executables can’t be depended on, so it doesn’t make sense for them to have holes, they must be complete. The magic happens in our Cabal file:

executable test
  main-is:          Main.hs
  hs-source-dirs:   exe
  build-depends:
      base
    , business-logic
    , fused-effects-empty-maybe
    , fused-effects-lift-io
    , transformers

  default-language: Haskell2010
  mixins:
    fused-effects-empty-maybe (Control.Carrier.Backpack.Empty.Maybe as BusinessLogic.Monad) requires (Control.Carrier.Backpack.Empty.Maybe.Base as BusinessLogic.Monad.Base),
    fused-effects-lift-io (Control.Carrier.Backpack.Lift.IO as BusinessLogic.Monad.Base)

Wow, that’s a mouthful! The work is really happening in mixins. Let’s take this step by step:

  1. First, we can see that we need to mixin the fused-effects-empty-maybe library. The first (X as Y) section specifies a list of modules from fused-effects-empty-maybe and renames them for the test executable that’s currently being compiled. Here, we’re renaming Control.Carrier.Backpack.Empty.Maybe as BusinessLogic.Monad. By doing this, we satisfy the hole in the business-logic library, which was otherwise incomplete.

  2. But fused-effects-empty-maybe itself has a hole - the base monad for the transformer. The requires part lets us rename this hole, but we’ll still need to plug it. For now, we rename it from Control.Carrier.Backpack.Empty.Maybe.Base to BusinessLogic.Monad.Base.

  3. Next, we mixin the fused-effects-lift-io library, and rename Control.Carrier.Backpack.Lift.IO to be BusinessLogic.Monad.Base. We’ve now satisfied the hole for fused-effects-empty-maybe, and our executable has no more holes and can be compiled.

We’re Done!

That’s “all” there is to it. We can finally run our program:

$ cabal run
Just 42

If you compare against businessCode you’ll see that we got past the guard and returned 42. Because we instantiated BusinessLogic.Monad with a MaybeT-like transformer, this 42 got wrapped up in Just.

Is This Fast?

The best check here is to just look at the underlying code itself. If we add

{-# options -ddump-simpl -ddump-stg -dsuppress-all #-}

to BusinessLogic and recompile, we’ll see the final code output to STDERR. The core is:

  = \ @ sig_a2cM _ b_a13P eta_B1 ->
      case b_a13P of {
        False -> (# eta_B1, Nothing #);
        True -> (# eta_B1, lvl1_r2NP #)

and the STG:

businessCode1 =
    \r [$d(%,%)_s2PE b_s2PF eta_s2PG]
        case b_s2PF of {
          False -> (#,#) [eta_s2PG Nothing];
          True -> (#,#) [eta_s2PG lvl1_r2NP];



In this post, I’ve hopefully shown how we can use Backpack to write effectful code without paying the cost of abstraction. What I didn’t answer is the question of whether or not you should. There’s a lot more to effectful code than I’ve presented, and it’s unclear to me whether this approach can scale to those needs. For example, if we needed something like mmorph’s MFunctor, what do we do? Are we stuck? I don’t know! Beyond these technical challenges, it’s clear that Backpack here is also not remotely ergonomic, as is. We’ve had to write five components just to get this done, and I pray for anyone who comes to read this code and has to orientate themselves.

Nonetheless, I think this an interesting point of the effect design space that hasn’t been explored, and maybe I’ve motivated some people to do some further exploration.

The code for this blog post can be found at

Happy holidays, all!

by Oliver Charles at December 23, 2020 12:00 AM

December 22, 2020

Roman Cheplyaka

Laptop vs. desktop for compiling Haskell code

I’ve been using various laptops as daily drivers for the last 12 years, and I’ve never felt they were inadequate — until this year. There were a few things that made me put together a desktop PC last month, but a big reason was to improve my Haskell compilation experience on big projects.

So let’s test how fast Haskell code compiles on a laptop vs. a desktop.


                 Laptop               Desktop
CPU              Intel Core i7-6500U  AMD Ryzen 7 3700X
Base clock       2.5 GHz              3.6 GHz
Boost clock      3.1 GHz              4.4 GHz
Number of cores  2                    8
Memory speed     2133 MT/s            4000 MT/s


I picked four Haskell packages for this test: pandoc, lens, hledger, and criterion. An individual test consists of building one of these packages or all of them together (represented here by a meta-package called all).

The build time includes the time to build all of the transitive dependencies. All sources are pre-downloaded, so just the compilation is timed.

The compilation is done using stack (current master with a custom patch), GHC 8.8.4, and the lts-16.26 Stackage snapshot, with the default flags.

The build time of each package (including the all meta-package) is measured 3 times, with all tests happening in a random order. There is a 2 minute break after each build to let the CPU cool down.

The CPU frequency governor is set to performance while compiling and to powersave during the cooling breaks.

To calculate the average level of parallelism achieved on each package, I divide the user CPU time by the wall-clock time (as reported by GNU time’s %U and %e, respectively), using the data from the desktop benchmark (as it has more potential for parallelism).

The full benchmark script is available here.

I also measured the average power drawn by both computers, both when running the benchmark and in the idle state. As my power meter only reports the instantaneous power and cumulative energy, I measured the cumulative energy (in W⋅h) at several random time points and fitted an ordinary least squares linear regression to find the average power.


The first result is that I had to take the laptop outside the house (0°C) to even be able to finish this benchmark; otherwise the computer would overheat and shut down. While the laptop was outside, the CPU temperature would rise up to 74°C. The desktop, on the other hand, had no issue keeping itself cool (< 60°C) under the room temperature with only the stock coolers.

And here are the timings.

Mean compile times (minutes:seconds) and their ratio
package    desktop  laptop  ratio
lens       01:50    02:53   1.57
criterion  03:49    06:05   1.59
hledger    04:28    07:51   1.75
pandoc     14:07    22:30   1.59
all        15:20    26:48   1.75
The column height represents the mean time, and the error bars (which collapse into thick black lines) show the maximum and minimum of the 3 runs

We can also see how well the desktop/laptop speed ratio is predicted by the parallelism achieved for each package.

The average power (where averaging also includes the cooling breaks) drawn during the benchmark was 19W for the laptop and 65W for the desktop.

The average idle power was 3W for the laptop and 37W for the desktop.


  1. The overheating laptop issue is real and has happened to me numerous times while working on real projects, forcing me to limit the number of threads and making the compilation even slower. This alone was worth getting a desktop PC.

  2. There’s a decent increase in the compilation speed, but it’s not huge. The average time ratio (1.65) is much closer to the ratio of clock frequencies (1.42–1.44) than to the difference in the combined power of all cores. Also, the desktop/laptop ratio grows slowly with the level of parallelism. My interpretation of this is that the (dual-core, 4-thread) laptop is capable of exploiting most of the parallelism available when building these packages.

    So the way things are today, I’d say a quad-core or probably even a dual-core CPU is enough for a Haskell developer to compile code.

    That said, I hope that our build systems become better at parallelism over the coming years.

  3. In terms of power efficiency, the laptop is a clear winner: twice as power-efficient for compilation (after adjusting for the speed difference) and 13 times as power-efficient when idle.

  4. I also played a bit with overclocking the desktop’s CPU. I’m not an experienced overclocker and didn’t dare to go to the extreme settings, but moderate overclocking (raising the clock speed to 3.8 GHz or enabling MSI Game Boost) actually resulted in longer compile times. My understanding is that overclocking affects all cores, while CPU’s default “boosting” logic (which is disabled by overclocking) can significantly boost the clock frequency of one or two cores when needed. The latter seems to be a much better fit for a compilation workload, where most of the cores are idle most of the time.


Thanks to Félix Baylac-Jacqué for educating me about the modern PC parts.

December 22, 2020 08:00 PM

Joachim Breitner

Don’t think, just defunctionalize

TL;DR: CPS-conversion and defunctionalization can help you to come up with a constant-stack algorithm.

Update: Turns out I inadvertently plagiarized the talk The Best Refactoring You’ve Never Heard Of by James Koppel. Please consider this a form of sincere flattery.

The starting point

Today, I’ll take you on another little walk through the land of program transformations. Let’s begin with a simple binary tree, with values of an unknown type in the leaves, as well as the canonical map function:

data T a = L a | B (T a) (T a)

map1 :: (a -> b) -> T a -> T b
map1 f (L x) = L (f x)
map1 f (B t1 t2) = B (map1 f t1) (map1 f t2)

As you can see, this map function is using the program stack as it traverses the tree. Our goal is now to come up with a map function that does not use the stack!

Why? Good question! In Haskell, there wouldn’t be a strong need for this, as the Haskell stack is allocated on the heap, just like your normal data, so there is plenty of stack space. But in other languages or environments, the stack space may have a hard limit, and it may be advised to not use unbounded stack space.

That aside, it’s a fun exercise, and that’s sufficient reason for me.

(In the following, I assume that tail-calls, i.e. those where a function ends with another function call, but without modifying its result, do not actually use stack space. Once all recursive function calls are tail calls, the code is equivalent to an imperative loop, as we will see.)


We could now just stare at the problem (rather the code), and try to come up with a solution directly. We’d probably think “ok, as I go through the tree, I have to remember all the nodes above me… so I need a list of those nodes… and for each of these nodes, I also need to remember whether I am currently processing the left child, and yet have to look at the right one, or whether I am done with the left child… so what do I have to remember about the current node…?”

… ah, my brain spins already. Maybe eventually I figure it out, but why think when we can derive the solution? So let’s start with map1 above, and rewrite it, in several mechanical steps, into a stack-less, tail-recursive solution.


Before we set out, let me rewrite the map function using a local go helper, as follows:

map2 :: forall a b. (a -> b) -> T a -> T b
map2 f t = go t
  where
    go :: T a -> T b
    go (L x) = L (f x)
    go (B t1 t2) = B (go t1) (go t2)

This transformation (effectively the “static argument transformation”) has the nice advantage that we do not have to pass f around all the time, and that when we copy the function, we only have to change the top-level name, but not the names of the inner functions.

Also, I find it more aesthetically pleasing.


A blunt, effective tool to turn code that is not yet using tail-calls into code that only uses tail-calls is use continuation-passing style. If we have a function of type … -> t, we turn it into a function of type … -> (t -> r) -> r, where r is the type of the result we want at the very end. This means the function now receives an extra argument, often named k for continuation, and instead of returning some x, the function calls k x.
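As a warm-up (my addition, not from the post), here is the same transformation applied to a simpler non-tail-recursive function, the length of a list:

```haskell
-- Direct version: the recursive call is not a tail call,
-- because we still add 1 to its result afterwards.
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

-- CPS version: instead of returning n, we call the continuation k n.
-- Every call is now a tail call.
lenCPS :: [a] -> (Int -> r) -> r
lenCPS []       k = k 0
lenCPS (_ : xs) k = lenCPS xs (\n -> k (n + 1))
```

Calling `lenCPS xs (\r -> r)` recovers the direct version, just as map3 below is called with the identity continuation.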

We can apply this to our go function. Here, both t and r happen to be T b; the type of finished trees:

map3 :: forall a b. (a -> b) -> T a -> T b
map3 f t = go t (\r -> r)
  where
    go :: T a -> (T b -> T b) -> T b
    go (L x)     k = k (L (f x))
    go (B t1 t2) k = go t1 (\r1 -> go t2 (\r2 -> k (B r1 r2)))

Note that when we initially call go, we pass the identity function (\r -> r) as the initial continuation.

Behold, suddenly all function calls are in tail position, and this code does not use stack space! Technically, we are done, although it is not quite satisfying: all these lambdas floating around obscure the meaning of the code, are maybe a bit slow to execute, and also, we didn’t really learn much yet. This is certainly not the code we would have written after “thinking hard”.


So let’s continue rewriting the code to something prettier, simpler. Something that does not use lambdas like this.

Again, there is a mechanical technique that can help us. It likely won’t make the code prettier, but it will get rid of the lambdas, so let’s do that and clean up later.

The technique is called defunctionalization (because it replaces functional values by plain data values), and can be seen as a form of refinement.

Note that we pass around values of type (T b -> T b), but we certainly don’t mean the full type (T b -> T b). Instead, only very specific values of that type occur in our program. So let us replace (T b -> T b) with a data type that contains representatives of just the values we actually use.

  1. We find all values of type (T b -> T b). These are:

    • (\r -> r)
    • (\r1 -> go t2 (\r2 -> k (B r1 r2)))
    • (\r2 -> k (B r1 r2))
  2. We create a datatype with one constructor for each of these:

     data K = I | K1 | K2

    (This is not complete yet.)

  3. We introduce an interpretation function that turns a K back into a (T b -> T b):

    eval :: K -> (T b -> T b)
    eval = (* TBD *)
  4. In the function go, instead of taking a parameter of type (T b -> T b), we take a K. And when we actually use the continuation, we have to turn the K back to the function using eval:

    go :: T a -> K -> T b
    go (L x) k  = eval k (L (f x))
    go (B t1 t2) k = go t1 K1
    We also do this to the code fragments identified in the first step; these become:
    • (\r -> r)
    • (\r1 -> go t2 K2)
    • (\r2 -> eval k (B r1 r2))
  5. Now we complete the eval function: For each constructor, we simply map it to the corresponding lambda from step 1:

    eval :: K -> (T b -> T b)
    eval I = (\r -> r)
    eval K1 = (\r1 -> go t2 K2)
    eval K2 = (\r2 -> eval k (B r1 r2))
  6. This doesn’t quite work yet: We have variables on the right hand side that are not bound (t2, r1, k). So let’s add them to the constructors K1 and K2 as needed. This also changes the type K itself; it now needs to take type parameters.

This leads us to the following code:

data K a b
  = I
  | K1 (T a) (K a b)
  | K2 (T b) (K a b)

map4 :: forall a b. (a -> b) -> T a -> T b
map4 f t = go t I
  where
    go :: T a -> K a b -> T b
    go (L x)     k = eval k (L (f x))
    go (B t1 t2) k = go t1 (K1 t2 k)

    eval :: K a b -> (T b -> T b)
    eval I         = (\r -> r)
    eval (K1 t2 k) = (\r1 -> go t2 (K2 r1 k))
    eval (K2 r1 k) = (\r2 -> eval k (B r1 r2))

Not really cleaner or prettier, but everything is still tail-recursive, and we are now working with plain data.
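As a quick sanity check (my addition, not from the post), we can compare map4 against the direct recursive map; the definitions are repeated so the snippet is self-contained:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

data T a = L a | B (T a) (T a)
  deriving (Show, Eq)

data K a b
  = I
  | K1 (T a) (K a b)
  | K2 (T b) (K a b)

-- The defunctionalized, tail-recursive map from the post
-- (eval written with two arguments, which is the same thing).
map4 :: forall a b. (a -> b) -> T a -> T b
map4 f t = go t I
  where
    go :: T a -> K a b -> T b
    go (L x)     k = eval k (L (f x))
    go (B t1 t2) k = go t1 (K1 t2 k)

    eval :: K a b -> T b -> T b
    eval I         r  = r
    eval (K1 t2 k) r1 = go t2 (K2 r1 k)
    eval (K2 r1 k) r2 = eval k (B r1 r2)

-- The direct version, for comparison.
mapDirect :: (a -> b) -> T a -> T b
mapDirect f (L x)     = L (f x)
mapDirect f (B t1 t2) = B (mapDirect f t1) (mapDirect f t2)
```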

We like lists

To clean it up a little bit, we can notice that the K data type really is just a list of values, where the values are either T a or T b. We do not need a custom data type for this! Instead of our K, we can just use the following, built from standard data types:

type K' a b = [Either (T a) (T b)]

Now I replace I with [], K1 t2 k with Left t2 : k and K2 r1 k with Right r1 : k. I also, very suggestively, rename go to down and eval to up:

map5 :: forall a b. (a -> b) -> T a -> T b
map5 f t = down t []
  where
    down :: T a -> K' a b -> T b
    down (L x)     k = up k (L (f x))
    down (B t1 t2) k = down t1 (Left t2 : k)

    up :: K' a b -> T b -> T b
    up []             r  = r
    up (Left  t2 : k) r1 = down t2 (Right r1 : k)
    up (Right r1 : k) r2 = up k (B r1 r2)

At this point, the code suddenly makes more sense again. In fact, I can try to verbalize it:

As we traverse the tree, we have to remember for all parent nodes, whether there is still something Left to do when we come back to it (so we remember a T a), or if we are done with that (so we have a T b). This is the list K' a b.

We begin to go down the left of the tree (noting that the right siblings are still left to do), until we hit a leaf. We transform the leaf, and then go up.

If we go up and hit the root, we are done. Else, if we go up and there is something Left to do, we remember the subtree that we just processed (as that is already in the Right form), and go down the other subtree. But if we go up and there is nothing Left to do, we put the two subtrees together and continue going up.

Quite neat!
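To see the machinery in action, here is a hand-traced run of map5 on a small tree (my addition; the definitions are repeated so the snippet is self-contained). The comments list the successive tail calls:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

data T a = L a | B (T a) (T a)
  deriving (Show, Eq)

type K' a b = [Either (T a) (T b)]

map5 :: forall a b. (a -> b) -> T a -> T b
map5 f t = down t []
  where
    down :: T a -> K' a b -> T b
    down (L x)     k = up k (L (f x))
    down (B t1 t2) k = down t1 (Left t2 : k)

    up :: K' a b -> T b -> T b
    up []             r  = r
    up (Left  t2 : k) r1 = down t2 (Right r1 : k)
    up (Right r1 : k) r2 = up k (B r1 r2)

-- For map5 (*10) (B (B (L 1) (L 2)) (L 3)) the calls proceed as:
--   down (B (B (L 1) (L 2)) (L 3)) []
--   down (B (L 1) (L 2)) [Left (L 3)]
--   down (L 1) [Left (L 2), Left (L 3)]
--   up   [Left (L 2), Left (L 3)] (L 10)
--   down (L 2) [Right (L 10), Left (L 3)]
--   up   [Right (L 10), Left (L 3)] (L 20)
--   up   [Left (L 3)] (B (L 10) (L 20))
--   down (L 3) [Right (B (L 10) (L 20))]
--   up   [Right (B (L 10) (L 20))] (L 30)
--   up   [] (B (B (L 10) (L 20)) (L 30))
```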

The imperative loop

At this point we could stop: the code is pretty, makes sense, and has the properties we want. But let’s turn the dial a bit further and try to make it an imperative loop.

We know that if we have a single tail-recursive function, then that’s equivalent to a loop, with the function’s parameters turning into mutable variables. But we have two functions!

It turns out that if you have two functions a -> r and b -> r with the same return type (which they necessarily have here, since we CPS-converted them further up), then those two functions are equivalent to a single function taking “a or b”, i.e. Either a b -> r. This is really nothing other than the high-school algebra rule rᵃ ⋅ rᵇ = rᵃ⁺ᵇ.
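In Haskell this equivalence is witnessed by the standard `either` function and its inverse (my illustration, not from the post):

```haskell
-- Combine two functions with a common result type into one
-- function on the sum type. This is just `either`.
combine :: (a -> r) -> (b -> r) -> (Either a b -> r)
combine = either

-- And split one such function back into two.
split :: (Either a b -> r) -> (a -> r, b -> r)
split f = (f . Left, f . Right)
```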

So (after reordering the arguments of down to put T b first) we can rewrite the code as

map6 :: forall a b. (a -> b) -> T a -> T b
map6 f t = go (Left t) []
  where
    go :: Either (T a) (T b) -> K' a b -> T b
    go (Left (L x))     k        = go (Right (L (f x))) k
    go (Left (B t1 t2)) k        = go (Left t1) (Left t2 : k)
    go (Right r)  []             = r
    go (Right r1) (Left  t2 : k) = go (Left t2) (Right r1 : k)
    go (Right r2) (Right r1 : k) = go (Right (B r1 r2)) k

Do you see the loop yet? If not, maybe it helps to compare it with the following equivalent, imperative-looking pseudo-code:

mapLoop :: forall a b. (a -> b) -> T a -> T b
mapLoop f t {
  var node = Left t;
  var parents = [];
  while (true) {
    switch (node) {
      Left (L x) -> node := Right (L (f x))
      Left (B t1 t2) -> node := Left t1; parents.push(Left t2)
      Right r1 -> {
        if (parents.len() == 0) {
          return r1;
        } else {
          switch (parents.pop()) {
            Left t2  -> node := Left t2; parents.push(Right r1)
            Right r2 -> node := Right (B r2 r1)
          }
        }
      }
    }
  }
}


I find it enlightening to see how apparently very different approaches to a problem (recursive, lazy functions and imperative loops) are connected by a series of rather mechanical transformations. When refactoring code, it is helpful to see whether one can conceptualize the refactoring as one of those mechanical steps (refinement, type equivalences, defunctionalization, CPS conversion, etc.).

If you liked this post, you might enjoy my talk The many faces of isOrderedTree, which I have presented at MuniHac 2019 and Haskell Love 2020.

by Joachim Breitner ( at December 22, 2020 07:40 AM

December 21, 2020

Monday Morning Haskell

Open Sourcing Haskellings!


In the last couple months we've been working on "Haskellings", an automated Haskell tutorial inspired by Rustlings. This week, I'm happy to announce that this project is now open source! You can find the (very early) version here on Github. I'll be working on making the project more complete throughout 2021, but I would really value any contributions the community has to this project! In this article, I'll list a few specific areas that would be good to work on!

More Exercises

The first and most important thing is that we need more exercises! I've done a couple of simple examples to get started, but I'd like to crowd-source the creation of exercises. You can use the set of Rustlings exercises as a source of inspiration. The most important topics to start with are those that explain the Haskell type system, such as the different sorts of types, expressions and functions, as well as defining our own data types. Other good concepts include syntax elements (think "where" and "case") and type classes.

Operating System Compatibility

I've definitely cut a few corners when it comes to the MVP of this project. I've only been working on Linux, so it's quite possible that there are some Linux-specific assumptions in the file-system level code. There will need to be some testing of the application on Windows and Mac platforms, and some adjustments will likely be necessary.

GHC Precision

Another area that will need some attention is the configuration section. Is there a cleaner way to determine where the GHC executable lives? What about finding the package database that corresponds to our Stack snapshot? My knowledge of Stack and package systems is limited, so it's very likely that there are some edge cases where the logic doesn't work out.

Exercise Cleanup

Right now, we list all the exercises explicitly in the ExerciseList module. But they're listed in two different places in the file. It would be good to clean this up, and potentially even add a feature for automated detection of exercise features. For example, we can figure out the filename, the directory, and whether or not it's runnable just by examining the file at its path! Right now the only thing that would need to be specified in "code" would be the order of exercises and their hints.


If you're interested in contributing to this project, you can fork the repository, put up a pull request, and email me at! I'll be working on this periodically throughout 2021, hoping to have a more complete version to publish by the end.

by James Bowen at December 21, 2020 03:30 PM

December 18, 2020

Oleg Grenrus

Dependent Linear types in QTT

Posted on 2020-12-18 by Oleg Grenrus linear

This post is my musings about a type system proposed by Conor McBride in I Got Plenty o' Nuttin' and refined by Robert Atkey in Syntax and Semantics of Quantitative Type Theory: Quantitative Type Theory or QTT for short. Idris 2 is based on QTT, so at the end there is some code too.


But let me start by recalling Simply Typed Lambda Calculus with Products and Coproducts, \lambda^{\to\times+} (that is a mouthful!). As the name already says, there are three binary connectives: functions, products and coproducts (or sums).

\to \qquad \times \qquad +

But in Martin Löf Type Theory (not quantiative) we have only two: pi and sigma types.

\prod \qquad \sum

We can recover ordinary functions, \to , and ordinary pairs in a straightforward way:

A \to B \coloneqq \prod_{\_:A} B \qquad A \times B \coloneqq \sum_{\_:A} B

However, to get coproducts we need something extra. One general way is to add finite sets. We get falsehood (empty finite set), truth (singleton finite set), booleans (binary finite sets), and so on. But these three are enough.

With booleans we can define

A + B \coloneqq \sum_{t:\mathbb{B}\mathsf{ool}} \mathbf{if}\; t \;\mathbf{then}\; A \;\mathbf{else}\; B

Which is very reasonable. That is how we represent sum types on physical machines: a tag and a payload whose type depends on the tag.
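This tagged representation can be sketched in Haskell (my illustration, not from the post) with a GADT singleton for the tag and a type-level conditional standing in for "if t then A else B":

```haskell
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}

-- Type-level "if t then a else b".
type family If (t :: Bool) a b where
  If 'True  a b = a
  If 'False a b = b

-- A singleton tag: pattern matching on the value reveals the type index.
data STag (t :: Bool) where
  STrue  :: STag 'True
  SFalse :: STag 'False

-- The dependent pair: a tag together with a payload whose type
-- depends on the tag. This is exactly A + B.
data Sum a b where
  MkSum :: STag t -> If t a b -> Sum a b

toEither :: Sum a b -> Either a b
toEither (MkSum STrue  x) = Left x
toEither (MkSum SFalse y) = Right y
```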

A natural question is what we get if we switch \sum to \prod . Unfortunately, nothing new:

\prod_{t:\mathbb{B}\mathsf{ool}} \mathbf{if}\; t \;\mathbf{then}\; A \;\mathbf{else}\; B \cong A \times B

We get another way to describe pairs.

To summarise:

\begin{tabular}{l|cc} $\mathbf{op}$ & $\prod$ & $\sum$ \\ \hline $\mathbf{op}_{\_:A} B$ & $\to$ & $\times$ \\ $\mathbf{op}_{t:\mathbb{B}\mathsf{ool}} \ldots$ & $\times$ & $+$ \end{tabular}

Let us next try to see how this works out with linear types.


In the intuitionistic linear calculus (ILC, also logic, ILL) we have four binary connectives: linear functions, times, plus and with.

\multimap \qquad \otimes \qquad \oplus \qquad \binampersand

Often an unary bang


is also added to allow writing non-linear programs as well.

Making a linear dependent type theory is hard, and Quantitative Type Theory is a promising attempt. It seems to work.

And it is not complicated. In fact, as far as we are concerned, it still has just two connectives

\prod \qquad \sum

but slightly modified to include multiplicity (denoted by \pi, \rho, \ldots ):

\prod_{x\overset{\pi}{:}A} B \qquad \sum_{x\overset{\pi}{:}A} B

The multiplicity can be 1 , i.e. linear single use. Or \omega , the unrestricted use, but also 0 , which is irrelevant usage. I find this quite elegant.

The rules are then set up so we don't run into problems with "types using our linear variables", as they are checked in irrelevant context.

As in the non-linear setting, we can recover two "simple" connectives immediately:

A \multimap B \coloneqq \prod_{\_ \overset{1}{:} A} B \qquad A \otimes B \coloneqq \sum_{\_ \overset{1}{:} A} B

Other multiplicities allow us to create some "new" connectives

A \to B \coloneqq \mathop{!A} \multimap B = \prod_{\_ \overset{\omega}{:} A} B \qquad \forall (x:A). B = \prod_{\_ \overset{0}{:} A} B

where \forall is the irrelevant quantification.

You can probably guess that next we will recover \oplus and \binampersand using booleans. However, booleans in linear calculus are conceptually hard when you start to think about the resource interpretation of linear calculus. Booleans are not resources; they are information. About introduction Atkey says

... no resources are required to construct the constants true and false

but the elimination is a bit more involved. The important bit is, however, that we have if-then-else which behaves reasonably.


A \oplus B \coloneqq \sum_{t \mathop{\overset{1}{:}} \mathbb{B}\mathsf{ool}} \mathbf{if}\; t \;\mathbf{then}\; A \;\mathbf{else}\; B


A \mathop\binampersand B \coloneqq \prod_{t \mathop{\overset{1}{:}} \mathbb{B}\mathsf{ool}} \mathbf{if}\; t \;\mathbf{then}\; A \;\mathbf{else}\; B

That \oplus behaves as we want. When we match on \sum we learn the tag and payload. As the tag has multiplicity 1, we can match on it once to learn the type of the payload. (Note: I should really write the typing rules, and derivations, but I'm quite confident it works this way. Highly likely I'm wrong :)

The with-connective, \binampersand , is mind-juggling. It's a product (in fact, the product in the CT sense). We can extract either part, but we have to decide which one; we cannot do both.

\begin{aligned} \mathit{fst} &: A \mathop\binampersand B \multimap A \\ \mathit{fst} &= \lambda w : \ldots \mapsto w\,\mathsf{true} \end{aligned}

The value of type A \mathop\binampersand B is a function, and we can only call it once, so we cannot write a value of type A \mathop\binampersand B \multimap A \otimes B , nor the inverse.

So the \otimes and the \binampersand are both product like, but different.

We redraw the table from the previous section. There is no more inelegant duplication:

\begin{tabular}{l|cc} $\mathbf{op}$ & $\prod$ & $\sum$ \\ \hline $\mathbf{op}_{\_ \mathop{\overset{1}:} A} B$ & $\multimap$ & $\otimes$ \\ $\mathbf{op}_{t \mathop{\overset{1}:} \mathbb{B}\mathsf{ool}} \ldots$ & $\binampersand$ & $\oplus$ \end{tabular}


It's often said that you cannot write

diag :: a -> (a, a)

in linear calculus.

This is true if we assume that tuples are \otimes tensors. That is a natural assumption, as \otimes is what lets us curry with \multimap arrows.

However, the product is \binampersand . I argue that "the correct" type of diag ( \Delta ) is

diag : a -> a & a
diag x = x :&: x

And in fact, the adjointness is the same as in CT-of-STLC (Idris2 agrees with me):

\oplus \dashv \Delta \dashv \binampersand

If we could just normalise the notation, then we'd use

\begin{tabular}{c|c} linear logic & category theory \\ \hline $\otimes$ & $\otimes$ \\ $\oplus$ & $+$ \\ $\mathop{\binampersand}$ & $\times$ \end{tabular}

But that would be sooo... confusing.

There is plenty of room for more confusion, it gets better.


Products and coproducts usually have units, and so it is in the linear calculus too.

\begin{tabular}{c|c|c|c} linear logic connective & unit & category theory & unit \\ \hline $\otimes$ & $1$ & $\otimes$ & $I$ \\ $\oplus$ & $0$ & $+$ & $0$ \\ $\mathop{\binampersand}$ & $\top$ & $\times$ & $1$ \end{tabular}

Spot a source of possible confusion.

We know, that because \binampersand is the product, its unit is the terminal object, 1 . And now we have to be careful.

Definition: T is a terminal object if for every object X in category C there exists a unique morphism X \to T .

Indeed the \top in linear logic is such an object. It acts like a dumpster. If we don't like some (resource) thing, we can map it to \top . If we already have a \top (there is no way to get rid of it), we can tensor it with something else we don't like and map the resulting "pair" to another \top . A bottomless trash can!

In category theory we avoid speaking about objects directly (it is point-free, to the extreme). If we need to speak about an object A , we rather talk about a morphism 1 \to A (a constant function). This works because, e.g. in the category of sets:

X \cong \mathrm{Hom}(1, X)

There the use of 1 comes from it being the unit of \times used to internalize arrows, i.e. define A \to B objects (and binary functions, currying, etc).

In linear logic, the "friend" of \multimap is, however, \otimes , and its unit is not terminal object.

(A \otimes B) \multimap C \cong A \multimap (B \multimap C)

So we rather have

X \cong \mathrm{Hom}(I_\otimes, X)

which is again confusing, as you can mistake I for the initial object, 0 , which it isn't. To help avoid that I used the subscript.

The takeaway is that \top and 1 in linear logic are different objects, and you have to be very careful so ordinary lambda calculus (or e.g. Haskell) intuition doesn't confuse you.

I wish there were a category where the linear stuff is kept separate. In Sets, \times = \otimes = \mathop{\binampersand} . Vector spaces are close, but they have their own source of confusion (there \times = + = \oplus : the direct sum is both a product and a coproduct).

Idris 2

All above is nicely encodable in Idris 2. If you want to play with linear logic concepts, I'd say that Idris2 is the best playground at the moment.

module QTT

-- Pi and Sigma

Pi1 : (a : Type) -> (a -> Type) -> Type
Pi1 a b = (1 x : a) -> b x

data Sigma1 : (a : Type) -> (a -> Type) -> Type where
    Sig1 : (1 x : a) -> (1 y : b x) -> Sigma1 a b

-- Lollipop

Lollipop : Type -> Type -> Type
Lollipop a b = Pi1 a \_ => b

-- handy alias
(-@) : Type -> Type -> Type
(-@) = Lollipop
infixr 0 -@

-- for constructor, just write \x => expr

-- Lollipop elimination, $
lollipopElim : Lollipop a b -@ a -@ b
lollipopElim f x = f x

-- Times

Times : Type -> Type -> Type
Times a b = Sigma1 a \_ => b

-- Times introduction
times : a -@ b -@ Times a b
times x y = Sig1 x y

-- Times elimination
timesElim : Times a b -@ (a -@ b -@ c) -@ c
timesElim (Sig1 x y) k = k x y

-- With

With : Type -> Type -> Type
With a b = Pi1 Bool \t => if t then a else b

-- With elimination 1
fst : With a b -@ a
fst w = w True

-- With elimination 2
snd : With a b -@ b
snd w = w False

-- There isn't really a way to write a function for with introduction,
-- let me rather write diag.

diag : a -@ With a a
diag x True  = x
diag x False = x

-- Also note, that even if With would be a built-in, it should
-- be non-strict (and a function is).
-- We may use the same resource differently in the two halves,
-- and the resource cannot be used until the user has selected a half.

-- Plus

Plus : Type -> Type -> Type
Plus a b = Sigma1 Bool \t => if t then a else b

-- Plus introduction 1
inl : a -@ Plus a b
inl x = Sig1 True x

-- Plus introduction 2
inr : b -@ Plus a b
inr y = Sig1 False y

-- Plus elimination, either... with a with twist
-- Give me two functions, I'll use one of them, not both.
plusElim : With (a -@ c) (b -@ c) -@ Plus a b -@ c
plusElim f (Sig1 True  x) = f True  x
plusElim f (Sig1 False y) = f False y

-- Extras

-- plusElim is reversible.
-- Plus -| Diag

plusElimRev : (Plus a b -@ c) -@ With (a -@ c) (b -@ c)
plusElimRev f True  = \x => f (inl x)
plusElimRev f False = \y => f (inr y)

-- Diag -| With
adjunctFwd : (c -@ With a b) -@ With (c -@ a) (c -@ b)
adjunctFwd f True  = \z => f z True
adjunctFwd f False = \z => f z False

adjunctBwd : With (c -@ a) (c -@ b) -@ (c -@ With a b)
adjunctBwd f c True  = f True c
adjunctBwd f c False = f False c

-- Hard exercise

-- What would be a good way to implement Top, i.e. the unit of With?
-- fwd : a -@ With Top a
-- bwd : With Top a -@ a
-- I have ideas, I'm not sure I like them.

December 18, 2020 12:00 AM

December 17, 2020


Haskell development job with Well-Typed

tl;dr If you’d like a job with us, send your application as soon as possible.

We are looking for a Haskell expert to join our team at Well-Typed. This is a great opportunity for someone who is passionate about Haskell and who is keen to improve and promote Haskell in a professional context.

About Well-Typed

We are a team of top notch Haskell experts. Founded in 2008, we were the first company dedicated to promoting the mainstream commercial use of Haskell. To achieve this aim, we help companies that are using or moving to Haskell by providing a range of services including consulting, development, training, and support and improvement of the Haskell development tools. We work with a wide range of clients, from tiny startups to well-known multinationals. We have established a track record of technical excellence and satisfied customers.

Our company has a strong engineering culture. All our managers and decision makers are themselves Haskell developers. Most of us have an academic background and we are not afraid to apply proper computer science to customers’ problems, particularly the fruits of FP and PL research.

We are a self-funded company so we are not beholden to external investors and can concentrate on the interests of our clients, our staff and the Haskell community.

About the job

The role is not tied to a single specific project or task, and is fully remote.

In general, work for Well-Typed could cover any of the projects and activities that we are involved in as a company. The work may involve:

  • working on GHC, libraries and tools;

  • Haskell application development;

  • working directly with clients to solve their problems;

  • teaching Haskell and developing training materials.

We try wherever possible to arrange tasks within our team to suit people's preferences and to rotate to provide variety and interest.

Well-Typed has a variety of clients. For some we do proprietary Haskell development and consulting. For others, much of the work involves open-source development and cooperating with the rest of the Haskell community: the commercial, open-source and academic users.

Our ideal candidate has excellent knowledge of Haskell, whether from industry, academia or personal interest. Familiarity with other languages, low-level programming and good software engineering practices are also useful. Good organisation and ability to manage your own time and reliably meet deadlines is important. You should also have good communication skills.

You are likely to have a bachelor’s degree or higher in computer science or a related field, although this isn’t a requirement.

Further (optional) bonus skills:

  • experience in teaching Haskell or other technical topics,

  • experience of consulting or running a business,

  • knowledge of and experience in applying formal methods,

  • familiarity with (E)DSL design,

  • knowledge of concurrency and/or systems programming,

  • experience with working on GHC,

  • experience with web programming (in particular front-end),

  • … (you tell us!)

Offer details

The offer is initially for one year full time, with the intention of a long term arrangement. Living in England is not required. We may be able to offer either employment or sub-contracting, depending on the jurisdiction in which you live.

If you are interested, please apply via . Tell us why you are interested and why you would be a good fit for Well-Typed, and attach your CV. Please indicate how soon you might be able to start.

We are more than happy to answer informal enquiries. Contact Duncan Coutts (, dcoutts on IRC), Adam Gundry () or Andres Löh (, kosmikus on IRC) for further information.

We will consider applications as soon as we receive them. In any case, please try to get your application to us by 10 January 2021.

by christine, andres, duncan, adam at December 17, 2020 12:00 AM

December 16, 2020

Tweag I/O

Trustix: Distributed trust and reproducibility tracking for binary caches

Downloading binaries from well-known providers is the easiest way to install new software. After all, building software from source is a chore — it requires both time and technical expertise. But how do we know that we aren’t installing something malicious from these providers?

Typically, we trust these binaries because we trust the provider. We believe that they were built from trusted sources, in a trusted computational environment, and with trusted build instructions. But even if the provider does everything transparently and in good faith, the binaries could still be anything if the provider’s system is compromised. In other words, the build process requires trust even if all build inputs (sources, dependencies, build scripts, etc…) are known.

Overcoming this problem is hard — after all, how can we verify the output of arbitrary build inputs? Excitingly, the last years have brought about ecosystems such as Nix, where all build inputs are known and where a significant fraction of builds is reproducible. This means that the correspondence between inputs and outputs can be verified by building the same binary multiple times! The r13y project, for example, tracks non-reproducible builds by building them twice on the same machine, showing that this is indeed practical.

But we can go further, and that’s the subject of this blog post, which introduces Trustix, a new tool we are working on. Trustix compares build outputs for given build inputs across independent providers and machines, effectively decentralizing trust. This establishes what I like to call build transparency because it verifies what black box build machines are doing. Behind the scenes Trustix builds a Merkle tree-based append-only log that maps build inputs to build outputs, which I’ll come back to in a later post. This log can be used to establish consensus whether certain build inputs always produce the same output — and can therefore be trusted. Conversely, it can also be used to uncover non-reproducible builds, corrupted or not, on a large scale.
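To make the consensus idea concrete, here is a toy sketch (my invention, nothing like Trustix's real implementation) of a majority rule over output hashes reported by independent builders:

```haskell
import Data.List (group, sort, sortOn)
import Data.Ord (Down (..))

-- Each builder reports the output hash it obtained for one build input.
type Builder    = String
type OutputHash = String

-- Toy rule: accept an output hash only if a strict majority of
-- builders agree on it; otherwise reject, and fall back to building
-- locally (as Trustix-proxied Nix would).
consensus :: [(Builder, OutputHash)] -> Maybe OutputHash
consensus reports =
  case sortOn (Down . length) (group (sort (map snd reports))) of
    (h : hs) : _ | 2 * (1 + length hs) > length reports -> Just h
    _ -> Nothing
```

A real deployment would weight builders by how much you trust them rather than count them equally; Trustix makes such rules configurable.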

The initial implementation of Trustix, and its description in this post are based on the Nix package manager. Nix focuses on isolated builds, provides access to the hashes of all build inputs as well as a high quantity of bit-reproducible packages, making it the ideal initial testing ecosystem. However, Trustix was designed to be system-independent, and is not strongly tied to Nix.

The development of Trustix is funded by NLnet foundation and the European Commission’s Next Generation Internet programme through the NGI Zero PET (privacy and trust enhancing technologies) fund. The tool is still in development, but I’m very excited to announce it already!

How Nix verifies binary cache results

Most Linux package managers use a very simple signature scheme to secure binary distribution to users. Some use GPG keys, some use OpenSSL certificates, and others use some other kind of key, but the idea is essentially the same for all of them. The general approach is that binaries are signed with a private key, and clients can use an associated public key to check that a binary was really signed by the trusted entity.

Nix for example uses an ed25519-based key signature scheme and comes with a default hard-coded public key that corresponds to the default cache. This key can be overridden or complemented by others, allowing the use of additional caches. The list of signing keys can be found in /etc/nix/nix.conf. The default base64-encoded ed25519 public key with a name as additional metadata looks like this:

trusted-public-keys =

Now, in Nix, software is addressed by the hash of all of its build inputs (sources, dependencies and build instructions). This hash, or output path, is used to query a binary cache for a binary.

Here is an example: The hash of the hello derivation can be obtained from a shell with nix-instantiate:

$ nix-instantiate '<nixpkgs>' --eval -A hello.outPath

Here, behind the scenes, we have evaluated and hashed all build inputs that the hello derivation needs (.outPath is just a helper). This hash can then be used to query the default Nix binary cache:

$ curl
StorePath: /nix/store/w9yy7v61ipb5rx6i35zq1mvc2iqfmps1-hello-2.10
URL: nar/15zk4zszw9lgkdkkwy7w11m5vag11n5dhv2i6hj308qpxczvdddx.nar.xz
Compression: xz
FileHash: sha256:15zk4zszw9lgkdkkwy7w11m5vag11n5dhv2i6hj308qpxczvdddx
FileSize: 41232
NarHash: sha256:1mi14cqk363wv368ffiiy01knardmnlyphi6h9xv6dkjz44hk30i
NarSize: 205968
References: 9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31 w9yy7v61ipb5rx6i35zq1mvc2iqfmps1-hello-2.10
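The narinfo response shown above is a simple key-value text format; a minimal parser sketch (my own illustration, not Nix's actual code) could look like:

```haskell
-- Parse the "Key: value" lines of a .narinfo response into pairs.
-- Lines without a colon are ignored.
parseNarInfo :: String -> [(String, String)]
parseNarInfo = concatMap parseLine . lines
  where
    parseLine l = case break (== ':') l of
      (key, ':' : ' ' : value) -> [(key, value)]
      (key, ':' : value)       -> [(key, value)]
      _                        -> []
```

With this, `lookup "NarHash" (parseNarInfo response)` gives the field a client would check against the signature.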

Besides links to the archive that contains the compressed binaries, this response includes two relevant pieces of information which are used to verify binaries from the binary cache(s):

  • The NarHash is a hash over all Nix store directory contents
  • The Sig is a cryptographic signature over the NarHash

With this information, the client can check that this binary really comes from the provider’s Nix store.

What are the limitations of this model?

While this model has served Nix and others well for many years, it suffers from a few problems. All of these problems can be traced back to a single point of failure in the chain of trust:

  • First, if the signing key used by the default cache is ever compromised, all builds that were ever added to the cache must be considered tainted.
  • Second, one needs to put either full trust or no trust at all in the build machines of a binary cache — there is no middle ground.
  • Finally, there is no inherent guarantee that the build inputs described in the Nix expressions were actually used to build what’s in the cache.


Trustix aims to solve these problems by assembling a mapping from build inputs to (hashes of) build outputs provided by many build machines.

Instead of relying on verifying package signatures, as the traditional Nix model does, Trustix only exposes packages that it considers trustworthy. Concretely, Trustix is configured as a proxy for a binary cache, and hides the packages which are not trustworthy. As far as Nix is concerned, a package not being trustworthy is exactly as if the package weren't stored in the binary cache to begin with. If such a package is required, Nix will therefore build it from source.

Trustix doesn’t define what a trustworthy package is. What your Trustix considers trustworthy is up to you. The rules for accepting packages are entirely configurable. In fact, in the current prototype, there isn’t a default rule for packages to count as trustworthy: you need to configure trustworthiness yourself.

With this in mind, let's revisit the above issues:

  • In Trustix, if an entity is compromised, you can rely on all other entities in the network to establish that a binary artefact is trustworthy. Maybe a few hashes are wrong in the Trustix mapping, but if an overwhelming majority of the outputs are the same, you can trust that the corresponding artefact is indeed what you would have built yourself.

    Therefore you never need to invalidate an entire binary cache: you can still verify the trustworthiness of old packages, even if newer packages are built by a malicious actor.

  • In Trustix, you typically never consider any build machine to be fully trusted. You always check their results against the other build machines. You can further configure this by considering some machines as more trusted (maybe because it is a community-operated machine, and you trust said community) or less trusted (for instance, because it has been compromised in the past, and you fear it may be compromised again).

    Moreover, in the spirit of having no single point of failure, Trustix’s mapping is not kept in a central database. Instead every builder keeps a log of its builds; these logs are aggregated on your machine by your instance of the Trustix daemon. Therefore even the mapping itself doesn’t have to be fully trusted.

  • In Trustix, package validity is not ensured by a signature scheme. Instead Trustix relies on the consistency of the input to output mapping. As a consequence, the validity criterion, contrary to a signature scheme, links the output to the input. It makes it infeasible to pass the build result of input I as a build result for input J: it would require corrupting the entire network.
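To make the consensus idea concrete, here is a toy sketch (in Haskell, with made-up types — not Trustix's actual API or policy language) of a majority-vote rule over the output hashes reported by several builders for the same build input:

```haskell
import Data.List (group, sort, sortOn)
import Data.Ord (Down (..))

-- Toy policy: trust an output hash only if a strict majority of the
-- builders reported that hash for the same build input.
majorityOutput :: [String] -> Maybe String
majorityOutput hashes =
  case sortOn (Down . length) (group (sort hashes)) of
    (winner : _) | 2 * length winner > length hashes -> Just (head winner)
    _ -> Nothing

main :: IO ()
main = do
  print (majorityOutput ["sha256-aaa", "sha256-aaa", "sha256-bad"])
  -- Just "sha256-aaa": a single compromised builder is outvoted
  print (majorityOutput ["sha256-aaa", "sha256-bad"])
  -- Nothing: no majority, so fall back to building from source
```

In this sketch a disagreeing minority simply loses the vote, which mirrors how a compromised builder doesn't force you to invalidate everything else.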

Limitations: reproducibility tracking and non-reproducible builds

A system like Trustix will not work well with builds that are non-reproducible, which is a limitation of this model. After all, you cannot reach consensus if everyone’s opinions differ.

However, Trustix can still be useful, even for non-reproducible builds! By accumulating all the data in the various logs and aggregating them, we can track which derivations are non-reproducible over all of Nixpkgs, in a way that is easier than previously possible. Whereas the r13y project builds a single closure on a single machine, Trustix will index everything ever built on every architecture.


I am very excited to be working on the next generation of tooling for trust and reproducibility, and for the purely functional software packaging model pioneered by Nix to keep enabling new use cases. I hope that this work can be a foundation for many applications other than improving trust — for example, by enabling the Nix community to support new CPU architectures with community binary caches.

Please check out the code at the repo or join us for a chat over in #untrustix on Freenode. And stay tuned — in the next blog post, we will talk more about Merkle trees and how they are used in Trustix.



December 16, 2020 12:00 AM

December 15, 2020

Mark Jason Dominus

Master of the Pecos River

The world is so complicated! It has so many things in it that I could not even have imagined.

Yesterday I learned that since 1949 there has been a compact between New Mexico and Texas about how to divide up the water in the Pecos River, which flows from New Mexico to Texas, and then into the Rio Grande.

Map of the above, showing the Pecos River and Rio Grande, both flowing roughly from northwest to southeast.  The Grande flows south past Albuquerque, NM, and then becomes the border between Texas and Mexico.  The Pecos flows through New Mexico past Brantley Lake and Carlsbad, then into the Texas Red Bluff Reservoir, and eventually into the Amistad Reservoir on the Texas-Mexico border.

New Mexico is not allowed to use all the water before it gets to Texas. Texas is entitled to receive a certain amount.

There have been disputes about this in the past (the Supreme Court case has been active since 1974), so in 1988 the Supreme Court appointed Neil S. Grigg, a hydraulic engineer and water management expert from Colorado, to be “River Master of the Pecos River”, to mediate the disputes and account for the water. The River Master has a rulebook, which you can read online. I don't know how much Dr. Grigg is paid for this.

In 2014, Tropical Storm Odile dumped a lot of rain on the U.S. Southwest. The Pecos River was flooding, so Texas asked NM to hold onto the Texas share of the water until later. (The rulebook says they can do this.) New Mexico arranged for the water that was owed to Texas to be stored in the Brantley Reservoir.

A few months later Texas wanted their water. “OK,” said New Mexico. “But while we were holding it for you in our reservoir, some of it evaporated. We will give you what is left.”

“No,” said Texas, “we are entitled to a certain amount of water from you. We want it all.”

But the rule book says that even though the water was in New Mexico's reservoir, it was Texas's water that evaporated. (Section C5, “Texas Water Stored in New Mexico Reservoirs”.)

Too bad Texas!

by Mark Dominus at December 15, 2020 06:23 PM

December 14, 2020

Philip Wadler

Hokusai's "The Great Wave" recreated in Lego


Brilliant! Spotted via Boing-Boing.

Lego Certified Professional Jumpei Mitsui brought Hokusai's iconic ukiyo-e woodblock print "The Great Wave off Kanagawa" (c. 1829-1833) into the Lego realm. Marvel at this incredible work in Osaka's Hankyu Brick Museum.

by Philip Wadler at December 14, 2020 05:30 PM

Monday Morning Haskell

Dependencies and Package Databases

Here's the final video in our Haskellings series! We'll figure out how to add dependencies to our exercises. This is a bit trickier than it looks, since we're running GHC outside of the Stack context. But with a little intuition, we can find where Stack is storing its package database and use that when running the exercise!

Next week, we'll do a quick summary of our work on this program, and see what the future holds for it!

by James Bowen at December 14, 2020 03:30 PM

FP Complete

Pattern matching

I first started writing Haskell about 15 years ago. My learning curve for the language was haphazard at best. In many cases, I learnt concepts by osmosis, and only later learned the proper terminology and details around them. One of the prime examples of this is pattern matching. Using a case expression in Haskell, or a match expression in Rust, always felt natural. But it took years to realize that patterns appeared in other parts of the languages than just these expressions, and what terms like irrefutable meant.

It's quite possible most Haskellers and Rustaceans will consider this content obvious. But maybe there are a few others like me out there who never had a chance to realize how ubiquitous patterns are in these languages. This post may also be a fun glimpse into either Haskell or Rust if you're only familiar with one of the languages.

Language references

Both Haskell and Rust have language references available online. The caveats are that the Rust reference is marked as incomplete, and the Haskell language reference is for Haskell2010, which GHC does not strictly adhere to. That said, both are readily understandable and complete enough to get a very good intuition. If you've never looked at either of these documents, I highly recommend having a peek.

case and match

The first place most of us hear the term "pattern matching" is in Haskell's case expression, or Rust's match expression. And it makes perfect sense here. We can provide multiple patterns, typically based on a data constructor/variant, and the language will use the first one that matches. Slightly tying in with my previous post on errors, let's look at a common example: pattern matching on an Either value in Haskell.

mightFail :: Either String Int

main =
    case mightFail of
        Left err -> putStrLn $ "Error occurred: " ++ err
        Right x -> putStrLn $ "Successful result: " ++ show x

Or a Result value in Rust:

fn might_fail() -> Result<i32, String> { ... }

fn main() {
    match might_fail() {
        Err(err) => println!("Error occurred: {}", err),
        Ok(x) => println!("Successful result: {}", x),
    }
}

I think most programmers, even those unfamiliar with these languages, could intuit to some extent what these expressions do. mightFail and might_fail() return some kind of value. The value may be in multiple different "states." The patterns match, and we branch our behavior depending on which state. Easy enough.

Already here, though, there's an important detail many of us gloss over. Or at least I did. Our patterns not only match a constructor, they also bind a variable. In the examples above, we bind the variables err and x to values contained by the data constructors. And that's pretty interesting, because both Haskell and Rust also use let bindings for defining variables. I wonder if there's some kind of connection there.

Narrator: there was a connection

Functions in Haskell

Haskell immediately adds a curve ball (in a good way) to this story. Let's take a classic recursive definition of a factorial function (note: this isn't a good definition since it has a space leak).

fact :: Int -> Int
fact i =
    case i of
        0 -> 1
        _ -> i * fact (i - 1)

This feels a bit verbose. We capture the variable i, only to immediately pattern match on it. We also have a new kind of pattern, _. When I first learned Haskell, I thought of _ as "a variable I don't care about." But it's actually more specialized than this: a wildcard pattern, something which matches anything. (We'll get into what variables match later.)

Anyway, to make this kind of code a bit terser, Haskell offers a different way of writing this function:

fact :: Int -> Int
fact 0 = 1
fact i = i * fact (i - 1)

These two versions of the code are identical. It's just a syntactic trick. Let's see another more interesting syntactic trick.

What about let?

We use let expressions (and let bindings in do-notation) in Haskell to create new variables, e.g.:

main =
    let name = "Alice"
     in putStrLn $ "Hello, " ++ name

And we do the same with let statements in Rust:

fn main() {
    let name = "Alice";
    println!("Hello, {}", name);
}

But here's where we begin to get a bit fancy. We already saw that we can bind variables in case and match expressions. Does that mean we can do away with the lets? Yes we can!

main =
    case "Alice" of
        name -> putStrLn $ "Hello, " ++ name


fn main() {
    match "Alice" {
        name => println!("Hello, {}", name),
    }
}
This isn't good code per se. In fact, cargo clippy will complain about it. But it does hint at the fact that there's a deeper connection between the two constructs. And the connection is this: the left hand side of the equals sign in a let statement/expression/binding is a pattern.
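To see that connection in more ordinary code: the pattern on the left of the equals sign doesn't have to be a bare variable. A small Haskell sketch (my own example, not from the post):

```haskell
main :: IO ()
main = do
  -- The left-hand side of '=' is a pattern; here it destructures a pair.
  let (name, age) = ("Alice", 30 :: Int)
  putStrLn (name ++ " is " ++ show age)
```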

Ditch the case! Ditch the match!

Alright, so we can technically get rid of lets if we wanted to (which we don't). Can we get rid of the case expressions in Haskell? The real answer is "definitely not." But interestingly, this code compiles!

mightFail :: Either String Int
mightFail = Left "It failed"

main :: IO ()
main =
    let Right x = mightFail
     in putStrLn $ "Successful result: " ++ show x

As mentioned, we can put a pattern on the left hand side of the equals sign. And we've done just that here. But what on Earth does this code do? As you can see, the mightFail expression will evaluate to a Left value. But our pattern only matches on Right values! Running this code gives us:

Main.hs:10:9-27: Non-exhaustive patterns in Right x

Haskell is a non-strict language. Performing this binding is allowed. But evaluating the result of this binding blows up.

Rust, however, is a strict language. We can do something very similar in Rust:

fn main() {
    let Ok(x) = might_fail();
    println!("Successful result: {}", x);
}

But this code won't even compile:

error[E0005]: refutable pattern in local binding: `Err(_)` not covered
    = note: `let` bindings require an "irrefutable pattern", like a `struct` or an `enum` with only one variant
    = note: for more information, visit
    = note: the matched value is of type `std::result::Result<i32, std::string::String>`
help: you might want to use `if let` to ignore the variant that isn't matched

Let's dive into those "exhaustive" and "refutable" concepts, and then round out this post with a glance at where else patterns appear in these languages.

Side note: it's true that the Haskell code above compiles. However, if you turn on the -Wincomplete-uni-patterns warning, you'll get a warning about this. I personally think this warning should be included in -Wall.

Refutable and irrefutable, exhaustive and non-exhaustive

This topic is quite a bit more complicated in Haskell due to non-strictness. How matching works in the presence of "bottom" or undefined values is an entire extra wrench of complication. I'm going to ignore those cases entirely here. If you're interested in more information on this, my article All about strictness discusses some of these points.

Some patterns will always match a value. The simplest example of this is a wildcard. In fact, that's basically its definition. Quoting the Rust reference:

The wildcard pattern (an underscore symbol) matches any value.

And fortunately for us, things behave exactly the same way in Haskell.

Another pattern that matches any value is a variable pattern. let x = blah is a valid binding, regardless of what blah is. Both of these are known as irrefutable patterns.

By contrast, some patterns are refutable. They are patterns that only match some possible cases of the value, not all. The simplest example is the one we saw before: matching on one of many data constructors/variants in a data type (Haskell) or enum (Rust).

Contrasting yet again: if you have a struct in Rust, or a Rust enum with only one variant, or a Haskell data with only one data constructor, or a Haskell newtype, the pattern will always match. That is, of course, assuming any patterns nested within will also always match. To demonstrate, this pattern match is irrefutable:

data Foo = Foo Bar
data Bar = Baz Int

main :: IO ()
main =
    let Foo (Baz x) = Foo (Baz 5)
     in putStrLn $ "x == " ++ show x

However, if I add another data constructor to Bar, it becomes refutable:

data Foo = Foo Bar
data Bar = Baz Int | Bin Char

main :: IO ()
main =
    let Foo (Baz x) = Foo (Bin 'c')
     in putStrLn $ "x == " ++ show x

In both Haskell and Rust, tuples behave like data types with one constructor, and therefore as long as the patterns inside of them are irrefutable, they are irrefutable too.
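A small Haskell sketch of that composition rule (my own example, not from the original post):

```haskell
main :: IO ()
main = do
  -- Irrefutable: both components of the tuple pattern match anything.
  let (x, _) = (1 :: Int, "ignored")
  -- Refutable overall: 'Just y' only covers one of Maybe's constructors,
  -- so the tuple pattern around it is refutable too.
  let (Just y, z) = (Just 2, 3 :: Int)
  print (x + y + z)  -- prints 6
```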

The final case I want to point out is literal patterns. Literal patterns are very much refutable. This code thankfully does not compile:

fn main() {
    let 'x' = 'a';
}

But the really interesting thing for someone not used to pattern matching is that you can do this at all! We've already done pattern matching on literal values above, in our definition of fact. It's very convenient to be able to build up complex case/match expressions using literal syntax (like list/slice syntax).
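For instance, list patterns in Haskell mirror list literal syntax, and mix literals, bindings, and wildcards freely (a quick sketch of my own):

```haskell
describe :: [Int] -> String
describe []      = "empty"                       -- literal-style list pattern
describe [x]     = "exactly one: " ++ show x     -- one-element list pattern
describe (x : _) = "starts with " ++ show x      -- cons pattern with wildcard

main :: IO ()
main = mapM_ (putStrLn . describe) [[], [1], [1, 2, 3]]
-- prints:
--   empty
--   exactly one: 1
--   starts with 1
```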

Alright, let's see a few more examples of where patterns are used in these languages, then tie it up.

Function arguments

Function arguments are patterns in both languages. In Haskell, we saw that you can use refutable patterns and provide multiple function clauses. The same doesn't apply to Rust functions: you'll need to use an irrefutable pattern in the function arguments, and then do pattern matching or some other kind of branching in the body of the function. For example, the poorly written fact function can be rewritten in Rust as:

fn fact(i: u32) -> u32 {
    if i == 0 {
        1
    } else {
        i * fact(i - 1)
    }
}

fn main() {
    println!("5! == {}", fact(5));
}

Perhaps more interestingly in both languages, you can use a pattern matching a data structure in the function argument. For example, in Rust:

struct Person {
    name: String,
    age: u32,
}

fn greet(Person { name, age }: &Person) {
    println!("{} is {} years old", name, age);
}

fn main() {
    let alice = Person {
        name: "Alice".to_owned(),
        age: 30,
    };
    greet(&alice);
}

Or in Haskell, using positional instead of named fields:

data Person = Person String Int

greet :: Person -> IO ()
greet (Person name age) = putStrLn $ name ++ " is " ++ show age ++ " years old"

main :: IO ()
main = greet $ Person "Alice" 30

Closures, functions, and lambdas

The arguments to closures (Rust) and lambdas (Haskell) are patterns. That means we can match on irrefutable things like tuples fairly easily:

fn main() {
    let greet = |(name, age)| println!("{} is {} years old", name, age);
    greet(("Alice", 30));
}

The big difference is that, in Rust, the pattern must be irrefutable. This is again due to strictness. The following code will compile in Haskell, but fail at runtime:

main :: IO ()
main =
    let mylambda = \(Right x) -> putStrLn x
     in mylambda (Left "Error!")

Again, -Wincomplete-uni-patterns will warn about this. But again, it's not on by default.

By contrast, in Rust, the equivalent code will fail to compile:

fn main() {
    let myclosure = |Ok(x): Result<i32, &str>| println!("{}", x);
}

This produces:

error[E0005]: refutable pattern in function argument: `Err(_)` not covered

And if you're wondering: I needed to add the explicit : Result<i32, &str> type annotation to help type inference get to that error message. Without it, it just complained that it couldn't infer the type of x.

if let, while let, and for (Rust)

The if let and while let expressions are all about refutable pattern matches. "Only do this if the pattern matches" and "keep doing this while the pattern matches." if let looks something like this:

fn main() {
    let result: Result<(), String> = Err("Something happened".to_owned());
    if let Err(e) = result {
        eprintln!("Something went wrong: {}", e);
    }
}

And with while let, you can make something close to a for loop:

fn main() {
    let mut iter = 1..=10;
    while let Some(i) = iter.next() {
        println!("i == {}", i);
    }
}

And speaking of for loops, the left hand side of the in keyword is a pattern. This can be really nice for cases like destructuring the tuple generated by the enumerate() method:

fn main() {
    for (idx, c) in "Hello, world!".chars().enumerate() {
        println!("{}: {}", idx, c);
    }
}

The patterns in a for loop must be irrefutable. This code won't compile:

fn main() {
    let array = [Ok(1), Ok(2), Err("something"), Ok(3)];
    for Ok(x) in &array {
        println!("x == {}", x);
    }
}

Instead, if you want to exit the for loop at the first Err value, you would need to do something like this:

fn main() {
    let array = [Ok(1), Ok(2), Err("something"), Ok(3)];
    for x in &array {
        match x {
            Ok(x) => println!("x == {}", x),
            Err(_) => break,
        }
    }
}

Where they're used

This was not intended to be a complete explanation of all examples of patterns in these languages. However, for a bit of completeness, let me quote the Haskell language specification for where patterns are part of the language:

Patterns appear in lambda abstractions, function definitions, pattern bindings, list comprehensions, do expressions, and case expressions. However, the first five of these ultimately translate into case expressions, so defining the semantics of pattern matching for case expressions is sufficient.

And similarly for Rust:

  • let declarations
  • Function and closure parameters
  • match expressions
  • if let expressions
  • while let expressions
  • for expressions

There are also more advanced examples of patterns that I haven't touched on at all. Reference patterns in Rust would be relevant here, as would lazy patterns in Haskell.
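One consequence of patterns appearing in list comprehensions deserves a quick demonstration: there, a refutable pattern doesn't crash on a mismatch — it silently filters. A Haskell sketch:

```haskell
main :: IO ()
main = do
  let results = [Right 1, Left "bad", Right 2] :: [Either String Int]
  -- 'Right x' is refutable; elements that don't match are skipped,
  -- not treated as errors.
  print [x | Right x <- results]  -- prints [1,2]
```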


I hope this gave a little bit of insight into the value of patterns. For me, the important takeaways are:

  • Patterns appear in lots of places
  • The difference between refutable and irrefutable patterns
  • There are some places where you must use irrefutable patterns
  • There are some places where Haskell lets you use refutable patterns, but you shouldn't
  • Variable binding is just one special case of patterns

If you're interested in learning more about either Haskell or Rust, check out our Haskell syllabus or our Rust Crash Course. FP Complete also offers both corporate and public training classes on both Haskell and Rust. If you're interested in learning more, please contact us for details.

December 14, 2020 12:00 AM

Donnacha Oisín Kidney

Enumerating Trees

Posted on December 14, 2020
Tags: Agda, Haskell

Consider the following puzzle:

Given a list of n labels, list all the trees with those labels in order.

For instance, given the labels [1,2,3,4], the answer (for binary trees) is the following:

┌1     ┌1      ┌1     ┌1     ┌1
┤      ┤      ┌┤     ┌┤     ┌┤
│┌2    │ ┌2   ││┌2   │└2    │└2
└┤     │┌┤    │└┤    ┤     ┌┤
 │┌3   ││└3   │ └3   │┌3   │└3
 └┤    └┤     ┤      └┤    ┤
  └4    └4    └4      └4   └4

This problem (the “enumeration” problem) turns out to be quite fascinating and deep, with connections to parsing and monoids. It’s also just a classic algorithmic problem which is fun to try and solve.
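As a sanity check for any solution, the number of trees produced for n labels should be the (n−1)-th Catalan number: 1, 2, 5, 14, … for 2, 3, 4, 5 labels, matching the five trees drawn above. A quick sketch (standard maths, not code from the post):

```haskell
-- Catalan numbers via the standard recurrence:
-- C(0) = 1, C(n) = sum of C(i) * C(n-1-i) for i in [0 .. n-1].
catalan :: Integer -> Integer
catalan 0 = 1
catalan n = sum [catalan i * catalan (n - 1 - i) | i <- [0 .. n - 1]]

main :: IO ()
main = print (map catalan [0 .. 5])  -- prints [1,1,2,5,14,42]
```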

The most general version of the algorithm is on forests of rose trees:

data Rose a = a :& Forest a
type Forest a = [Rose a]

It’s worth having a go at attempting it yourself, but if you’d just like to see the slick solutions, the following is one I’m especially proud of:

Solution to the Enumeration Problem on Forests of Rose Trees

enumForests :: [a] -> [Forest a]
enumForests = foldrM f []
  where
    f x xs = zipWith ((:) . (:&) x) (inits xs) (tails xs)

In the rest of this post I’ll go through the intuition behind solutions like the one above and I’ll try to elucidate some of the connections to other areas of computer science.

A First Approach: Trying to Enumerate Directly

I first came across the enumeration problem when I was writing my master’s thesis: I needed to prove (in Agda) that there were finitely many binary trees of a given size, and that I could list them (this proof was part of a larger verified solver for the countdown problem). My first few attempts were unsuccessful: the algorithm presented in the countdown paper (Hutton 2002) was not structurally recursive, and did not seem amenable to Agda-style proofs.

Instead, I looked for a type which was isomorphic to binary trees, and which might be easier to reason about. One such type is Dyck words.

Dyck Words

A “Dyck word” is a string of balanced parentheses.


It’s (apparently) well-known that these strings are isomorphic to binary trees (although the imperative descriptions of algorithms which actually computed this isomorphism addled my brain), but what made them interesting for me was that they are a flat type, structured like a linked list, and as such should be reasonably straightforward to prove to be finite.

Our first task, then, is to write down a type for Dyck words. The following is a first possibility:

data Paren = LParen | RParen
type Dyck = [Paren]

But this type isn’t correct. It includes many values which don’t represent balanced parentheses: expressions like [RParen,LParen] :: Dyck are well-typed, even though “)(” is not balanced. To describe Dyck words properly we’ll need to reach for the GADTs:

data DyckSuff (n :: Nat) :: Type where
  Done :: DyckSuff Z
  Open :: DyckSuff (S n) -> DyckSuff n
  Clos :: DyckSuff n     -> DyckSuff (S n)

type Dyck = DyckSuff Z

The first type here represents suffixes of Dyck words; a value of type DyckSuff n represents a string of parentheses which is balanced except for n extraneous closing parentheses. DyckSuff Z, then, has no extraneous closing parens, and as such is a proper Dyck word.

>>> Open $ Clos $ Open $ Clos $ Done :: Dyck

>>> Clos $ Open $ Clos $ Done :: DyckSuff (S Z)

>>> Open $ Open $ Clos $ Open $ Clos $ Clos $ Open $ Clos $ Done :: Dyck

>>> Open $ Open $ Clos $ Clos $ Open $ Clos $ Done :: Dyck

The next task is to actually enumerate these words. Here’s an O(n) algorithm which does just that:

enumDyck :: Int -> [Dyck]
enumDyck sz = go Zy sz Done []
  where
    go, zero, left, right :: Natty n -> Int -> DyckSuff n -> [Dyck] -> [Dyck]
    go n m k = zero n m k . left n m k . right n m k

    zero Zy 0 k = (k:)
    zero _  _ _ = id

    left (Sy n) m k = go n m (Open k)
    left Zy     _ _ = id

    right _ 0 _ = id
    right n m k = go (Sy n) (m-1) (Clos k)

>>> mapM_ print (enumDyck 3)

A variant of this function was what I needed in my thesis: I also needed to prove that it produced every possible value of the type Dyck, which was not too difficult.

The difficult part is still ahead, though: now we need to convert between this type and a binary tree.


First, for the conversion algorithms we’ll actually need another GADT:

infixr 5 :-
data Stack (a :: Type) (n :: Nat) :: Type where
  Nil  :: Stack a Z
  (:-) :: a -> Stack a n -> Stack a (S n)

The familiar length-indexed vector will be extremely useful for the next few bits of code: it will act as a stack in our stack-based algorithms. Here’s one of those algorithms now:

-- (Tree here is the plain binary tree shape: data Tree = Leaf | Tree :*: Tree)
dyckToTree :: Dyck -> Tree
dyckToTree dy = go dy (Leaf :- Nil)
  where
    go :: DyckSuff n -> Stack Tree (S n) -> Tree
    go (Open d) ts               = go d (Leaf :- ts)
    go (Clos d) (t1 :- t2 :- ts) = go d (t2 :*: t1 :- ts)
    go Done     (t  :- Nil)      = t

This might be familiar: it’s actually shift-reduce parsing dressed up with some types. The nice thing about it is that it’s completely total: all pattern-matches are accounted for here, and when written in Agda it’s clearly structurally terminating.

The function in the other direction is similarly simple:

treeToDyck :: Tree -> Dyck
treeToDyck t = go t Done
  where
    go :: Tree -> DyckSuff n -> DyckSuff n
    go Leaf        = id
    go (xs :*: ys) = go xs . Open . go ys . Clos

A Compiler

Much of this stuff has been on my mind recently because of a video on the Computerphile channel (Riley 2020), in which Graham Hutton goes through using QuickCheck to test an interesting compiler. The compiler itself is explored more in depth in Bahr and Hutton (2015), where the algorithms developed are really quite similar to those that we have here.

The advantage of the code above is that it’s all total: we will never pop items off the stack that aren’t there. This is a nice addition, and it’s surprisingly simple to add: let’s see if we can add it to the compiler presented in the paper.

The first thing we need to change is we need to add a payload to our tree type: the one above is just the shape of a binary tree, but the language presented in the paper contains values.

data Expr (a :: Type) where
  Val   :: a -> Expr a
  (:+:) :: Expr a -> Expr a -> Expr a

We’ll need to change the definition of Dyck similarly:

data Code (n :: Nat) (a :: Type) :: Type where
  HALT :: Code (S Z) a
  PUSH :: a -> Code (S n) a -> Code n a
  ADD  :: Code (S n) a -> Code (S (S n)) a

After making it so that these data structures can now store contents, there are two other changes worth pointing out:

  • The names have been changed, to match those in the paper. It’s a little clearer now that the Dyck word is a bit like code for a simple stack machine.
  • The numbering on Code has changed. Now, the HALT constructor has a parameter of 1 (well, S Z), where its corresponding constructor in Dyck (Done) had 0. Why is this? I am not entirely sure! To get this stuff to all work out nicely took a huge amount of trial and error, I would love to see a more principled reason why the numbering changed here.

With these definitions we can actually transcribe the exec and comp functions almost verbatim (from pages 11 and 12 of Bahr and Hutton 2015).

exec :: Code n Int -> Stack Int (n + m) -> Stack Int (S m)
exec HALT         st              = st
exec (PUSH v is)  st              = exec is (v :- st)
exec (ADD    is) (t1 :- t2 :- st) = exec is (t2 + t1 :- st)

comp :: Expr a -> Code Z a
comp e = comp' e HALT
  where
    comp' :: Expr a -> Code (S n) a -> Code n a
    comp' (Val     x) = PUSH x
    comp' (xs :+: ys) = comp' xs . comp' ys . ADD

Proving the Isomorphism

As I have mentioned, a big benefit of all of this stuff is that it can be translated into Agda readily. The real benefit of that is that we can show the two representations of programs are fully isomorphic. I have proven this here: the proof is surprisingly short (about 20 lines), and the rest of the code follows the Haskell stuff quite closely. I got the idea for much of the proof from this bit of code by Callan McGill (2020).

I’ll include it here as a reference.

Agda Code

open import Prelude
open import Data.Nat using (_+_)
open import Data.Vec.Iterated using (Vec; _∷_; []; foldlN; head)

private
  variable
    a : Level
    A : Type a
    n : ℕ

-- Binary trees: definition and associated functions

data Tree (A : Type a) : Type a where
  [_] : A → Tree A
  _*_ : Tree A → Tree A → Tree A

-- Programs: definition and associated functions

data Prog (A : Type a) : ℕ → Type a where
  halt : Prog A 1
  push : A → Prog A (1 + n) → Prog A n
  pull : Prog A (1 + n) → Prog A (2 + n)

-- Conversion from a Prog to a Tree

prog→tree⊙ : Prog A n → Vec (Tree A) n → Tree A
prog→tree⊙ halt        (v ∷ [])       = v
prog→tree⊙ (push v is) st             = prog→tree⊙ is ([ v ] ∷ st)
prog→tree⊙ (pull   is) (t₁ ∷ t₂ ∷ st) = prog→tree⊙ is (t₂ * t₁ ∷ st)

prog→tree : Prog A zero → Tree A
prog→tree ds = prog→tree⊙ ds []

-- Conversion from a Tree to a Prog

tree→prog⊙ : Tree A → Prog A (suc n) → Prog A n
tree→prog⊙ [ x ]     = push x
tree→prog⊙ (xs * ys) = tree→prog⊙ xs ∘ tree→prog⊙ ys ∘ pull

tree→prog : Tree A → Prog A zero
tree→prog tr = tree→prog⊙ tr halt

-- Proof of isomorphism

tree→prog→tree⊙ : (e : Tree A) (is : Prog A (1 + n)) (st : Vec (Tree A) n) →
  prog→tree⊙ (tree→prog⊙ e is) st ≡ prog→tree⊙ is (e ∷ st)
tree→prog→tree⊙ [ x ]     is st = refl
tree→prog→tree⊙ (xs * ys) is st = tree→prog→tree⊙ xs _ st ;
                                  tree→prog→tree⊙ ys (pull is) (xs ∷ st)

tree→prog→tree : (e : Tree A) → prog→tree (tree→prog e) ≡ e
tree→prog→tree e = tree→prog→tree⊙ e halt []

prog→tree→prog⊙ : (is : Prog A n) (st : Vec (Tree A) n) →
  tree→prog (prog→tree⊙ is st) ≡ foldlN (Prog A) tree→prog⊙ is st
prog→tree→prog⊙  halt       st = refl
prog→tree→prog⊙ (push i is) st = prog→tree→prog⊙ is ([ i ] ∷ st)
prog→tree→prog⊙ (pull is) (t₁ ∷ t₂ ∷ ts) = prog→tree→prog⊙ is ((t₂ * t₁) ∷ ts)

prog→tree→prog : (is : Prog A 0) → tree→prog (prog→tree is) ≡ is
prog→tree→prog is = prog→tree→prog⊙ is []

prog-iso : Prog A zero ⇔ Tree A
prog-iso .fun = prog→tree
prog-iso .inv = tree→prog
prog-iso .rightInv = tree→prog→tree
prog-iso .leftInv  = prog→tree→prog

Folds and Whatnot

Another thing I’ll mention is that all of the exec functions presented are folds. In particular, they’re left folds. Here’s how we’d rewrite exec to make that fact clear:

foldlCode :: (forall n. a -> b n -> b (S n))
          -> (forall n. b (S (S n)) -> b (S n))
          -> b m
          -> Code m a -> b (S Z)
foldlCode _ _ h  HALT       = h
foldlCode p a h (PUSH x xs) = foldlCode p a (p x h) xs
foldlCode p a h (ADD    xs) = foldlCode p a (a   h) xs

shift :: Int -> Stack Int n -> Stack Int (S n)
shift x xs = x :- xs

reduce :: Stack Int (S (S n)) -> Stack Int (S n)
reduce (t1 :- t2 :- st) = t2 + t1 :- st

execFold :: Code Z Int -> Int
execFold = pop . foldlCode shift reduce Nil

I think the “foldl-from-foldr” trick could be a nice way to explain the introduction of continuations in Bahr and Hutton (2015).
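For readers who haven't seen it, the "foldl-from-foldr" trick on plain lists looks like this (standard Haskell folklore, not code from the post): each element becomes a function on the accumulator, and foldr composes those functions.

```haskell
-- foldl expressed via foldr: the fold builds a chain of accumulator
-- transformers and then applies the chain to the initial value z.
foldlViaFoldr :: (b -> a -> b) -> b -> [a] -> b
foldlViaFoldr f z xs = foldr (\x k acc -> k (f acc x)) id xs z

main :: IO ()
main = print (foldlViaFoldr (flip (:)) [] [1, 2, 3 :: Int])  -- prints [3,2,1]
```

The continuation (the `k` argument) is exactly what appears, dressed up with type indices, in the paper's derivation.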

Direct Enumeration

It turns out that you can follow relatively straightforward rewriting steps from the Dyck-based enumeration algorithm to get to one which avoids Dyck words entirely:

enumTrees :: [a] -> [Expr a]
enumTrees = fmap (foldl1 (flip (:+:))) . foldlM f []
  where
    f []         v = [[Val v]]
    f [t1]       v = [[Val v, t1]]
    f (t1:t2:st) v = (Val v : t1 : t2 : st) : f ((t2 :+: t1) : st) v

Maybe in a future post I’ll go through the derivation of this algorithm.

It turns out that the Dyck-based enumeration can be applied without much difficulty to rose trees as well:

data Rose a = a :& Forest a
type Forest a = [Rose a]

dyckToForest :: Dyck -> Forest ()
dyckToForest dy = go dy ([] :- Nil)
  where
    go :: DyckSuff n -> Stack (Forest ()) (S n) -> Forest ()
    go (Open d) ts               = go d ([] :- ts)
    go (Clos d) (t1 :- t2 :- ts) = go d ((() :& t2 : t1) :- ts)
    go Done     (t  :- Nil)      = t

forestToDyck :: Forest () -> Dyck
forestToDyck t = go t Done
  where
    go :: Forest () -> DyckSuff n -> DyckSuff n
    go []             = id
    go ((() :& x):xs) = go x . Open . go xs . Clos

And again, following relatively mechanical derivations, we arrive at an elegant algorithm:

enumForests :: [a] -> [Forest a]
enumForests = foldrM f []
  where
    f x xs = zipWith ((:) . (:&) x) (inits xs) (tails xs)
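The same Catalan sanity check applies here: forests on n nodes are counted by the nth Catalan number. A self-contained version of the algorithm (restating the Rose/Forest types from above):

```haskell
import Data.Foldable (foldrM)
import Data.List (inits, tails)

data Rose a = a :& Forest a
type Forest a = [Rose a]

enumForests :: [a] -> [Forest a]
enumForests = foldrM f []
  where
    -- each split of the current forest gives x a choice of children
    f x xs = zipWith ((:) . (:&) x) (inits xs) (tails xs)

-- Forest counts on 1..4 nodes should be Catalan numbers C1..C4.
main :: IO ()
main = print [length (enumForests (take n [1 :: Int ..])) | n <- [1 .. 4]]
-- prints [1,2,5,14]
```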

Related Work

While researching this post I found that enumeration of trees has been studied extensively elsewhere: see Knuth (2006), for example, or the excellent blog post by Tychonievich (2013), or the entire field of Boltzmann sampling. This post has only scratched the surface of all of that: I hope to write much more on the topic in the future.


As I mentioned, the Agda code for this stuff can be found here; I have also put all of the Haskell code in one place here.


Bahr, Patrick, and Graham Hutton. 2015. “Calculating Correct Compilers.” Journal of Functional Programming 25 (e14) (September). doi:10.1017/S0956796815000180.

Hutton, Graham. 2002. “The Countdown Problem.” J. Funct. Program. 12 (6) (November): 609–616. doi:10.1017/S0956796801004300.

Knuth, Donald E. 2006. The Art of Computer Programming, Volume 4, Fascicle 4: Generating All Trees–History of Combinatorial Generation (Art of Computer Programming). Addison-Wesley Professional.

McGill, Callan. 2020. “Compiler Correctness for Addition Language.”

Riley, Sean. 2020. “Program Correctness - Computerphile.” University of Nottingham.

Tychonievich, Luther. 2013. “Enumerating Trees.” Luther’s Meanderings.

by Donnacha Oisín Kidney at December 14, 2020 12:00 AM

December 13, 2020

Russell O'Connor

Carbon Tax: Running my Numbers

I am so excited to read today that the Liberal government is planning to raise the tax on carbon up to $170 per tonne by 2030. The biggest problem with the previous carbon pricing program was that $50 per tonne was way too low. The price needs to be large enough such that capturing carbon is preferable to paying the tax. This announcement would finally put us into that ballpark.

Last fall I was discussing with someone who claimed that the carbon tax was just a tax grab and did not believe that the Liberal government was really going to pay the funds back out through the Climate Action Incentive tax credit. I did believe it would be paid out, but to be certain, I figured I should run the numbers.

The major carbon tax payments for my household are (1) natural gas for heating, and (2) gasoline for driving. Between April 2019 and March 2020, I paid $61.83 in carbon taxes for natural gas heating. During the same period I bought approximately 651.612 litres of gasoline. At a carbon tax rate of 4.42 cents per litre, I estimate I paid about $28.80 in carbon taxes for that gasoline. That brings me to a total of $90.63 paid in carbon taxes for that period.

On my 2018 tax return I received a $231 Climate Action Incentive tax credit for my household. That leaves me with a net benefit of $140.37 as a reward for emitting less carbon than average. This is to be expected, because average household carbon emissions are much higher than median household emissions due to a relatively small number of high emitters.

I am omitting carbon tax charges for other incidentals, most notably for airfare. If anyone has any methods for calculating the carbon tax on airfares, please let me know. However, I have little doubt in my mind that this $140.37 more than covers any other outstanding carbon tax costs. Also keep in mind that the Climate Action Incentive tax credit was paid in advance with my 2018 income tax return, before I paid any carbon taxes at all.

Some people on twitter were wrongly arguing that the Climate Action Incentive tax credit is income based. I filled out my income tax return and the $231 value was based on the size of my household and was independent of my income.

The carbon tax program is great because it lets market forces determine how best to reduce carbon emissions. Those few households that are emitting most of the carbon are financially incentivized to change their consumption patterns. For reasons that I do not understand, both the federal and Ontario Conservative parties hate market solutions. Their "solution" to the carbon problem is to have their governments pick winners and losers themselves. No doubt that way they can make sure their cronies just happen to be among the winners.

One could argue that I am being selfish, because the more the tax on carbon increases, the more rewards I earn. On the other hand, perhaps those Canadians who are arguing against the carbon tax should run their own numbers and see how much they benefit from this carbon program.

December 13, 2020 02:30 AM

in Code

Roll your own Holly Jolly streaming combinators with Free

Hi! Welcome, if you’re joining us from the great Advent of Haskell 2020 event! Feel free to grab a hot chocolate and sit back by the fireplace. I’m honored to be able to be a part of the event this year; it’s a great initiative and harkens back to the age-old Haskell tradition of bite-sized Functional Programming “advent calendars”. I remember when I was first learning Haskell, Ollie Charles’ 24 Days of Hackage series was one of my favorite series that helped me really get into the exciting world of Haskell and all the doors that functional programming can open.

All of the posts this year have been great — they range from insightful reflections on the nature of Haskell and programming in Haskell to deep dives into specific language features. This post is going to be one of the “project-based” ones, where we walk through and introduce a solidly intermediate Haskell technique as it applies to building a useful general toolset. I’m going to be exploring the “functor combinator style”, where you identify the interface you want, associate it with a common Haskell typeclass, pick your primitives, and automatically get the ability to imbue your primitives with the structure you need. I’ve talked about this previously with:

  1. Applicative regular expressions
  2. The functor combinatorpedia
  3. Bidirectional serializers
  4. Composable interpreters

and I wanted to share a recent application I was able to apply it to, where just thinking about the primitives gave me almost all the functionality I needed for a type: composable streaming combinators. This specific application also integrates well into any composable effects system, since it’s essentially a monadic interface.

In a way, this post could also be seen as capturing the spirit of the holidays by reminiscing about the days of yore — looking back at one of the more exciting times in modern Haskell’s development, where competing composable streaming libraries were at the forefront of practical innovation. The dust has settled on that a bit, but every time I think about composable streaming combinators, I do get a bit nostalgic :)

This post is written for an intermediate Haskell audience, and will assume you have a familiarity with monads and monadic interfaces, and also a little bit of experience with monad transformers. Note — there are many ways to arrive at the same result, but this post is more of a demonstration of a certain style and approach that has benefited me greatly in the past.

All of the code in this page can be found online at github!

Dreaming of an Effectful Christmas

The goal here is to make a system of composable pipes that are “pull-based”, so we can process data as it is read in from IO only as we need it, and never do more work than we need to do up-front or leak memory when we stop using it.

So, the way I usually approach things like these is: “dress for the interface you want, not the one you have.” It involves:

  1. Thinking of the m a you want and how you would want to combine it/use it.
  2. Express the primitive actions of that thing
  3. Use some sort of free structure or effects system to enhance that primitive with the interface you are looking for.

So, let’s imagine our type!

type Pipe i o m a = ...

where a Pipe i o m a represents a pipe component where:

  • i: the type of the input the pipe expects from upstream
  • o: the type of the output the pipe will be yielding downstream
  • m: the monad that the underlying actions live in
  • a: the overall result of the pipe once it has terminated.

One nice thing about this setup is that by picking different values for the type parameters, we can already get a nice classification for interesting subtypes:

  1. If i is () (or universally quantified1) — a Pipe () o m a — it means that the pipe doesn’t ever expect any sort of information upstream, and so can be considered a “source” that keeps on churning out values.

  2. If o is Void (or universally quantified) — a Pipe i Void m a — it means that the pipe will never yield anything downstream, because Void has no inhabitants that could possibly be yielded.

    data Void

    This means that it acts like a “sink” that will keep on eating i values without ever outputting anything downstream.

  3. If i is () and o is Void (or they are both universally quantified), then the pipe doesn’t expect any sort of information upstream, and also won’t ever yield anything downstream… a Pipe () Void m a is just an m a! In the biz, we often call this an “effect”.

  4. If a is Void (or universally quantified) — a Pipe i o m Void — it means that the pipe will never terminate, since Void has no inhabitants that it could possibly produce upon termination.

To me, I think it embodies a lot of the nice principles about the “algebra” of types that can be used to reason with inputs and outputs. Plus, it allows us to unify sources, sinks, and non-terminating pipes all in one type!
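The classification above can be captured as type synonyms. These names are my own (the post only introduces Source later, in the exercises), and since Pipe itself hasn’t been defined yet, a placeholder stands in for it here just so the synonyms kind-check:

```haskell
{-# LANGUAGE KindSignatures #-}
import Data.Kind (Type)
import Data.Void (Void)

-- Stand-in for the Pipe type we are about to design; only the type
-- parameters matter for this classification.
data Pipe i o (m :: Type -> Type) a

type Source o m a = Pipe () o    m a  -- never needs upstream input
type Sink   i m a = Pipe i  Void m a  -- never yields downstream
type Effect   m a = Pipe () Void m a  -- neither: morally just an m a

-- An Effect really is both a Source and a Sink at once:
asSource :: Effect m a -> Source Void m a
asSource = id

main :: IO ()
main = putStrLn "classification compiles"
```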

Now let’s think of the interface we want. We want to be able to:

-- | Yield a value `o` downstream
yield :: o -> Pipe i o m ()

-- | Await a value `i` upstream
await :: Pipe i o m (Maybe i)

-- | Terminate immediately with a result value
return :: a -> Pipe i o m a

-- | Sequence pipes one-after-another:
-- "do this until it terminates, then that one next"
(>>) :: Pipe i o m a -> Pipe i o m b -> Pipe i o m b

-- | In fact let's just make it a full fledged monad, why not?  We're designing
-- our dream interface here.
(>>=) :: Pipe i o m a -> (a -> Pipe i o m b) -> Pipe i o m b

-- | A pipe that simply does action in the underlying monad and terminates with
-- the result
lift :: m a -> Pipe i o m a

-- | Compose pipes, linking the output of one to the input of the other
(.|) :: Pipe i j m a -> Pipe j o m b -> Pipe i o m b

-- | Finally: run it all on a pipe expecting no input and never yielding:
runPipe :: Pipe () Void m a -> m a

This looks like a complicated list…but actually most of these come from ubiquitous Haskell typeclasses like Monad and Applicative. We’ll see how this comes into play later, when we learn how to get these instances for our types for free. This makes the actual “work” we have to do very small.

So, these are going to be implementing “conduit-style” streaming combinators, where streaming actions are monadic, and monadic sequencing represents “do this after this one is done.” Because of this property, they work well as pull-based pipes: yields will block until a corresponding await can accept what is yielded.

Put on those Christmas Sweaters

“Dress for the interface you want, not the one you have”. So let’s pretend we already implemented this interface…what could we do with it?

Well, can write simple sources like “yield the contents from a file line-by-line”:

-- source:

sourceHandleIO :: Handle -> Pipe i String IO ()
sourceHandleIO handle = do
    res <- lift $ tryJust (guard . isEOFError) (hGetLine handle)
    case res of
      Left  _   -> return ()
      Right out -> do
        yield out
        sourceHandleIO handle

Note that because the i is universally quantified, we know that sourceHandleIO never awaits or touches any input: it’s purely a source.

We can even write a simple sink, like “await and print the results to stdout as they come”:

-- source:

sinkStdoutIO :: Pipe String o IO ()
sinkStdoutIO = do
    inp <- await
    case inp of
      Nothing -> pure ()
      Just x  -> do
        lift $ putStrLn x
        sinkStdoutIO

And maybe we can write a pipe that takes input strings and converts them to all capital letters and re-yields them:

-- source:

toUpperPipe :: Monad m => Pipe String String m ()
toUpperPipe = do
    inp <- await
    case inp of
      Nothing -> pure ()
      Just x  -> do
        yield (map toUpper x)
        toUpperPipe

And we can maybe write a pipe that stops as soon as it reads the line STOP.

-- source:

untilSTOP :: Monad m => Pipe String String m ()
untilSTOP = do
    inp <- await
    case inp of
      Nothing -> pure ()
      Just x
        | x == "STOP" -> pure ()
        | otherwise   -> do
            yield x
            untilSTOP

untilSTOP is really sort of the crux of what makes these streaming systems useful: we only pull items from the file as we need them, and untilSTOP will stop pulling anything as soon as we hit STOP, so no IO will happen anymore if the upstream source does IO.

Our Ideal Program

Now ideally, we’d want to write a program that lets us compose the above pipes to read from a file and output its contents to stdout, until it sees a STOP line:

-- source:

samplePipeIO :: Handle -> Pipe i o IO ()
samplePipeIO handle =
       sourceHandleIO handle
    .| untilSTOP
    .| toUpperPipe
    .| sinkStdoutIO

Setting up our Stockings

Step 2 of our plan was to identify the primitive actions we want. Looking at our interface, it seems like the few things that let us “create” a Pipe from scratch (instead of combining existing ones) are:

yield  :: o -> Pipe i o m ()
await  :: Pipe i o m (Maybe i)
lift   :: m a -> Pipe i o m a
return :: a   -> Pipe i o m a

However, we can note that lift and return can be gained just from having a Monad and MonadTrans instance. So let’s assume we have those instances.

class Monad m where
    return :: a -> m a

class MonadTrans p where
    lift :: m a -> p m a

The functor combinator plan is to identify your primitives, and let free structures give you the instances (in our case, Monad and MonadTrans) you need for them.

So this means we only need two primitives: yield and await. Then we just throw them into some machinery that gives us a free Monad and MonadTrans structure, and we’re golden :)

In the style of the free library, we’d write base functions to get an ADT that describes the primitive actions:

-- source:

data PipeF i o a =
    YieldF o a
  | AwaitF (Maybe i -> a)
    deriving Functor

The general structure of the base functor style is to represent each primitive as a constructor: include any inputs, and then a continuation on what to do if you had the result.

For example:

  1. For YieldF, you need an o to be able to yield. The second field should really be the continuation () -> a, since the result is (), but that’s equivalent to a in Haskell.
  2. For AwaitF, you don’t need any parameters to await, but the continuation is Maybe i -> a since you need to specify how to handle the Maybe i result.

(This is specifically the structure that free expects, but this principle can be ported to any algebraic effects system.)

A Christmas Surprise

And now for the last ingredient: we can use the FreeT type from Control.Monad.Trans.Free, and now we have our pipe interface, with a Monad and MonadTrans instance!

type Pipe i o = FreeT (PipeF i o)

This takes our base functor and imbues it with a full Monad and MonadTrans instance:

lift :: m a -> FreeT (PipeF i o) m a
lift :: m a -> Pipe i o m a

return :: a -> FreeT (PipeF i o) m a
return :: a -> Pipe i o m a

(>>)  :: Pipe i o m a -> Pipe i o m b -> Pipe i o m b
(>>=) :: Pipe i o m a -> (a -> Pipe i o m b) -> Pipe i o m b

That’s the essence of the free structure: it adds to our base functor (PipeF) exactly the structure it needs to be able to implement the instances it is free on. And it’s all free as in beer! :D

As a bonus gift, we also get a MonadIO instance from FreeT, as well:

liftIO :: MonadIO m => IO a -> FreeT (PipeF i o) m a
liftIO :: MonadIO m => IO a -> Pipe i o m a

Now we just need our functions to lift our primitives to Pipe, using liftF :: f a -> FreeT f m a:

-- source:

yield :: Monad m => o -> Pipe i o m ()
yield x = liftF $ YieldF x ()

await :: Monad m => Pipe i o m (Maybe i)
await = liftF $ AwaitF id

(These things you can usually just fill in using type tetris, plugging values into typed holes until they typecheck.)

Note that all of the individual pipes we had planned work as-is! And we can even make sourceHandle and sinkStdout work for any MonadIO m => Pipe i o m a, because of the surprise Christmas gift we got (the MonadIO instance and liftIO :: MonadIO m => IO a -> Pipe i o m a). Remember, MonadIO m is basically any m that supports doing IO.

-- source:

sourceHandle :: MonadIO m => Handle -> Pipe i String m ()
sourceHandle handle = do
    res <- liftIO $ tryJust (guard . isEOFError) (hGetLine handle)
    case res of
      Left  _   -> return ()
      Right out -> do
        yield out
        sourceHandle handle

sinkStdout :: MonadIO m => Pipe String o m ()
sinkStdout = do
    inp <- await
    case inp of
      Nothing -> pure ()
      Just x  -> do
        liftIO $ putStrLn x
        sinkStdout

toUpperPipe :: Monad m => Pipe String String m ()
toUpperPipe = do
    inp <- await
    case inp of
      Nothing -> pure ()
      Just x  -> do
        yield (map toUpper x)
        toUpperPipe

untilSTOP :: Monad m => Pipe String String m ()
untilSTOP = do
    inp <- await
    case inp of
      Nothing -> pure ()
      Just x
        | x == "STOP" -> pure ()
        | otherwise   -> do
            yield x
            untilSTOP

That’s because using FreeT, we imbue the structure required to do monadic chaining (do notation) and MonadTrans (lift) and MonadIO (liftIO) for free!

To “run” our pipes, we can use FreeT’s “interpreter” function. This follows the same pattern as for many free structures: specify how to handle each individual base functor constructor, and it then gives you a handler to handle the entire thing.

iterT
    :: (PipeF i o (m a) -> m a)  -- ^ given a way to handle each base functor constructor ...
    -> Pipe i o m a -> m a       -- ^ here's a way to handle the whole thing

So let’s write our base functor handler. Remember that we established earlier we can only “run” a Pipe () Void m a: that is, pipes where await can always be fed with no information (()) and no yield is ever called (because you cannot yield with Void, a type with no inhabitants). We can directly translate this to how we handle each constructor:

-- source:

handlePipeF :: PipeF () Void (m a) -> m a
handlePipeF = \case
    YieldF o _ -> absurd o
    AwaitF f   -> f (Just ())

And so we get our full runPipe:

-- source:

runPipe :: Monad m => Pipe () Void m a -> m a
runPipe = iterT handlePipeF

I think this process exemplifies most of the major beats when working with free structures:

  1. Define the base functor
  2. Allow the free structure to imbue the proper structure over your base functor
  3. Write your interpreter to interpret the constructors of your base functor, and the free structure will give you a way to interpret the entire structure.

The Final Ornament

If you look at the list of all the things we wanted, we’re still missing one thing: pipe composition/input-output chaining. That’s because it isn’t a primitive operation (like yield or await), and it wasn’t given to us for free by our free structure (FreeT, which gave us monadic composition and monad transformer ability). So with how we have currently written it, there isn’t any way of getting around writing (.|) manually. So let’s roll up our sleeves and do the (admittedly minimal amount of) dirty work.

Let’s think about the semantics of our pipe chaining. We want to never do more work than we need to do, so we’ll be “pull-based”: for f .| g, try running g as much as possible until it awaits anything from f. Only then do we try doing f.

To implement this, we’re going to have to dig in a little bit to the implementation/structure of FreeT:

newtype FreeT f m a = FreeT
    { runFreeT :: m (FreeF f a (FreeT f m a)) }

data FreeF f a b
    = Pure a
    | Free (f b)

This does look a little complicated, and on the face of it, it can be a bit intimidating. And why is there a second internal data type?

Well, you can think of FreeF f a b as being a fancy version of Either a (f b). And the implementation of FreeT is saying that FreeT f m a is an m-action that produces Either a (f (FreeT f m a)). So for example, FreeT f IO a is an IO action that produces either the a (we’re done, end here!) or an f (FreeT f IO a) (we have to handle an f here!)

newtype FreeT f m a = FreeT
    { runFreeT :: m (Either a (f (FreeT f m a))) }

At the top level, FreeT is an action in the underlying monad (just like MaybeT, ExceptT, StateT, etc.). Let’s take that into account and write our implementation (with a hefty bit of help from the typechecker and typed holes)! Remember our plan: for f .| g, start unrolling g until it needs anything, and then ask f when it does.

(.|)
    :: Monad m
    => Pipe a b m x         -- ^ pipe from a -> b
    -> Pipe b c m y         -- ^ pipe from b -> c
    -> Pipe a c m y         -- ^ pipe from a -> c
pf .| pg = do
    gRes <- lift $ runFreeT pg          -- 1
    case gRes of
      Pure x            -> pure x       -- 2
      Free (YieldF o x) -> do           -- 3
        yield o
        pf .| x
      Free (AwaitF g  ) -> do           -- 4
        fRes <- lift $ runFreeT pf
        case fRes of
          Pure _            -> pure () .| g Nothing     -- 5
          Free (YieldF o y) -> y       .| g (Just o)    -- 6
          Free (AwaitF f  ) -> do                       -- 7
            i <- await
            f i .| FreeT (pure gRes)

Here are some numbered notes and comments:

  1. Start unrolling the downstream pipe pg, in the underlying monad m!
  2. If pg produced Pure x, it means we’re done pulling anything. The entire pipe has terminated, since we will never need anything again. So just quit out with pure x.
  3. If pg produced Free (YieldF o x), it means it’s yielding an o and continuing on with x. So let’s just yield that o and move on to the composition of pf with the next pipe x.
  4. If pg produced Free (AwaitF g), now things get interesting. We need to unroll pf until it yields some Maybe b, and feed that to g :: Maybe b -> Pipe b c m y.
  5. If pf produced Pure y, that means it was done! The upstream terminated, so the downstream will have to terminate as well. So g gets a Nothing, and we move from there. Note we have to compose with a dummy pipe pure () to make the types match up properly.
  6. If pf produced YieldF o y, then we have found our match! So give g (Just o), and now we recursively compose the next pipe (y) with the pipe that g gave us.
  7. If pf produced AwaitF f, then we’re in a bind, aren’t we? We now have two layers waiting for something further upstream. So, we await from even further upstream; when we get it, we feed it to f and then compose f i :: Pipe a b m x with pg’s result (wrapping up gRes back into a FreeT/Pipe so the types match up).

Admittedly (!) this is the “ugly” part of this derivation: sometimes we just can’t get everything for free. But getting the Monad, Applicative, Functor, MonadTrans, etc. instances is probably nice enough to justify this inconvenience :) And who knows, there might be a free structure that I don’t know about that gives us all of these plus piping for free.

Christmas Miracle

It runs!

-- source:

samplePipe :: Handle -> Pipe i o IO ()
samplePipe handle =
       sourceHandle handle
    .| untilSTOP
    .| toUpperPipe
    .| sinkStdout
$ cat testpipefile.txt
ghci> withFile "testpipefile.txt" ReadMode $ \handle ->
        runPipe (samplePipe handle)

Smooth as silk :D

Takeaways for a Happy New Year

Most of this post was thought up when I needed2 a tool that was sort of like conduit, sort of like pipes, sort of like the other libraries…and I thought I had to read up on the theory of pipes and iteratees and trampolines and fancy pants math stuff to be able to make anything useful in this space. I remember being very discouraged when I read about this stuff as a wee new Haskeller, because the techniques seemed so foreign and out of the range of my normal Haskell experience.

However, I found a way to maintain a level head somehow, and just thought — “ok, I just need a monad (trans) with two primitive actions: await, and yield. Why don’t I just make an await and yield and get automatic Monad and MonadTrans instances with the appropriate free structure?”

As we can see…this works just fine! We only needed to implement one extra thing (.|) to get the interface of our dreams. Of course, for a real industrial-strength streaming combinator library, we might need to be a bit more careful. But for my learning experience and use case, it worked perfectly.

The next time you need to make some monad that might seem exotic, try this out and see if it works for you :)

Happy holidays, and merry Christmas!


Click on the links in the corner of the text boxes for solutions! (or just check out the source file)

  1. A Pipe i o m a “takes” i and “produces” o, so it should make sense to make pre-map and post-map functions:

    -- source:
    postMap :: Monad m => (o -> o') -> Pipe i o m a -> Pipe i o' m a
    preMap :: Monad m => (i' -> i) -> Pipe i o m a -> Pipe i' o m a

    That pre-maps all inputs the pipe would receive, and post-maps all of the values it yields.

    Hint: This actually is made a lot simpler to write with the handy transFreeT combinator, which lets you swap out/change the base functor:

    transFreeT
        :: (forall a. f a -> g a)     -- ^ polymorphic function to edit the base functor
        -> FreeT f m b
        -> FreeT g m b
    transFreeT
        :: (forall a. PipeF i o a -> PipeF i' o' a)  -- ^ polymorphic function to edit the base functor
        -> Pipe i  o  m a
        -> Pipe i' o' m a

    We could then write pre-map and post-map function on PipeF and translate them to Pipe using transFreeT:

    -- source:
    postMapF :: (o -> o') -> PipeF i o a -> PipeF i o' a
    preMapF :: (i' -> i) -> PipeF i o a -> PipeF i' o a
    postMap :: Monad m => (o -> o') -> Pipe i o m a -> Pipe i o' m a
    postMap f = transFreeT (postMapF f)
    preMap :: Monad m => (i' -> i) -> Pipe i o m a -> Pipe i' o m a
    preMap f = transFreeT (preMapF f)
  2. One staple of a streaming combinator system is giving you a disciplined way to handle resource allocations like file handles and properly close them on completion. Our streaming combinator system has no inherent way of doing this within its structure, but we can take advantage of the resourcet package to handle it for us.

    Basically, if we run our pipes over ResourceT IO instead of normal IO, we get an extra action allocate:

    allocate
        :: IO a             -- ^ get a handle
        -> (a -> IO ())     -- ^ close a handle
        -> ResourceT IO (ReleaseKey, a)
    -- example
    allocate (openFile fp ReadMode) hClose
        :: ResourceT IO (ReleaseKey, Handle)

    We can use this in our pipe to open a handle from a filename, and rest assured that the file handle will be closed when we eventually runResourceT :: ResourceT IO a -> IO a our pipe.

    -- source:
    sourceFile :: MonadIO m => FilePath -> Pipe i String (ResourceT m) ()
    samplePipe2 :: FilePath -> Pipe i o (ResourceT IO) ()
    samplePipe2 fp =
           sourceFile fp
        .| untilSTOP
        .| toUpperPipe
        .| hoistFreeT lift sinkStdout
    ghci> runResourceT . runPipe $ samplePipe2 "testpipefile.txt"
    -- HELLO
    -- WORLD
  3. Let’s say we modified our PipeF slightly to take another parameter u, the result type of the upstream pipe.

    data PipeF i o u a =
        YieldF o a
      | AwaitF (Either u i -> a)
    type Pipe i o u = FreeT (PipeF i o u)
    await :: Pipe i o u m (Either u i)
    await = liftF $ AwaitF id

    So now await would be fed i things yielded from upstream, but sometimes you’d get a Left indicating that the upstream pipe has terminated.

    What would be the implications if u is Void?

    type CertainPipe i o = Pipe i o Void

    What could you do in a CertainPipe i o m a that you couldn’t normally do with our Pipe i o m a?

  4. We mentioned earlier that a “source” could have type

    type Source = Pipe ()

    And a Source o m a would be something that keeps on pumping out os as much as we need, without requiring any upstream input.

    This is actually the essential behavior of the (true) list monad transformer, as exposed by the list-transformer package.

    In that package, ListT is defined as:

    newtype ListT m a = ListT { next :: m (Step m a) }
    data Step m a = Cons a (ListT m a) | Nil

    And it’s a type that can yield out new as on-demand, until exhausted.

    In fact, Source o m () is equivalent to ListT m o. Write the functions to convert between them! :D

    -- source:
    toListT :: Monad m => Pipe () o m a -> L.ListT m o
    fromListT :: Monad m => L.ListT m o -> Pipe i o m ()

    Unfortunately we cannot use iterT because the last type parameter of each is different. But manual pattern matching (like how we wrote (.|)) isn’t too bad!

    The semantics of the ListT API are that x <|> y will “do” (and emit the results of) x before moving on to what y would emit. And empty is the ListT that signals it is done producing. <|>, pure, and empty for ListT are roughly analogous to >>, yield, and return for Source, respectively.

Special Thanks

I am very humbled to be supported by an amazing community, who make it possible for me to devote time to researching and writing these posts. Very special thanks to my supporter at the “Amazing” level on patreon, Josh Vera! :)

  1. “Universally quantified” here means that the pipe’s type is left fully polymorphic (with no constraints) over i, the input.↩︎

  2. This came about when I was developing my numerical emd library.↩︎

by Justin Le at December 13, 2020 12:00 AM

December 11, 2020


Using Cabal With Large Projects

DEC 2020 UPDATE: This post is mostly out of date.  The main things that cabal-meta and cabal-dev provided have now been either integrated into cabal itself or been made obsolete by subsequent improvements.

In the last post we talked about basic cabal usage. That all works fine as long as you're working on a single project and all your dependencies are in hackage. When Cabal is aware of everything that you want to build, it's actually pretty good at dependency resolution. But if you have several packages that depend on each other and you're working on development versions of these packages that have not yet been released to hackage, then life becomes more difficult. In this post I'll describe my workflow for handling the development of multiple local packages. I make no claim that this is the best way to do it. But it works pretty well for me, and hopefully others will find this information helpful.

Consider a situation where package B depends on package A and both of them depend on bytestring. Package A has wide version bounds for its bytestring dependency while package B has narrower bounds. Because you're working on improving both packages you can't just do "cabal install" in package B's directory because the correct version of package A isn't on hackage. But if you install package A first, Cabal might choose a version of bytestring that won't work with package B. It's a frustrating situation because eventually you end up worrying about dependency issues that Cabal should be handling for you.
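To make the clash concrete, here is a hypothetical pair of .cabal fragments exhibiting the situation described (package names and version numbers invented for illustration):

```cabal
-- A.cabal: wide bounds, happy with many bytestring versions
library
  build-depends: base, bytestring >= 0.9 && < 0.12

-- B.cabal: depends on the unreleased A, with narrower bounds
library
  build-depends: base, A, bytestring >= 0.10 && < 0.11
```

Installing A on its own may well solve to a bytestring in the 0.11 series, which then clashes with B's upper bound when you try to build B against the installed A.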

The best solution I've found to the above problem is cabal-meta. It lets you specify a sources.txt file in your project root directory with paths to other projects that you want included in the package's build environment. For example, I maintain the snap package, which depends on several other packages that are part of the Snap Framework. Here's what my sources.txt file looks like for the snap package:
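A sources.txt is simply one project path per line; for snap it would look roughly like the following (the sibling directory names here are illustrative, based on the Snap Framework packages mentioned above):

```
./
../xmlhtml
../heist
../snap-core
../snap-server
```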


My development versions of the other four packages reside in the parent directory on my local machine. When I build the snap package with cabal-meta install, cabal-meta tells Cabal to look in these directories in addition to whatever is in hackage. If you do this initially for the top-level package, it will correctly take into consideration all your local packages when resolving dependencies. Once you have all the dependencies installed, you can go back to using Cabal and ghci to build and test your packages. In my experience this takes most of the pain out of building large-scale Haskell applications.

Another tool that is frequently recommended for handling this large-scale package development problem is cabal-dev. cabal-dev allows you to sandbox builds so that differing build configurations of libraries can coexist without causing problems like they do with plain Cabal. It also has a mechanism for handling this local package problem above. I personally tend to avoid cabal-dev because in my experience it hasn't played nicely with ghci. It tries to solve the problem by giving you the cabal-dev ghci command to execute ghci using the sandboxed environment, but I found that it made my ghci workflow difficult, so I prefer using cabal-meta which doesn't have these problems.

I should note that cabal-dev does solve another problem that cabal-meta does not. There may be cases where two different packages are completely unable to coexist in the same Cabal "sandbox" because their sets of dependencies are not compatible. In that case, you'll need cabal-dev's sandboxes instead of the single user-level package repository used by Cabal. I am usually only working on one major project at a time, so this problem has never been an issue for me. My understanding is that people are currently working on adding this kind of local sandboxing to Cabal/cabal-install. Hopefully this will fix my complaints about ghci integration and make cabal-dev unnecessary.

There are definitely things that need to be done to improve the cabal tool chain. But in my experience working on several different large Haskell projects both open and proprietary I have found that the current state of Cabal combined with cabal-meta (and maybe cabal-dev) does a reasonable job at handling large project development within a very fast moving ecosystem.

by mightybyte at December 11, 2020 09:38 PM

Haskell Best Practices for Avoiding "Cabal Hell"

DEC 2020 UPDATE: A lot has changed since this post was written.  Much of "cabal hell" is now a thing of the past due to cabal's more recent purely functional "nix style" build infrastructure.  Some of the points here aren't really applicable any more, but many still are.  I'm updating this post with strikethroughs for the points that are outdated.

I posted this as a reddit comment and it was really well received, so I thought I'd post it here so it would be more linkable.  A lot of people complain about "cabal hell" and ask what they can do to solve it.  There are definitely things about the cabal/hackage ecosystem that can be improved, but on the whole it serves me quite well.  I think a significant amount of the difficulty is a result of how fast things move in the Haskell community and how much more reusable Haskell is than other languages.

With that preface, here are my best practices that seem to make Cabal work pretty well for me in my development.

1. I make sure that I have no more than the absolute minimum number of packages installed as --global.  This means that I don't use the Haskell Platform or any OS haskell packages.  I install GHC directly.  Some might think this casts too much of a negative light on the Haskell Platform.  But everyone will agree that having multiple versions of a package installed at the same time is a significant cause of build problems.  And that is exactly what the Haskell Platform does for you--it installs specific versions of packages.  If you use Haskell heavily enough, you will invariably encounter a situation where you want to use a different version of a package than the one the Haskell Platform gives you.  The --global flag is not applicable any more now that we have the new v2-* commands.

2. Make sure ~/.cabal/bin is at the front of your path.  Hopefully you already knew this, but I see this problem a lot, so it's worth mentioning for completeness.

3. Install happy and alex manually.  These two packages generate binary executables that you need to have in ~/.cabal/bin.  They don't get picked up automatically because they are executables and not package dependencies.

4. Make sure you have the most recent version of cabal-install.  There is a lot of work going on to improve these tools.  The latest version is significantly better than it used to be, so you should definitely be using it.

5. Become friends with "rm -fr ~/.ghc".  This command cleans out your --user repository, which is where you should install packages if you're not using a sandbox.  It sounds bad, but right now this is simply a fact of life.  The Haskell ecosystem is moving so fast that packages you install today will be out of date in a few months if not weeks or days.  We don't have purely functional nix-style package management yet, so removing the old ones is the pragmatic approach.  Note that sandboxes accomplish effectively the same thing for you.  Creating a new sandbox is the same as "rm -fr ~/.ghc" and then installing to --user, but has the benefit of not deleting everything else you had in --user.  UPDATE: Removing the .ghc directory is still potentially useful to know but much less of an issue now.

6. If you're not working on a single project with one harmonious dependency tree, then use sandboxes for separate projects or one-off package compiles.  Sandboxes have been deprecated in lieu of the new build approach.

7. Learn to use --allow-newer.  Again, things move fast in Haskell land.  If a package gives you dependency errors, then try --allow-newer and see if the package will just work with newer versions of dependencies.

8. Don't be afraid to dive into other people's packages.  "cabal unpack" makes it trivial to download the code for any package.  From there it's often trivial to make manual changes to version bounds or even small code changes.  If you make local changes to a package, then you can either install it to --user so other packages use it, or you can do "cabal sandbox add-source /path/to/project" to ensure that your other projects use the locally modified version.  If you've made code changes, then help out the community by sending a pull request to the package maintainer.  Edit: bergmark mentions that unpack is now "cabal get" and "cabal get -s" lets you clone the project's source repository.

9. If you can't make any progress from the build messages cabal gives you, then try building with -v3.  I have encountered situations where cabal's normal dependency errors are not helpful.  Using -v3 usually gives me a much better picture of what's going on and I can usually figure out the root of the problem pretty quickly.

by mightybyte at December 11, 2020 09:32 PM

FP Complete

Why we built Kube360

Over a year ago, FP Complete began work on Kube360. Kube360 is a distribution of Kubernetes supporting multiple cloud providers, as well as on-prem deployments. It includes the most requested features we've seen in Kubernetes clusters. It aims to address many pain points companies typically face in Kubernetes deployments, especially around security. And it centralizes the burden of keeping up with the Kubernetes treadmill into one code repo that we maintain.

That's all well and good. But what led FP Complete down this path? Who is the target customer? What might you get out of this versus other alternatives? Let's dive in.

Repeatable process

In the past decade, FP Complete has set up dozens of production clusters for hosting container-based applications. We have done this for both internal and customer needs. Our first clusters predate public releases of Docker and relied on tooling like LXC. We have kept our recommendations and preferred tooling up to date as the deployment world has (very rapidly) iterated.

In the past three years, we have seen a consistent move across the industry towards Kubernetes standardization. Kubernetes addresses many of the needs of most companies around deployment. It does so in a vendor-neutral way, and mostly in a developer-friendly way.

But setting up a Kubernetes cluster in a sensible, secure, and standard fashion is far from trivial. Kubernetes is highly configurable, but out of the box provides little functionality. Different cloud vendors offer slightly different features.

As we were helping our clients onboard with Kubernetes, we noticed some recurring themes:

  • Clients were looking for significant guidance around best practices with Kubernetes

  • We ended up deploying clusters that were largely identical for different clients

  • Maintaining each of these clusters became a large task on its own

We decided to move ahead with creating a repeatable process for setting up Kubernetes clusters. We dogfooded this on ourselves and have been running all FP Complete services on Kube360 for about nine months now. Our process supports both initial setup and upgrades, covering the latest version of Kubernetes itself as well as the underlying components.

With this in place, we have been able to get clients up and running far more rapidly with fully functioning, batteries-included clusters. The risk of misconfiguration or mismatch between components has come down significantly. And maintenance costs can now be amortized across multiple projects, instead of falling to each individual DevOps team.

What we included

Our initial collection of tools was based on what we were leveraging internally and what we had seen most commonly with our clients. This is what we consider a "basic batteries included" Kubernetes distribution. The functionality included:

  • Metrics collection

  • Monitoring dashboards

  • Log aggregation and search/index

  • Alerts

  • In-cluster Continuous Deployment

We based our choices of defaults here on best-in-class open-source offerings. These were tools we were already familiar with, with great support across the Kubernetes ecosystem. It also makes Kube360 a less risky endeavor for our clients. We have strived to avoid the common vendor lock-in present in many offerings. With Kube360, you're getting a standard Kubernetes cluster with bells and whistles added. But you're getting it faster, more well tested, and maintained and supported by an external team.

The one curveball in this mix was the inclusion of Istio as a service mesh layer. We had already been investigating Istio for its support of in-transit encryption within the cluster, a feature we had implemented previously. Our belief is that Istio will continue to gain major adoption in the Kubernetes ecosystem, and we wanted to future proof Kube360 to be prepared for this.

Cloud native

Kube360 is designed to run mostly identically across different cloud vendors, as well as on-prem. However, where possible, we've leveraged cloud native service offerings for tighter integration. This includes:

  • Leveraging cloud native Kubernetes control plane offerings, like Amazon's EKS or Azure's AKS

  • Defaulting to cloud-specific secrets management instead of using the default secrets engine or a third-party tool like Vault

  • For durability and cost-effectiveness, we use cloud-specific blob storage offerings, together with wrappers to abstract over the underlying APIs

Fully configurable

We've tried to keep the base of Kube360 mostly unopinionated. But in our consulting experience, each organization tends to need at least a few modifications to a "standard" Kubernetes setup. The most common we see is integration with SaaS monitoring and logging solutions.

We've designed configurability from the ground up with Kube360. Outside of a few core services, each add-on can be enabled or disabled. We can easily retarget metrics to be sent to a third party instead of intracluster collection. Even more fundamental tooling, such as the ingress controller, can be swapped out for alternatives.


The biggest addition we've made to standard tooling comes to authentication. In our experience, the Achilles Heel of many cloud environments, and particularly Kubernetes environments, is mismanagement of authentication. We've seen many setups where credentials leverage:

  • Long term lifetimes

  • No Multi-Factor Authentication

  • Shared credentials across multiple users and services

  • Overly broad privilege grants

The reason for this is, in our opinion, quite simple. Doing the right thing is difficult out of the box. We believe this is a major weakness that needs to be addressed. So, with Kube360, we've dedicated significant effort to providing a best-in-class authentication and authorization experience for everyone in your organization.

In short, our goal with Kube360 is to:

  • Leverage existing user directories and credentials. You shouldn't need yet another password.

  • Make it easy to grant everyone in your organization access to the cluster. We believe in democratizing access. Executives should be able to easily gain read-only access to informational dashboards, so they feel confident in their services.

  • Ensure credentials are all per-user, time based, and never copy-pasted through screens. We heavily leverage open standards, like OpenID Connect.

  • Ensure a single set of credentials is carried through not just Kubernetes, but all add-ons provided with Kube360, including dashboards, log indexing, and Continuous Deployment.

  • Provide easy command line access to the Kubernetes cluster (and, in the case of Amazon, all AWS services) leveraging secure and easy credential acquisition.

We strongly believe that by making the right thing easy, we will foster an environment where people more readily do the right thing. We also believe that by making the cluster an open book for the entire organization, including developers, operators, and executives, we can build trust within it.


Since initial development and release, we've already seen some requests for features that we had not anticipated so early on.

The first was support for deploying Windows Containers. We have previously deployed hybrid Linux/Windows clusters but had always kept the Windows workloads on traditional hosting. That was for a simple reason: our clients historically had not yet embraced Windows Containers. At this point, Kube360 fully supports hybrid Windows/Linux clusters. And we have deployed such production workloads for our clients.

On-prem was next. We've seen far more rapid adoption of "bleeding edge" technology among cloud users. However, at this point, Kubernetes is not bleeding edge. We're seeing the interest in on-prem increase drastically. We're happy to say that on-prem is now a first-class citizen in the Kube360 world, together with AWS and Azure.

The final surprise was multicluster management. Historically, we have supported clients operating on different cloud providers. But typically, deployments within a single operating group focused on a single cluster within a single provider. We're beginning to see a stronger move towards multicluster management across an organization. This fits in nicely with our vision of democratizing access to clusters. We have begun offering high level tooling for viewing status across multiple Kube360 clusters, regardless of where they are hosted.

The future

We are continuing to build up the feature set around Kube360. While our roadmap will be influenced by client requests, some of our short-term goals include:

  • Support for additional cloud providers, in particular Google Cloud.

  • GUI management tools for permissions management. Our authentication and authorization story is solid but managing RBAC permissions within Kubernetes is still non-trivial. We want to make this process as easy as possible.

  • To ease Kubernetes migrations, we intend to include basic scaffolding tools to address common application deployment cases.

  • And finally, we hope to expand our multicluster management tooling to provide better insights and resolution tools.

Learn more

If you're looking to make a move to Kubernetes or are interested in seeing if you can reduce your Kubernetes maintenance costs with a move to a managed product offering, please contact us for more information. We'd love to tell you more about what Kube360 has to offer, demo the product on a live cluster, and discuss possible deployment options.

Learn more about Kube360

December 11, 2020 12:00 AM

December 10, 2020

Chris Penner

Simpler and safer API design using GADTs

Simpler and safer API design using GADTs

Hey folks! Today we'll be talking about GADTs, that is, "Generalized Algebraic Data Types". As the name implies, they're just like Haskell's normal data types, but the generalized bit adds a few new features! They aren't actually too tough to use once you understand a few principles.

A lot of the writing out there regarding GADTs is pretty high-level research and academia; in contrast, today I'm going to show off a relatively practical and simple use-case. In this post we'll take a look at a very real example where we can leverage GADTs in a real-world Haskell library to build a simple and expressive end-user interface.

We'll be designing a library for CSV manipulation. I used all of the following techniques to design the interface for my lens-csv library. Let's get started!

Here's a teensy CSV that we'll work with throughout the rest of the post. Any time you see input used in examples, assume it's this CSV.
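The header row and the names are pinned down by the examples later in the post; the age and home values below are illustrative:

```
Name,Age,Home
Luke,19,Tatooine
Leia,19,Alderaan
Han,29,Corellia
```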


In its essence, a CSV is really just a list of rows and each row is just a list of columns. That's pretty much it! Any other meaning, even something as benign as "this column contains numbers" isn't tracked in the CSV itself.

This means we can model the data in a CSV using a simple type like [[String]], so far, so simple! There's a bit of a catch here though. Although it's clear to us humans that Name,Age,Home is the header row for this CSV, there's no marker in the CSV itself to indicate that! It's up to the user of the library to specify whether to treat the first row of a CSV as a header or not, and herein lies our challenge!

Depending on whether the CSV has a header row or not, the user of our library will want to reference CSV columns by either a column name or a column number, respectively. In a dynamic language (like Python) this is easily handled: we would provide separate methods for indexing columns by either header name or column number, and it would be the programmer's job to keep track of when to use which. In a strongly-typed language like Haskell, however, we prefer to prevent such mistakes at compile time. Effectively, we want to give the programmer jigsaw pieces that only fit together in a way that works!

For the sake of pedagogy our miniature CSV library will perform the following tasks:

  • Decode a CSV string into a structured type
  • Get all the values in a given column

An Initial Approach

First things first we'll need a decode function to parse the CSV into a more structured type. In a production environment you'd likely use performant types like ByteString and Vector, but for our toy parser we'll stick to the types provided by the Prelude.

Since this is a post about GADTs and not CSV encodings, we won't worry about escaping or quoting here. We can just do the naive thing and split our rows into cells on every comma. The Prelude unfortunately provides lines and words, but no more general splitting function, so I'll whip one up to suit our needs.

Here's a function which splits a string on commas in such a way that each "cell" is separated in the resulting list.

splitOn :: Eq a => a -> [a] -> [[a]]
splitOn splitter = foldr go [[]]
  where
    go char xs
      -- If the current character is our "split" character create a new partition
      | splitter == char = []:xs
      -- Otherwise we can add the next char to the current cell
      | otherwise = case xs of
          (cell:rest) -> (char:cell):rest
          [] -> [[char]]

We can try it out to ensure it works as expected:

>>> splitOn ',' "a,b,c"
["a","b","c"]

-- Remember that CSV cells might be empty and we need it to handle that:
>>> splitOn ',' ",,"
["","",""]

Now we'll write a type to represent our CSV structure. It will have one constructor for a CSV with headers and one for a CSV without:

data CSV =
      -- A CSV with headers has named columns
      NamedCsv [String] [[String]]
      -- A CSV without headers has numbered columns
    | NumberedCsv [[String]]
    deriving (Show, Eq)

Great, now we can write our first attempt of a decoding function. The implementation isn't really important here, so just focus on the type!

decode :: Bool -- Whether to parse a header row or not
       -> String -- The csv file
       -> Maybe CSV -- We'll return "Nothing" if anything is wrong

And here's the implementation just in case you're following along at home:

-- Parse a header row
decode True input =
    case splitOn ',' <$> lines input of
        (headers:rows) -> Just (NamedCsv headers rows)
        [] -> Nothing
-- No header row
decode False input =
  let rows = splitOn ',' <$> lines input
   in Just (NumberedCsv rows)

Simple enough; we create a CSV with the correct constructor based on whether we expect headers or not.

Now let's write a function to get a whole column of the CSV. Here's where things get a bit more interesting:

getColumnByNumber :: Int    -> CSV -> Maybe [String]
getColumnByName   :: String -> CSV -> Maybe [String]

Since each type of CSV takes a different index type we need two different functions in order to do effectively the same thing; let's see the implementations:

-- A safe indexing function to get elements by index.
-- This is strangely missing from the Prelude... 🤔
safeIndex :: Int -> [a] -> Maybe a
safeIndex i = lookup i . zip [0..]

-- Get all values of a column by the column index
getColumnByNumber :: Int -> CSV -> Maybe [String]
getColumnByNumber columnIndex (NumberedCsv rows) =
    -- Fail if a column is missing from any row
    traverse (safeIndex columnIndex) rows
getColumnByNumber columnIndex (NamedCsv _ rows) =
    traverse (safeIndex columnIndex) rows

-- Get all values of a column by the column name
getColumnByName :: String -> CSV -> Maybe [String]
getColumnByName  _ (NumberedCsv _) = Nothing
getColumnByName columnName (NamedCsv headers rows) = do
    -- Get the column index from the headers
    columnIndex <- elemIndex columnName headers
    -- Lookup the column from each row, failing if the column is missing from any row
    traverse (safeIndex columnIndex) rows

This works of course, but it feels like we're programming in a dynamic language! If you try to get a column by name from a numbered CSV we know it will ALWAYS fail, so why do we even allow the programmer to express that? Certainly it should fail to typecheck instead!

>>> decode True input >>= getColumnByName "Name"
Just ["Luke","Leia","Han"]

-- We'll get 'Nothing' no matter what if we index a numbered csv by name!
>>> decode False input >>= getColumnByName "Name"
Nothing

The problem here becomes even more pronounced when we write a function like getHeaders. Which type signature should it have?

This one:

getHeaders :: CSV -> [String]

Or this one?

getHeaders :: CSV -> Maybe [String]

We could pick the first signature and always return the empty list [] if someone mistakenly tries to get the headers of a numbered CSV, but that seems a bit disingenuous; it's common to check the number of columns in a CSV by counting the headers, and it's simply not true that every numbered CSV has zero columns! The latter signature properly handles the failure case of calling getHeaders on a numbered CSV, but we know that getting the headers from a NamedCsv can never fail, so we're adding unnecessary overhead: every caller will have to unwrap the Maybe no matter what 😬.
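To make the trade-off concrete, here's a sketch of the Maybe version against the CSV type from above (repeated here so the snippet stands alone). Every caller has to unwrap the result, even though the NamedCsv case can never fail:

```haskell
data CSV =
      NamedCsv [String] [[String]]
    | NumberedCsv [[String]]
    deriving (Show, Eq)

-- Total over both constructors, but callers pay the Maybe tax even
-- when they know they're holding a named CSV.
getHeaders :: CSV -> Maybe [String]
getHeaders (NamedCsv headers _) = Just headers
getHeaders (NumberedCsv _)      = Nothing
```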

In order to fix this issue we'll need to go back to the drawing board and see if we can keep track of whether our CSV has headers inside its type.

Differentiating CSVs using types

I promise we'll get to using GADTs soon, but let's look at the "simple" approach that I suspect most folks would try next and see where it ends up so we can motivate the need for GADTs.

The goal is to prevent the user from calling "header" specific methods on a CSV that doesn't have headers. The simplest thing to do is provide two separate decode methods which return completely different concrete result types:

decodeWithoutHeaders :: String -> Maybe [[String]]
decodeWithHeaders    :: String -> Maybe ([String], [[String]])

Next we would implement:

getColumnByNumber :: Int    -> [[String]]             -> Maybe [String]
getColumnByName   :: String -> ([String], [[String]]) -> Maybe [String]

This solves the problem at hand: if we decode a CSV without headers we'll have a [[String]] value, and we can't pass that into getColumnByName. However, there are a few minor annoyances with this approach. Notice how we can no longer use getColumnByNumber to get a column by number on a CSV which has headers? Of course we could convert it to a [[String]] with snd first, but converting between types everywhere is annoying, and it also means we can't write code which is polymorphic over both kinds of CSV. Ideally we would have a single set of functions which is smart about which type of CSV it's given, so it can do the right thing while also ensuring type-safety.

Some readers are likely thinking "Hrmmm, a group of functions polymorphic over a type? Sounds like a typeclass!" and you'd be right! As it turns out, this is roughly the approach that the popular cassava library takes to its library design.

cassava is more record-centric than the library we're designing, so it provides separate typeclasses for named and unnamed record types; ToNamedRecord, FromNamedRecord, and their numbered variants ToRecord and FromRecord. In our case we'll be defining different typeclass instances for the CSV itself.

Here's the rough idea:

{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE InstanceSigs #-}

import Data.Kind (Type)

class IsCSV c where
  -- A type family to specify the "indexing" type of the CSV
  type Index c :: Type
  -- Try parsing a CSV of the appropriate type
  decode :: String -> Maybe c

  getColumnByIndex  :: Index c -> c -> Maybe [String]
  getColumnByNumber :: Int     -> c -> Maybe [String]

Let's talk about the Index type family. Numbered CSVs are indexed by an Int, while Named CSVs are indexed by a String. We can use the Index associated type family to specify a different type of Index for each typeclass instance.

The headerless CSV is pretty easy to implement:

instance IsCSV [[String]] where
  type Index [[String]] = Int
  -- You can re-purpose the earlier decoder here.
  decode = ...

  -- The Index type is Int, so we index by Int here:
  getColumnByIndex :: Int -> [[String]] -> Maybe [String]
  getColumnByIndex n rows = traverse (safeIndex n) rows

  -- Since the index is an Int we can re-use the other implementation
  getColumnByNumber :: Int -> [[String]] -> Maybe [String]
  getColumnByNumber = getColumnByIndex

Now an instance for a CSV with headers:

instance IsCSV ([String], [[String]]) where
  -- We can index a column by the header name
  type Index ([String], [[String]]) = String
  decode = ...

  -- The 'index' for this type of CSV is a String
  getColumnByIndex :: String -> ([String], [[String]]) -> Maybe [String]
  getColumnByIndex columnName (headers, rows) = do
    columnIndex <- elemIndex columnName headers
    traverse (safeIndex columnIndex) rows
  -- We can still index a Headered CSV by column number
  getColumnByNumber :: Int -> ([String], [[String]]) -> Maybe [String]
  getColumnByNumber n = getColumnByNumber n . snd

This works out pretty well, here's how it looks to use it:

>>> decode input >>= getColumnByIndex ("Name" :: String)
<interactive>:99:36: error:
    • Couldn't match type ‘Index c0’ with ‘String’
      Expected type: Index c0
        Actual type: String
      The type variable ‘c0’ is ambiguous
    • In the first argument of ‘getColumnByIndex’, namely ‘"Name"’
      In the second argument of ‘(>>=)’, namely
        ‘getColumnByIndex "Name"’
      In the expression:
        decode input >>= getColumnByIndex "Name"

Uh oh... one issue with type classes is that GHC might not know which instance to use in certain situations!

We can help out GHC with a type hint, but it's a bit annoying and the error message isn't always so clear!

>>> decode @([String], [[String]]) input >>= getColumnByIndex "Name"
Just ["Luke","Leia","Han"]

-- Or we can define a type alias to clean it up a smidge
>>> type Named = ([String], [[String]])
>>> decode @Named input >>= getColumnByIndex "Name"
Just ["Luke","Leia","Han"]

This works out okay, and it's by no means "unusable", but let's take a look at how GADTs can give us better error messages while also making the code easier to read and reducing the required boilerplate, all at once!

The GADT approach

Before we use them for CSVs, let's get a quick primer on GADTs. If you're well-acquainted already, feel free to skip to the next section.

GADTs, a.k.a. Generalized Algebraic Data Types, bring a few upgrades over regular Haskell data types. Just in case you haven't seen one before, let's compare the regular Maybe definition to its GADT version.

Here's how Maybe is written using standard data syntax:

data Maybe a =
    Nothing
  | Just a

When we turn on GADTs we can write the exact same type like this instead:

data Maybe a where
  Nothing :: Maybe a
  Just :: a -> Maybe a

This slightly different syntax, which looks a bit foreign at first, is really just spelling out the types of the constructors as though they were functions!

Compare the definition with the type of each constructor:

>>> :t Nothing
Nothing :: Maybe a
>>> :t Just
Just :: a -> Maybe a

Each argument to the function represents a "slot" in the constructor.

But of course there's more than just the definition syntax! Why use GADTs? They bring a few upgrades over regular data definitions. GADTs are most often used for their ability to include constraints over polymorphic types in their constructor definitions. This means you can write a type like this:


data HasEq a where
  HasEq :: Eq a => a -> HasEq a

Where the Eq a constraint gets "baked in" to the constructor such that we can then write a function like this:

checkEq :: HasEq a -> HasEq a -> Bool
checkEq (HasEq one) (HasEq two) = one == two

We don't need to include an Eq a constraint in the type because GHC knows that it's impossible to construct HasEq without one, and it carries that constraint with the value in the constructor!

In this post we'll be using a technique which follows (perhaps unintuitively) from this; take a look at this type:

data IntOrString a where
  AnInt :: Int -> IntOrString Int
  AString :: String -> IntOrString String

Notice how each constructor fills in a value for the polymorphic a type? E.g. IntOrString Int where a is now Int? GHC can use this information when it's matching constructors to types. It lets us write a silly function like this:

toInt :: IntOrString Int -> Int
toInt (AnInt n) = n

Again, this doesn't seem too interesting, but there's something unique here. It looks like I've got an incomplete implementation for toInt; it lacks a case for the AString constructor! However, GHC is smart enough to realize that any values produced using the AString constructor MUST have the type IntOrString String, and so it knows that I don't need to handle that pattern here, in fact if I do provide a pattern match on it, GHC will display an "inaccessible code" warning!

The really nifty thing is that we can choose whether to be polymorphic over the argument or not in each function definition and GHC will know which patterns can appear in each case. This means we can just as easily write this function:

toString :: IntOrString a -> String
toString (AnInt n) = show n
toString (AString s) = s

Since a might be Int OR String we need to provide an implementation for both constructors here, but note that EVEN in the polymorphic case we still know the type of the value stored in each constructor, we know that AnInt holds an Int and AString holds a String.
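Putting both functions side by side as a runnable snippet (definitions repeated from above) makes the difference concrete:

```haskell
{-# LANGUAGE GADTs #-}

data IntOrString a where
  AnInt   :: Int    -> IntOrString Int
  AString :: String -> IntOrString String

-- Total despite only one equation: AString can never produce an IntOrString Int
toInt :: IntOrString Int -> Int
toInt (AnInt n) = n

-- Polymorphic in 'a', so both constructors are possible and must be handled
toString :: IntOrString a -> String
toString (AnInt n)   = show n
toString (AString s) = s

main :: IO ()
main = do
  print (toInt (AnInt 3))             -- 3
  putStrLn (toString (AnInt 3))       -- "3"
  putStrLn (toString (AString "hi"))  -- "hi"
  -- toInt (AString "hi")  -- rejected at compile time: IntOrString String ≠ IntOrString Int
```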

If you're a bit confused, or just generally unconvinced, try writing IntOrString, toInt and toString in a type-safe manner using a regular data declaration; it's a good exercise (it won't work 😉). Make sure you have -Wall turned on as well.

GADTs and CSVs

After that diversion, let's dive into writing a new CSV type!

{-# LANGUAGE StandaloneDeriving #-}

data CSV index where
  NamedCsv    :: [String] -> [[String]] -> CSV String
  NumberedCsv ::             [[String]] -> CSV Int

-- A side-effect of using GADTs is that we need to use standalone deriving 
-- for our instances.
deriving instance Show (CSV i)
deriving instance Eq   (CSV i)

This type has two constructors, one for a CSV with headers and one without. We're specifying a polymorphic index type variable and saying that CSVs with headers are specifically indexed by String and CSVs without headers are indexed by Int. Notice that it's okay for us to specify a concrete type for the index parameter even though it's a phantom type (i.e. we don't actually store the index type inside our structure anywhere).

Let's implement our CSV functions again and see how they look.

We still need the end-user to specify whether to parse headers or not, but we can use another GADT to reflect their choice in the type, and propagate that to the resulting CSV. Here's what a CSV selector type looks like where each constructor carries some type information with it (i.e. whether the resulting CSV is String or Int indexed).

data CSVType i where
  Named :: CSVType String
  Numbered :: CSVType Int

deriving instance Show (CSVType i)
deriving instance Eq (CSVType i)

Now we can write decode like this:

decode :: CSVType i -> String -> Maybe (CSV i)
decode Named s = case splitOn ',' <$> lines s of
    (h:xs) -> Just $ NamedCsv h xs
    _ -> Nothing
decode Numbered s = Just . NumberedCsv . fmap (splitOn ',') . lines $ s

By accepting CSVType as an argument it acts as a proxy for the type information we need. We can then easily provide a separate implementation for each CSV type, and the index type carried by the CSVType option is propagated to the result, thus determining the type of the output CSV too!
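Note that decode leans on two small helpers, splitOn and safeIndex, which aren't defined in this excerpt. In case you're following along from here, a minimal sketch of each — only the names and types are implied by the surrounding code; the bodies below are one possible implementation:

```haskell
-- Sketches of the helpers used by decode and the lookup functions.

-- Split a string on a separator character
splitOn :: Char -> String -> [String]
splitOn c s = case break (== c) s of
  (chunk, [])       -> [chunk]
  (chunk, _ : rest) -> chunk : splitOn c rest

-- Index into a list, returning Nothing when out of bounds
safeIndex :: Int -> [a] -> Maybe a
safeIndex n xs
  | n < 0     = Nothing
  | otherwise = case drop n xs of
      (x : _) -> Just x
      []      -> Nothing

main :: IO ()
main = do
  print (splitOn ',' "Name,Age")  -- ["Name","Age"]
  print (safeIndex 1 "abc")       -- Just 'b'
  print (safeIndex 5 "abc")       -- Nothing
```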

Now for getColumnByIndex and getColumnByNumber; in the typeclass version we needed to provide an implementation for each class instance, but with GADTs we can collapse everything down to a single implementation per function.

Here's getColumnByIndex:

getColumnByIndex :: i -> CSV i -> Maybe [String]
getColumnByIndex  columnName (NamedCsv headers rows) = do
    columnIndex <- elemIndex columnName headers
    traverse (safeIndex columnIndex) rows
getColumnByIndex n (NumberedCsv rows) = traverse (safeIndex n) rows

The type signature says: if you give me an index whose type matches the index of the CSV you provide, I can get you that column if it exists. It's smarter than it looks!

Even though the GADT constructor comes after the first argument, by pattern matching on it we can determine the type of i, and we then know that the first argument must match that i type. So when we match on NamedCsv the first argument is a String, and when we match on NumberedCsv it's guaranteed to be an Int.

In the original "simple" CSV implementation you could try indexing into a numbered CSV with a String header and it would always return Nothing; now it's actually a type error. We've prevented a whole failure mode!

-- Decode our input into a CSV with numbered columns
>>> let Just result = decode Numbered input
>>> result
NumberedCsv [...]
-- Here's what happens if we try to write the wrong index type!
>>> getColumnByIndex "Name" result
    • Couldn't match type ‘Int’ with ‘String’
      Expected type: CSV String
        Actual type: CSV Int

It works fine if we provide an index which matches the way we decoded:

-- By number using `Numbered`
>>> decode Numbered input >>= getColumnByIndex 0
Just ["Name","Luke","Leia","Han"]
-- ...Or by header name using `Named`
>>> decode Named input >>= getColumnByIndex "Name"
Just ["Luke","Leia","Han"]

When indexing by number we can ignore the index type of the CSV entirely, since we know we can index either a Named or Numbered CSV by column number regardless.

getColumnByNumber :: Int -> CSV i -> Maybe [String]
getColumnByNumber n (NamedCsv _ rows) = traverse (safeIndex n) rows
getColumnByNumber n (NumberedCsv rows) = traverse (safeIndex n) rows

In an earlier attempt we ran into problems writing getHeaders, since we knew intuitively that it should always be safe to return the headers from a "Named" csv, but we needed to introduce a Maybe into the type since we couldn't be sure of the type of the CSV argument!

Now that the CSV has the index as part of the type we can solve that handily by restricting the possible inputs to the correct CSV type:

getHeaders :: CSV String -> [String]
getHeaders (NamedCsv headers _) = headers

We don't need to match on NumberedCsv, since it has type CSV Int, and that omission allows us to remove the need for a Maybe from the signature. Pretty slick!
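To see that totality in action, here's a self-contained check (repeating the CSV type from above):

```haskell
{-# LANGUAGE GADTs #-}
{-# LANGUAGE StandaloneDeriving #-}

data CSV index where
  NamedCsv    :: [String] -> [[String]] -> CSV String
  NumberedCsv ::             [[String]] -> CSV Int

deriving instance Show (CSV i)

-- Total with a single equation: NumberedCsv has type CSV Int,
-- so GHC knows it can never appear here.
getHeaders :: CSV String -> [String]
getHeaders (NamedCsv headers _) = headers

main :: IO ()
main = do
  print (getHeaders (NamedCsv ["Name", "Age"] [["Luke", "19"]]))  -- ["Name","Age"]
  -- getHeaders (NumberedCsv [["Luke"]])  -- type error: CSV Int vs CSV String
```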

This is the brilliance of GADTs in this approach: we can be general when we want to be general, and specific when we want to be specific.

The interfaces provided by each approach look relatively similar at the end of the day. The typeclass signatures have a fully polymorphic variable with a type constraint AND a type family, whereas the GADT signatures are simpler, including only a polymorphic index type. Consumers of the library won't need to know anything about GADTs in order to use it.

The typeclass approach:

decode :: IsCSV c => String -> Maybe c
getColumnByIndex :: IsCSV c => Index c -> c -> Maybe [String]
getColumnByNumber :: IsCSV c => Int -> c -> Maybe [String]
getHeaders :: IsCSV c => c -> Maybe [String]

The GADT approach:

decode :: CSVType i -> String -> Maybe (CSV i)
getColumnByIndex :: i -> CSV i -> Maybe [String]
getColumnByNumber :: Int -> CSV i -> Maybe [String]
getHeaders :: CSV String -> [String]

Though similar, I find the GADT version easier to understand as a consumer: everything you need to know is available to you, and you can look up the CSV type to learn more about how to build one or which types are available.

The GADT types also result in simpler type errors when something goes wrong.

Here's one common problem with the typeclass approach: decode has a polymorphic result and getColumnByIndex has a polymorphic argument, so GHC can't figure out what the intermediate type should be if we chain them together:

>>> decode input >>= getColumnByIndex "Name"
    • Couldn't match type ‘Index c0’ with ‘String’
      Expected type: Index c0
        Actual type: String
      The type variable ‘c0’ is ambiguous
    • In the first argument of ‘getColumnByIndex’, namely ‘"Name"’
      In the second argument of ‘(>>=)’, namely
        ‘getColumnByIndex "Name"’
      In the expression:
        decode input >>= getColumnByIndex "Name"

We can fix this with an explicit type application, but that requires us to know the underlying type that implements the instance.

>>> type Named = ([String], [[String]])
>>> decode @Named input >>= getColumnByIndex "Name"
Just ["Luke","Leia","Han"]

If we mismatch the index type here, even when providing an explicit type annotation, we get a slightly confusing error since it still mentions a type family:

>>> decode @Named input >>= getColumnByIndex (1 :: Int)
    • Couldn't match type ‘Int’ with ‘String’
      Expected type: Index Named
        Actual type: Int
    • In the first argument of ‘getColumnByIndex’, namely ‘(1 :: Int)’
      In the second argument of ‘(>>=)’, namely
        ‘getColumnByIndex (1 :: Int)’
      In the expression:
        decode @Named input >>= getColumnByIndex (1 :: Int)

Compare these to the errors generated by the GADT approach; first we'll chain decode with getColumnByIndex:

>>> decode Named input >>= getColumnByIndex "Name"
Just ["Luke","Leia","Han"]

There's no ambiguity here! We only have a single CSV type to choose, and the "index" type variable is fully determined by the Named argument. Very nice!

What if we try to index by number instead?

>>> decode Named input >>= getColumnByIndex (1 :: Int)
    • Couldn't match type ‘String’ with ‘Int’
      Expected type: CSV String -> Maybe [String]
        Actual type: CSV Int -> Maybe [String]
    • In the second argument of ‘(>>=)’, namely
        ‘getColumnByIndex (1 :: Int)’
      In the expression:
        decode Named input >>= getColumnByIndex (1 :: Int)
      In an equation for ‘it’:
          it = decode Named input >>= getColumnByIndex (1 :: Int)

It clearly outlines the expected and actual types:

      Expected type: CSV String -> Maybe [String]
        Actual type: CSV Int -> Maybe [String]

Which should be enough for the user to spot their mistake and patch it up.

Next steps

Still unconvinced? Try taking it a step further!

Try writing getRow and getColumn functions for both the typeclass and GADT approaches. The row that's returned should support type-safe indexing by String or Int depending on the type of the source CSV.

E.g. the GADT version should look like this:

>>> decode Named input >>= getRow 1 >>= getColumn "Name"
Just "Leia"
>>> decode Numbered input >>= getRow 1 >>= getColumn 0
Just "Leia"

You'll likely run into a rough patch or two when specifying different Row result types in the typeclass approach (but it's certainly possible; good luck!).
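If you get stuck on the GADT side, here's one possible shape for the Row type — a sketch with names of my own choosing, leaving the getRow plumbing and the whole typeclass version to you:

```haskell
{-# LANGUAGE GADTs #-}
import Data.List (elemIndex)

-- A Row indexed the same way as CSV; a named row carries
-- its headers alongside the values so it can be indexed by name.
data Row i where
  NamedRow    :: [String] -> [String] -> Row String
  NumberedRow :: [String]             -> Row Int

safeIndex :: Int -> [a] -> Maybe a
safeIndex n xs
  | n >= 0 && n < length xs = Just (xs !! n)
  | otherwise               = Nothing

-- Same trick as getColumnByIndex: matching the constructor fixes 'i'
getColumn :: i -> Row i -> Maybe String
getColumn name (NamedRow headers vals) =
  elemIndex name headers >>= \i -> safeIndex i vals
getColumn n (NumberedRow vals) = safeIndex n vals

main :: IO ()
main = do
  print (getColumn "Name" (NamedRow ["Name", "Age"] ["Leia", "19"]))  -- Just "Leia"
  print (getColumn 0 (NumberedRow ["Leia", "19"]))                    -- Just "Leia"
```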


This was just a peek at how typeclasses and GADTs can sometimes overlap in the design space. When trying to decide whether to use GADTs or a typeclass for a given problem, try asking the following question:

Will users of my library need to define instances for their own datatypes?

If the answer is no, a GADT is often clearer, cleaner, and has better type inference properties than the equivalent typeclass approach!

For a more in-depth "real world" example of this technique in action check out my lens-csv library. It provides lensy combinators for interacting with either named or numbered CSVs in a streaming fashion, and uses the GADT approach to (I believe) great effect.

Enjoy playing around!

Hopefully you learned something 🤞! Did you know I'm currently writing a book? It's all about Lenses and Optics! It takes you all the way from beginner to optics-wizard and it's currently in early access! Consider supporting it, and more posts like this one, by pledging on my Patreon page! It takes quite a bit of work to put these things together; if I managed to teach you something, or even just entertain you for a minute or two, maybe send a few bucks my way for a coffee? Cheers!


December 10, 2020 12:00 AM

December 09, 2020

Philip Wadler

A Year of Radical No's

Sue Fletcher-Watson describes her plan for A Year of Radical No's and follows up with Nine Months of Saying No – an update. Thanks to Vashti Galpin for the pointer!

So my main fear was that this Strategic Leadership Course would try to feed me time management tips, taking up 6 precious days of my time, when what I need is just LESS WORK. Thank the lord, far from it. ... One session left a particularly strong impression on me.  We spent some focused time considering the work-life balance challenges of another person on the course, culminating in offering them some advice. My advice? Say No, for a whole year, to everything new. Conferences, training, collaborations, journal reviews, student supervisions, the whole lot. Their response? Laughter.  None of us could imagine doing such a thing.

by Philip Wadler at December 09, 2020 02:20 PM